
JQuery UI Draggable Example

November 13, 2014 by itadmin

Dragging is a common feature that lets DOM elements be moved using the mouse, much like grabbing an object and dragging it to a different location. It is an intuitive way for users to interact with a website or application. Once an element is made draggable, we can drag it anywhere within the viewport by clicking and holding the mouse. The drag operation is performed using mouse events, and the drop operation is triggered when the mouse is released; the drop event occurs when the dragged element is dropped at a different location in the viewport.

Drag operations can be used to perform tasks such as moving email messages or other content into folders, rearranging lists of items, and so on. The draggable method can be used in the following forms, illustrated in the short sketch after the list:

  • $(selector, context).draggable(options)
  • $(selector, context).draggable("actions", [params])
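
A quick sketch of the two forms (it reuses the "DragMe" element from the example that follows):

[code lang="xml"]
<script type="text/javascript">
// First form: initialize the widget with options
$("#DragMe").draggable({ axis: "x" });

// Second form: invoke an action on an already-initialized draggable
$("#DragMe").draggable("disable");
</script>
[/code]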

The following is a simple example of draggable widget:

[code lang=”xml”]
<!DOCTYPE html>
<html>
<head>
<link href="http://code.jquery.com/ui/1.10.4/themes/ui-lightness/jquery-ui.css" rel="stylesheet">
<script src="http://code.jquery.com/jquery-1.10.2.js"></script>
<script src="http://code.jquery.com/ui/1.10.4/jquery-ui.js"></script>
<style type="text/css">
#DragMe {
width: 150px;
height:150px;
padding:0.5em;
background:orange;
text-align:center;
}
</style>
<script type="text/javascript">
$(function () {
$('#DragMe').draggable();
});
</script>
</head>
<body>
<div id="DragMe">
<p>Drag me</p>
</div>
</body>
</html>
[/code]

The above script uses the draggable() method, which allows the element to be moved to a different location in the viewport. We apply the id selector "DragMe" to the div element and provide CSS styles through that selector so the result is displayed with the specified values.

JQueryUI-Draggable-SimpleExample-Demo

JQueryUI Draggable Simple Example

Draggable Widget Options

The draggable method supports the following options:

Option Description Default Value
addClasses If set to false, the ui-draggable class is not added to the draggable elements. true
appendTo Specifies the element the helper should be appended to while dragging. parent
cursor Specifies the CSS cursor shown while an element is being dragged. auto
containment Constrains dragging to within the bounds of the specified element or region. false
axis Constrains dragging to either the horizontal or the vertical axis. false
cancel Prevents dragging from starting on the specified elements. input, textarea, button, select, option
cursorAt Sets the offset of the dragged element relative to the mouse pointer. false
delay Specifies the delay, in milliseconds, before the movement of the mouse is taken into account. 0
disabled Disables dragging when set to true. false
distance Specifies the distance, in pixels, the mouse must move before dragging starts. 1
grid Snaps the dragged element to a grid of x and y pixels, specified as [x, y]. false
handle Restricts the drag start to the specified handle element. false
helper Specifies the helper element used for the dragging display. original
opacity Sets the opacity of the helper while it is being moved. false
revert Returns the element to its original position after the move completes. false
revertDuration Determines the duration, in milliseconds, of the revert animation. 500
scope Defines sets of draggable and droppable items that work together. default
scroll Scrolls the container when the element is moved outside the viewport of the window. true
scrollSpeed Specifies the scrolling speed. 20
snap Snaps the dragged element to the edges of the selected elements. false
snapMode Determines which edges of the snap elements the draggable snaps to. both
snapTolerance Specifies the maximum distance, in pixels, at which snapping occurs. 20
stack Brings the matched element to the front of the set of elements. false
zIndex The z-index for the helper while it is being dragged. false

Example using Options

[code lang=”xml”]
<!DOCTYPE html>
<html>
<head>
<link href="http://code.jquery.com/ui/1.10.4/themes/ui-lightness/jquery-ui.css" rel="stylesheet">
<script src="http://code.jquery.com/jquery-1.10.2.js"></script>
<script src="http://code.jquery.com/ui/1.10.4/jquery-ui.js"></script>
<style type="text/css">
#DragMe { width: 200px; height: 200px; background:orange;}
#DragHelper { width: 200px; height: 200px; background: red; }
</style>
<script type="text/javascript">
$( init );
function init() {
$('#DragMe').draggable( {
cursor: 'move',
containment: 'document',
helper: myHelper
} );
}
function myHelper( event ) {
return '<div id="DragHelper">Please drag me!!!</div>';
}
</script>
</head>
<body>
<div id="mydemo">
<div id="DragMe">Drag to see helper element</div>
</div>
</body>
</html>
[/code]

JQueryUI-Draggable-Options-Demo
JQueryUI Draggable Options Example

Draggable Widget Methods

The following table shows some of the methods used with the draggable widget:

Method Description
destroy() Removes the draggable functionality completely.
disable() Disables the drag action.
enable() Enables the drag action.
instance() Returns the draggable's instance object.
option() Gets or sets the value of the specified draggable option.
widget() Returns a jQuery object containing the draggable element.

Example using Methods

[code lang=”xml”]
<!DOCTYPE html>
<html>
<head>
<link href="http://code.jquery.com/ui/1.10.4/themes/ui-lightness/jquery-ui.css" rel="stylesheet">
<script src="http://code.jquery.com/jquery-1.10.2.js"></script>
<script src="http://code.jquery.com/ui/1.10.4/jquery-ui.js"></script>
<style>
#mydiv1
{
width:60%;
height:40px;
background:orange;
}
#mydiv2
{
width:60%;
height:40px;
background:pink;
}
</style>
</head>
<body>
<div id="mydiv1" style="border:1px solid red;">
<p>This is disabled element.</p>
</div>
<div id="mydiv2" style="border:1px solid green;">
<p>This is enabled element.</p>
</div>
<script>
$("#mydiv1 p").draggable();
$("#mydiv1 p").draggable('disable');
$("#mydiv2 p").draggable();
$("#mydiv2 p").draggable('enable');
</script>
</body>
</html>
[/code]

JQueryUI-Draggable-Methods-Demo

JQueryUI Draggable Methods Example

Draggable Widget Events

The following table shows the events used with the draggable widget:

Event Description
create Fires when the draggable is created.
start Fires when dragging starts.
drag Fires repeatedly while the mouse is moved during dragging.
stop Fires when dragging ends.

Example using Events

[code lang=”xml”]
<!DOCTYPE html>
<html>
<head>
<link href="http://code.jquery.com/ui/1.10.4/themes/ui-lightness/jquery-ui.css" rel="stylesheet">
<script src="http://code.jquery.com/jquery-1.10.2.js"></script>
<script src="http://code.jquery.com/ui/1.10.4/jquery-ui.js"></script>
</head>
<body>
<div id="mydiv1" style="border:1px solid red;">
<p>Welcome to JQuery UI!!!</p>
</div>
<script type="text/javascript">
$('#mydiv1 p').draggable( {
cursor: 'move',
stop: function(event, ui){
alert("Drag has ended!!!");
}
});
</script>
</body>
</html>
[/code]

JQueryUI-Draggable-Events-Demo

JQueryUI Draggable Events Example
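
The remaining events from the table can be wired up in the same call. A small sketch (reusing the same markup as the example above) might look like this:

[code lang="xml"]
<script type="text/javascript">
$('#mydiv1 p').draggable({
start: function (event, ui) { console.log('drag started'); },
drag: function (event, ui) { console.log('dragging at', ui.position.left, ui.position.top); },
stop: function (event, ui) { console.log('drag ended'); }
});
</script>
[/code]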

Filed Under: jQuery Tagged With: JQuery Events

JQuery ajaxStart Example

November 13, 2014 by itadmin

The ajaxStart() method fires when an Ajax request begins and no other Ajax requests are currently active. It is typically used to show a loading graphic while data is being loaded from the server. The function passed to ajaxStart() is invoked whenever such a request starts.

jQuery ajaxStart Syntax

[code lang=”xml”]
$(selector).ajaxStart(callback)
[/code]

In the above syntax, 'callback' is the parameter: it represents a function to be invoked, and that function runs when an Ajax request starts. The ajaxStart() method is an Ajax event handler.

jQuery ajaxStart Example

[code lang=”xml”]
<html>
<head>
<title>ajaxStart method</title>
<script src="http://code.jquery.com/jquery-1.11.0.min.js"></script>
</head>
<script type="text/javascript">
$(document).ready(function () {
$("div").ajaxStart(function(){

});
$("button").click(function () {
$("div").load("demo.txt");
});
});
</script>
<body>
<div><h2>ajaxStart() Method Example</h2></div>
<button>Click</button>
</body>
</html>
[/code]

  • As shown in the above program, the code is placed inside $(document).ready(), an event that fires when the document is ready. It runs once the page's Document Object Model (DOM) is ready for JavaScript code to execute.
  • The .ajaxStart() method triggers when an Ajax request begins.
  • The $("button").click(function () {...}) line attaches a click handler that runs when the button is clicked.
  • The $("div").load("demo.txt") statement loads the text file from the server and places the returned data into the selected element.

When you run the above example, you get the following output:
jQuery ajaxStart Method Example

When you click the button, the file is loaded and the following output is displayed:
jQuery ajaxStart Method Example1

Filed Under: jQuery Tagged With: JQuery AJAX

Introduction to Node.js

June 10, 2014 by itadmin

What is Node.js

Node.js is a platform created by Ryan Dahl for developing applications with better scalability. It is based on an event-driven programming model, and we can develop server-side and networking applications using the Node.js platform. The primary objective of Node.js is to maximize application throughput, for which it uses non-blocking I/O and promotes asynchronous event handling. The programming language used to develop Node.js applications is JavaScript.

Blocking I/O and Multi-Threaded Programming Models

Before we proceed with the event-driven programming style, let us revisit the two older programming models and their disadvantages. This way we can better appreciate the value of event-driven programming.

Blocking I/O: The traditional blocking I/O programming model has its own disadvantages. In the older days, when doing I/O operations on time-sharing systems, each process corresponded to one user, so that users were isolated from each other. In those systems, a user needed to finish one operation before proceeding to the next. The biggest problem with this kind of model is scalability: the operating system has to take on the burden of managing all the processes, context switching is expensive, and system performance degrades after the number of processes reaches a certain point.

Multi-threaded Programming: To mitigate the problems associated with the traditional blocking I/O model, multi-threading was introduced. In multi-threaded programming, multiple threads are spawned from a single process and share memory with the other threads of that process. The idea behind this model is that when one thread is waiting for I/O, another thread can take up some other task to execute. When the I/O operation finishes, the thread that was waiting for I/O can wake up and resume processing its tasks. The problem with this model is that programmers cannot easily reason about the exact behavior of threads and their shared memory state, so they have to rely on locking, synchronization, and semaphores to control access to data and resources.

Event Driven Programming

Event-driven programming is a style of programming in which events determine the flow of execution. Event handlers, or event callbacks, handle the events. An event callback is a function that is invoked when something significant happens, for example when a new message is available in a messaging queue, or for user-triggered events such as clicking a button.

For example, in the old-fashioned blocking I/O style, if we want to retrieve and process data from a database, we'd do something like:

[code]myData = query('SELECT * FROM employees where empId = 1234');
processMyData(myData);
[/code]

In the above example, the current thread has to wait until the database finishes retrieving the results.

In the event-driven programming style, the same scenario can be achieved like this:

[code]myDataRetrieved = function(myData) {
processMyData(myData);
}
query('SELECT * FROM employees where empId = 1234', myDataRetrieved);
[/code]

In the above event-driven example, we first define what is going to happen once the query finishes, and we put that processing logic in a function named 'myDataRetrieved'. Then we pass that function as an argument to the query. When the query execution finishes, the query invokes the function 'myDataRetrieved' with the results (which then processes the data), instead of simply returning the result.

In this style of programming, instead of simply returning results, we define functions that are invoked by the system when significant events occur (in our case, when the data is retrieved and ready to use). This style of programming is called the 'event-driven programming model' or 'asynchronous programming', and it is the fundamental aspect of Node.js. The core idea behind this model is that the current process (or thread) does not block while it is doing I/O. Therefore, multiple I/O operations can occur in parallel, and each operation has its own callback function that is invoked once the respective operation completes.
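
A tiny sketch of this idea (fakeQuery and the SQL strings are made up for illustration; setTimeout stands in for real I/O):

[code]
// Two I/O-style operations started back to back; neither blocks the other,
// and each has its own callback that fires when its "I/O" completes.
function fakeQuery(sql, callback) {
  setTimeout(function () { callback(sql + ' -> results'); }, 100);
}

fakeQuery('SELECT * FROM employees', function (result) { console.log(result); });
fakeQuery('SELECT * FROM departments', function (result) { console.log(result); });
console.log('both queries started; callbacks will run when each completes');
[/code]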

How event-driven programming is achieved

Event-driven programming is achieved through the concept of an 'event loop'. Essentially, an event loop performs two operations in a continuous loop: event detection and event handling. In each run of the loop, it detects which events have fired, then determines the respective event callback and invokes it.

This event loop runs as a single thread inside a single process. Due to this fact, programmers can relax the synchronization requirements and do not have to worry about concurrent threads accessing common resources and sharing same memory state.
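
The following is a deliberately simplified, conceptual sketch of those two operations (the handler registry and queue are invented for illustration; this is not Node's actual implementation):

[code]
var handlers = {};   // event name -> callback
var pending = [];    // queue of events that have fired

function on(name, callback) { handlers[name] = callback; }
function emit(name, data) { pending.push({ name: name, data: data }); }

function tick() {
  while (pending.length > 0) {         // event detection
    var e = pending.shift();
    if (handlers[e.name]) {
      handlers[e.name](e.data);        // event handling (invoke the callback)
    }
  }
}

on('dataReady', function (rows) { console.log('received', rows.length, 'rows'); });
emit('dataReady', [1, 2, 3]);
tick();   // a real event loop would run this continuously
[/code]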

How clients handle asynchronous requests

Consider the following example of jQuery performing an Ajax request using XMLHttpRequest (XHR):

[code]$.post('/myData.json', function(data) {
console.log(data);
});
[/code]

In the above code example, the I/O operation doesn’t block execution. This program performs an HTTP request for myData.json. When the response comes back, an anonymous function is called (the callback in this context) containing the argument ‘data’, which is the data received from that request.

If this request were synchronous, the response for myData.json would be stored in the 'data' variable when ready, and the console.log call would not execute until then.

In that case the I/O operation (the Ajax request) would block script execution from continuing until it finished. Because the browser is single-threaded, if this request took 500 milliseconds to return, any other events happening on that page would wait until then before executing. The user experience would suffer if an animation paused or the user tried to interact with the page during this waiting period.

In this case, fortunately things are not blocked. When I/O happens in the browser, it happens outside of the event loop (outside the main script execution) and then an event is emitted when the I/O is finished, which is handled by a function (often called the callback).

The I/O happens asynchronously and doesn’t block the script execution, allowing the event loop to respond to whatever other interactions or requests are being performed on the page. This enables the browser to be responsive to the client and to handle a lot of interactivity on the page.

There are a few exceptions that do block execution in the browser: alert, prompt, confirm, and synchronous XHR. Their use is not recommended unless the application really demands it.

How server handles asynchronous events

The following is a PHP example of traditional I/O blocking model.

[code]$result = mysql_query('SELECT * FROM myTable');
print_r($result);
[/code]

In the above example, program execution blocks until the database query completes. This code does some I/O, and the process is blocked from continuing until all the data has come back. Though this model is fine for many applications, the process has state, or memory, and is essentially doing nothing until the I/O is completed. That could take anywhere from 20 ms to minutes, depending on the latency of the I/O operation.

Typically, while waiting for I/O the server does nothing. One solution to this problem is to use multithreading, but multithreaded applications are complex, hard to code and manage, and expensive in terms of CPU utilization and execution.

In Node, I/O is performed outside of the main event loop, allowing the server to stay efficient and responsive. This makes it much harder for a process to become I/O-bound because I/O latency isn’t going to crash your server or use the resources it would if you were blocking. It allows the server to be lightweight on what are typically the slowest operations a server performs.
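
As a small sketch of this in Node (the file name data.txt is assumed to exist):

[code]
var fs = require('fs');

// The read is handed off to the system; the callback runs only when the
// I/O completes, so the event loop is free to serve other requests meanwhile.
fs.readFile('data.txt', 'utf8', function (err, contents) {
  if (err) {
    console.error('read failed:', err.message);
    return;
  }
  console.log('file contents:', contents);
});

console.log('readFile requested; Node keeps handling other work');
[/code]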

DIRTy applications and Node.js

Node.js is specifically designed for Data Intensive Real-Time applications, or simply 'DIRT' applications. A Node server is asynchronous and event driven; it holds a number of I/O connections open while simultaneously handling many requests with a low memory footprint. Node applications are lightweight on I/O and highly responsive, which makes Node a powerful platform for data-intensive, highly responsive, real-time applications. The Node core is small and simple and contains the building blocks for I/O-based applications; many third-party modules are built upon the core to offer greater abstractions. 'Express' is a popular Node.js framework.

Installing Node.js

Installing Node.js is pretty straightforward on most operating systems. Node can be installed either using package installers or using command line tools. Command line installation is easy on Unix/Linux platforms, but many learners want to install it on their Windows systems, so in this article let us install Node on Windows. From time to time, we will need to use the Node Package Manager (npm) to find and install required add-ons.

Node standalone installers are available here. Please download the appropriate version for your Windows installation (32-bit/64-bit) and double-click the installer. The installation process is simple and self-explanatory (you can find installation instructions in our Bower tutorial). Once Node is installed successfully, we should be able to run node and npm from the command prompt.

Verifying Node.js installation

In order to verify whether Node is successfully installed, please go to command prompt and execute the following commands. The first command displays the Node version installed and the second command displays sample test output to screen.

Assuming we installed Node under D:\work\nodejs …

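A minimal check from the command prompt might look like the following sketch (the version number is only a placeholder for whatever version you installed):

[code]
D:\work\nodejs>node -v
v0.10.29

D:\work\nodejs>node
> console.log('Hello from Node');
Hello from Node
undefined
>
[/code]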

Press Ctrl+C couple of times to exit from the Node prompt.

Also, the folder structure should look similar to the following screen capture once Node is successfully installed on Windows.

NodeJS Directory

Node.js Application By Creating Node HTTP Server

Building servers is very common in Node, and we can create different types of servers using it. This can feel slightly strange if we come from a background using servers such as Apache or Nginx; in Node, however, we can think of the server and the application as the same thing. The following code example creates an HTTP server which responds to requests.

It is always a good idea to keep application-specific files separate from the Node installation. Since we already installed Node under D:\work\nodejs, let us create a separate folder under D:\work called node-apps. Inside node-apps, create an application folder called hello-node. Then create a file called server.js and type the following code into it.

[code lang=”java”]
var http = require('http');
http.createServer(function (request, response) {
response.writeHead(200, {'Content-Type': 'text/plain'});
response.end('Welcome to Node platform.\n');
}).listen(3000);
console.log('Server running at http://localhost:3000/');
[/code]

Once our server.js file is ready, go to command prompt to location where the server.js file resides (D:\work\node-apps in this case), and execute the following command to start the server.

[code]D:\work\node-apps\hello-node>node server.js[/code]

The Node HTTP server should start listening on port 3000 as shown in the following screen capture.

Running NodeJS

Now, open any browser and type the following to test our first Node application.

[code]http://localhost:3000/[/code]

We should be able to see the sample output as shown below.

NodeJS Example

So whenever an HTTP request arrives on the configured port (3000 in our example), the callback function is triggered with the request and response as arguments. Inside this function we specify the HTTP status code to be returned (200) and set the Content-Type to text/plain on the response object. Finally, we end the response with a message that is displayed in the browser.

Filed Under: NodeJS Tagged With: Node Programming, NodeJS Basics

UITableView in iOS

May 20, 2014 by itadmin

UITableView is a view used to display data in the form of a table. The control consists of one column, and the number of rows can be specified by the user. It inherits from the UIView, UIResponder, and NSObject classes. The UITableView control is present in the Object library in Xcode. It consists of cells of type UITableViewCell, which form the rows; each cell has a content view and an accessory view that help us display data and perform certain actions. There are currently two styles of table view: Plain and Grouped.

In this tutorial we will see how to use UITableView in a single view application, populate the table view with data by specifying the number of rows and sections, and perform an action when a row is selected. The table view style used is Plain.

The following concepts are covered in this document.

  1. Save data in collections (here we use NSMutableDictionary).
  2. UITableViewDatasource protocol methods for populating data.
  3. UITableViewDelegate protocol methods for performing an action when a cell is selected.

Let's take the example of a grocery list of items we have to buy from a shop. We list the items in the table view, and once we buy a product we mark it in the table view with a check mark; the check mark is one of the accessories that can be used in UITableView.

Single View Application in XCode

  • Create a single view application in Xcode and name the project "UITableViewDemo".
  • In Main.storyboard, a view is present in the view controller; drag and drop a UITableView onto it from the object library.

addTableView

Declare the properties

  • Connect the table view present in the interface to a property.
  • Declare the remaining properties required.

[code]
@interface ViewController : UIViewController
@property (weak, nonatomic) IBOutlet UITableView *itemsTableView;
@property (strong,nonatomic) NSMutableDictionary *itemDetailsList;
@property  (strong,nonatomic) NSArray *allKeys;
@end
[/code]

Save data in NSMutableDictionary

  • We need some data to display in the table view. In this example we store the data in an NSMutableDictionary, a key-value dictionary that allows the data it contains to be modified; it inherits from NSDictionary and adds modification operations. In the code below we store the item name as the key and a BOOL value of YES or NO to indicate whether the item has been purchased.
  • We also assign all the keys of the dictionary to NSArray, as we have to display only keys in the tableview.

[code]
self.itemDetailsList = [[NSMutableDictionary alloc]init];
[self.itemDetailsList setObject:[NSNumber numberWithBool:NO] forKey:@"Whole-wheat bread"];
[self.itemDetailsList setObject:[NSNumber numberWithBool:NO] forKey:@"Brown rice"];
[self.itemDetailsList setObject:[NSNumber numberWithBool:NO] forKey:@"Tomato sauce"];
[self.itemDetailsList setObject:[NSNumber numberWithBool:NO] forKey:@"Red-wine vinegar"];
[self.itemDetailsList setObject:[NSNumber numberWithBool:NO] forKey:@"Mustard"];
[self.itemDetailsList setObject:[NSNumber numberWithBool:NO] forKey:@"Apples"];
[self.itemDetailsList setObject:[NSNumber numberWithBool:NO] forKey:@"Cakes"];
[self.itemDetailsList setObject:[NSNumber numberWithBool:NO] forKey:@"eggs"];
[self.itemDetailsList setObject:[NSNumber numberWithBool:NO] forKey:@"Sunscreen"];
[self.itemDetailsList setObject:[NSNumber numberWithBool:NO] forKey:@"broccoli"];

self.allKeys = [[NSArray alloc]initWithArray:[self.itemDetailsList allKeys]];
[/code]

UITableViewDataSource Protocol

UITableViewDataSource is a protocol used to construct and modify the table view. The protocol consists of many methods; the three most important ones are:

  1. numberOfSectionsInTableView: specifies the number of sections in the table view; in our example we have only one section. Based on the number of sections, the table view is divided vertically. This method is optional.
  2. tableView:numberOfRowsInSection: specifies the number of rows in each section. It is a required method of the protocol.
  3. tableView:cellForRowAtIndexPath: fills the cells with data. The method is called once for each row, and we are responsible for providing the appropriate data for each cell. It is a required method.

When the table view is reloaded or refreshed, the data source is responsible for populating it with data. First numberOfSectionsInTableView: is called to get the number of sections, then tableView:numberOfRowsInSection: for each section, and then tableView:cellForRowAtIndexPath: for each row.

In order to use the protocol, we need to conform to UITableViewDataSource in the .h file and set the table view's dataSource to the current instance of the class (self); the code is given below.

In ViewController.h

[code]
@interface ViewController : UIViewController<UITableViewDataSource>
[/code]

In ViewController.m (the code should be placed in the viewDidLoad method)

[code]
self.itemsTableView.dataSource = self;
[/code]

Implementing UITableViewDataSource protocol methods

  • Since we have only one section in the table view, the integer value 1 is returned.

[code]
-(NSInteger)numberOfSectionsInTableView:(UITableView *)tableView{
return 1;
}
[/code]

  • The number of rows equals the number of items present in the dictionary, so the count of the dictionary is returned as the number of rows.

[code]
-(NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section{
return [self.itemDetailsList count];
}
[/code]

  • Load the cells with data.

[code]
-(UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath{
static NSString *CellIdentifier = @"Cell";
UITableViewCell *cell = nil;
cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];

if(cell == nil){
cell = [[UITableViewCell alloc]initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier];
}

// Configure the cell…
cell.textLabel.text =[self.allKeys objectAtIndex:indexPath.row];
return cell;
}
[/code]

  • Create an instance of UITableViewCell and set it to nil. In the above method for loading the cells, we try to reuse cells that have already been created so that memory is not allocated for a cell again.
  • In our example, when the table view is displayed for the first time, a new table view cell is created for each cell that is initially visible in the simulator; memory is allocated for those cells and their content is set. When the table view is scrolled, the same cells are reused by just changing their content, thereby saving memory. In this way we can manage memory effectively.
  • Create an identifier for the cell of type NSString, which is used to identify the kind of cell you want to create. The table view checks whether a cell has already been created by passing the cell identifier to the method "dequeueReusableCellWithIdentifier:". The method returns the cell if one was created before; otherwise it returns nil.
  • If it is nil, allocate memory for a cell by specifying the cell style and the identifier.
  • Now that we have the cell, we can set its properties. Here we are setting the text of the label that is present in the cell.
  • Normally a cell contains a content view and an accessory view. You can also set other properties of the content view, namely textLabel for the title, detailTextLabel for the subtitle, and an image view for a thumbnail.

Run the application. We can see the tableview with the list of items.

tableviewpopulated

Implementing UITableViewDelegate protocol methods

  • Our next requirement is to check and uncheck items when a cell is selected, which indicates whether or not the item has been purchased. If the item is purchased, the value for that item in the dictionary is set to YES; otherwise it is set to NO. To achieve this we make use of the accessory view present in the cell: we set the accessory for the cell when the user taps or selects it in the table view.
  • To perform an action when we interact with the table view, we make use of the UITableViewDelegate protocol. Conform to the protocol in the .h file as shown below and set the delegate to the current instance (self) of the ViewController class.

In ViewController.h

[code]
@interface ViewController : UIViewController<UITableViewDataSource,UITableViewDelegate>[/code]

In ViewController.m (the code should be placed in the viewDidLoad method)

[code]
self.itemsTableView.delegate = self;
[/code]

  • There are several methods in the UITableViewDelegate protocol; the method "didSelectRowAtIndexPath:" is called when the user taps a cell. We will implement this method to achieve the functionality we require.

[code]
-(void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath{
UITableViewCell *cell = [tableView cellForRowAtIndexPath:indexPath];

if(cell.accessoryType == UITableViewCellAccessoryCheckmark){
[cell setAccessoryType:UITableViewCellAccessoryNone];
[self.itemDetailsList setObject:[NSNumber numberWithBool:NO] forKey:[self.allKeys objectAtIndex:indexPath.row]];
}
else{
[cell setAccessoryType:UITableViewCellAccessoryCheckmark];
[self.itemDetailsList setObject:[NSNumber numberWithBool:YES] forKey:[self.allKeys objectAtIndex:indexPath.row]];
}
}
[/code]

  • In the above code, on selecting the cell we check whether the cell already has a check mark. If it does, we set its accessory type to none and update the dictionary by setting the item value to NO, which indicates that the item is not purchased. If the cell does not have a check mark, we set the accessory type to the check mark and update the dictionary accordingly.
  • One last change should be made in "cellForRowAtIndexPath:" to make our application work correctly. Since we reuse cells, we should also update the accessory type of the cell along with the textLabel while loading. The complete code for "cellForRowAtIndexPath:" is given below.

[code]
-(UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath{

static NSString *CellIdentifier = @"Cell";
UITableViewCell *cell = nil;
cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];

if(cell == nil){
cell = [[UITableViewCell alloc]initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier];
}

// Configure the cell.

cell.textLabel.text =[self.allKeys objectAtIndex:indexPath.row];

//Set the accessory type for the cell

BOOL selectedVal = [[self.itemDetailsList objectForKey:[self.allKeys objectAtIndex:indexPath.row]]boolValue];
if(selectedVal == YES) {
cell.accessoryType = UITableViewCellAccessoryCheckmark;
}
else{
cell.accessoryType = UITableViewCellAccessoryNone;
}

return cell;
}
[/code]

UITableView in iOS Demo

finalApp

Filed Under: Apple Tagged With: iOS, XCode

IOS Tutorial : Hello World Application using Xcode 5.0

May 9, 2014 by itadmin

Xcode 5.0 is an IDE for software development, developed by Apple. Xcode 5.0 and later lets you design and develop applications for iOS 7 and Mac OS X. You can download the IDE from the Apple developer website for Mac OS.

Let us start developing our first application in Xcode, a "Hello World" application that displays the message "Hello World" in the iPhone simulator. The application below is a single view application; its interface contains a label that displays the message "Hello World". It is a very simple application that gives us an idea of how to use Xcode to create, run, and debug an application.

1. Create XCode Project

Create a new Xcode project: open Xcode, click File->New->Project, and a window opens as shown below. Choose iOS from the left panel and "Single View Application" from the right panel, then click Next.

Create XCode Project

Type the project name "HelloWorldApp" and optionally give your organization name and company identifier. These details are used to create the "Bundle Identifier", which is used when submitting your app to the App Store and must be unique among the applications in the store.

XCode Options

2. Files in Project Navigator

Now that the project is created, we can look into few files present in the project navigator.

  • The "main.m" file, present under the "Supporting Files" group, calls the "AppDelegate" class, which in turn launches the application.
  • AppDelegate.h and AppDelegate.m make up the application delegate class, which helps to launch the application; the "didFinishLaunchingWithOptions" method can be used to perform any action while the application is launching. The class also helps us manage the application when it goes to the background or comes to the foreground, through the delegate methods present in the AppDelegate class.
  • Main.storyboard consists of a view controller. The view controller contains a view where we can design our UI.
  • ViewController.h and ViewController.m are used to control the view; by default the view in Main.storyboard is mapped to the ViewController class. You can check this in the Class attribute in the Identity Inspector (a snapshot is given below).

Files in Project Navigator

XCode Files Navigator

3. Designing User Interface

Drag and drop a label onto the view from the Object library in the right panel, as shown below.

XCode Designing User Interface

4. Connecting UILabel to ViewController

Connect the UILabel outlet to the view controller class so that we can access the UILabel and modify its properties.

  • Click on the Assistant Editor button in the toolbar to display the controller class (ViewController.h) on the right side.

Connecting UILabel to ViewController

  • Select the label in the view, Ctrl-click and drag from it to ViewController.h, insert an outlet named "messageLabel" for the UILabel, and click "Connect" as shown below.

Snapshot6

5. XCode Example Application

Once we have created an outlet and connected it to the view controller class, we can start coding.

  • The following code will be present in the ViewController.h class once the UILabel outlet has been connected.

[code]#import <UIKit/UIKit.h>
@interface ViewController : UIViewController
@property (weak, nonatomic) IBOutlet UILabel *messageLabel;
@end
[/code]

  • Write the following code in the ViewController.m class to display the message "Hello World" when we run the application.
  • On loading the view, we set the text property of the label. The code is written in the "viewDidLoad" method.

[code]@interface ViewController ()
@end
@implementation ViewController

- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.

self.messageLabel.text = @"Hello World";
}

- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
@end

[/code]

6. Build and Run The XCode Application

  • Now that we have completed the application, let us build and run it.
  • In the menu bar, click Product->Build or press Command+B to build the application.
  • In the menu bar, click Product->Run, press Command+R, or click the Run button to run the application.
  • We can also select the simulator by clicking on "Set Active Scheme".

XCode Run Application

When we run the application, it looks like the following:

XCode Hello World Application

7. Debugging

To debug, set a breakpoint as shown below and build. Control stops at the breakpoint when we run the application, and we can resume, step into, or step over the function using the options shown in the snapshot below.

XCode Debug

Snapshot10


Filed Under: Apple Tagged With: iOS, XCode 5

How to create EJB project in NetBeans 7.0?

June 13, 2011 by itadmin

NetBeans IDE 7 Cookbook

Welcome to the NetBeans Cookbook.

NetBeans is a Java Integrated Development Environment (IDE) which enables fast application development with the most widely adopted frameworks, technologies, and servers.

also read:

  • Java EE Tutorials
  • EJB Interview Questions
  • EJB 3 Web Services
  • Annotation and Dependency Injection in EJB 3
  • Query API in EJB 3

Unlike other IDEs, NetBeans comes pre-packaged with a wide range of functionality out of the box, such as support for different frameworks, servers, databases, and mobile development.

This book does require minimal knowledge of the Java platform, more specifically the language itself. But the book can be used both by beginners who are trying to dip their toes into new technology, and by more experienced developers who are switching from other IDEs and want to flatten the learning curve of a new environment. NetBeans integrates so many different technologies, many of which are present in this book, that it is beyond the scope of this book to cover all of them in depth. We provide the reader with links and information on where to go when further knowledge is required.

What This Book Covers

Chapter 1, NetBeans Head First introduces the developer to the basics of NetBeans by creating basic Java projects and importing Eclipse or Maven projects.

Chapter 2, Basic IDE Usage covers the creation of packages, classes, and constructors, as well as some usability features.

Chapter 3, Designing Desktop GUI Applications goes through the process of creating a desktop application, then connecting it to a database and even modifying it to look more professional.

Chapter 4, JDBC and NetBeans helps the developer to setup NetBeans with the most common database systems on the market and shows some of the functionality built-in to NetBeans for handling SQL.

Chapter 5, Building Web Applications introduces the usage of web frameworks such as JSF, Struts, and GWT.

Chapter 6, Using JavaFX explains the basics of JavaFX application states and connecting our JavaFX app to a web service interface.

Chapter 7, EJB Application goes through the process of building an EJB application which supports JPA, stateless, and stateful beans and sharing a service through a web service interface.

Chapter 8, Mobile Development teaches how to create your own CLDC or CDC applications with the help of NetBeans Visual Mobile Designer.

Chapter 9, Java Refactoring lets NetBeans refactor your code to extract classes, interfaces, encapsulate fields, and other options.

Chapter 10, Extending the IDE includes handy examples on how to create your own panels and wizards so you can extend the functionality of the IDE.

Chapter 11, Profiling and Testing covers NetBeans Profiler, HTTP Monitor, and integration with tools that analyze code quality and load generator.

Chapter 12, Version Control shows how to configure NetBeans to be used with the most common version control systems on the market.

EJB Application

In this chapter, we will cover:

  • Creating an EJB project
  • Adding JPA support
  • Creating Stateless Session Bean
  • Creating Stateful Session Bean
  • Sharing a service through Web Service
  • Creating a Web Service client

Introduction

Enterprise Java Beans (EJB) is a framework of server-side components that encapsulates business logic.

These components adhere to strict specifications on how they should behave. This ensures that vendors who wish to implement EJB-compliant code must follow conventions, protocols, and classes ensuring portability.

The EJB components are then deployed in EJB containers, also called application servers, which manage persistence, transactions, and security on behalf of the developer.

If you wish to learn more about EJBs, visit http://jcp.org/en/jsr/detail?id=318 or https://www.packtpub.com/developer-guide-for-ejb3/book.

For our EJB application to run, we will need the application servers. Application servers are responsible for implementing the EJB specifications and creating the perfect environment for our EJBs to run in.

Some of the capabilities supported by EJB and enforced by Application Servers are:

  • Remote access
  • Transactions
  • Security
  • Scalability

NetBeans 6.9, or higher, supports the new Java EE 6 platform, making it the only IDE so far to bring the full power of EJB 3.1 to a simple IDE interface for easy development.

NetBeans makes it easy to develop an EJB application and deploy on different Application Servers without the need to over-configure and mess with different configuration files. It’s as easy as a project node right-click.

Creating EJB project

In this recipe, we will see how to create an EJB project using the wizards provided by NetBeans.

Getting ready

It is required to have NetBeans with Java EE support installed to continue with this recipe.

If this particular NetBeans version is not available in your machine, then you can download it from http://download.netbeans.org.

There are two application servers in this installation package, Apache Tomcat and GlassFish; either one can be chosen, but at least one is necessary.

In this recipe, we will use the GlassFish version that comes together with NetBeans 7.0 installation package.

How to do it…

  1. Let's create a new project by either clicking File and then New Project, or by pressing Ctrl+Shift+N.
  2. In the New Project window, on the Categories side, choose Java Web, and on the Projects side, select Web Application, then click Next.
  3. In Name and Location, under Project Name, enter EJBApplication.
  4. Tick the Use Dedicated Folder for Storing Libraries option box.
  5. Now either type the folder path or select one by clicking on browse.
  6. After choosing the folder, we can proceed by clicking Next.
  7. In Server and Settings, under Server, choose GlassFish Server 3.1.
  8. Tick Enable Contexts and Dependency Injection.
  9. Leave the other values with their default values and click Finish.

The new project structure is created.

How it works…

NetBeans creates a complete file structure for our project.

It automatically configures the compiler and test libraries and creates the GlassFish deployment descriptor.

The deployment descriptor filename specific for the GlassFish web server is glassfish-web.xml.

Adding JPA support

The Java Persistence API (JPA) is one of the frameworks that equips Java with object/relational mapping. JPA provides a query language that lets developers abstract away the underlying database.

With the release of JPA 2.0, there are many areas that were improved, such as:

  • Domain Modeling
  • EntityManager
  • Query interfaces
  • JPA query language and others

We are not going to study the inner workings of JPA in this recipe. If you wish to know more about JPA, visit http://jcp.org/en/jsr/detail?id=317 or http://download.oracle.com/javaee/5/tutorial/doc/bnbqa.html.

NetBeans provides very good support for enabling your application to quickly create entities annotated with JPA.

In this recipe, we will see how to configure your application to use JPA. We will continue to expand the previously-created project.

Getting ready

We will use GlassFish Server in this recipe since it is the only server that supports Java EE 6 at the moment.

We also need to have Java DB configured. GlassFish already includes a copy of Java DB in its installation folder. Another source of installed Java DB is the JDK installation directory. If you wish to learn how to configure Java DB, please refer to Chapter 4, JDBC and NetBeans.

It is not necessary to build on top of the previous recipe, but it is imperative to have a database schema. Feel free to create your own entities by following the steps presented in this recipe.

How to do it…

  1. Right-click on the EJBApplication node and select New Entity Classes from Database….
  2. In Database Tables: under Data Source, select jdbc/sample and let the IDE initialize Java DB.
  3. When Available Tables is populated, select MANUFACTURER, click Add, and then click Next.
  4. In Entity Classes: leave all the fields with their default values, enter entities in Package only, and click Finish.

How it works…

NetBeans then imports and creates our Java class from the database schema, in our case the Manufacturer.java file placed under the entities package.

Besides that, NetBeans makes it easy to import and start using the entity straightaway. Many of the most common queries, for example find by name, find by zip, and find all, are already built into the class itself.

The JPA queries, which are akin to normal SQL queries, are defined in the entity class itself. Listed below are some of the queries defined in the entity class Manufacturer.java:

[code lang="java"]
@Entity
@Table(name = "MANUFACTURER")
@NamedQueries({
    @NamedQuery(name = "Manufacturer.findAll", query = "SELECT m FROM Manufacturer m"),
    @NamedQuery(name = "Manufacturer.findByManufacturerId", query = "SELECT m FROM Manufacturer m WHERE m.manufacturerId = :manufacturerId"),
[/code]

The @Entity annotation declares that this class, Manufacturer.java, is an entity, and the @Table annotation that follows it, with its name parameter, points to the table in the database where the information is stored.

The @NamedQueries annotation is the place where all the NetBeans-generated JPA queries are stored. There can be as many @NamedQueries as the developer feels necessary. One of the NamedQueries we are using in our example is named Manufacturer.findAll, which is a simple select query. When invoked, the query is translated to:

[code lang=”java”] SELECT m FROM Manufacturer m[/code]

On top of that, NetBeans implements the equals, hashCode, and toString methods, which is very useful if the entities need to be used straight away with collections such as HashMap. Below is the NetBeans-generated code for the hashCode and equals methods:

[code lang="java"]
@Override
public int hashCode() {
    int hash = 0;
    hash += (manufacturerId != null ? manufacturerId.hashCode() : 0);
    return hash;
}

@Override
public boolean equals(Object object) {
    // TODO: Warning - this method won't work in the case the id fields are not set
    if (!(object instanceof Manufacturer)) {
        return false;
    }
    Manufacturer other = (Manufacturer) object;
    if ((this.manufacturerId == null && other.manufacturerId != null)
            || (this.manufacturerId != null && !this.manufacturerId.equals(other.manufacturerId))) {
        return false;
    }
    return true;
}
[/code]

NetBeans also creates a persistence.xml and provides a visual editor, simplifying the management of different persistence units (in case our project needs to use more than one), thereby making it possible to manage persistence.xml without even touching the XML code. A persistence unit, defined in persistence.xml, is the JPA configuration file placed under Configuration Files when the NetBeans view is in Projects mode. This file defines the data source and the name of the persistence unit in our example:

[code lang="xml"]
<persistence-unit name="EJBApplicationPU" transaction-type="JTA">
    <jta-data-source>jdbc/sample</jta-data-source>
    <properties/>
</persistence-unit>
[/code]

The persistence.xml is placed in the configuration folder when using the Projects view. In our example, the persistence unit name is EJBApplicationPU, using jdbc/sample as the data source.

To add more PUs, click on the Add button that is placed on the uppermost right corner of the Persistence Visual Editor.

This is an example of adding another PU to our project:

Creating Stateless Session Bean

A Session Bean encapsulates business logic in methods, which in turn are executed by a client. This way, the business logic is separated from the client.

Stateless Session Beans do not maintain state. This means that when a client invokes a method in a Stateless bean, the bean is ready to be reused by another client. The information stored in the bean is generally discarded when the client stops accessing the bean.

This type of bean is mainly used for persistence purposes, since persistence does not require a conversation with the client.

It is not in the scope of this recipe to learn how Stateless Beans work in detail. If you wish to learn more, please visit http://jcp.org/en/jsr/detail?id=318 or https://www.packtpub.com/developer-guide-for-ejb3/book.

In this recipe, we will see how to use NetBeans to create a Stateless Session Bean that retrieves information from the database, passes it through a servlet, and prints the information on a page generated on the fly by our servlet.

Getting ready

It is required to have NetBeans with Java EE support installed to continue with this recipe.

If this particular NetBeans version is not available in your machine, please visit http://download.netbeans.org.

We will use the GlassFish Server in this recipe since it is the only Server that supports Java EE 6 at the moment.

We also need to have Java DB configured. GlassFish already includes a copy of Java DB in its installation folder. If you wish to learn how to configure Java DB refer to the Chapter 4, JDBC and NetBeans.

It is possible to follow the steps on this recipe without the previous code, but for better understanding we will continue to build on the top of the previous recipes source code.

How to do it…

  1. Right-click on EJBApplication node and select New and Session Bean….
  2. For Name and Location: Name the EJB as ManufacturerEJB.
  3. Under Package, enter beans.
  4. Leave Session Type as Stateless.
  5. Leave Create Interface with nothing marked and click Finish.

Here are the steps for us to create business methods:

  1. Open ManufacturerEJB and inside the class body, enter:
    [code lang="java"]
    @PersistenceUnit
    EntityManagerFactory emf;

    public List findAll() {
        return emf.createEntityManager().createNamedQuery("Manufacturer.findAll").getResultList();
    }
    [/code]
  2. Press Ctrl+Shift+I to resolve the following imports:
    [code lang="java"]
    java.util.List;
    javax.persistence.EntityManagerFactory;
    javax.persistence.PersistenceUnit;
    [/code]

Creating the Servlet:

  1. Right-click on the EJBApplication node and select New and Servlet….
  2. For Name and Location: Name the servlet as ManufacturerServlet.
  3. Under package, enter servlets.
  4. Leave all the other fields with their default values and click Next.
  5. For Configure Servlet Deployment: Leave all the default values and click Finish.

With the ManufacturerServlet open:

After the class declaration and before the processRequest method, add:

[code lang=”java”] @EJB ManufacturerEJB manufacturerEJB; [/code]

Then inside the processRequest method, first line after the try statement, add:

[code lang=”java”] List<Manufacturer> l = manufacturerEJB.findAll();[/code]

Remove the /* TODO output your page here marker and the closing */, so the sample output code is no longer commented out.

And finally replace:

[code lang=”java”] out.println("<h1>Servlet ManufacturerServlet at " + request. getContextPath () + "</h1>"); [/code]

With:

[code lang="java"]
for (int i = 0; i < 10; i++)
    out.println("<b>City</b> " + l.get(i).getCity() + ", <b>State</b> " + l.get(i).getState() + "<br>");
[/code]

Resolve all the import errors and save the file.

How it works…

To execute the code produced in this recipe, right-click on the EJBApplication node and select Run.

When the browser launches, append /ManufacturerServlet to the end of the URL and hit Enter. Our application will return city and state names.

One of the coolest features in Java EE 6 is that the use of web.xml can be avoided by annotating the servlet. The following code does exactly that:

[code lang="java"] @WebServlet(name="ManufacturerServlet", urlPatterns={"/ManufacturerServlet"}) [/code]

Since we are working with Java EE 6, our Stateless bean does not need the daunting work of creating interfaces; the @Stateless annotation takes care of that, making it easier to develop EJBs.

We then add the persistence unit, represented by the EntityManagerFactory and inserted by the @PersistenceUnit annotation.

Finally we have our business method that is used from the servlet. The findAll method uses one of the named queries from our entity to fetch information from the database.

Creating Stateful Session Beans

If Stateless Session Beans do not maintain state, it is easy to guess what Stateful Session Beans do. Yes, they maintain the state.

When a client invokes a method in a stateful bean, the variables (state) of that request are kept in the memory by the bean. When more requests come in, the container makes sure that the same bean is used for the same client. This type of bean is useful when multiple requests
are required and several steps are necessary for completing a task.

Stateful Beans also enjoy the ease of development introduced by Java EE 6, meaning that they can be created by annotating a POJO with @Stateful.

It is not in the scope of this recipe to learn how Stateful Beans work in detail. If you wish to learn more, please visit http://jcp.org/en/jsr/detail?id=318 or https://www.packtpub.com/developer-guide-for-ejb3/book.

In this recipe, we will see how to use NetBeans to create a stateful session bean that holds a counter of how many times a request for a method was executed.

Getting ready

Please find the software requirements and configuration instructions for this recipe in the first Getting ready section of this chapter. This recipe builds on the sources of the previous recipes.

How to do it…

  1. Right-click on the EJBApplication node and select New Session Bean….
  2. For Name and Location: Name the EJB as CounterManufacturerEJB.
  3. Under Package, enter beans.
  4. Mark Session Type as Stateful.
  5. Leave Create Interface with nothing marked and click Finish.

Creating the business method

With CounterManufacturerEJB open, add the following variable:

[code lang=”java”] private int counter = 0;[/code]

Then right-click inside the class body and select Insert Code… (or press Alt+Insert) and select Add Business Method….

When the Add Business Method… window opens:

  1. Name it as counter and for Return Type, enter String.
  2. Click OK.

Replace the code inside the counter method with:

[code lang="java"]
counter++;
return "" + counter;
[/code]

Save the file.

Open ManufacturerServlet and after the class declaration and before the processRequest method:

  1. Right-click and select Insert Code… or press Alt+Insert.
  2. Select Call Enterprise Bean….
  3. In the Call Enterprise Bean window, expand the EJB Application node.
  4. Select CounterManufacturerEJB and click OK.

Below we see how the bean is injected using annotation:

[code lang=”java”] @EJB CounterManufacturerEJB counterManufacturerEJB; [/code]

Resolve the import errors by pressing Ctrl+Shift+I. Then add the following to the processRequest method:

[code lang=”java”] out.println("<b>Number of times counter was accessed<b> " + counterManufacturerEJB.counter() + "<br><br>" ); [/code]

Save the file.

How it works…

NetBeans presents the user with a very easy-to-use wizard for creating beans. As with the stateless bean, we are presented with different options for creating a bean; this time we select the Stateful bean. When clicking Finish, the IDE creates the EJB POJO class, places it in the beans package, and annotates the class with @Stateful, signifying that we have created a Stateful Session Bean.

We then proceed to add the logic in our EJB. Through another wizard, NetBeans makes it easy to add a business method. After pressing Alt+Insert, we are presented with the choices of what can be done in that context. After adding the code, we are ready to integrate our EJB
with the servlet.

Again, pressing Alt+Insert comes in handy when we want to create a reference to our EJB. After the correct bean is selected in the Call Enterprise Bean window, NetBeans creates the code:

[code lang=”java”] CounterManufacturerEJB counterManufacturerEJB = lookupCounterManufacturerEJBBean(); [/code]

And also:

[code lang=”java”] private CounterManufacturerEJB lookupCounterManufacturerEJBBean() {
    try {
        Context c = new InitialContext();
        return (CounterManufacturerEJB) c.lookup("java:global/EJBApplication/CounterManufacturerEJB!beans.CounterManufacturerEJB");
    } catch (NamingException ne) {
        Logger.getLogger(getClass().getName()).log(Level.SEVERE, "exception caught", ne);
        throw new RuntimeException(ne);
    }
} [/code]

This boatload of code is created by the IDE and lets the developer fine-tune things such as logging of exceptions and other customizations. In fact, this is the way EJBs were called prior to annotations being introduced to Java EE. The method simply calls the application server context with the lookup method, using the naming conventions of Remote Method Invocation (RMI) to locate our EJB, and assigns the reference to the object itself. Notice that all this code could be simplified to:

[code lang=”java”] @EJB CounterManufacturerEJB counterManufacturerEJB; [/code]

But we tried to show how much liberty and options the developer has in NetBeans.

There’s more…

Disabling GlassFish's session preservation.

GlassFish and sessions

To keep sessions alive in our Application Server GlassFish, we need to navigate to the Services window:

  1. There we will need to expand the Servers node.
  2. Right-click on GlassFish and select Properties.
  3. Untick Preserve Sessions Across Redeployment if you do not want this feature.

This option preserves the HTTP sessions even when GlassFish has been redeployed. If the data has been stored in a session, it will be available next time a redeployment occurs.

Sharing a service through Web Service

Web services are APIs which, in the case of this recipe, give access to some data over a network, from any platform and using any programming language.

In the world of cloud computing, web services have become an increasingly popular way for companies to let developers create applications using their data. A good example of this is Twitter. Thanks to the exposure of Twitter data through web services, it has been possible to create numerous Twitter clients on virtually all platforms. In this recipe, we will create a web service that returns information from a database table; we will see that this information can be transferred either in XML or JavaScript Object Notation (JSON) format. JSON provides the user with simpler data access than XML, since it does not need a bunch of tags and nested tags to work.

Getting ready

It is required to have NetBeans with Java EE support installed to continue with this recipe. If this particular NetBeans version is not available in your machine, please visit: http://netbeans.org

We will use the GlassFish Server in this recipe since it is the only server that supports Java EE 6 at the moment.

We also need to have Java DB configured. GlassFish already includes a copy of Java DB in its installation folder. If you wish to learn how to configure Java DB, refer to Chapter 4, JDBC and NetBeans.

It is possible to complete this recipe with any existing database schema and EJB application. However, for the sake of brevity, we will use the sources from the previous recipes.

How to do it…

Right-click on the EJBApplication node, select New, then Other, then Web Services, and finally RESTful Web Services from Entity Classes….

  1. For Entity Classes: On Available Entity classes, select Manufacturer, click Add, and click Next.
  2. For Generated Classes: Leave all the fields with their default values and click Finish.

A new dialog, REST Resources Configuration, pops up; select the first option and click OK.

How it works…

The REST Resources Configuration dialog asks the user which way the RESTful resources should be accessed, presenting three different options. We have chosen to use javax.ws.rs.core.Application, instead of the web.xml option, because it is the standard in Java EE 6 and thus increases the portability of the application. The second option allows the developer to code their way through registering the resources and choosing the service path.
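For reference, a javax.ws.rs.core.Application subclass is a very small piece of code. The sketch below is only illustrative: NetBeans generates its own version, and the class name and path used here are assumptions.

[code lang=”java”] package service;

import java.util.HashSet;
import java.util.Set;
import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

// Registers the JAX-RS resources under the /resources path, with no web.xml entry needed
@ApplicationPath("resources")
public class ApplicationConfig extends Application {

    @Override
    public Set<Class<?>> getClasses() {
        Set<Class<?>> classes = new HashSet<Class<?>>();
        classes.add(ManufacturerFacadeREST.class);
        return classes;
    }
}
[/code]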

To take a look at the generated files, expand the service package. Two Java files are present: AbstractFacade.java and ManufacturerFacadeREST.java.

Opening the ManufacturerFacadeREST.java will show that this file is actually a stateless EJB created by the IDE that is used to interface with the database and retrieve information from it.
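A trimmed-down sketch of such a facade is shown below. The real generated class extends AbstractFacade and exposes the full CRUD set (create, edit, remove, find, findAll, findRange, count); the persistence unit name used here is an assumption.

[code lang=”java”] package service;

import java.util.List;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

@Stateless
@Path("manufacturer")
public class ManufacturerFacadeREST {

    @PersistenceContext(unitName = "EJBApplicationPU")
    private EntityManager em;

    // Returns a single entity as XML or JSON, depending on the Accept header
    @GET
    @Path("{id}")
    @Produces({"application/xml", "application/json"})
    public Manufacturer find(@PathParam("id") Integer id) {
        return em.find(Manufacturer.class, id);
    }

    // Returns the full list of manufacturers
    @GET
    @Produces({"application/xml", "application/json"})
    public List<Manufacturer> findAll() {
        return em.createNamedQuery("Manufacturer.findAll", Manufacturer.class).getResultList();
    }
}
[/code]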

NetBeans also automatically generates a converter for our ManufacturerResource. This converter is used for creating a resource representation from the corresponding entity instance. Those classes can be found in the converter package.

There’s more…

Using NetBeans to test the web services.

Testing the web service

Now that we have created a RESTful web service, we need to know if everything is working correctly or not.

To test our web service, right-click EJBApplication and select Test RESTful Web Service. NetBeans will then build and deploy our application to GlassFish and point the browser to the web service test page.

When the Test RESTful Web Service page opens, click on the Test button on the right side.

Upon clicking Test, the test request is sent to the server. The results can be seen in the response section.

Under Tabular View, it is possible to click in the URI and get the XML response from the server.

Raw View, on the other hand, returns the entire response, as it would be handled by an application.

It is also possible to change the format in which the response is generated. Simply change the Choose method to test drop-down from GET(application/xml) to GET(application/json) and click Test. Then click on Raw View to get a glimpse of the response.

Creating a web service client

In this recipe, we will use Google Maps to show how NetBeans enables developers to quickly create an application using web services provided by third parties.

Getting ready

It is required to have NetBeans with Java EE support installed to continue with this recipe. If this particular NetBeans version is not available in your machine, please visit: http://netbeans.org

We will use the GlassFish Server in this recipe, since it is the only server that supports Java EE 6 at the moment.

For our recipe to work, we will need a valid key for the Google Maps API. The key can be found at: http://code.google.com/apis/maps/signup.html

On the site, we will generate the key. Tick the box that says I have read and agree with the terms and conditions, after reading and agreeing of course.

Under My website URL, enter: http://localhost:8080

Or the correct port in which GlassFish is registered. Then click on Generate API key.

The generated key looks something like:

[code lang=”java”] ABQIAFDAc4cEkV3R2yqZ_ooaRGXD1RT8M0brOpm-All5BF9Po1KBxRWWERQsusT9yyKEXQ AGcYfTLTyArx88Uw [/code]

Save this key, we will be using it later.

How to do it…

Creating the Java Web Project

  1. Click File and then New Project or Press Ctrl+Shift+N.
  2. For New Project: On the Categories side, choose Java Web and on the Projects side,
    select WebApplication.
  3. Click Next.
  4. For Name and Location, under Project Name, enter WebServiceClient.
  5. Tick the box on Use Dedicated Folder for Storing Libraries.
  6. Now, either type the folder path or select one by clicking on browse.
  7. After choosing the folder, we can proceed by clicking Next.
  8. For Server and Settings: Under Server, choose GlassFish Server 3.1.
  9. Leave the other options with their default values and click Finish.

Creating Servlet

Right-click on the WebServiceClient project, and select New and then Servlet….

  1. For New Servlet: Under Class Name, enter WSClientServlet.
  2. And under package, enter servlet.
  3. Click Finish.

When the WSClientServlet opens in the editor, remove the code starting with:

/* TODO output your page here

And ending with:

*/

And save the file.

Adding a Web Service

Navigate to the Services window and expand the Web Services node, followed by Google, and finally Map Service.

Accepting a security certificate is required to access this service and to continue with the recipe. Please refer to the following screenshot:

Drag and drop getGoogleMap into our Servlet's processRequest method.

A new window, Customize getGoogleMap SaaS Service, pops up.

  1. Under Input Parameters, double-click the cell on the address row under the Default Value column, to change the value to the desired address (or keep it default if the provided one is okay).
  2. Click OK.

When the new block of code is written by NetBeans, uncomment the following line: //out.println("The SaasService returned: " + result.getDataAsString());

Remember the key generated in the Getting Ready section?

In the Projects window, expand the Source Packages node and the package org.netbeans.saas.google, and double-click on googlemapservice.properties.

Paste the key after the = operator.

The line should look like:

[code lang=”java”]api_key=ABQIAFDAc4cEkV3R2yqZ_ooaRGXD1RT8M0brOpm-All5BF9Po1KBxRWWERQsu
sT9yyKEXQAGcYfTLTyArx88Uw[/code]

Save the file, open WSClientServlet, and press Shift+F6. When the Set Servlet Execution URI window pops up, click OK. The browser will open with our application path already in place and it will display the following:

How it works…

After dragging and dropping the Google Web Service to our class, a folder structure is created by NetBeans:

Let’s check what is in our folder structure:

  • GoogleMapsService.java: Checks the coordinates given by the developer, reads the API key from the properties file, and returns the HTML text needed to access Google Maps. This is the class our servlet uses to interact with the other classes and with Google.
  • RestConnection.java: Responsible for establishing the connection to the Google servers.
  • RestResponse.java: Holds the actual data returned from Google.

There’s more…

Discovering other web services bundled with the IDE.

Other services

There are many other web services available in the Web Service section of the IDE.

Services such as:

  • Amazon: EC2 and S3
  • Flickr
  • WeatherBug

It is just a matter of checking the documentation of the service provider, and starting to code your own implementation. Try it out!

Filed Under: Java EE Tagged With: EJB, EJB 3

Error starting modern compiler in Ant and Eclipse

April 15, 2011 by itadmin Leave a Comment

Eclipse and ANT Build Tool Error

You may have come across this compiler error (error starting modern compiler) many times during development. This error occurs when the Java runtime actually loaded by the environment differs from the Java runtime that another tool is pointing to. This tip explains, with an example, how this error occurs when you run the Ant build tool inside Eclipse. The following screenshots show where you need to check and change the setting.

Here the JRE selection must be the same as the one Eclipse uses for running the application. If it is different, you will get the error starting modern compiler message.

Filed Under: Apache Ant Tagged With: Eclipse

JBoss AS 5 Performance Tuning

January 29, 2011 by itadmin Leave a Comment

JBoss AS 5 Performance Tuning will teach you how to deliver fast applications on the JBoss Application Server and Apache Tomcat, giving you a decisive competitive advantage over your competitors. You will learn how to optimize hardware resources, meeting your application requirements with less expenditure.

also read:

  • WebLogic Interview Questions
  • JBoss Portal Server Development
  • Tomcat Interview Questions

The performance of Java Enterprise applications is the sum of a set of components including the Java Virtual Machine configuration, the application server configuration (in our case, JBoss AS), the application code itself, and ultimately the operating system. This book will show you how to apply the correct tuning methodology and use the tuning tools that will help you to monitor and address any performance issues.

By looking more closely at the Java Virtual Machine, you will get a deeper understanding of what the available options are for your applications, and how their performance will be affected. Learn about thread pool tuning, EJB tuning, and JMS tuning, which are crucial parts of enterprise applications.

The persistence layer and the JBoss Clustering service are two of the most crucial elements which need to be configured correctly in order to run a fast application. These aspects are covered in detail with a chapter dedicated to each of them.

Finally, Web server tuning is the last (but not least) topic covered, which shows how to configure and develop web applications that get the most out of the embedded Tomcat web server.

What This Book Covers

Chapter 1, Performance Tuning Concepts, discusses correct tuning methodology and how it fits in the overall software development cycle.

Chapter 2, Installing the Tools for Tuning, shows how to install and configure the instruments for tuning, including VisualVM, JMeter, Eclipse TPTP Platform, and basic OS tools.

Chapter 3, Tuning the Java Virtual Machine, provides an in-depth analysis of the JVM heap and garbage collector parameters, which are used to start up the application server.

Chapter 4, Tuning the JBoss AS, discusses the application server’s core services including the JBoss System Thread Pool, the Connection Pool, and the Logging Service.

Chapter 5, Tuning the Middleware Services, covers the tuning of middleware services including the EJB and JMS services.

Chapter 6, Tuning the Persistence Layer, introduces the principles of good database design and the core concepts of Java Persistence API with special focus on JBoss‘s implementation (Hibernate).

Chapter 7, JBoss AS Cluster Tuning, covers the JBoss Clustering service, including the low-level details of server communication and how to use JBoss Cache for optimal data replication and caching.

Chapter 8, Tomcat Web Server Tuning, covers the JBoss Web server performance tuning including mod_jk, mod_proxy, and mod_cluster modules.

Chapter 9, Tuning Web Applications on JBoss AS, discusses developing fast web applications using JSF API and JBoss richfaces libraries.

JBoss AS Cluster Tuning

6th Circle of Hell: Heresy. This circle houses administrators who accurately set up
a cluster to use Buddy Replication. Without caring about sticky sessions.

Clustering allows us to run applications on several parallel instances (also known as cluster nodes). The load is distributed across different servers, and even if any of the servers fails, the application is still accessible via other cluster nodes. Clustering is crucial for scalable Enterprise applications, as you can improve performance by simply adding more nodes to the cluster.
In this chapter, we will cover the basic building blocks of JBoss Clustering with the following schedule:

  • A short introduction to JBoss Clustering platform
  • In the next section we will cover the low level details of the JGroups library, which is used for all clustering-related communications between nodes
  • In the third section we will discuss JBoss Cache, which provides distributed cache and state replication services for the JBoss cluster on top of the JGroups library

Introduction to JBoss clustering

Clustering plays an important role in Enterprise applications as it lets you split the load of your application across several nodes, granting robustness to your applications. As we discussed earlier, for optimal results it’s better to limit the size of your JVM to a maximum of 2-2.5GB, otherwise the dynamics of the garbage collector will decrease your application’s performance.

Combining relatively smaller Java heaps with a solid clustering configuration can lead to a better, scalable configuration plus significant hardware savings.

The only drawback to scaling out your applications is an increased complexity in the programming model, which needs to be correctly understood by aspiring architects.

JBoss AS comes out of the box with clustering support. There is no all-in-one library that deals with clustering but rather a set of libraries, which cover different kinds of aspects. The following picture shows how these libraries are arranged:

The backbone of JBoss Clustering is the JGroups library, which provides the communication between members of the cluster. Built upon JGroups we meet two building blocks, the JBoss Cache framework and the HAPartition service. JBoss Cache handles the consistency of your application across the cluster by means of a replicated and transactional cache.

On the other hand, HAPartition is an abstraction built on top of a JGroups Channel that provides support for making and receiving RPC invocations from one or more cluster members. For example HA-JNDI (High Availability JNDI) or HA Singleton (High Availability Singleton) both use HAPartition to share a single Channel and multiplex RPC invocations over it, eliminating the configuration complexity and runtime overhead of having each service create its own Channel. (If you need more information about the HAPartition service you can consult the JBoss AS documentation http://community.jboss.org/wiki/jBossAS5ClusteringGuide.). In the next section we will learn more about the JGroups library and how to configure it to reach the best performance for clustering communication.

Configuring JGroups transport

Clustering requires communication between nodes to synchronize the state of running applications or to notify changes in the cluster definition. JGroups (http://jgroups.org/manual/html/index.html) is a reliable group communication toolkit written entirely in Java. It is based on IP multicast, but extends it by providing reliability and group membership.

Member processes of a group can be located on the same host, within the same Local Area Network (LAN), or across a Wide Area Network (WAN). A member can be in turn part of multiple groups. The following picture illustrates a detailed view of JGroups architecture:

A JGroups process consists basically of three parts, namely the Channel, Building blocks, and the Protocol stack. The Channel is a simple socket-like interface used by application programmers to build reliable group communication applications. Building blocks are an abstraction interface layered on top of Channels, which can be used instead of Channels whenever a higher-level interface is required. Finally we have the Protocol stack, which implements the properties specified for a given channel.
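As an illustration of the Channel abstraction in isolation (on JBoss AS you normally never write this yourself, since the server manages a shared Channel for you), a standalone program using the JGroups 2.x API could look like this sketch:

[code lang=”java”] import org.jgroups.JChannel;
import org.jgroups.Message;
import org.jgroups.ReceiverAdapter;

// Minimal JGroups usage: join a group, print incoming messages, send one of our own
public class ChannelDemo {
    public static void main(String[] args) throws Exception {
        JChannel channel = new JChannel();      // uses the default protocol stack
        channel.setReceiver(new ReceiverAdapter() {
            @Override
            public void receive(Message msg) {
                System.out.println("received: " + msg.getObject());
            }
        });
        channel.connect("demo-cluster");        // group name shared by all members
        channel.send(new Message(null, null, "hello"));
        Thread.sleep(5000);                     // give other members time to answer
        channel.close();
    }
}
[/code]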

In theory, you could configure every service to bind to a
different Channel. However this would require a complex thread
infrastructure with too many thread context switches. For this
reason, JBoss AS is configured by default to use a single Channel
to multiplex all the traffic across the cluster.

The Protocol stack contains a number of layers in a bi-directional list. All messages sent and received over the channel have to pass through all protocols. Every layer may modify, reorder, pass or drop a message, or add a header to a message. A fragmentation layer might break up a message into several smaller messages, adding a header with an ID to each fragment, and re-assemble the fragments on the receiver’s side.

The composition of the Protocol stack (that is, its layers) is determined by the creator of the channel: an XML file defines the layers to be used (and the parameters for each layer).

Knowledge about the Protocol stack is not necessary when
just using Channels in an application. However, when an
application wishes to ignore the default properties for a Protocol
stack, and configure their own stack, then knowledge about what
the individual layers are supposed to do is needed.

In JBoss AS, the configuration of the Protocol stack is located in the file \deploy\cluster\jgroups-channelfactory.sar\META-INF\jgroups-channelfactory-stacks.xml.

The file is too large to fit here; however, in a nutshell, it contains the following basic elements:

The first part of the file includes the UDP transport configuration. UDP is the default protocol for JGroups and uses multicast (or, if not available, multiple unicast messages) to send and receive messages.

A multicast UDP socket can send and receive datagrams from multiple
clients. The interesting and useful feature of multicast is that a client
can contact multiple servers with a single packet, without knowing the
specific IP address of any of the hosts.

Next to the UDP transport configuration, three protocol stacks are defined:

  • udp: The default IP multicast based stack, with flow control
  • udp-async: The protocol stack optimized for high-volume asynchronous RPCs
  • udp-sync: The stack optimized for low-volume synchronous RPCs

Thereafter, the TCP transport configuration is defined. TCP stacks are typically used when IP multicasting cannot be used in a network (for example, because it is disabled) or because you want to create a network over a WAN (that's conceivably possible, but sharing data across remote geographical sites is a scary option from the performance point of view).

You can opt for two TCP protocol stacks:

  • tcp: Addresses the default TCP Protocol stack which is best suited to high-volume asynchronous calls.
  • tcp-sync: Addresses the TCP Protocol stack which can be used for low-volume synchronous calls.

If you need to switch to the TCP stack, you can simply include the following
in the command-line arguments that you pass to JBoss:
-Djboss.default.jgroups.stack=tcp
Since you are not using multicast in your TCP communication, this
requires configuring the addresses/ports of all the possible nodes in the
cluster. You can do this by using the property
-Djgroups.tcpping.initial_hosts. For example:
-Djgroups.tcpping.initial_hosts=host1[7600],host2[7600]

Ultimately, the configuration file contains two stacks which can be used for optimising JBoss Messaging Control Channel (jbm-control) and Data Channel (jbm-data).

How to optimize the UDP transport configuration

The default UDP transport configuration ships with a list of attributes, which can be tweaked once you know what they are for. A complete reference to the UDP transport configuration can be found in the JBoss clustering guide (http://docs.jboss.org/jbossclustering/cluster_guide/5.1/html/jgroups.chapt.html); for the purpose of our book we will point out which are the most interesting ones for fine-tuning your transport. Here's the core section of the UDP transport configuration:

The biggest performance hit can be achieved by properly tuning the attributes concerning buffer size (ucast_recv_buf_size, ucast_send_buf_size, mcast_recv_buf_size, and mcast_send_buf_size ).
[code lang=”java”] <UDP
singleton_name="shared-udp"
mcast_port="${jboss.jgroups.udp.mcast_port:45688}"
mcast_addr="${jboss.partition.udpGroup:228.11.11.11}"
tos="8"
ucast_recv_buf_size="20000000"
ucast_send_buf_size="640000"
mcast_recv_buf_size="25000000"
mcast_send_buf_size="640000"
loopback="true"
discard_incompatible_packets="true"
enable_bundling="false"
max_bundle_size="64000"
max_bundle_timeout="30"
. . . .
/>
[/code]
As a matter of fact, in order to guarantee optimal performance and adequate reliability of UDP multicast, it is essential to size network buffers correctly. With inappropriate network buffers, the chances are that you will experience a high frequency of UDP packets being dropped in the network layers, which therefore need to be retransmitted.

The default values for JGroups' UDP transmission are 20MB and 640KB for unicast transmission, and respectively 25MB and 640KB for multicast transmission. While these values sound appropriate for most cases, they can be insufficient for applications sending lots of cluster messages. Think about an application sending a thousand 1KB messages: with the default receive size, we will not be able to buffer all packets, thus increasing the chance of packet loss and costly retransmission.

Monitoring the intra-clustering traffic can be done through the jboss.jgroups domain MBeans. For example, in order to monitor the amount of bytes sent and received with the UDP transmission protocol, just open your jmx-console and point at the jboss.jgroups domain. Then select your cluster partition (DefaultPartition if you are running with default cluster settings). In the following snapshot (we are including only the relevant properties) we can see the amount of messages sent/received along with their size (in bytes).

Besides increasing the JGroups' buffer size, another important aspect to consider is that most operating systems allow a maximum UDP buffer size, which is generally lower than JGroups' defaults. For completeness, we include here a list of default maximum UDP buffer sizes:

So, as a rule of thumb, you should always configure your operating system to take advantage of the JGroups’ transport configuration. The following table shows the command required to increase the maximum buffer to 25 megabytes. You will need root privileges in order to modify these kernel parameters:
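Since the original table is not reproduced here, the following is only a sketch of how this is typically done on Linux; the exact keys and commands vary by operating system, so check your OS documentation:

[code lang=”java”] # Raise the maximum socket receive/send buffers to 25 MB (Linux, run as root)
sysctl -w net.core.rmem_max=26214400
sysctl -w net.core.wmem_max=26214400
[/code]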

Another option that is worth trying is enable_bundling, which specifies whether to enable message bundling. If true, the transport protocol would queue outgoing messages until max_bundle_size bytes have accumulated, or max_bundle_time milliseconds have elapsed, whichever occurs first.

The advantage of using this approach is that the transport protocol would send bundled queued messages in one single larger message. Message bundling can have significant performance benefits for channels using asynchronous high volume messages (for example, JBoss Cache components configured for REPL_ASYNC. JBoss Cache will be covered in the next section named Tuning JBoss Cache).

On the other hand, for applications based on a synchronous exchange of RPCs, the introduction of message bundling would introduce a considerable latency, so it is not recommended in this case. (That's the case with JBoss Cache components configured as REPL_SYNC.)

How to optimize the JGroups’ Protocol stack

The Protocol stack contains a list of protocol layers, which need to be crossed by the message. A layer does not necessarily correspond to a transport protocol: for example, a layer might take care of fragmenting the message or of assembling it. What's important to understand is that when a message is sent, it travels down the stack, while when it is received it walks the same way back up.

For example, in the next picture, the FLUSH protocol would be executed first, then the STATE, the GMS, and so on. Vice versa, when a message is received, it would meet the PING protocol first, then MERGE2, and so on up to FLUSH.

Following is the list of protocols triggered by the default UDP Protocol stack.
[code lang=”xml”]<stack name="udp"
description="Default: IP multicast based stack, with flow
control.">
<config>
<PING timeout="2000" num_initial_members="3"/>
<MERGE2 max_interval="100000" min_interval="20000"/>
<FD_SOCK/>
<FD timeout="6000" max_tries="5" shun="true"/>
<VERIFY_SUSPECT timeout="1500"/>
<pbcast.NAKACK use_mcast_xmit="false" gc_lag="0"
retransmit_timeout="300,600,1200,2400,4800"
discard_delivered_msgs="true"/>
<UNICAST timeout="300,600,1200,2400,3600"/>
<pbcast.STABLE stability_delay="1000"
desired_avg_gossip="50000"
max_bytes="400000"/>
<pbcast.GMS print_local_addr="true" join_timeout="3000"
shun="true"
view_bundling="true"
view_ack_collection_timeout="5000"/>
<FC max_credits="2000000" min_threshold="0.10"
ignore_synchronous_response="true"/>
<FRAG2 frag_size="60000"/>
<pbcast.STATE_TRANSFER/>
<pbcast.FLUSH timeout="0"/>
</config>
</stack>[/code]
The following table will shed some light on the above cryptic configuration:

While all the above protocols play a role in message exchanging, it’s not necessary that you know the inner details of all of them for tuning your applications. So we will focus just on a few interesting ones.

The FC protocol, for example, can be used to adapt the rate of messages sent to the rate of messages received. This has the advantage of creating a homogeneous rate of exchange, where no sending member overwhelms receiving nodes, thus preventing potential problems like buffers filling up and causing packet loss. Here's an example of FC configuration:
[code lang=”xml”] <FC max_credits="2000000"
min_threshold="0.10"
ignore_synchronous_response="true"/>[/code]
The message rate adaptation is done with a simple credit system in which each time a sender sends a message a credit is subtracted (equal to the amount of bytes sent). Conversely, when a receiver collects a message, a credit is added.

  • max_credits specifies the maximum number of credits (in bytes) and should obviously be smaller than the JVM heap size
  • min_threshold specifies the value of min_credits as a percentage of the max_credits element
  • ignore_synchronous_response specifies whether threads that have carried messages up to the application should be allowed to carry outgoing messages back down through FC without blocking for credits

The following image depicts a simple scenario where HostA is sending messages (and thus its max_credits is reduced) to HostB and HostC, which increase their max_credits accordingly.

The FC protocol, while providing control over the flow of messages, can be a bad choice for applications that issue synchronous group RPC calls. In this kind of application, if you have fast senders issuing messages, but some slow receivers across the cluster, the overall rate of calls will be slowed down. For this reason, remove FC from your protocol list if you are sending synchronous messages, or just switch to the udp-sync protocol stack.

Besides JGroups, some network interface cards (NICs) and switches
perform ethernet flow control (IEEE 802.3x), which causes overhead
to senders when packet loss occurs. In order to avoid a redundant flow
control, you are advised to remove ethernet flow control. For managed
switches, you can usually achieve this via a web or Telnet/SSH interface.
For unmanaged switches, unfortunately the only chance is to hope that
ethernet flow control is disabled, or to replace the switch.
If you are using NICs, you can disable ethernet flow control by means of
a simple shell command, for example, on Linux with the ethtool:
/sbin/ethtool -A eth0 autoneg off tx on rx on
If you want simply to verify if ethernet flow control is off:
/sbin/ethtool -a eth0

One more thing you must be aware of is that, by using JGroups, cluster nodes must store all messages received for potential retransmission in case of a failure. However, if we store all messages forever, we will run out of memory. The distributed garbage collection service in JGroups periodically removes messages that have been seen by all nodes from the memory in each node. The distributed garbage collection service is configured in the pbcast.STABLE sub-element like so:
[code lang=”xml”] <pbcast.STABLE stability_delay="1000"
desired_avg_gossip="5000"
max_bytes="400000"/>
[/code]
The configurable attributes are as follows:

  • desired_avg_gossip: Specifies the interval (in milliseconds) between garbage collection runs. Setting this parameter to 0 disables this service.
  • max_bytes: Specifies the maximum number of bytes to receive before triggering a garbage collection run. Setting this parameter to 0 disables this service.

You are advised to set a max_bytes value if you have a high-traffic cluster.

Tuning JBoss Cache

JBoss Cache provides the foundation for many clustered services, which need to synchronize application state information across the set of nodes.

The cache is organized as a tree, with a single root. Each node in the tree essentially contains a map, which acts as a store for key/value pairs. The only requirement placed on objects that are cached is that they implement java.io.Serializable.

 Actually EJB 3 Stateful Session Beans, HttpSessions, and Entity/Hibernate rely on JBoss Cache to replicate information across the cluster. We have discussed thoroughly data persistence in Chapter 6, Tuning the Persistence Layer, so we will focus in the next sections on SFSB and HttpSession cluster tuning.

The core configuration of JBoss Cache is contained in the JBoss Cache Service. In JBoss AS 5, the scattered cache deployments have been replaced with a new CacheManager service, deployed via the /deploy/cluster/jboss-cache-manager.sar/META-INF/jboss-cache-manager-jboss-beans.xml file.

The CacheManager acts as a factory for creating caches and as a registry for JBoss Cache instances. It is configured with a set of named JBoss Cache configurations. Here’s a fragment of the standard SFSB cache configuration:
[code lang=”xml”] <entry><key>sfsb-cache</key>
<value>
<bean name="StandardSFSBCacheConfig"
class="org.jboss.cache.config.Configuration">
<property name="clusterName">${jboss.partition.name:DefaultPartition}-SFSBCache</property>
<property name="multiplexerStack">${jboss.default.jgroups.stack:udp}</property>
<property name="fetchInMemoryState">true</property>
<property name="nodeLockingScheme">PESSIMISTIC</property>
<property name="isolationLevel">REPEATABLE_READ</property>
<property name="useLockStriping">false</property>
<property name="cacheMode">REPL_SYNC</property>
. . . . .
</bean>
</value>
</entry>
[/code]
Services that need a cache ask the CacheManager for the cache by name, which is specified by the key element; the cache manager creates the cache (if not already created) and returns it.

The simplest way to reference a custom cache is by means of the org.jboss.ejb3.annotation.CacheConfig annotation. For example, supposing you were to use a newly created Stateful Session Bean cache named custom_sfsb_cache:
[code lang=”java”] @Stateful
@Clustered
@CacheConfig(name="custom_sfsb_cache")
public class SFSBExample {
}
[/code]
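For the name custom_sfsb_cache to be resolvable, a matching entry would also have to be registered in the CacheManager configuration. The following is only a sketch, modelled on the sfsb-cache fragment shown earlier; the bean name and property values are illustrative:

[code lang=”xml”] <entry><key>custom_sfsb_cache</key>
<value>
<bean name="CustomSFSBCacheConfig" class="org.jboss.cache.config.Configuration">
<property name="clusterName">${jboss.partition.name:DefaultPartition}-CustomSFSBCache</property>
<property name="multiplexerStack">${jboss.default.jgroups.stack:udp}</property>
<property name="cacheMode">REPL_ASYNC</property>
<!-- remaining properties as in the standard sfsb-cache entry -->
</bean>
</value>
</entry>
[/code]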
The CacheManager keeps a reference to each cache it has created, so all services that request the same cache configuration name will share the same cache. When a service is done with the cache, it releases it to the CacheManager. The CacheManager keeps track of how many services are using each cache, and will stop and destroy the cache when all services have released it.

Understanding JBoss Cache configuration

In order to tune your JBoss Cache, it's essential to learn some key properties. In particular, we need to understand:

  • How data can be transmitted between its members. This is controlled by the cacheMode property.
  • How the cache handles concurrency on data between cluster nodes. This is handled by the nodeLockingScheme and isolationLevel configuration attributes.

Configuring cacheMode

The cacheMode property determines how JBoss Cache keeps data in sync across all nodes. It can actually be split into two important aspects: how to notify changes across the cluster, and how other nodes accommodate these changes on their local data.

As far as data notification is concerned, there are the following choices:

    • Synchronous means the cache instance sends a notification message to other nodes and before returning waits for them to acknowledge that they have applied the same changes. Waiting for acknowledgement from all nodes adds delay. However, if a synchronous replication returns successfully, the caller knows for sure that all modifications have been applied to all cache instances.

    • Asynchronous means the cache instance sends a notification message and then immediately returns, without any acknowledgement that changes have been applied. The Asynchronous mode is most useful for cases like session replication (for example, Stateful Session Beans), where the cache sending data expects to be the only one that accesses the data. Asynchronous messaging adds a small potential risk that a failover to another node may produce stale data; however, for many session-type applications this risk is acceptable given the major performance benefits gained.

    • Local means the cache instance doesn’t send a message at all. You should use this mode when you are running JBoss Cache as a single instance, so that it won’t attempt to replicate anything. For example, JPA/Hibernate Query Cache uses a local cache to invalidate stale query result sets from the second level cache, so that JBoss Cache doesn’t need to send messages around the cluster for a query result set cache.

As far as the second aspect is concerned (what the other caches in the cluster should do to reflect the change), you can distinguish between:

Replication: means that the cache replicates cached data across all cluster nodes. This means the sending node needs to include the changed state, increasing the cost of the message. Replication is necessary if the other nodes have no other way to obtain the state.

Invalidation: means that you do not wish to replicate cached data but simply inform other caches in a cluster that data under specific addresses is now stale and should be evicted from memory. Invalidation reduces the cost of the cluster update messages, since only the cache key of the changed state needs to be transmitted, not the state itself.

By combining these two aspects, we get five valid values for the cacheMode configuration attribute:
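The original table is not reproduced here, but the resulting values are LOCAL, REPL_SYNC, REPL_ASYNC, INVALIDATION_SYNC, and INVALIDATION_ASYNC. They are set through the cacheMode property of the cache configuration, for example:

[code lang=”xml”] <!-- Asynchronous replication, typically used for session-type caches -->
<property name="cacheMode">REPL_ASYNC</property>
[/code]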

Should I use invalidation for session data?
No, you shouldn’t. As a matter of fact, data invalidation it is an
excellent option for a clustered JPA/Hibernate Entity cache,
since the cached state can be re-read from the database in case
of failure. If you use the invalidation option, with SFSBs or
HttpSession, then you lose failover capabilities. If this matches
with your project requirements, you could achieve better
performance by simply turning off the cache.

Configuring cache concurrency

JBoss Cache is a thread-safe caching API, and uses its own efficient mechanisms of controlling concurrent access. Concurrency is configured via the nodeLockingScheme and isolationLevel configuration attributes.

There are three choices for nodeLockingScheme:

    • Pessimistic locking involves threads/transactions acquiring locks on nodes before reading or writing. Which lock is acquired depends on the isolationLevel, but in most cases a non-exclusive lock is acquired for a read and an exclusive lock is acquired for a write. Pessimistic locking carries considerable overhead and allows less concurrency, since reader threads must block until a write has completed and released its exclusive lock (potentially a long time if the write is part of a transaction). The drawbacks include the potential for deadlocks, which are ultimately solved by a TimeoutException.
    • Optimistic locking seeks to improve upon the concurrency available with Pessimistic by creating a workspace for each request/transaction that accesses the cache. All data is versioned; on completion of non-transactional requests or commits of transactions the version of data in the workspace is compared to the main cache, and an exception is raised if there are inconsistencies. This eliminates the cost of reader locks but, because of the cost associated with the parallel workspace, it carries a high memory overhead and low scalability.
    • MVCC is the new locking schema that has been introduced in JBoss Cache 3.x (and packed with JBoss AS 5.x). In a nutshell, MVCC reduces the cost of slow, and synchronization-heavy schemas with a multi-versioned concurrency control, which is a locking scheme commonly used by modern database implementations to control concurrent access to shared data.

    The most important features of MVCC are:

    1. Readers don’t acquire any locks.
    2. Only one additional version is maintained for shared state, for a single writer.
    3. All writes happen sequentially, to provide fail-fast semantics.

    How can MVCC achieve this?

    For each reader thread, the MVCC’s interceptors wraps state in a lightweight container object, which is placed in the thread’s InvocationContext (or TransactionContext if running in a transaction). All subsequent operations on the state are carried out on the container object using Java references, which allow repeatable read semantics even if the actual state changes simultaneously.

    Writer threads, on the other hand, need to acquire a lock before any writing can start. Currently, lock striping is used to improve the memory performance of the cache, and the size of the shared lock pool can be tuned using the concurrencyLevel attribute of the locking element.

    After acquiring an exclusive lock on a cache Full Qualified Name, the writer thread then wraps the state to be modified in a container as well, just like with reader threads, and then copies this state for writing. When copying, a reference to the original version is still maintained in the container (for rollbacks). Changes are then made to the copy and the copy is finally written to the data structure
    when the write completes.

    Should I use MVCC with session data too?
    While MVCC is the default and recommended choice for JPA/Hibernate
    Entity caching, as far as Session caching is concerned, Pessimistic is still the
    default concurrency control. Why? As a matter of fact, concurrent threads
    accessing the same cached data is not the usual case for a user's session. This is
    strictly enforced in the case of SFSBs, whose instances are not accessible
    concurrently. So don't bother trying to change this property for session data.

    Configuring the isolationLevel

    The isolationLevel attribute has two possible values, READ_COMMITTED and REPEATABLE_READ which correspond in semantics to database-style isolation levels. Previous versions of JBoss Cache supported all database isolation levels, and if an unsupported isolation level is configured, it is either upgraded or downgraded to the closest supported level.

    REPEATABLE_READ is the default isolation level, to maintain compatibility with previous versions of JBoss Cache. READ_COMMITTED, while providing a slightly weaker isolation, has a significant performance benefit over REPEATABLE_READ.

    Tuning session replication

    As we have learnt, the user session needs replication in order to achieve a consistent state of your applications across the cluster. Replication can be a costly affair, especially if the amount of data held in session is significant. There are however some available strategies, which can mitigate a lot the cost of data replication and thus improve the performance of your cluster:

    • Override isModified method: By including an isModified method in your SFSBs, you can achieve fine-grained control over data replication. Applicable to SFSBs only.
    • Use buddy replication. By using buddy replication you are not replicating the session data to all nodes but to a limited set of nodes. Can be applicable both to SFSBs and HttpSession.
    • Configure replication granularity and replication trigger. You can apply custom session policies to your HttpSession to define when data needs to be replicated and which elements need to be replicated as well. Applicable to HttpSession.

    Override SFSB’s isModified method

    One of the simplest ways to reduce the cost of SFSB data replication is implementing in your EJB a method with the following signature: public boolean isModified();

    Before replicating your bean, the container will detect if your bean implements this method. If your bean does, the container calls the isModified method and it only replicates the bean when the method returns true. If the bean has not been modified (or not enough to require replication, depending on your own preferences), you can return false and the replication will not occur.

    If your session does not hold critical data (such as financial information), using the isModified method is a good option to achieve a substantial benefit in terms of performance. A good example could be a reporting application, which needs session management to generate aggregate reports through a set of wizards. Here’s a graphical view of this process:

    The following benchmark is built on exactly the use case of an OLAP application, which uses SFSBs to drive some session data across a four step wizard. The benchmark compares the performance of the wizard without including isModified and by returning true to isModified at the end of the wizard.

    Ultimately, by using the isModified method to propagate the session data at wider intervals you can improve the performance of your application with an acceptable risk to re-generate your reports in case of node failures.
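    The following sketch shows how an SFSB might implement this contract; the bean name, the state field, and the dirty flag are purely illustrative:

    [code lang=”java”] import java.io.Serializable;
    import javax.ejb.Stateful;
    import org.jboss.ejb3.annotation.Clustered;

    @Stateful
    @Clustered
    public class ReportWizardBean implements Serializable {

        private String wizardState;   // session state built across the wizard steps
        private boolean dirty;        // flipped only when replication is worth the cost

        public void updateStep(String state) {
            this.wizardState = state;
            this.dirty = true;
        }

        // Called by the container before replication: replicate only when we say so
        public boolean isModified() {
            boolean modified = dirty;
            dirty = false;            // reset once the change has been reported
            return modified;
        }
    }
    [/code]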

    Use buddy replication

    By using buddy replication, sessions are replicated to a configurable number of backup servers in the cluster (also called buddies), rather than to all servers in the cluster. If a user fails over from the server that is hosting his or her session, the session data is transferred to the new server from one of the backup buddies. Buddy replication provides the following benefits:

    • Reduced memory usage
    • Reduced CPU utilization
    • Reduced network transmission

    The reason behind this large set of advantages is that each server only needs to store in its memory the sessions it is hosting as well as those of the servers for which it is acting as a backup. Thus, less memory is required to store data, less CPU is needed to convert bytes into Java objects, and less data has to be transmitted.

    For example, in an 8-node cluster with each server configured to have one buddy, a server would just need to store 2 sessions instead of 8. That’s just one fourth of the memory required with total replication.

    In the following picture, you can see an example of a cluster configured for buddy replication:

    Here, each node contains a cache of its session data and a backup of another node. For example, node A contains its session data and a backup of node E. Its data is in turn replicated to node B and so on.

    In case of failure of node A, its data moves to node B which becomes the owner of both A and B data, plus the backup of node E. Node B in turn replicates (A + B) data to node C.

    In order to configure your SFSB sessions or HttpSessions to use buddy replication, you just have to set the enabled property of the BuddyReplicationConfig bean to true inside the /deploy/cluster/jboss-cache-manager.sar/META-INF/jboss-cache-manager-jboss-beans.xml configuration file, as shown in the next code fragment:
    [code lang=”xml”]
    <property name="buddyReplicationConfig">
    <bean class="org.jboss.cache.config.BuddyReplicationConfig">
    <property name="enabled">true</property>
    . . .
    </bean>
    </property>
    [/code]
    In the following test, we are comparing the throughput of a 5-node clustered web application which uses buddy replication against one which replicates data across all members of the cluster.

    In this benchmark, switching on buddy replication improved the application throughput by about 30%. No doubt that by using buddy replication there's a high potential for scaling, because memory/CPU/network usage per node does not increase linearly as new nodes are added.

    Advanced buddy replication

    With the minimal configuration we have just described, each server will look for one buddy across the network where data needs to be replicated. If you need to backup your session to a larger set of buddies you can modify the numBuddies property of the BuddyReplicationConfig bean. Consider, however, that replicating the session to a large set of nodes would conversely reduce the benefits of buddy replication.

    Still using the default configuration, each node will try to select its buddy on a different physical host: this helps to reduce the chances of introducing a single point of failure in your cluster. Only if the cluster node is not able to find buddies on different physical hosts will it ignore the ignoreColocatedBuddies property and fall back to co-located nodes.

    The default policy is often what you might need in your applications; however, if you need fine-grained control over the composition of your buddies you can use a feature named buddy pool. A buddy pool is an optional construct where each instance in a cluster may be configured to be part of a group, just like an “exclusive club membership”.

    This allows system administrators a degree of flexibility and control over how buddies are selected. For example, you might put two instances on separate physical servers that may be on two separate physical racks in the same buddy pool. So rather than picking an instance on a different host on the same rack, the BuddyLocators would rather pick the instance in the same buddy pool, on a separate rack, which may add a degree of redundancy.

    Here’s a complete configuration which includes buddy pools:
    [code lang=”xml”] <property name="buddyReplicationConfig">
    <bean class="org.jboss.cache.config.BuddyReplicationConfig">
    <property name="enabled">true</property>
    <property name="buddyPoolName">rack1</property>
    <property name="buddyCommunicationTimeout">17500</property>
    <property name="autoDataGravitation">false</property>
    <property name="dataGravitationRemoveOnFind">true</property>
    <property name="dataGravitationSearchBackupTrees">true</property>
    <property name="buddyLocatorConfig">
    <bean class="org.jboss.cache.buddyreplication.NextMemberBuddyLocatorConfig">
    <property name="numBuddies">1</property>
    <property name="ignoreColocatedBuddies">true</property>
    </bean>
    </property>
    </bean>
    </property>
    [/code]
    In this configuration fragment, the buddyPoolName element, if specified, creates a logical subgroup and only picks buddies who share the same buddy pool name. If not specified, this defaults to an internal constant name, which then treats the entire cluster as a single buddy pool.

    If the cache on another node needs data that it doesn’t have locally, it can ask the other nodes in the cluster to provide it; nodes that have a copy will provide it as part of a process called data gravitation. The new node will become the owner of the data, placing a backup copy of the data on its buddies.

    The ability to gravitate data means there is no need for all requests for data to occur on a node that has a copy of it; that is, any node can handle a request for any data. However, data gravitation is expensive and should not be a frequent occurrence; ideally it should only occur if the node that is using some data fails or is shut down, forcing interested clients to fail over to a different node.

    The following optional properties pertain to data gravitation:

    • autoDataGravitation: Whether data gravitation occurs for every cache miss. By default this is set to false to prevent unnecessary network calls.
    • dataGravitationRemoveOnFind: Forces all remote caches that own the data or hold backups for the data to remove that data, thereby making the requesting cache the new data owner. If set to false, an evict is broadcast instead of a remove, so any state persisted in cache loaders will remain. This is useful if you have a shared cache loader configured. (See the next section about cache loaders.) Defaults to true.
    • dataGravitationSearchBackupTrees: Asks remote instances to search through their backups as well as main data trees. Defaults to true. The resulting effect is that if this is true then backup nodes can respond to data gravitation requests in addition to data owners.

    Buddy replication and session affinity

    One of the pre-requisites to buddy replication working well and being a real benefit is the use of session affinity, also known as sticky sessions in HttpSession replication speak. What this means is that if certain data is frequently accessed, it is desirable that this is always accessed on one instance rather than in a “round-robin” fashion as this helps the cache cluster optimise how it chooses buddies, where it stores data, and minimises replication traffic.

    If you are replicating SFSBs session, there is no need to configure anything since SFSBs, once created, are pinned to the server that created them.

    When using HttpSession, you need to make sure your software or hardware load balancer maintains the session on the same host where it was created.

    By using Apache’s mod_jk, you have to configure the workers file (workers. properties) specifying where the different node and how calls should be load-balanced across them. For example, on a 5-node cluster:
    [code lang=”java”] worker.loadbalancer.balance_workers=node1,node2,node3,node4,node5
    worker.loadbalancer.sticky_session=1[/code]
    Basically, the above snippet configures mod_jk to perform round-robin load balancing with sticky sessions (sticky_session=1) across 5 nodes of a cluster.
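    For completeness, a fuller (hypothetical) workers.properties could look like the following; hostnames, ports, and node names are placeholders and must match your own cluster and the jvmRoute of each JBoss instance:

    [code lang=”java”] worker.list=loadbalancer

    # One entry per cluster node (AJP13 connector, default port 8009)
    worker.node1.type=ajp13
    worker.node1.host=192.168.0.101
    worker.node1.port=8009
    # ... node2 to node5 defined the same way

    worker.loadbalancer.type=lb
    worker.loadbalancer.balance_workers=node1,node2,node3,node4,node5
    worker.loadbalancer.sticky_session=1
    [/code]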

    Configure replication granularity and replication trigger

    Applications that want to store data in the HttpSession need to use the setAttribute method to store attributes and getAttribute to retrieve them. You can define two kinds of properties related to HttpSessions:

    • The replication-trigger configures when data needs to be replicated.
    • The replication-granularity defines which part of the session needs
      to be replicated.

    Let’s dissect both aspects in the following sections:

    How to configure the replication-trigger

    The replication-trigger element determines what triggers a session replication and can be configured by means of the jboss-web.xml element (packed in the WEB-INF folder of your web application). Here’s an example:
    [code lang=”xml”] <jboss-web>
    <replication-config>
    <b><replication-trigger>SET</replication-trigger></b>
    </replication-config>
    </jboss-web>
    [/code]
    The following is a list of possible alternative options:

      • SET_AND_GET is conservative but not optimal performance-wise; it will always replicate session data even if its content has not been modified but simply accessed. This setting made (a little) sense in AS 4 since using it was a way to ensure that every request triggered replication of the session's timestamp. Setting max_unreplicated_interval to 0 accomplishes the same thing at much lower cost.
      • SET_AND_NON_PRIMITIVE_GET is conservative but will only replicate if an object of a non-primitive type has been accessed (that is, the object is not of a well-known immutable JDK type such as Integer, Long, String, and so on). This is the default value.
      • SET assumes that the developer will explicitly call setAttribute on the session if the data needs to be replicated. This setting prevents unnecessary replication and can have a major beneficial impact on performance.

      In all cases, calling setAttribute marks the session as dirty and thus triggers replication.

      For the purpose of evaluating the available alternatives in performance terms, we have compared a benchmark of a web application using different replication-triggers:

      In the first benchmark, we are using the default rule (SET_AND_NON_PRIMITIVE_GET). In the second we have switched to SET policy, issuing a setAttribute on 50% of the requests. In the last benchmark, we have formerly populated the session with the required attributes and then issued only queries on the session via the getAttribute method.

      As you can see the benefit of using the SET replication trigger is obvious, especially if you follow a read-mostly approach on non-primitive types. On the other hand, this requires very good coding practices to ensure setAttribute is always called whenever a mutable object stored in the session is modified.
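      The following illustrative fragment shows that pattern; the attribute name and the helper class are assumptions, not part of the book's example:

      [code lang=”java”] import java.util.ArrayList;
      import java.util.List;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpSession;

      public class CartUpdater {

          // Typically called from a servlet's processRequest method
          @SuppressWarnings("unchecked")
          void addToCart(HttpServletRequest request, String item) {
              HttpSession session = request.getSession();

              List<String> cart = (List<String>) session.getAttribute("cart");
              if (cart == null) {
                  cart = new ArrayList<String>();
              }
              cart.add(item);   // mutating the object alone does NOT mark the session dirty

              // With replication-trigger SET, this explicit call is what marks the
              // session as dirty and triggers replication of the change
              session.setAttribute("cart", cart);
          }
      }
      [/code]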

      How to configure the replication-granularity

      As far as what data needs to be replicated is concerned, you can opt for the following choices:

      • SESSION indicates that the entire session attribute map should be replicated when any attribute is considered modified. Replication occurs at request end. This option replicates the most data and thus incurs the highest replication cost, but since all attribute values are always replicated together, it ensures that any references between attribute values will not be broken when the session is deserialized. For this reason it is the default setting.
      • ATTRIBUTE indicates that only attributes that the session considers to be potentially modified are replicated. Replication occurs at request end. For sessions carrying large amounts of data, parts of which are infrequently updated, this option can significantly increase replication performance.
      • FIELD-level replication only replicates modified data fields inside objects stored in the session. Its use can drastically reduce the data traffic between clustered nodes, and hence improve the performance of the whole cluster. To use FIELD-level replication, you first have to prepare (that is, bytecode-enhance) your Java classes to allow the session cache to detect when fields in cached objects have been changed and need to be replicated.

      In order to change the default replication granularity, you have to configure the desired attribute in your jboss-web.xml configuration file:
      [code lang=”xml”]
      <jboss-web>
      <replication-config>
      <replication-granularity>FIELD</replication-granularity>
      <replication-field-batch-mode>true</replication-field-batch-mode>
      </replication-config>
      </jboss-web>
      [/code]
      In the above example, the replication-field-batch-mode element indicates whether you want all replication messages associated with a request to be batched into one message.

      Additionally, if you want to use FIELD-level replication you need to perform a bit of extra work. First, you need to add the @org.jboss.cache.pojo.annotation.Replicable annotation at class level:
      [code lang=”java”] @Replicable
      public class Person { … }[/code]

      If you annotate a class with @Replicable, then all of its subclasses will
      be automatically annotated as well.

      Once you have annotated your classes, you will need to perform a post-compiler processing step to bytecode-enhance your classes for use by your cache. Please check the JBoss AOP documentation (http://www.jboss.org/jbossaop) for the usage of the aoc post-compiler. The JBoss AOP project also provides easy-to-use ANT tasks to help integrate these steps into your application build process.

      As a proof of concept, let's build a use case to compare the performance of the ATTRIBUTE and FIELD granularity policies. Suppose you are storing an object of type Person in your HttpSession. The object contains references to Address, ContactInfo, and PersonalInfo objects. It also contains an ArrayList of WorkExperience objects.

      A prerequisite to this benchmark is that there are no references between
      the field values stored in the Person class (for example between the
      contactInfo and personalInfo fields), otherwise the references will
      be broken by ATTRIBUTE or FIELD policies.
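
As a rough sketch of the classes involved, a @Replicable Person along the lines described above might look like the following; the field names and the empty placeholder types are assumptions standing in for the real application classes:
[code lang=”java”]
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import org.jboss.cache.pojo.annotation.Replicable;

// Placeholder types standing in for the real application classes mentioned in the text.
class Address implements Serializable { }
class ContactInfo implements Serializable { }
class PersonalInfo implements Serializable { }
class WorkExperience implements Serializable { }

// Annotating the class (and, implicitly, its subclasses) for FIELD-level replication.
@Replicable
public class Person implements Serializable {

    private Address address;
    private ContactInfo contactInfo;
    private PersonalInfo personalInfo;
    private List<WorkExperience> workExperiences = new ArrayList<WorkExperience>();

    // With FIELD granularity, modifying a single field such as contactInfo means
    // only that field needs to be replicated, not the whole Person attribute.
    public void setContactInfo(ContactInfo contactInfo) {
        this.contactInfo = contactInfo;
    }
}
[/code]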

      By using the SESSION or ATTRIBUTE replication-granularity policy, even if just one of these fields is modified, the whole Person object needs to be retransmitted. Let's compare the throughput of two applications using, respectively, the ATTRIBUTE and FIELD replication granularity.

      In this example, based on the assumption that we have a single dirty field of the Person class per request, using FIELD replication generates a substantial 10% gain.

      Tuning cache storage

      Cache loading allows JBoss Cache to store cached data in a persistent store and is used mainly for HttpSession and SFSB sessions. Hibernate and JPA, on the other hand, already have their persistent storage in the database, so it doesn't make sense to add another one.

      This data can either be an overflow, where the persistent store holds data that has been evicted from memory, or a replication of what is in memory, where everything in memory is also reflected in the persistent store, along with items that have been evicted from memory.

      The cache storage used for web session and EJB3 SFSB caching comes into play in two circumstances:

      • Whenever a cache element is accessed, and that element is not in the cache (for example, due to eviction or due to server restart), then the cache loader transparently loads the element into the cache if found in the backend store.
      • Whenever an element is modified, added or removed, then that modification is persisted in the backend store via the cache loader (except if the ignoreModifications property has been set to true for a specific cache loader). If transactions are used, all modifications created within a transaction are persisted as well.

      Cache loaders are configured by means of the property cacheLoaderConfig of session caches. For example, in the case of SFSB cache:
      [code lang=”xml”]
      <entry><key>sfsb-cache</key>
      <value>
      <bean name="StandardSFSBCacheConfig"
      class="org.jboss.cache.config.Configuration">
      . . . . .
      <property name="cacheLoaderConfig">
      <bean class="org.jboss.cache.config.CacheLoaderConfig">
      <property name="passivation">true</property>
      <property name="shared">false</property>
      <property name="individualCacheLoaderConfigs">
      <list>
      <bean class="org.jboss.cache.loader.FileCacheLoaderConfig">
      <property name="location">${jboss.server.data.dir}${/}sfsb</property>
      <property name="async">false</property>
      <property name="fetchPersistentState">true</property>
      <property name="purgeOnStartup">true</property>
      <property name="ignoreModifications">false</property>
      <property name="checkCharacterPortability">false</property>
      </bean>
      </list>
      </property>
      </bean>
      . . . . .
      </entry>
      [/code]
      The passivation property, when set to true, means the persistent store acts as an overflow area written to when data is evicted from the in-memory cache.

      The shared attribute indicates that the cache loader is shared among different cache instances, for example where all instances in a cluster use the same JDBC settings to talk to the same remote, shared database. Setting this to true prevents repeated and unnecessary writes of the same data to the cache loader by different cache instances. The default value is false.

      Where does cache data get stored?

      By default, the Cache loader uses a filesystem implementation based on the class org.jboss.cache.loader.FileCacheLoaderConfig, which requires the location property to define the root directory to be used.

      If the async attribute is set to true, read operations are done synchronously, while write (CRUD: Create, Remove, Update, and Delete) operations are done asynchronously. If set to false (the default), both reads and writes are performed synchronously.

      Should I use an async channel for my Cache Loader?
      When using an async channel, an instance of org.jboss.cache.loader.AsyncCacheLoader is constructed, which acts as an asynchronous channel to the actual cache loader to be used. Be aware that, when using the AsyncCacheLoader, there is always the possibility of dirty reads, since all writes are performed asynchronously and it is thus impossible to guarantee when (and even if) a write succeeds. On the other hand, the AsyncCacheLoader allows massive writes to be performed asynchronously, possibly in batches, with large performance benefits. Check out the JBoss Cache docs for further information: http://docs.jboss.org/jbosscache/3.2.1.GA/apidocs/index.html.

      The fetchPersistentState property determines whether or not to fetch the persistent state of a cache when a node joins a cluster, while the purgeOnStartup property, if set to true, evicts data from the storage on startup.

      Finally, checkCharacterPortability should be false for a minor performance improvement.

      The FileCacheLoader is a good choice in terms of performance; however, it has some limitations which you should be aware of before rolling your application out to a production environment. In particular:

      1. Due to the way the FileCacheLoader represents a tree structure on disk (directories and files) traversal is “inefficient” for deep trees.
      2. Usage on shared filesystems such as NFS, Windows shares, and others should be avoided as these do not implement proper file locking and can cause data corruption.
      3. Filesystems are inherently not “transactional”, so when attempting to use your cache in a transactional context, failures when writing to the file (which happens during the commit phase) cannot be recovered.

      As a rule of thumb, it is recommended that the FileCacheLoader not be used in a highly concurrent, transactional, or stressful environment; in such scenarios, consider using it just in the testing environment.

      As an alternative, JBoss Cache is distributed with a set of different cache loaders. For example:

      • The JDBC-based cache loader implementation stores/loads nodes' state in a relational database. The implementing class is org.jboss.cache.loader.JDBCCacheLoader.
      • The BdbjeCacheLoader is a cache loader implementation based on Oracle/Sleepycat's BerkeleyDB Java Edition. Note that the BerkeleyDB implementation is much more efficient than the filesystem-based implementation and provides transactional guarantees, but it requires a commercial license if distributed with an application (see http://www.oracle.com/database/berkeley-db/index.html for details).
      • The JdbmCacheLoader is a cache loader implementation based on the JDBM engine, a fast and free alternative to BerkeleyDB.
      • Finally, the S3CacheLoader uses Amazon S3 (Simple Storage Service, http://aws.amazon.com/) for storing cache data. Since Amazon S3 is remote network storage and has fairly high latency, it is really best for caches that store large pieces of data, such as media or files.

      When it comes to measuring the performance of different Cache Loaders, here’s a benchmark executed to compare the File CacheLoader, the JDBC CacheLoader (based on Oracle Database) and Jdbm CacheLoader.

      In the above benchmark, we are testing cache insertions and cache gets in batches of 1,000 Fqns, each bearing 10 attributes. The File CacheLoader achieved the best overall performance, while the Jdbm CacheLoader is almost as fast for cache gets.

      The JDBC CacheLoader is the most robust solution but it adds more overhead to the Cache storage of your session data.

      Summary

      Clustering is a key element in building scalable Enterprise applications. The infrastructure used by JBoss AS for clustered applications is based on the JGroups framework for inter-node communication and on JBoss Cache for keeping the cluster data synchronized across nodes.

      • JGroups can use both UDP and TCP as the communication protocol. Unless you have network restrictions, you should stay with the default UDP, which uses multicast to send and receive messages.
      • You can tune the transport protocol by setting an appropriate buffer size with the properties mcast_recv_buf_size, mcast_send_buf_size, ucast_recv_buf_size, and ucast_send_buf_size. You should also increase your OS buffer sizes, which need to be large enough to accommodate JGroups' settings.
      • JBoss Cache provides the foundation for robust clustered services.
      • By configuring the cacheMode you can choose whether your cluster messages will be synchronous (that is, they wait for message acknowledgement) or asynchronous. Unless you need to handle cache message exceptions, stay with the asynchronous pattern, which provides the best performance.
      • Cache messages can also trigger cluster replication or cluster invalidation. Cluster replication is needed for transferring the session state across the cluster, while invalidation is the default for Entity/Hibernate caching, where state can be recovered from the database.
      • The cache concurrency can be configured by means of the nodeLockingScheme property. The most efficient locking scheme is MVCC, which avoids the cost of the slower, synchronization-heavy Pessimistic and Optimistic schemes.
      • Cache replication of sessions can be optimised mostly in three ways:
      • By overriding the isModified method of your SFSBs you can achieve a fine-grained control over data replication. It’s an optimal quick-tuning option for OLAP applications using SFSBs.
      • Buddy replication is the most important performance addition to your session replication. It helps to increase the performance by reducing memory and CPU usage as well as network traffic. Use buddy replication pools to achieve a higher level of redundancy for mission critical applications.
      • Clustered web applications can configure replication-granularity and replication-trigger:
      • As far as the replication trigger is concerned, if you mostly read immutable data from your session, the SET trigger provides a substantial benefit over the default SET_AND_NON_PRIMITIVE_GET.
      • As far as replication granularity is concerned, if your sessions are generally small, you can stay with the default policy (SESSION). If your session is larger and some parts are infrequently accessed, ATTRIBUTE replication will be more effective. If your application has very big data objects in session attributes and only fields in those objects are frequently modified, the FIELD policy would be the best.

Filed Under: Servers Tagged With: WildFly

Domino 7 Lotus Notes Application Development

October 26, 2010 by itadmin Leave a Comment

Domino 7 Lotus Notes Application Development

If you’re reading this book, you’re probably already familiar with the Domino server. You know about all the powerful productivity features offered by this product and you know how much your company relies on it to communicate, collaborate, and manage its collective store of corporate knowledge.

This book is intended to help you with developing applications on the latest release of the Domino platform. This book has been written by Notes/Domino ‘insiders’. Collectively, we possess decades of Notes/Domino experience; we’ve been with the product since Notes 1.0, and since then have worked directly with customers to help them with their Notes/Domino upgrade and deployment issues.

What This Book Covers

Chapters 1 and 2 will help you understand the new features in Notes and Domino 7.
Chapter 3 shows how to use DB2 as a data store for Domino databases so as to bring the
scalability features of DB2 and the flexibility of SQL into Domino applications. The chapter shows how to install, configure, map, and then access Domino data stored in DB2.
Chapter 4 will show you how to make the best use of new features added in Domino Designer 7 to better manage Lotus Notes and Domino applications. Specifically we will be covering Autosave, Agent Profiling, and remote Java debugging.
Chapter 5 shows how to ensure that critical applications continue to run smoothly after you upgrade your Notes/Domino installation, while taking advantage of the new features and functionality release 7 has to offer.
Chapter 6 will tackle issues you need to consider when upgrading your @Formula language to Notes/Domino. We first detail a backup strategy and then take a tour through the new Notes/Domino @Formulas and the potential upgrade issues they raise.
Chapter 7 runs through the process of upgrading Domino-based agents and LotusScript; we also cover the use of TeamStudio Analyzer, which is a third-party tool to assist with your upgrade. The second half of the chapter runs through the new features available to LotusScript developers in Domino Designer 7.
Chapter 8 examines Domino-based web services and you will see the Java implementation of one such web service. We cover the various tools Domino Designer 7 provides for interacting with WSDL and finish by examining the role UDDI plays in facilitating the adoption of web services.
Chapter 9 covers using best practices to optimize your Domino applications for performance; specifically, we will see how to efficiently code database properties, views, and forms/agents to work well in a Domino environment.
In Chapter 10, you will learn to use the new programming features offered in Lotus Notes/Domino 7 by actually implementing them in code.
In Chapter 11, we will examine two important new features, Domino Domain Monitoring (DDM) and Agent Profiles, which are critical for troubleshooting your Notes/Domino applications. Additionally, the chapter runs through several tips and techniques for identifying and correcting problems in your Notes/Domino 7 applications.

In Appendix A, we review several vendor tools that you can use to help upgrade your applications to Lotus Notes/Domino 7. These include Angkor by Atlantic Decisions, PistolStar Password Power 8 Plug-ins by PistolStar, Inc, CMT Inspector from Binary Tree, and FT Search Manager from IONET.

Upgrading Domino Applications

This chapter takes a closer look at several new features in Lotus Notes 7 and Lotus Domino Designer 7 clients that raise specific upgrade issues. In this chapter, we will identify some of those new features, show you how to implement them, and what to watch out for or think about. For a complete description of all the new features in Domino Designer 7, see Chapter 4.

When upgrading applications, you should keep two goals in mind. The first is to ensure interoperability; that is, making sure that your clients can use your applications at least as well after upgrading as before. The second goal is to identify the critical handful of features whose implementation will add enough sizzle or functionality to your application for your users to applaud the upgrade (and thus be motivated to ensure the upgraded applications are quickly accepted and adopted). For your users, this mitigates the nuisance of upgrading.

Notes/Domino 7 offers some tremendous back-end improvements over previous releases. On the user-visible front-end, the improvements are more incremental. This is good news in that your users won’t need extensive retraining, but of course, it also narrows the field in terms of finding those sharp new features that will make them excited to upgrade.

To help you identify which features offer the most visible and immediate value to your users, we’ll take a quick tour of several features that we feel offer the most “bang for the buck” from the perspective of an end-user. First, let’s examine several high-profile Lotus Notes client features added in release 7.

Lotus Notes 7 Client Features

The following list describes several of the more user-visible features that have been added or enhanced in the Lotus Notes 7 client. These features can comprise a compelling reason for your users to upgrade:

  • AutoSave saves your work without user intervention. For example, with AutoSave enabled, if your computer crashes, you will be able to reboot and recommence working at roughly the same point where you left off in any open documents.
  • Mail and the Resource & Reservations database are enhanced but not radically changed. On the back-end, however, the Resource & Reservations database has been dramatically upgraded to better avoid scheduling conflicts.
  • Message Disclaimer, a highly sought after feature, allows users and/or administrators to add a message disclaimer to every outgoing email. This is done through policy documents. The disclaimer is added after the user sends the outgoing email message, as opposed to a signature that the user sees before sending.
  • Save/Close all windows lets you close all window tabs from the File menu (via the option Close All Open Window Tabs). You can also save the state of your open windows, either from the File menu (manually) or as a default setting under User Preferences (which makes it automatic). This means that when you reopen Notes, all these window tabs will be loaded for you. Note that it is only the tab window references that are loaded, not the actual views or documents. So when you click on one of these tabs, there may be a slight delay as the view is refreshed or the document is reloaded. The alternative would be that you would have to wait for all of these views and documents to be loaded just to get into Notes, which would be unreasonable.

Of the preceding features, AutoSave in particular is likely to be of interest to your users, so we will look at it in a bit more detail later in this chapter.

New Domino Designer Client Features

Some of the important and valuable new/upgraded features for the Domino Designer 7 include the following:

  • AutoSave (mentioned above, and described in more detail later in this chapter).
  • Agent Profiler allows a developer to see how long every call in their agent is taking, and how many times each of those calls occurs. This is an invaluable tool for performance monitoring and troubleshooting, and we’ll look at it in more detail in Chapter 11, Troubleshooting Applications.
  • Domino Domain Monitoring (DDM) is perhaps the single most important feature in Notes/Domino 7. It provides an extensive list of troubleshooting and performance monitoring tools, a subset of which is relevant for application developers. We will examine this in more detail in Chapter 11.
  • Significant improvements to referencing profile documents in views have been made. In addition to changing the color scheme for a view, you can now develop applications that are much more user-defined and dynamic. This will be described in detail later in this chapter.
  • Right-clicking in views brings up the action bar buttons. This is an incremental improvement, but in terms of overall usability in the product, it is a nice feature to have.
  • Shared view columns allow a developer to create complex and widely used view columns and then apply the shared design to multiple views. Any changes to that shared column design will automatically be reflected in all the views that reference that column.
  • In-place design element editing of labels is a very handy way for developers to change the names of forms, views, agents, and so on, without having to open and resave the design element. While highlighting a design element, you simply click to enter in-place editing, and you can use the Tab key to move along the row to edit different labels for the same design element. This feature works much the same as InViewEditing does for Notes clients.
  • Web Services are described in a later section.
  • DB2 design integration is also discussed in a separate section.

AutoSave

As mentioned previously, AutoSave allows you to automatically save your work periodically without having to do so manually. If your computer crashes, you’ll be able to resume your work at the point AutoSave last ran. This helps avoid the situation where you lose hours of work because you forgot to save as you went along, which has probably happened to everyone at least once!

Two things have to happen to enable the AutoSave feature: the user has to turn on the setting, and the application developer must enable AutoSave for the form the user is currently working on. For users to enable AutoSave on their clients, they must select File | Preferences | User Preferences. On the Basics tab in the Startup Options section, there is a new setting AutoSave every n minutes, where n is the number of minutes between auto-saves. This interval can be from 1 to 999 minutes. (You must close and reopen Notes for AutoSave to be enabled.)

For the developer, AutoSave must be enabled on a form-by-form basis. Open the form in Domino Designer, and select Design | Form Properties. Then check the new option Allow Autosave.

With AutoSave enabled, when you are working in a document with this form and experience a computer crash or power outage, you will see the following dialog box on restarting Notes:

If you choose Yes, then you’ll see a popup similar to the following:

Enough information is provided in the dialog box for you to make an informed decision about whether or not to recover the document(s).

To keep the AutoSave database itself clean and fast, and to prevent the user from receiving repeat warnings about the same old documents, AutoSave documents are held in a special Notes database (.nsf file) until they are no longer needed (because the documents have been successfully saved or recovered). The database is named with a convention that allows multiple users on the same machine to have different AutoSave databases. The filename of the database is as_ followed by the first initial and then the entire last name of the user. So for user Fred Merkle, the AutoSave database would be called as_fmerkle.nsf.

Things to Consider when Enabling AutoSave in a Form

For the developer, enabling AutoSave is very easy, but there are some potential issues that you need to think about first. For example, if you enable this feature on a complex form that has a lot of event-driven code, you may not get satisfactory results. For simple forms (or forms that are complex only because they have many fields), AutoSave should work very well.

To illustrate this point about complex forms not necessarily working properly with AutoSave, imagine this scenario. You have an application for requesting security clearance. A user creates a document and saves it, and the document is then reviewed by a manager. That manager can change the status from Submitted to Reviewed, Accepted, Rejected, and so on. When the status changes, an email is sent to all interested parties informing them of this change. The program tracks whether the status changes during a user session by way of a PostOpen event. It saves in memory the value of the status as the document was opened. Then, as the document is saved, the QuerySave event compares the current value to what is being held in memory (and this is the key). If the value is different, an email message is generated that says, for example, "Dear Fred Merkle, your request for security clearance Alpha has been reviewed and the status has been changed to REJECTED for the following reason: [explanation follows]".

If a manager experienced a crash while in the middle of reviewing a security request in this application, and then rebooted Notes and used AutoSave to recover their document, edits would be preserved (which of course is how AutoSave is supposed to work). However, AutoSave cannot preserve the in-memory value of the status field. In our example, this would create a problem, because the notification email would not be sent out. But in many applications, the forms do not use sophisticated in-memory tracking, and so AutoSave will work smoothly. In fact, even in our example, AutoSave will save your work; it just won't preserve the in-memory information. So although the workflow process would be compromised, at least your data would still be preserved.

The key here is that the developer needs to think about each form and whether or not the potential for data loss outweighs any potential compromises in how the application functions. In many cases, the answer is an easy yes, and so enabling this feature makes sense.

Referencing Profile Documents in Views

In Notes/Domino 6, you can reference a profile document from a view in order to determine the appropriate color choices. The mail template does this, and this allows a user to specify that emails from (for instance) Fred Merkle should appear in blue, while messages from Alice Merkle should appear in orange. This is a powerful feature for enhancing the user interface of any application.

Notes/Domino 7 takes this a step further and allows you to actually populate a view column with icons based on choices made in a profile document. We’ll go through a simple example, and then we’ll look at how to employ this feature.

Imagine that you’ve got a Sales Tracking application. There are many thousands of companies and customers and products. Each month, your management team chooses a small set of customers and products that should receive special attention. Wouldn’t it be nice if there were a column that had a special icon that would display whether the record referenced that customer or product? With this new feature, you can do exactly that.

Your first steps will be to create a profile document form, and then create the appropriate labels and fields within that form. In our simple example, we might have a Company field and a Product field. These two fields might be keyword lists that reference the list of companies and products respectively.

Next, you need to create a special computed text field in this profile form that resolves to the formula you’d like to see in your view column. For example, you might want the view column formula to read as follows:
[code lang=”java”]
@if(CompanyName = <company name chosen in profile document> |
ProductName = <product chosen in profile document>; 59; 0)
[/code]

This would display a dollar‑bill icon if the company name in a record was the same as the value chosen in your profile document, or if the product chosen matched the product value in your profile document.

To produce this column formula, your profile-document formula might read as follows (note the use of quotation marks):
[code lang=”java”]
"c := Company;
p := Product;

vCompanyName := \"" + CompanyName + "\";
vProductName := \"" + ProductName + "\";
@if(c = vCompanyName | p = vProductName; 59; 0)"
[/code]
In the preceding formula, Company and Product refer to the values in the documents in the view, while CompanyName and ProductName (and therefore vCompanyName and vProductName) refer to the values in the profile document.

The final step is to create a column in your view that is marked with the following Column Properties settings:

  • User definable selected
  • Profile Document named, for example, (Profile-Interest)
  • Programmatic Name set to reference the computed text field from your profile document form, for example, $Interest
  • and in the formula box for this column, enter a formula such as @Random:


This will be overwritten with your profile document's computed text formula, but it serves as a placeholder. If you substitute a simpler formula (such as 0), you will break the function, and all your documents in the view will display as replication/save conflicts.

We'd like to mention two final notes about using profile documents in views (whether they are just for color or for displaying icons). First, these profile documents cannot be per user; they must be per database. Second, every time the profile document is updated, every view that references that profile document must be rebuilt (not just refreshed). So this is a very handy feature for an administrator or database owner to be able to use, but it might be dangerous if all 5,000 users of your application have access to a button that lets them update the profile document in question.

Web Services

In an ongoing effort to make Domino more compliant with open standards, Notes/Domino 7 offers the ability to make your Domino server a hosted Web Service. This means that clients can make Web Service requests from your Domino server, just as they would from any Web Service.

In the Domino Designer 7 client, there is a new design element under Shared Code called Web Services. A Web Service design element is very similar in structure and function to an agent, and in fact, most of the user interface is the same. In particular, the Security tab is the same, providing restrictions on agent/Web Service activity, who can see the agent/Web Service, and whether or not this should run as a web user, as the signer of the design element, or under the authority of another user.

A significant feature of Web Services is that they return data to the requesting client as XML data.

DB2 Design Integration

In Notes/Domino 7, you have the ability to use a DB2 back-end to provide robust query capabilities across multiple Notes databases. There is some work required to set this up:

  • DB2 must be installed. This does not come with Domino; it is a separate install.
  • There is some Domino configuration required, mostly to set up an authorized DB2 user account.

The Notes database(s) that you want to integrate with DB2 must be created as DB2-enabled. If you have an existing database that you want to convert, you can make a copy or replica copy and mark it as a DB2 database. (It is not possible to convert existing databases simply through your new server configuration, or through a server-console command.) In your Notes database, you can then create a DB2 Access View (DAV), which sets up the appropriate tables in the DB2 storage. Note that you may have to clean your data to avoid data-type conflicts. Also note that these views will add to your storage needs, but will not be reflected in the size of the database as recorded in any of the Domino housekeeping reports. However, this size is usually fairly nominal.

In your Notes database, you can now create views that use SQL statements to leverage these DAVs and which can display results from across multiple documents, even across multiple databases, for use within a single Notes view. A simple example would be a view that displays customer information as well as the internal sales rep’s name from SalesActivity documents in the SalesTracking database, and also displays the sales rep’s information, which comes from your SalesRepInfo database.

A table similar to the one below will display in your view, with the first six columns coming from SalesTracking and the right-most column (Rep Phone) coming from SalesRepInfo.

Company Customer City State Zip Rep Rep Phone
ACME, Inc. Alice Smith Boston MA 02100 Fred Merkle 617.555.5555
ACME, Inc. Betty Rogers Boston MA 02100 Jane Merkle 617.444.4444

Tips when Using DB2

There is no direct performance gain from DB2-enabling your Notes database(s). Although the file structure of a DB2 database is far more efficient than Domino for storing large quantities of well‑defined data, these gains cannot be realized by the combined setup of a DB2-enabled Domino database. On the other hand, if you have a business need for various views or reports that combine data from multiple sources (as with our simple example above), then you can consider DB2-enabling your databases as a very high-performance alternative to writing your own code to mimic this feature.

If you make a database copy (or replica copy) on a server or workstation that does not have DB2 installed, you will have to make this a regular NSF copy, and it will not have the capability to do the special SQL query views. However, your NSF data will be entirely preserved.

Template Management

When you upgrade your servers, you are likely to upgrade some or all of the standard templates: Domino Directory, Mail, Resource & Reservations, Discussion, and so on. There are three major steps you need to perform to ensure compatibility throughout your upgrade process:

  1. Review code
  2. Customize new templates
  3. Recompile script libraries

These steps are logical and sensible, but easily overlooked in the hustle and bustle of upgrading servers, clients, and your own customized application code.

Reviewing Code

The first step is to know what code has changed between your old templates and the new (release 7) templates, and simply to test your client mix (as most customers will not upgrade every single client simultaneously) against these new features/templates. You can determine the code differences by running a utility that will break down all design changes. (See Appendix A for more information about tools.) After you determine what code has changed, you must perform some analysis to decide what is worth testing; it's better to identify whatever errors or problems you may encounter before you upgrade.

Customizing New Templates

If you have customized your mail template, or any other standard template, you’ll need to take one further step. You should have documented all of your own customizations, and now you’ll need to analyze your customizations against the code changes in the new templates, and then apply your customizations appropriately to the new templates. In most cases, this presents no problems. However, sometimes design elements are added or removed, field names that you had reserved for your
own use are now used by the template, or subs/functions which had been used in the old template are completely missing in the new template. So this too needs careful analysis.

Recompiling Script Libraries

Finally, for any application that has your own code, whether partly or wholly customized, you’ll want to recompile all the LotusScript under Notes/Domino 7. To do this, open the Script Libraries view in Domino Designer, and select Tools | Recompile All Scripts. Depending upon the size of the application, this may take some time, as it has to parse through all your code twice, once to build up a dependency tree and again to find any code problems.

When the compiling is complete, you will be presented with a dialog box that lists the design elements with problems. You can go directly into that design element from the dialog box to fix the problems. In the following example, we have changed a script library sub that is referenced from two forms, Recompile Script1 and Recompile Script2:

Note that if you click on one of these design elements in the dialog box and click
OK, it will open that design element for you, but it won’t put you directly into the event, button, sub, and so on that needs fixing. You’ll still have to find that yourself. One way is to make a meaningless edit (such as inserting a space and then removing it) and then try to save. The form will now catch the error, so it will more helpfully point out where the error is coming from.

Note that after you finish with that form, and save and close it, you will not go back to the preceding dialog box. Instead, you’ll be back in the Script Libraries view. To return to the dialog box, you will have to select Tools | Recompile All Scripts again.

A Final Note about Templates

If you are not experienced with templates, be careful the first few times you work with them. Templates will push changes, by default, every night into all databases referencing the templates on that server. That means that if you make changes directly to the design elements in the application, you risk having your changes overwritten by the template. Worse, if you check off the setting to prohibit design template updates, then you risk having your design out of synch with template changes.

Under Design properties in the third tab, you can select the option Prohibit design refresh or replace to Modify. But this risks making your design out of synch with the template. Typically, you would do this for a troubleshooting view that does not need to reside in the template.

On the Design tab of Database properties, you can assign a template for this database. Doing so will refresh/replace all design elements in your database, as needed, from the template on that server every night.

Summary

In this chapter, we’ve discussed several new and enhanced Notes/Domino 7 features that raise particularly interesting application upgrade issues. These features include AutoSave, the ability to reference profile documents in views, Web Services, and DB2 integration. We also took a look at managing your Notes/Domino templates to accommodate the updates and enhancements made to them in Notes/Domino 7. This information will help ensure that your critical applications continue to run smoothly after you upgrade your Notes/Domino installation, while taking advantage of the new features and functionality release 7 has to offer.

Filed Under: Misc Tagged With: Lotus Notes

Service-Oriented Architecture— An Integration Blueprint

October 15, 2010 by itadmin Leave a Comment

Service-Oriented Architecture— An Integration Blueprint

With the widespread use of service-oriented architecture (SOA), the integration of different IT systems has gained a new relevance. The era of isolated business information systems—so-called silos or stove-pipe architectures—is finally over. It is increasingly rare to find applications developed for a specific purpose that do not need to exchange information with other systems. Furthermore, SOA is becoming more and more widely accepted as a standard architecture. Nearly all organizations and vendors are designing or implementing applications with SOA capability. SOA represents an end-to-end approach to the IT system landscape as the support function for business processes. Because of SOA, functions provided by individual systems are now available in a single standardized form throughout organizations, and even outside their corporate boundaries. In addition, SOA is finally offering mechanisms that put the focus on existing systems, and make it possible to continue to use them. Smart integration mechanisms are needed to allow existing systems, as well as the functionality provided by individual applications, to be brought together into a new fully functioning whole. For this reason, it is essential to transform the abstract concept of integration into concrete, clearly structured, and practical implementation variants.

also read:

  • What is UDDI?
  • Apache Axis 2 Web Services
  • RESTFul Java Web Services

The Trivadis Integration Architecture Blueprint indicates how integration architectures can be implemented in practice. It achieves this by representing common integration approaches, such as Enterprise Application Integration (EAI); Extract, Transform, and Load (ETL); event-driven architecture (EDA); and others, in a clearly and simply structured blueprint. It creates transparency in the confused world of product developers and theoretical concepts. The Trivadis Integration Architecture Blueprint shows how to structure, describe, and understand existing application landscapes from the perspective of integration. The process of developing new systems is significantly simplified by dividing the integration architecture into process, mediation, collection and distribution, and communication layers. The blueprint makes it possible to implement application systems correctly without losing sight of the bigger picture: a high performance, flexible, scalable, and affordable enterprise architecture.

What This Book Covers

Despite the wide variety of useful and comprehensive books and other publications on the subject of integration, the approaches that they describe often lack practical relevance.
The basic issue involves, on the one hand, deciding how to divide an integration solution into individual areas so that it meets the customer requirements, and on the other hand, how it can be implemented with a reasonable amount of effort. In this case, this means structuring it in such a way that standardized, tried-and-tested basic components can be combined to form a functioning whole, with the help of tools and products. For this reason, the Trivadis Integration Architecture Blueprint subdivides the integration layer into further layers. This kind of layering is not common in technical literature, but it has been proven to be very useful in practice. It allows any type of integration problem to be represented, including traditional ETL (Extract, Transform, and Load), classic EAI (Enterprise Application Integration), EDA (event-driven architecture), and grid computing. This idea is reflected in the structure of the book.
Chapter 1, Basic Principles, covers the fundamental integration concepts. This chapter is intended as an introduction for specialists who have not yet dealt with the subject of integration.
Chapter 2, Base Technologies, describes a selection of base technologies. By far the most important of these are transaction strategies and their implementation, as well as process
modeling. In addition, Java EE Connector Architecture (JCA), Java Business Integration (JBI), Service Component Architecture (SCA), and Service Data Objects (SDO) are explained. Many other base technologies are used in real-life integration projects, but these go beyond the scope of this book.
Chapter 3, Integration Architecture Blueprint, describes the Trivadis Integration
Architecture Blueprint. The process of layering integration solutions is fully substantiated, and each step is explained on the basis of the division of work between the individual layers. After this, each of the layers and their components are described.
Chapter 4, Implementation Scenarios, demonstrates how the Trivadis Integration Architecture Blueprint represents the fundamental integration concepts that have been described in Chapter 1. We will use the blueprint with its notation and visualization to understand some common integration scenarios in a mostly product-neutral form. We will cover traditional, as well as modern, SOA-driven integration solutions.
Chapter 5, Vendor Products for Implementing the Trivadis Blueprint, completes the book with a mapping of some vendor platforms to the Trivadis Integration Architecture Blueprint.

Integration Architecture Blueprint

The Trivadis Integration Architecture Blueprint specifies the building blocks needed for the effective implementation of integration solutions. It ensures consistent quality in the implementation of integration strategies as a result of a simple, tried-and-tested structure, and the use of familiar integration patterns (Hohpe, Wolf 2004).

Standards, components, and patterns used

The Trivadis Integration Architecture Blueprint uses common standardized techniques, components, and patterns, and is based on the layered architecture principle. A layered architecture divides the overall architecture into different layers with different responsibilities. Depending on the size of the system and the problem involved, each layer can be broken down into further layers. Layers represent a logical construct, and can be distributed across one or more physical tiers. In contrast to levels, layers are organized hierarchically, and different layers can be located on the same level. Within the individual layers, the building blocks can be strongly cohesive. Extensive decoupling is needed between the layers. The rule is that higher-level layers can only be dependent on the layers beneath them and not vice versa. Each building block in a layer is only dependent on building blocks in the same layer, or the layers beneath. It is essential to create a layer structure that isolates the most important cohesive design aspects from one another, so that the building blocks within the layers are decoupled.
The blueprint is process oriented, and its notation and structure are determined by the blueprint’s dependencies and information flow in the integration process. An explanation of how the individual layers, their building blocks, and tasks can be identified from the requirements of the information flow is given on the basis of a simple scenario. In this scenario, the information is transported from one source to another target system using an integration solution.
In the blueprint, the building blocks and scenarios are described using familiar design patterns from different sources:


  • (Hohpe, Wolf 2004)

  • (Adams et al. 2001)

  • (Coral8 2007)

  • (Russel et al. 2006)


These patterns are used in a shared context on different layers. The Trivadis Integration
Architecture Blueprint includes only the integration-related parts of the overall architecture, and describes the specific view of the technical integration domain in an overall architecture. It focuses on the information flow between systems in the context of domain-driven design.
Domain-driven design is a means of communication, which is based on a profound understanding of the relevant business domain. This is subsequently modeled specifically for the application in question. Domain models contain no technical considerations and are restricted exclusively to business aspects. Domain models represent an abstraction of a business domain, which aims to capture the exemplary aspects of a specific implementation for this domain. The objectives are:

  • To significantly simplify communication between domain experts and developers by using a common language (the domain model)

  • To enable the requirements placed on the software to be defined more accurately and in a more targeted way

  • To make it possible to describe, specify, and document the software more precisely and more comprehensibly, using a clearly defined language, which will make it easier to maintain


The technical aspects of architecture can be grouped into domains in order to create specific views of the overall system. These domains cover security, performance, and other areas. The integration of systems and information also represents a specific view of the overall system, and can be turned into a domain.
Integration domain is used to mean different things in different contexts. One widely used meaning is "application domain", in other words, a clearly defined, everyday problem area where computer systems and software are used. Enterprise architectures are often divided into business and technical domains:

  • Business domains may include training, resource management, purchasing, sales or marketing, for example.

  • Technical domains are generally areas such as applications, integration, network, security, platforms, systems, data, and information management.


The blueprint, however, sees integration as a technical domain, which supports business domains, and has its own views that can be regarded as complementary to the views of other architecture descriptions.
In accordance with Evans (Evans, 2004), the Trivadis Integration Architecture Blueprint is a ubiquitous language for describing integration systems. This and the structure of the
integration domain on which it is based, have been tried and tested in a variety of integration projects using different technologies and products. The blueprint has demonstrated that it offers an easy-to-use method for structuring and documenting implementation solutions. As domain models for integration can be formulated differently depending on the target platform (for example, an object-oriented system or a classic ETL solution), the domain model is not described in terms of object orientation.
Instead, the necessary functionality takes the form of building blocks (which are often identical with familiar design patterns) on a higher level of abstraction. This makes it possible to use the blueprint in a heterogeneous development environment with profitable results.
An architecture blueprint is based on widely-used, tried-and-tested techniques, components and patterns, which are grouped into a suitable structure to meet the requirements of the target domain.
The concepts, the functionality, and the building blocks to be implemented are described in an abstract form in blueprints. These are then replaced or fine-tuned by product specific building blocks in the implementation project. Therefore, the Trivadis Integration Architecture Blueprint has been deliberately designed to be independent of individual vendors, products, and technologies. It includes integration scenarios and proposals that apply to specific problems, and can be used as aids during the project implementation process. The standardized view of the integration domain and the standardized means of representation enable strategies, concepts, solutions, and products to be compared with one another more easily in evaluations of architectures.
The specifications of the blueprint act as guidelines. Differences between this model and reality may well occur when the blueprint is implemented in a specific project. Individual building blocks and the relationships between them may not be needed, or may be grouped together. For example, the adapter and mapper building blocks may be joined together to form one component in implementation processes or products.

Structuring the integration blueprint

The following diagram is an overview of the Trivadis Integration Architecture Blueprint. It makes a distinction between the application and information view and the integration view.

[Image: 1049EN_03_01.png (overview of the Trivadis Integration Architecture Blueprint)]

The application and information view consists of external systems, which are to be connected together by an integration solution. These are source or target entities in the information flow of an integration solution. Generally one physical system can also take
on both roles. The building blocks belonging to the view, and the view itself, must be regarded as external to the integration system that is being described and, therefore, not the subject of the integration blueprint. The external systems can be divided into three main categories:


  • Transactional information storage: This includes classic relational database management systems (RDBMS) and messaging systems (queues, topics). The focus is on data integration.

  • Non-transactional information storage: This primarily includes file-based systems and non-relational data stores (NoSQL), with a focus on data integration.

  • Applications: Applications include transactional or non-transactional systems that are being integrated (ERP—Enterprise Resource Planning, CMS—Content Management System, and so on) and can be accessed through a standardized API (web service, RMI/IIOP, DCOM, and so on).
    The focus is on application and process integration.


The integration view lies at the heart of the integration blueprint and is divided (on the
basis of the principle of divide and conquer) into the following levels:

  • Transport level: The transport level encapsulates the technical details of communication protocols and formats for the external systems. It contains:


    • Communication layer: The communication layer is part of the transport level, and is responsible for transporting information. This layer links the integration solution with external systems, and represents a type of gateway to the infrastructure at an architectural level. It consists of transport protocols and formats.


  • Integration domain level: The integration domain level covers the classic areas of integration, including typical elements of the integration domain, such as adapters, routers, and filters. It is divided into:


    • Collection/distribution layer: This layer is responsible for connecting components. It is completely separate from the main part of the integration domain (mediation). The building blocks in this layer connect the mediation layer above with the communication layer below. The layer is responsible for encapsulating external protocols and their technical details from the integration application, and transforming external technical formats into familiar internal technical formats.

    • Mediation layer: This layer is responsible for forwarding information. Its main task is to ensure the reliable forwarding of information to business components in the process layer, or directly to output channels that are assigned to the collection/distribution layer, and that distribute data to the target systems. This is the most important functionality of the
      integration domain. In more complex scenarios, the information forwarding process can be enhanced by information transformation, filtering, and so on.


  • Application level: The application level encapsulates the integration management and process logic. It is an optional level and contains:


    • Process layer: The process layer is part of the application level, and is responsible for orchestrating component and service calls. It manages the integration processes by controlling the building blocks in the mediation layer (if they cannot act autonomously).


    The integration view contains additional functionality that cannot be assigned to any of the levels and layers referred to above. This functionality consists of so-called cross-cutting concerns that can be used by building blocks from several other layers. Cross-cutting concerns include:

    • Assembly/deployment: Contains configurations (often declarative or scripted) of the components and services. For example, this is where the versioning of Open Service Gateway initiative (OSGi) services is specified.

    • Transaction: Provides the transaction infrastructure used by the building blocks in the integration domain.

    • Security/management: This is the security and management infrastructure used by the building blocks in the integration domain. It includes, for example, libraries with security functionality, JMX agents and similar entities.

    • Monitoring, BAM, QoS: These components are used for monitoring operations. This includes ensuring compliance with the defined Service Level Agreements (SLA) and Quality of Service (QoS). Business Activity Monitoring (BAM) products can be used for monitoring purposes.

    • Governance: These components and artifacts form the basis for SLAs and QoS. The artifacts include business regulations, for example. In addition, this is where responsibilities, functional and non-functional requirements, and accounting rules for the services/capacities used are defined.

    Implementation scenarios

    Building on the structure of the blueprint covered in Chapter 3, Integration Architecture Blueprint, this chapter will use individual scenarios to illustrate how the business patterns can be implemented using the Integration Architecture Blueprint.

    The scenarios shown in this chapter have been deliberately designed to be independent of specific vendor products, and are based solely on the building blocks that form part of the different layers of the blueprint. The symbols used have the same semantic meaning as described in Chapter 3.

    This chapter will:


    • Explain service-oriented integration scenarios

    • Use scenarios to show how data integration business patterns can be implemented

    • Present a description of scenarios for implementing the business patterns for EAI/EII integration

    • Look in detail at the implementation of event processing business patterns

    • Describe a scenario for implementing business patterns for grid computing and Extreme Transaction Processing (XTP)

    • Explain how an SAP ERP system can be combined with the integration blueprint

    • Explain how an existing integration solution can be modernized using SOA, and describe a scenario that has already been implemented in practice

    • Combine the integration blueprint with the other Trivadis Architecture Blueprints

    Service-oriented integration scenarios

    These scenarios show how the service-oriented integration business patterns described in Chapter 1 can be implemented. These business patterns are as follows:


    • Process integration: The process integration pattern extends the 1:N topology of the broker pattern. It simplifies the serial execution of business services, which are provided by the target applications.

    • Workflow integration: The workflow integration pattern is a variant of the serial process pattern. It extends the capability of simple serial process orchestration to include support for user interaction in the execution of individual process steps.


    Implementing the process integration business pattern


    In the scenario shown in the following diagram, the process integration business pattern
    is implemented using BPEL.

    Insert image 1049EN_04_05.png

    Trigger:
    An application places a message in the queue.
    Primary flow:


    1. The message is extracted from the queue through JMS and a corresponding JMS adapter.

    2. A new instance of the BPEL integration process is started and the message is passed to the instance as input.

    3. The integration process orchestrates the integration and calls the systems that are to be integrated in the correct order.

    4. A content-based router in the mediation layer is responsible for ensuring that the correct one of the two systems is called (a minimal router sketch follows this list). However, from a process perspective, this is only one stage of the integration.

    5. In the final step, a “native” integration of an EJB session bean is carried out using an EJB adapter.
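
    To make the routing step more concrete, the following sketch shows what such a content-based router might look like in plain Java, consuming the message through the standard JMS MessageListener interface. The OrderChannel abstraction, the orderType marker, and the class names are illustrative assumptions; in the scenario itself this role is played by a building block of the integration product, not by hand-written code.

[code lang="java"]
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Sketch of a content-based router in the mediation layer (all names are assumptions).
public class ContentBasedRouter implements MessageListener {

    // Hypothetical abstraction for the two target systems.
    public interface OrderChannel {
        void send(String payload);
    }

    private final OrderChannel systemA;
    private final OrderChannel systemB;

    public ContentBasedRouter(OrderChannel systemA, OrderChannel systemB) {
        this.systemA = systemA;
        this.systemB = systemB;
    }

    @Override
    public void onMessage(Message message) {
        try {
            String payload = ((TextMessage) message).getText();
            // Decide, based on the message content, which of the two systems is called.
            if (payload.contains("<orderType>STANDARD</orderType>")) {
                systemA.send(payload);
            } else {
                systemB.send(payload);
            }
        } catch (Exception e) {
            // In a real mediation layer the message would be moved to an error channel.
            throw new RuntimeException("Routing failed", e);
        }
    }
}
[/code]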


    Variant with externalized business rules in a rule engine


    A variant of the previous scenario has the business rules externalized in a rule engine, in order to simplify the condition logic in the integration process. This corresponds to the external business rules variant of the process integration business pattern, and is shown in the form of a scenario in the following diagram:

    Insert image 1049EN_04_06.png

    Trigger:
    The JEE application sends a SOAP request.
    Primary flow:


    1. The SOAP request initiates a new instance of the integration process.

    2. The integration process is implemented as before, with the exception that in this case, a rule engine is integrated before evaluating the condition. The call to the rule engine from BPEL takes the form of a web service call through SOAP (a minimal client sketch follows this list).

    3. Other systems can be integrated via a DB adapter as shown here, for example to enable them to write to a table in an Oracle database.
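
    As a rough illustration of step 2, the following sketch shows how a client might call an externalized rule engine through SOAP. The endpoint URL, the namespace, and the evaluateOrder operation are hypothetical assumptions; a real rule engine defines its own WSDL contract, and in the scenario the call is made directly from the BPEL process rather than from Java code.

[code lang="java"]
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of a SOAP call to an externalized rule engine (endpoint and operation are assumptions).
public class RuleEngineClient {

    private static final String ENDPOINT = "http://rules.example.com/ruleservice"; // assumption

    public static String evaluateOrder(String orderId) throws Exception {
        String soapEnvelope =
            "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">"
          + "<soapenv:Body>"
          + "<evaluateOrder xmlns=\"http://rules.example.com/\">"   // hypothetical operation
          + "<orderId>" + orderId + "</orderId>"
          + "</evaluateOrder>"
          + "</soapenv:Body>"
          + "</soapenv:Envelope>";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(ENDPOINT))
                .header("Content-Type", "text/xml; charset=utf-8")
                .POST(HttpRequest.BodyPublishers.ofString(soapEnvelope))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // A real client would parse the SOAP response body for the rule result.
        return response.body();
    }
}
[/code]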


    Variant with batch-driven integration process


    In this variant, the integration process is initiated by a time-based event: a job scheduler placed in front of the BPEL process fires at a specified time and starts the process instance via a web service call. The following diagram shows the scenario:

    Insert image 1049EN_04_07.png

    Trigger:


    • The job scheduler building block issues a web service request at a specified time.


    Primary flow:

    1. The call from the job scheduler via SOAP initiates a new integration process instance (a minimal scheduler sketch follows this list).

    2. As in the previous variants, the BPEL process executes the necessary integration steps and, depending on the situation, integrates one system via a database adapter, and the other directly via a web service call.
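
    A minimal sketch of the time-based trigger is shown below, assuming a fixed 30-minute interval and leaving the SOAP request itself as a stub; in the scenario this role is played by a dedicated job scheduler product rather than hand-written Java code.

[code lang="java"]
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of a batch-driven trigger that starts the integration process at a fixed interval.
public class BatchTrigger {

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Fire every 30 minutes; the first run starts immediately (interval is an assumption).
        scheduler.scheduleAtFixedRate(BatchTrigger::startProcessInstance, 0, 30, TimeUnit.MINUTES);
    }

    private static void startProcessInstance() {
        // Here a SOAP request would be sent to the BPEL process endpoint,
        // in the same way as in the previous rule engine sketch.
        System.out.println("Triggering a new integration process instance via SOAP");
    }
}
[/code]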


    Implementing the workflow business pattern


    In this scenario, additional user interaction is added to the integration process scenario. As a result, the integration process is no longer fully automated. It is interrupted at a specific point by interaction with the end user, for example, to obtain confirmation for a certain procedure. This scenario is shown in the image below.

    Insert image 1049EN_04_08.png

    Trigger:
    An application places a message in the queue.
    Primary flow:


    1. The message is removed from the queue by the JMS adapter and a new instance of the integration process is started.

    2. The user interaction takes place through the asynchronous integration of a task service. It creates a new task, which is displayed in the user’s task list.

    3. As soon as the user has completed the task, the task service returns a callback to the relevant instance of the integration process, thereby informing the process of the user’s decision (a minimal sketch of this callback follows this list).

    4. The integration process responds to the decision and executes the remaining steps.
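
    The asynchronous task interaction can be pictured with the following sketch: the process creates a task, suspends until the user completes it, and is resumed through a callback. All class and method names are hypothetical; a real task service is a product component with its own API, persistence, and task list user interface.

[code lang="java"]
import java.util.UUID;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of an asynchronous task service with a user callback (names are assumptions).
public class TaskService {

    private final ConcurrentMap<String, CompletableFuture<String>> openTasks = new ConcurrentHashMap<>();

    // Step 2: create a task that appears in the user's task list.
    public String createTask(String description) {
        String taskId = UUID.randomUUID().toString();
        openTasks.put(taskId, new CompletableFuture<>());
        System.out.println("New task for user: " + description + " (id=" + taskId + ")");
        return taskId;
    }

    // The integration process instance suspends here until the callback arrives.
    public String awaitDecision(String taskId) throws Exception {
        return openTasks.get(taskId).get();
    }

    // Step 3: callback invoked when the user completes the task.
    public void completeTask(String taskId, String userDecision) {
        CompletableFuture<String> decision = openTasks.get(taskId);
        if (decision != null) {
            decision.complete(userDecision);
        }
    }
}
[/code]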

    Modernizing an integration solution

    This section uses an example to illustrate how an existing integration solution that has grown over time can be modernized using SOA methods, and the scenarios from the previous sections.
    The example is a simplified version of a specific customer project in which an existing solution was modernized with the help of SOA.
    The task of the integration solution is to forward orders entered in the central ERP system to the external target applications.

    Initial situation


    The current solution is primarily based on a file transfer mechanism that sends the new and modified orders at intervals to the relevant applications, in the form of files in two possible formats (XML and CSV). The applications are responsible for processing the files independently.
    At a later date, another application (IT app in the following diagram) was added to the system using a queuing mechanism, because this mechanism guarantees message delivery: new orders are read and the corresponding messages are sent to the queue within a single transaction.
    The following diagram shows the initial situation before the modernization process took place:

    Insert image 1049EN_04_21.png

    The extraction and file creation logic is written in PL/SQL. A Unix shell script is used to send the files through the File Transfer Protocol (FTP), as no direct FTP call was possible in PL/SQL. Both a shell script and the PL/SQL logic are responsible for orchestrating the integration process.
    Oracle Advanced Queuing (AQ) is used as the queuing infrastructure. As PL/SQL supports sending AQ messages through an API (a built-in package), it was possible to implement this variant of the business case entirely in PL/SQL, without needing to call a shell script. In this case, the integration is bi-directional: when the order has been processed by the external system, the application must send a feedback message to the ERP system. A second queue, implemented in the integration layer using PL/SQL, is used for this purpose.

    Sending new orders


    Trigger:

    The job scheduler triggers an event every 30 minutes for each external system that has to be integrated.

    Flow:


    1. The event triggered by the job scheduler starts a shell script, which is responsible for part of the orchestration.

    2. The shell script first starts a PL/SQL procedure that creates the files, or writes the information to the queue.

    3. The PL/SQL procedure reads all the new orders from the ERP system’s database, and enriches them with additional information about the product ordered and the customer.

    4. Depending on the external target system, a decision is made as to whether the information about the new order should be sent in the form of files, or messages in queues.

    5. The target system can determine in which format (XML or CSV) the file should be supplied. A different PL/SQL procedure is called depending on the desired format.

    6. The PL/SQL procedure writes the file in the appropriate format to the database server, using the built-in UTL_FILE package. The database server is used only for interim storage of the files, as these are uploaded to the target systems in the next step.

    7. The main shell script starts the process of uploading the files to the external system, and another shell script completes the task.

    8. The files are made available on the external system and are processed in different ways depending on the application in question.

    9. A PL/SQL procedure is called to send the order information through the queue. The procedure is responsible for formatting and sending the message.

    10. The document is now in the output queue (send) ready to be consumed.

    11. The application (IT app) consumes the messages from the queue immediately and starts processing the order.

    12. When the order has been processed, the external application sends a message to the feedback queue (receive).


    Receiving the confirmation


    Trigger:

    The job scheduler triggers an event every 15 minutes.

    Flow:


    1. The job scheduler event starts a PL/SQL procedure, which processes the feedback message.

    2. The message is consumed from the feedback queue (receive).

    3. A SQL UPDATE command updates the status of the order in the ERP database (a minimal JDBC sketch follows this list).
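
    The following sketch shows what such a status update might look like in plain JDBC. The connection details, the ORDERS table, its columns, and the status value are assumptions; in the existing solution this update is executed from PL/SQL inside the ERP database.

[code lang="java"]
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Sketch of the order status update (table, columns and status value are assumptions).
public class OrderStatusUpdater {

    public static void markProcessed(String jdbcUrl, String user, String password, long orderId) throws Exception {
        try (Connection connection = DriverManager.getConnection(jdbcUrl, user, password);
             PreparedStatement update = connection.prepareStatement(
                     "UPDATE orders SET status = ? WHERE order_id = ?")) {
            update.setString(1, "PROCESSED");   // hypothetical status value
            update.setLong(2, orderId);
            update.executeUpdate();
        }
    }
}
[/code]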


    Evaluation of the existing solution


    By evaluating the existing solution we came to the following conclusions:

    • This is an integration solution that has grown over time, using a wide variety of technologies.

    • It is a batch solution that does not allow real-time integration, or at least makes it very difficult.

    • Exchanging information in files is not really a state-of-the-art solution.


      • Data cannot be exchanged reliably, as FTP does not support transactions.

      • Error handling and monitoring are difficult and time-consuming (for example, it is not easy to detect when the IT app fails to send a response).

      • Files must be read and processed by the external applications, all of which use different methods.


    • Integrating new distribution channels (such as web services) is difficult, as neither PL/SQL nor shell scripts are the ideal solution in this case.

    • Many different technologies are used. The integration logic is distributed, which makes maintenance difficult:

      • Job scheduler (for orchestration)

      • PL/SQL (for orchestration and mediation)

      • Shell script (for orchestration and mediation)


    • Different solutions are used for files and queues.


    Many of these disadvantages are purely technical. From a business perspective, only the first disadvantage represents a real problem: the period of up to 30 minutes between the data being entered in the ERP system and the external systems being updated is clearly too long. This delay cannot simply be reduced, because the overhead of the batch solution is significant and, with shorter cycles, the total overhead would become too large. Therefore, the decision was made to modernize the existing integration solution and to transform it into an event-driven, service-oriented integration solution based on the processing of individual orders.

    Modernizing—integration with SOA

    The main objective of the modernization process, from a business perspective, is the real-time integration of orders. From a technical standpoint, there are other objectives, including continued support for batch mode through file connections: the new solution must completely replace the old one, and the two solutions should not be left running in parallel. A further technical objective is improved supportability, resulting from the introduction of a suitable infrastructure. On the basis of these considerations, a new SOA-based integration architecture was proposed and implemented, as shown in the following diagram:

    Insert image 1049EN_04_22.png

    Trigger:

    Each new order is published to a queue in the ERP database, using the change data capture functionality of the ERP system.

    Flow:


    1. The business event is consumed from the queue by an event-driven consumer building block in the ESB. The corresponding AQ adapter is used for this purpose.

    2. A new BPEL process instance is started for the integration process. This instance is responsible for orchestrating all the integration tasks for each individual order.

    3. First, the important order information concerning the products and the customer must be gathered, as the ERP system only sends the primary key for the new order in the business event. A service is called on the ESB that uses a database adapter to read the data directly from the ERP database, and compiles it into a message in canonical format.

    4. A decision is made about the system to which the order should be sent, and about whether feedback on the order is expected.

    5. In the right-hand branch, the message is placed in the existing output queue (send). A message translator building block converts the order from the canonical format to the message format used so far, before it is sent. The AQ adapter supports the process of sending the message. The BPEL process instance is paused until the callback from the external application is received.

    6. The message is processed by the external application in the same way as before. The message is retrieved, the order is processed and, at a specified time, a feedback message is sent to the feedback queue (receive).

    7. The paused BPEL process instance is reactivated and consumes the message from the feedback queue.

    8. An invoke command is used to call another service on the ESB, which modifies the status of the ERP system in a similar way to the current solution. This involves a database adapter making direct modifications to a table or record in the ERP database.

    9. In the other case, which is shown in the branch on the left, only a message is sent to the external systems. Another service is called on the ESB for this purpose, which determines the target system and the target format based on some information passed in the header of the message.

    10. The ESB uses a header-based router to support the content-based forwarding of the message (a minimal router sketch follows this list).

    11. Depending on the target system, the information is converted from the canonical format to the correct target format.

    12. The UK app already has a web service, which can be used to pass the order to the system. For this reason, this system is connected via a SOAP adapter.

    13. The two other systems continue to use the file-based interface. Therefore, an FTP adapter creates and sends the files through FTP in XML or CSV format.

    14. In order to ensure that the external application (labeled GE app in the diagram) still receives the information in batch mode, with several orders combined in one file, an aggregator building block is used. This collects the individual messages over a specific period of time, and then sends them together in the form of one large message to the target system via the FTP adapter.

    15. An aggregation process is not needed for the interface to the other external application (labeled CH app in the image), as this system can also process a large number of small files.
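
    The header-based routing and last-possible-moment format conversion described in steps 9 to 12 can be pictured with the following sketch. The "targetSystem" header, the channel abstraction, the target-to-format mapping, and the CSV conversion are illustrative assumptions; in the implemented solution these building blocks are configured on the ESB product rather than hand-coded.

[code lang="java"]
import java.util.Map;

// Sketch of a header-based router with late format conversion (all names are assumptions).
public class HeaderBasedRouter {

    // Hypothetical outbound channel (SOAP adapter, FTP adapter, and so on).
    public interface Channel {
        void send(String payload);
    }

    private final Map<String, Channel> channelsByTarget;

    public HeaderBasedRouter(Map<String, Channel> channelsByTarget) {
        this.channelsByTarget = channelsByTarget;
    }

    // Routes a message in canonical format based on a header value.
    public void route(Map<String, String> headers, String canonicalPayload) {
        String target = headers.get("targetSystem");   // e.g. "UK", "GE", "CH" (assumption)
        Channel channel = channelsByTarget.get(target);
        if (channel == null) {
            throw new IllegalArgumentException("Unknown target system: " + target);
        }
        // Convert from the canonical format to the target format as late as possible.
        String payload = "UK".equals(target)
                ? canonicalPayload                      // assumed: UK app accepts the canonical XML via SOAP
                : toCsv(canonicalPayload);              // assumed: file-based targets receive CSV
        channel.send(payload);
    }

    private String toCsv(String canonicalXml) {
        // Placeholder for a real message translator building block (for example, XSLT-based).
        return canonicalXml.replaceAll("<[^>]+>", ";");
    }
}
[/code]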


    Evaluation of the new solution


    An evaluation of the new solution shows the following benefits:

    • The orchestration is standardized and uses only one technology.

    • One BPEL instance is responsible for one order throughout the entire integration process. This simplifies the monitoring process, because the instance continues running until the order is completed; in other words, in one of the two cases, until the feedback message from the external system has been processed.

    • The orchestration is based only on the canonical format. The target system formats are generated at the last possible moment in the mediation layer.

    • Additional distribution channels can easily be added on the ESB, without having to modify the orchestration process.

    • The solution can easily support other protocols or formats that are not yet known, simply by adding an extra translator building block.
