
Java 6.0 Features Part – 2 : Pluggable Annotation Processing API

June 6, 2007 by Krishna Srinivasan Leave a Comment

1) Introduction

The first part of this article listed the major new features of Java 6 (Mustang) in areas like Common Annotations (JSR 250), Scripting for the Java Platform (JSR 223) and JDBC 4.0. This article assumes that Readers have a fair knowledge of the concepts of Java 5.0. First-time Readers of Java 6 are strongly encouraged to read the first part of this article, titled “Introduction to Java 6.0 New Features, Part–I”. This article covers the remaining features from Part-I. More specifically, it covers the Pluggable Annotation Processing API (JSR 269), the Java API for XML Binding (JSR 222) and the Streaming API for XML (JSR 173).

also read:

  • New Features in Java 6.0 – Part 1
  • New Features in Java 6.0 – Part 2
  • Java 6.0 Compiler API

2) Pluggable Annotation Processing API

2.1) Introduction to Annotation

Annotations have been part of the Java World since Java 5.0. Java Annotations are a result of JSR 175, which aimed at providing a Meta-Data Facility for the Java Programming Language. They can be used by Build-time Tools and Run-time Environments for a bunch of useful tasks like Code Generation, Validation and other valuable work. Java 6 introduces a new JSR, JSR 269, the Pluggable Annotation Processing API. With this API, it is now possible for Application Developers to write a Customized Annotation Processor that can be plugged into the compilation to operate on the set of Annotations appearing in a Source File.
Let us see in the subsequent sections how to write a Java File that makes use of Custom Annotations, along with a Custom Annotation Processor to process them.

2.2) Writing Custom Annotations

This section provides two Custom Annotations which will be used by a Sample Java File and a Custom Annotation Processor. One is the Class Level Annotation and the other is the Method Level Annotation. Following is the listing for both the Annotation Declarations. See how the Targets for the Annotations ClassLevelAnnotation.java and MethodLevelAnnotation.java are set to ElementType.TYPE and ElementType.METHOD respectively.
ClassLevelAnnotation.java

[code lang=”java”]package net.javabeat.articles.java6.newfeatures.customannotations;

import java.lang.annotation.*;

@Target(value = {ElementType.TYPE})
public @interface ClassLevelAnnotation {
}
[/code]

MethodLevelAnnotation.java

[code lang=”java”]package net.javabeat.articles.java6.newfeatures.customannotations;

import java.lang.annotation.*;

@Target(value = {ElementType.METHOD})
public @interface MethodLevelAnnotation {
}
[/code]

AnnotatedJavaFile.java

[code lang=”java”]package net.javabeat.articles.java6.newfeatures.customannotations;

@ClassLevelAnnotation()
public class AnnotatedJavaFile {

@MethodLevelAnnotation
public void annotatedMethod(){
}
}[/code]
The above is a Sample Java File that makes use of the Class Level and the Method Level Annotations. Note that @ClassLevelAnnotation is applied at the Class Level and the @MethodLevelAnnotation is applied at the method Level. This is because both the Annotation Types have been defined to be tagged to these respective Elements only with the help of @Target Annotation.
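One point worth noting: the two declarations above use the default retention policy, which keeps annotations in the class file but does not expose them to reflection. If an annotation must be visible at run-time (rather than only to compile-time processors), it needs `RetentionPolicy.RUNTIME`. The following is a minimal, self-contained sketch of this distinction; the class name `RetentionDemo` is made up for illustration and is not part of the article's sources.

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

public class RetentionDemo {

    // RUNTIME retention makes the annotation visible to reflection.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface MethodLevelAnnotation {}

    @MethodLevelAnnotation
    public void annotatedMethod() {}

    // Returns true if the annotation survived into the loaded class.
    public static boolean isAnnotated() {
        try {
            Method m = RetentionDemo.class.getMethod("annotatedMethod");
            return m.isAnnotationPresent(MethodLevelAnnotation.class);
        } catch (NoSuchMethodException e) {
            throw new RuntimeException(e);
        }
    }
}
```

With the default (CLASS) retention instead, `isAnnotationPresent` would return false even though the annotation is still recorded in the class file.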

2.3) Writing a Simple Custom Annotation Processor

TestAnnotationProcessor.java

[code lang=”java”]package net.javabeat.articles.java6.newfeatures.customannotations;

import java.util.*;
import javax.annotation.processing.*;
import javax.lang.model.*;
import javax.lang.model.element.*;

@SupportedAnnotationTypes(value= {"*"})
@SupportedSourceVersion(SourceVersion.RELEASE_6)

public class TestAnnotationProcessor extends AbstractProcessor {

@Override
public boolean process(
Set<? extends TypeElement> annotations, RoundEnvironment roundEnv){

for (TypeElement element : annotations){
System.out.println(element.getQualifiedName());
}
return true;
}
}[/code]
Let us discuss the core points in writing a Custom Annotation Processor in Java 6. The first notable thing is that the TestAnnotationProcessor class extends the AbstractProcessor class, which encapsulates an Abstract Annotation Processor. We have to declare which Annotation Types our TestAnnotationProcessor supports. This is done through the Class-Level Annotation @SupportedAnnotationTypes(). A value of "*" indicates that all types of Annotations will be processed by this Annotation Processor. The version of Source Files this Annotation Processor supports is specified through the @SupportedSourceVersion Annotation.
The javac compiler of Mustang has an option called ‘-processor’ where we can specify the Name of the Annotation Processor along with a Set of Java Source Files containing the Annotations. For example, in our case, the command syntax would be something like the following,

[code lang=”java”]javac -processor
net.javabeat.articles.java6.newfeatures.customannotations.TestAnnotationProcessor
AnnotatedJavaFile.java[/code]

The above command says that the name of the Annotation Processor is net.javabeat.articles.java6.newfeatures.customannotations.TestAnnotationProcessor and that it is going to process AnnotatedJavaFile.java. As soon as this command is issued in the console, the TestAnnotationProcessor.process() method is called, passing the Set of Annotations found in the Source Files along with the Annotation-Processing environment represented by RoundEnvironment. This TestAnnotationProcessor just lists the various Annotations present in the Sample Java File (AnnotatedJavaFile.java) by iterating over the Set.
Following is the output of the above program

[code lang=”java”]net.javabeat.articles.java6.newfeatures.customannotations.ClassLevelAnnotation
net.javabeat.articles.java6.newfeatures.customannotations.MethodLevelAnnotation[/code]

3) Streaming API for XML

3.1) Introduction

Streaming API for XML, commonly abbreviated StAX, is an API for reading and writing XML Documents. Why is another XML parsing API needed when we already have SAX (Simple API for XML) and DOM (Document Object Model)? Both the SAX and DOM parsers have their own advantages and disadvantages, and StAX provides a solution for some of the disadvantages found in both. It does not replace SAX and DOM.
SAX, which provides Event-Driven XML Processing, follows a Push-Parsing Model. In this model, Applications register Listeners in the form of Handlers with the Parser and get notified through Call-back methods. The SAX Parser takes control of the Application thread by pushing Events to the Application. StAX, by contrast, is a Pull-Parsing model, meaning the Application keeps control over parsing the XML Document by pulling (taking) Events from the Parser.
The disadvantage of the DOM Parser is that it keeps the whole XML Document Tree in memory, which is certainly problematic if the Document is large. StAX does not follow this model, and it also has the option of skipping portions of a large Document while reading.
The core StAX API falls into two categories, listed below:

  1. Cursor API
  2. Event Iterator API

Applications can use either of these two APIs for parsing XML Documents. Let us see what these APIs are in detail in the following sections.

3.2) Cursor API

The Cursor API is used to traverse an XML Document. Think of the Cursor as a kind of Pointer positioned at the start of the XML Document that moves forward through the Document when instructed. The working model of this Cursor API is very simple: given an XML Document to parse, the Parser starts reading the Document, and whenever one of the Nodes (Start Element, Attribute, Text, End Element) is found, it stops and, on request, gives information about that Node to the processing Application. The Cursor is Forward-only; it can never go backwards. Both Reading and Writing operations are possible with this Cursor API.

3.3) Sample Application

Let us consider a sample Application that reads data from an XML Document with the help of the Cursor API. Following is the sample XML Document, which contains a list of Events in a person's Calendar.
myCalendar.xml

[code lang=”xml”]<calendar>

<event type = "meeting">
<whom>With my Manager</whom>
<where>At the Conference Hall</where>
<date>June 09 2007</date>
<time>10.30AM</time>
</event>

<event type = "birthday">
<whom>For my Girl Friend</whom>
<date>01 December</date>
</event>

</calendar>[/code]

ReadingUsingCursorApi.java

[code lang=”java”]package net.javabeat.articles.java6.newfeatures.stax;

import java.io.*;
import javax.xml.stream.*;
import javax.xml.stream.events.*;

public class ReadingUsingCursorApi {

private XMLInputFactory inputFactory = null;
private XMLStreamReader xmlReader = null;

public ReadingUsingCursorApi() {
inputFactory = XMLInputFactory.newInstance();
}

public void read() throws Exception{

xmlReader = inputFactory.createXMLStreamReader(
new FileReader(".\\src\\myCalendar.xml"));

while (xmlReader.hasNext()){

int eventType = xmlReader.next();
if (eventType == XMLEvent.START_ELEMENT){
System.out.print(" " + xmlReader.getName() + " ");
}else if (eventType == XMLEvent.CHARACTERS){
System.out.print(" " + xmlReader.getText() + " ");
}else if (eventType == XMLEvent.END_ELEMENT){
System.out.print(" " + xmlReader.getName() + " ");
}
}
xmlReader.close();
}

public static void main(String args[]){
try{
ReadingUsingCursorApi obj = new ReadingUsingCursorApi();
obj.read();
}catch(Exception exception){
exception.printStackTrace();
}
}
}[/code]
XMLInputFactory is the Factory Class for creating Input Stream objects, represented here by XMLStreamReader. An instance of XMLStreamReader is created by calling XMLInputFactory.createXMLStreamReader(), passing the XML File to be parsed. At this stage, the Parser is ready to read the XML Contents through a combination of calls to XMLStreamReader.hasNext() and XMLStreamReader.next(). The entire Document is traversed in the while loop, and the appropriate Node's value is taken by checking the various Event Types.
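One subtlety worth noting: during ordinary parsing, next() does not return standalone ATTRIBUTE events; attributes are read directly from the reader while it is positioned on a START_ELEMENT. The following self-contained sketch (the class name AttributeDemo is made up for illustration, and the XML is passed in as a String rather than read from a file) shows how the type attribute of an event element would be retrieved:

```java
import java.io.StringReader;
import javax.xml.stream.*;

public class AttributeDemo {

    // Returns the value of the "type" attribute of the last <event> element.
    public static String readTypeAttribute(String xml) {
        try {
            XMLStreamReader r = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new StringReader(xml));
            String value = null;
            while (r.hasNext()) {
                if (r.next() == XMLStreamConstants.START_ELEMENT
                        && r.getLocalName().equals("event")) {
                    // Attributes are read off the current START_ELEMENT;
                    // next() never returns a standalone ATTRIBUTE event here.
                    for (int i = 0; i < r.getAttributeCount(); i++) {
                        if (r.getAttributeLocalName(i).equals("type")) {
                            value = r.getAttributeValue(i);
                        }
                    }
                }
            }
            r.close();
            return value;
        } catch (XMLStreamException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Called with the meeting event from myCalendar.xml, this returns the string "meeting".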

3.4) Event Iterator API

The Working Model of the Event Iterator API is not much different from the Cursor API. As the Parser traverses the XML Document, whenever one of the Nodes is found, it provides this information to the processing Application in the form of XML Events. Applications can loop over the entire Document by requesting the Next Event. The Event Iterator API is implemented on top of the Cursor API.

3.5) Sample Application

Now let us walk through a Sample Application that uses the Event Iterator API to parse the same XML Document, myCalendar.xml.
ReadingUsingEventIterator.java

[code lang=”java”]package net.javabeat.articles.java6.newfeatures.stax;

import java.io.*;
import javax.xml.stream.*;
import javax.xml.stream.events.*;

public class ReadingUsingEventIteratorApi {

private XMLInputFactory inputFactory = null;
private XMLEventReader xmlEventReader = null;

public ReadingUsingEventIteratorApi() {
inputFactory = XMLInputFactory.newInstance();
}

public void read() throws Exception{

xmlEventReader = inputFactory.createXMLEventReader(
new FileReader(".\\src\\myCalendar.xml"));

while (xmlEventReader.hasNext()){

XMLEvent xmlEvent = xmlEventReader.nextEvent();
if (xmlEvent.isStartElement()){
System.out.print(" " + xmlEvent.asStartElement().getName() + " ");
}else if (xmlEvent.isCharacters()){
System.out.print(" " + xmlEvent.asCharacters().getData() + " ");
}else if (xmlEvent.isEndElement()){
System.out.print(" " + xmlEvent.asEndElement().getName() + " ");
}
}

xmlEventReader.close();
}

public static void main(String args[]){
try{
ReadingUsingEventIteratorApi obj = new ReadingUsingEventIteratorApi();
obj.read();
}catch(Exception exception){
exception.printStackTrace();
}
}
}[/code]
If the XMLStreamReader class represents the Reader for stream-reading XML Contents, then XMLEventReader is the class for reading an XML Document as XML Events (represented by javax.xml.stream.events.XMLEvent). The rest of the reading logic is the same as in ReadingUsingCursorApi.java.
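As noted in the Cursor API section, StAX supports writing as well as reading. The article does not include a writing sample, so the following is a minimal sketch (the class name CalendarWriterSketch is made up) that recreates a fragment of myCalendar.xml in memory using XMLStreamWriter, the writing counterpart of XMLStreamReader:

```java
import java.io.StringWriter;
import javax.xml.stream.*;

public class CalendarWriterSketch {

    // Writes a small fragment of the myCalendar.xml document to a String.
    public static String write() {
        try {
            StringWriter out = new StringWriter();
            XMLStreamWriter writer =
                    XMLOutputFactory.newInstance().createXMLStreamWriter(out);
            writer.writeStartDocument();
            writer.writeStartElement("calendar");
            writer.writeStartElement("event");
            writer.writeAttribute("type", "meeting");
            writer.writeStartElement("whom");
            writer.writeCharacters("With my Manager");
            writer.writeEndElement();   // </whom>
            writer.writeEndElement();   // </event>
            writer.writeEndElement();   // </calendar>
            writer.writeEndDocument();
            writer.close();
            return out.toString();
        } catch (XMLStreamException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The writer mirrors the reader: each writeStartElement() call must be balanced by a writeEndElement(), just as the reader reports paired START_ELEMENT and END_ELEMENT events.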

4) Conclusion

This article covered the features left over from Part I of the New Features of Java 6. Starting with the Pluggable Annotation Processing API, it explained what Annotations are, then showed how to write Custom Annotations and a Custom Annotation Processor. It then walked through the Cursor API and the Event Iterator API of StAX, showing how to read XML Documents with well-defined samples. Finally, through the Java API for XML Binding, it detailed how to map between Java objects and XML Documents with some sample applications.


Introduction to Java 6.0 New Features, Part–1

June 6, 2007 by Krishna Srinivasan Leave a Comment

1) Introduction

This article covers the various new features of Java 6, also known as Mustang. It assumes that readers have sufficient knowledge of the concepts and terminologies of Java 5.0. For more information on Java 5.0, readers can visit the resources available in javabeat here. Though there are no significant changes at the Language Level, Mustang comes with a bunch of enhancements in other areas like Core, XML and Desktop. Most of the features are applicable to both the J2SE and J2EE Platforms.

also read:

  • New Features in Java 6.0 – Part 1
  • New Features in Java 6.0 – Part 2
  • Java 6.0 Compiler API

2) Java 6 Features

A feature or an enhancement in Java is encapsulated in the form of a JSR. A JSR (Java Specification Request) is a formal proposal detailing the need for a specific functionality to be available in the Java Platform for use by Applications. These JSRs are reviewed and released by a committee called the Java Expert Groups (JEG). This article covers the following list of features (or JSRs) that come along with the Java 6 Platform.

  • Pluggable Annotation Processing API (JSR 269)
  • Common Annotations (JSR 250)
  • Java API for XML Based Web Services – 2.0 (JSR 224)
  • JAXB 2.0 (JSR 222)
  • Web Services Metadata (JSR 181)
  • Streaming API for XML (JSR 173)
  • XML Digital Signature (JSR 105)
  • Java Class File Specification Update (JSR 202)
  • Java Compiler API (JSR 199)
  • JDBC 4.0 (JSR 221)
  • Scripting in the Java Platform (JSR 223)

The JSRs covered in this article are Common Annotations, JDBC 4.0 and Scripting in the Java Platform. The rest of the JSRs will be covered in the subsequent article.

3) Common Annotations

The aim of having the Common Annotations API in the Java Platform is to avoid applications defining their own Annotations, which would result in a large number of Duplicates. This JSR is targeted to cover Annotations in both the Standard and the Enterprise Environments. The packages that contain the annotations are javax.annotation and javax.annotation.security. Let us discuss in brief the commonly used Annotations available in this JSR in the subsequent sections.

3.1) @Generated Annotation

Not all the source code in an application is hand-written by Developers. With the increasing number of Tools and Frameworks, much of the common Boiler-Plate Code is generated by the Tools or Frameworks themselves when properly instructed. Such Tool-Generated Code can be marked with the @Generated Annotation. Consider the following sample code snippet,
MyClass.java

[code lang=”java”]import javax.annotation.Generated;

public class MyClass{

public void developerCode(){
}

@Generated(
value = "ClassNameThatGeneratedThisCode",
comments = "This is Tool Generated Code",
date = "5 June 2007"
)
public void toolGeneratedCode(){
}

}[/code]
The value for the @Generated Annotation would usually be the name of the class that generated the code. Optionally, comments and a date can be given to add more clarity to the generated code. Note that this @Generated Annotation is not limited to a method definition; it can also be applied to a Package Declaration, Class Declaration, Interface Declaration, Local Variable Declaration, Field Declaration, Parameter Declaration etc.

3.2) @Resource and @Resources Annotation

Any Class or Component that provides some useful functionality to an Application can be thought of as a Resource and the same can be marked with @Resource Annotation. This kind of Annotation can be seen normally in J2EE Components such as Servlets, EJB or JMS. For example consider the following code snippet,

[code lang=”java”]@Resource(name = "MyQueue", type = javax.jms.Queue.class,
shareable = false,
authenticationType = Resource.AuthenticationType.CONTAINER,
description = "A Test Queue"
)
private javax.jms.Queue myQueue;[/code]
Queue is a class available as part of the JMS API, and it serves as a target for Asynchronous Messages sent by Applications. The properties to note in the @Resource Annotation are:

  • ‘name’ – the JNDI Name of this resource.
  • ‘type’ – the type of the resource, usually the Fully Qualified Class Name.
  • ‘shareable’ – whether this resource can be shared by other components in the Application.
  • ‘authenticationType’ – the type of authentication to be performed, either by the Container or by the Application; the possible values are AuthenticationType.CONTAINER and AuthenticationType.APPLICATION.
  • ‘description’ – a string describing the purpose of this resource.

When an Application containing @Resource Annotations is deployed to a Server, the Container scans for all the Resource references at Application-loading time and populates the @Resource References by assigning new instances.
@Resources is nothing but a collection of @Resource entries. Following is a sample code that defines the @Resources Annotation,

[code lang=”java”]
@Resources({
@Resource(name = "myQueue", type = javax.jms.Queue.class),
@Resource(name = "myTopic", type = javax.jms.Topic.class)
})
[/code]

3.3) @PostConstruct and @PreDestroy

J2EE Components are usually created by the Container or Framework on which they are deployed. The Container creates new components by calling the Default (No-Argument) Constructor. It is a very common need for a component to be initialized with some default values after it has been created, and the @PostConstruct Annotation serves that purpose. It is a Method-Level Annotation, meaning it can be applied only to a Method, and the annotated method is invoked immediately after the Component has been created through its Constructor.
Consider the following set of code,
MyDbConnectionComponent.java

[code lang=”java”]public class MyDbConnectionComponent{

public MyDbConnectionComponent(){
}

@PostConstruct()
public void loadDefaults(){

// Load the Driver Class.
// Get the Connection and Do other stuffs.

}
}[/code]
We can see that the @PostConstruct Annotation is normally used to initialize Resources that are Context-specific. The loadDefaults() method, marked with the @PostConstruct Annotation, is called by the Container immediately after an instance of MyDbConnectionComponent is created. Certain guidelines must be followed when defining the PostConstruct method: the method must not be static, its return type must be void, and it cannot throw any Checked Exceptions.
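The "construct, then fire @PostConstruct" sequence described above can be simulated in plain Java. The following sketch is not container code from the article; it declares a local stand-in for the PostConstruct annotation (so it has no Java EE dependency) and uses reflection the way a container would:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

public class LifecycleDemo {

    // Local stand-in for javax.annotation.PostConstruct, declared here so
    // the sketch does not depend on a container or the Java EE APIs.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface PostConstruct {}

    public static class Component {
        public boolean initialized = false;

        @PostConstruct
        public void init() { initialized = true; }
    }

    // Mimics the container: invoke the constructor, then fire every
    // @PostConstruct method before handing the instance out.
    public static Component create() {
        try {
            Component c = new Component();
            for (Method m : Component.class.getDeclaredMethods()) {
                if (m.isAnnotationPresent(PostConstruct.class)) {
                    m.invoke(c);
                }
            }
            return c;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

Every Component returned by create() has already had its init() method run, which is exactly the guarantee a real Container gives for @PostConstruct methods.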
The counterpart to @PostConstruct Annotation is the @PreDestroy Annotation. From the name of the Annotation itself, we can infer that the method that is marked with this Annotation will be called before an object is about to be removed or destroyed by the Container. Like the @PostConstruct Annotation, this is also a Method-Level Annotation and the following code snippet proves this,
MyDbConnectionComponent.java

[code lang=”java”]public class MyDbConnectionComponent{

public MyDbConnectionComponent(){
}

@PreDestroy()
public void releaseResources(){

// Close the Connection.
// Unload the Class Driver from the System

}
}
[/code]
The method releaseResources() will be called by the Container just before the object is destroyed. Resource-releasing code is an ideal candidate to be placed in the @PreDestroy-annotated method.

3.4) Role Based Annotations

The following sections discuss the various Role-Based Annotations that are very common in Security-conscious Applications. A typical Application is accessed by a wide range of Users, and Users themselves fall into Several Roles. Considering an IT Organization, Employees fall into General Categories of Roles, namely Admin, Director, Manager, Engineer, Programmer etc. It is very common to see Applications following a Role-Based Security Model. The Annotations @DeclareRoles, @RolesAllowed, @PermitAll, @DenyAll and @RunAs are Role-Based Annotations and are covered here.

3.4.1) @DeclareRoles Annotations

This is a Class-Level Annotation, meaning it is applicable only to a Class Declaration. When applied to a Class or Component, it declares the valid Set of Roles available for that Component. Consider the following code, which clarifies this,
LeaveService.java

[code lang=”java”]@DeclareRoles(value = {"Director", "Manager", "Others" })
public class LeaveService{

@Resource
private Context context;

public void applyLeave(){

// Any employee can apply for leave. So no need for any
// conditional check.
}

public void grantLeave(){
if(checkUserInRole()){
// Grant Leave.
}
}

public void cancelLeave(){
if(checkUserInRole()){
// Cancel Leave.
}
}

private boolean checkUserInRole(){
if( (context.isCallerInRole("Director") )
|| (context.isCallerInRole("Manager")) ){
return true;
}

return false;
}
}[/code]
In the above example, the component LeaveService is marked with the @DeclareRoles Annotation with Role Name values Director, Manager and Others. It has three services: applying for leave (applyLeave()), granting leave (grantLeave()) and cancelling leave (cancelLeave()). Only Employees in a Superior Role (Director or Manager) may grant or cancel leave for their sub-ordinates, so additional conditional checks ensure that the User accessing the grantLeave() or cancelLeave() service belongs to one of those Roles. Since any employee in a company can apply for leave (the Role Name Others), no conditional checks are done in the applyLeave() method.

3.4.2) @RolesAllowed Annotation

This is a Class/Method Level Annotation which is used to grant access to some Service(s) to the defined set of Users who are mentioned by their Role Names in the Annotation. Let us get into the following example straightaway,
MyServiceComponent.java

[code lang=”java”]@DeclareRoles({"A", "B", "C", "X", "Y", "Z"})
@RolesAllowed({"A", "B", "C"})
public class MyServiceComponent{

@RolesAllowed(value = {"A", "X", "Y"} )
public void myService1(){
}

@RolesAllowed(value = {"B", "Y", "Z"} )
public void myService2(){
}

@RolesAllowed(value = {"X", "Y", "Z"} )
public void myService3(){
}

public void myService4(){
}
}[/code]
The above code declares various roles namely “A”, “B”, “C”, “X”, “Y” and “Z” for the component MyServiceComponent. The @RolesAllowed Annotation when applied to a method grant Access to Users who are in that Roles only. For example, only Users with Roles “A” or “X” or “Y” are allowed to access the method myService1(). In the case of myService2(), “B”, “Y” or “Z” role Users are allowed to access it and so on.
What happens in the case of myService4()?
No @RolesAllowed is specified for this method. If a method doesn’t have a @RolesAllowed Annotation attached to it, it inherits this property from the class where it is defined. So, in our case, Users in Role “A”, “B” or “C” can access the method myService4(), because this set of Roles is defined at the Class Level. What if the Class Declaration itself doesn’t have a @RolesAllowed Annotation? The answer is simple: it takes all the Roles defined in @DeclareRoles.
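The method-level, class-level, then @DeclareRoles fallback described above can be made concrete with a small reflection sketch. This is not the container's actual resolution code; it uses locally declared stand-ins for the javax.annotation.security annotations so the example is self-contained:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.List;

public class RoleResolutionDemo {

    // Local stand-ins for the javax.annotation.security annotations.
    @Retention(RetentionPolicy.RUNTIME) @interface DeclareRoles { String[] value(); }
    @Retention(RetentionPolicy.RUNTIME) @interface RolesAllowed { String[] value(); }

    @DeclareRoles({"A", "B", "C", "X", "Y", "Z"})
    @RolesAllowed({"A", "B", "C"})
    public static class MyServiceComponent {
        @RolesAllowed({"A", "X", "Y"})
        public void myService1() {}

        public void myService4() {}
    }

    // Method-level @RolesAllowed wins; otherwise the class level applies;
    // otherwise every role from @DeclareRoles is allowed.
    public static List<String> effectiveRoles(String methodName) {
        try {
            Method m = MyServiceComponent.class.getMethod(methodName);
            RolesAllowed ra = m.getAnnotation(RolesAllowed.class);
            if (ra == null) {
                ra = MyServiceComponent.class.getAnnotation(RolesAllowed.class);
            }
            if (ra != null) {
                return Arrays.asList(ra.value());
            }
            DeclareRoles dr =
                    MyServiceComponent.class.getAnnotation(DeclareRoles.class);
            return Arrays.asList(dr.value());
        } catch (NoSuchMethodException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Here effectiveRoles("myService1") yields the method-level roles A, X and Y, while effectiveRoles("myService4") falls back to the class-level roles A, B and C, matching the rule stated in the text.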

3.4.3) @PermitAll and @DenyAll Annotation

These are Class/Method Level Annotations and if applied to a Class Declaration will affect all the methods in the class, and when applied to a method will affect that method only.
Consider the following sample,
MyClass.java

[code lang=”java”]@DeclareRoles(value = {"A", "B", "C"} )
class MyClass{

@PermitAll()
public void commonService(){
}

@DenyAll
public void confidentialService(){
}
}[/code]
From the above code, it can be inferred that the commonService() method is accessible by all Users irrespective of their Roles, as it is marked with the @PermitAll() annotation, and that no one can access confidentialService(), because it is marked with the @DenyAll() annotation.

4) Scripting Language for the Java Platform

4.1) Introduction

Java 6 provides a Common Scripting Framework for integrating various Scripting Languages into the Java Platform. Most of the popular Scripting Languages, like JavaScript, PHP, BeanShell and Pnuts, can be seamlessly integrated with the Java Platform. Intercommunication between Scripting Languages and Java Programs is now possible because of this: Scripting Code can access the Set of Java Libraries, and Java Programs can directly embed Scripting Code. Java Applications also have the option of Compiling and Executing Scripts, which leads to good performance, provided the Scripting Engine supports this feature. Let us have a high-level overview of this JSR before going into detail.
The following sections cover the two core components of the Scripting Framework, namely,

  • Language Bindings
  • The Scripting API

4.2) Language Bindings

The Language Bindings provide mechanisms for establishing communication between Java Code and Script Code. More specifically, they deal with how a Java Object created by Script Code is stored and maintained by the Scripting Engine, how Script Arguments are converted back and forth, how Calls made to Java Methods from within Scripting Code are translated, and how return values from Java Code are made available to the Scripting Code. To illustrate this further, consider the following JavaScript Code,

[code lang=”java”]var aJavaString = new String("A Test String");[/code]
This piece of code essentially creates a new Java String object and assigns it to a JavaScript object called aJavaString which is of var type. As soon as the Rhino Script Engine (which is the Scripting Engine name for JavaScript) parses this Script Code, it has to create a new instance of String object and should store it somewhere. The place where Java Objects and Script objects are stored are referred to as Bindings in Java Scripting Terminology. Here, aJavaString acts as a Proxy Object for the real java.lang.String object.
Again consider the following code,

[code lang=”java”]var aJavaString = new java.lang.String("A Test String");
var length = aJavaString.substring(7, 13);[/code]
So many Micro-Level Tasks are involved if we analyse the above piece of code carefully. They are listed as follows,

  1. Script Arguments are Converted to Java Arguments
  2. Java Method to be invoked is identified
  3. Logic is performed on the invoked Java Method
  4. Return Values, if any, are sent back to the Scripting Code

The first line creates a new instance of a java.lang.String object and stores it in a JavaScript object called aJavaString, which acts as a Proxy Object for the original java.lang.String object. The arguments 7 and 13 are then passed from the Script Code to the method String.substring(int, int). The Engine converts the arguments using the Default Script-to-Java Mappings, which are specific to each Script Engine. The Scripting Engine then ensures the availability of the method String.substring(int, int) at run-time, relying heavily on the Java Reflection API. The method is invoked and its logic executed. The return value is converted using the Java-to-Script Mappings defined by the Script Engine and populated into the length JavaScript object.

4.3) Scripting API

This API allows Java Programs to take full advantage of directly embedding Scripting Code into Applications. It also provides a framework wherein New Scripting Engines can be easily plugged in. The entire API is available in the javax.script package.
The ScriptEngineManager class provides mechanisms for Searching and Adding Scripting Engines to the Java Platform. Convenient methods are available for Discovering Existing Scripting Engines. For example, the following code iterates over and lists all the Script Engines available in the Java 6 Distribution.

[code lang=”java”]ScriptEngineManager manager = new ScriptEngineManager();
List<ScriptEngineFactory> allFactories = manager.getEngineFactories();

for(ScriptEngineFactory engineFactory : allFactories){
System.out.println("Engine Name: " + engineFactory.getEngineName());
}
[/code]
ScriptEngineFactory is the Factory Class for creating ScriptEngine objects. There are two ways to get instances of ScriptEngine classes: one is to call the method ScriptEngineFactory.getScriptEngine(); the other is to ask the ScriptEngineManager directly through getEngineByName().

[code lang=”java”]ScriptEngineManager manager = new ScriptEngineManager();

// First way: through a ScriptEngineFactory.
ScriptEngineFactory rhinoFactory = manager.getEngineFactories().get(0);
ScriptEngine engineFromFactory = rhinoFactory.getScriptEngine();

// Second way: directly from the manager, by engine name.
ScriptEngine rhinoEngine = manager.getEngineByName("javascript");
[/code]
Execution of scripts is done by calling the ScriptEngine.eval(String) method.
Bindings, represented by javax.script.Bindings, provide key/value information to ScriptEngines. Two Binding Scopes are available: the Global Scope and the Engine Scope. By default, the ScriptEngineManager manages a set of Default Bindings, which can be obtained by calling ScriptEngineManager.getBindings(). Bindings obtained in this manner (i.e. from the ScriptEngineManager) are called Global Bindings. Following is the code to get a reference to the Global Bindings object and populate it with application-specific values.

[code lang=”java”]Bindings globalBindings = scriptEngineManager.getBindings();
globalBindings.put("key1", "value1");
globalBindings.put("key2", "value2");[/code]

It is also possible to customize the Binding information on an Engine-by-Engine basis. For example, to obtain Bindings specific to a particular Engine, a call to ScriptEngine.getBindings() can be made. Bindings obtained in this way are called Engine Bindings.
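Whichever scope they belong to, Bindings are essentially a Map keyed by String. A minimal, engine-free sketch of that idea (the class name BindingsDemo is made up; SimpleBindings is the plain Map-backed implementation shipped in javax.script):

```java
import javax.script.Bindings;
import javax.script.SimpleBindings;

public class BindingsDemo {

    // Bindings behaves like a Map<String, Object>; SimpleBindings can be
    // used directly, without instantiating any script engine.
    public static Object roundTrip() {
        Bindings bindings = new SimpleBindings();
        bindings.put("key1", "value1");
        bindings.put("key2", "value2");
        return bindings.get("key1");
    }
}
```

The same put()/get() calls are what the article's later sample uses to pass strValue into the script and read result back out.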

4.4) Sample Code

The following Sample Application demonstrates how to directly embed Scripting Code in Java Applications. Argument passing between Java Code and Script Code is also illustrated in this Sample.

[code lang=”java”]package test;

import javax.script.*;

public class ScriptTest{

    public static void main(String[] args){

        try{
            // Create an instance of the Scripting manager.
            ScriptEngineManager manager = new ScriptEngineManager();

            // Get the reference to the rhino scripting engine.
            ScriptEngine rhinoEngine = manager.getEngineByName("javascript");

            // Get the Binding object for this Engine.
            Bindings bindings = rhinoEngine.getBindings(ScriptContext.ENGINE_SCOPE);

            // Put the input value to the Binding.
            bindings.put("strValue", "A Test String");

            // Populate the script code to be executed.
            StringBuilder scriptCode = new StringBuilder();
            scriptCode.append("var javaString = new java.lang.String(strValue);");
            scriptCode.append("var result = javaString.length();");

            // Evaluate the Script code.
            rhinoEngine.eval(scriptCode.toString());

            // Take the output value from the script, i.e from the Bindings.
            int strLength = (Integer)bindings.get("result");

            System.out.println("Length is " + strLength);
        }catch(Exception exception){
            exception.printStackTrace();
        }
    }
}
[/code]
In the above code, a new instance of ScriptEngineManager is created in the very first line. Then the scripting engine that ships with Mustang (the Rhino JavaScript engine) is obtained by calling ScriptEngineManager.getEngineByName(“javascript”). Arguments are passed to and from the Java code with the help of Bindings. The input string to be processed is added to the Bindings with the call bindings.put(“strValue”, “A Test String”). Notice how the input string is referenced within the script code at run-time: var javaString = new java.lang.String(strValue). It means that at run-time the script effectively becomes var javaString = new java.lang.String(‘A Test String’). The script is then executed by calling the ScriptEngine.eval(String) method. The output, which is the length of the input string, is now in the script variable called result. And as mentioned previously, since all the script and Java objects are maintained and controlled by the Bindings, the value of the script variable result can be read directly by calling bindings.get(“result”).

6) Conclusion

This article covered some of the new features available in Java 6. It started off with the JSR that defines a common set of annotations for use by application programs. Each annotation in that package was briefly explained, with examples to make it more understandable. Then the common scripting framework for the Java platform was discussed in depth, along with the concept of Bindings, the Scripting API and a sample program.

Filed Under: Java Tagged With: Java 6.0

The Java 6.0 Compiler API

April 1, 2007 by Krishna Srinivasan

Introduction

One of the cool features available in Java 6.0 (Mustang) is the Java Compiler API. This API is the result of JSR (Java Specification Request) 199, which proposed a standard way to compile Java source files. The result of the JSR is the new Java Compiler API, and one can use it to compile Java source files from within Java code. Previously, developers had to depend on low-level tricks such as starting a process around javac.exe. Though this feature is not intended for everyone, editors and IDEs (Integrated Development Environments) can make good use of it for compiling Java source files in a cleaner manner.

Compiler API

All the API (the client interfaces and classes) developers need for working with the Java Compiler API is available in the new javax.tools package. This package not only provides classes and methods for invoking a Java compiler, it also defines a common interface for representing any kind of Tool. A tool is generally a command-line program (like javac.exe, javadoc.exe or javah.exe).

also read:

  • New Features in Java 6.0 – Part 1
  • New Features in Java 6.0 – Part 2
  • New Features in JDBC API (Java 6.0)
  • What is New in Java 8.0

Instead of looking into all the classes and interfaces available in the javax.tools package, it makes more sense to go through some sample programs and then examine what the classes and methods are doing.

Compiling a java source file from another Java source

Following is a small sample program that demonstrates how to compile a Java source file from another Java file on the fly.

[All the examples given here are written and tested with Mustang build 1.6.0-b105, and it seems that more API changes and restructuring of classes and methods are occurring in the newer builds].

[code lang=”java”]MyClass.java:

package test;

public class MyClass {
    public void myMethod(){
        System.out.println("My Method Called");
    }
}[/code]

Listing for SimpleCompileTest.java that compiles the MyClass.java file.

[code lang=”java”]SimpleCompileTest.java:

package test;

import javax.tools.*;

public class SimpleCompileTest {
    public static void main(String[] args) {
        String fileToCompile = "test" + java.io.File.separator + "MyClass.java";
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        int compilationResult = compiler.run(null, null, null, fileToCompile);
        if(compilationResult == 0){
            System.out.println("Compilation is successful");
        }else{
            System.out.println("Compilation Failed");
        }
    }
}[/code]

The entry point for getting a compiler instance is the ToolProvider class, which provides methods for locating a Tool object. (Remember, a Tool can be anything like javac, javadoc, rmic or javah.) Though the only tool available in the Mustang build (1.6.0-b105) is the JavaCompiler as of now, more tools are expected to be added in the future.

The getSystemJavaCompiler() method in the ToolProvider class returns an object of some class that implements JavaCompiler (JavaCompiler is an interface that extends Tool interface and not a class). To be more specific, the method returns a JavaCompiler implementation that is shipped along with the Mustang Distribution. The implementation of the Java Compiler is available in the tools.jar file (which is usually available in the <JDK60_INSTALLATION_DIR>\lib\tools.jar).

After getting an instance of the JavaCompiler, compilation on a set of files (also known as compilation units) can be done by invoking the run(InputStream inputStream, OutputStream outputStream, OutputStream errorStream, String … arguments) method. To use the defaults, null can be passed for the first three parameters (which correspond to System.in, System.out and System.err), the fourth parameter which cannot be null is a variable argument that refers to the command-line arguments we usually pass to the javac compiler.

The file we are going to compile is MyClass.java, which is in the same test package as SimpleCompileTest. The complete file name (along with the directory ‘test’) is passed as the fourth argument to the run method. If there are no errors in the source file (MyClass.java, in this case), the method returns 0, which means the source file compiled successfully.

After compiling and running the SimpleCompileTest.java, one can see the following output message in the console.

‘Compilation is successful’

Let us modify the source code by introducing a small error by removing the semi-colon after the end of the println() statement and see what happens to the output.

[code lang=”java”]MyClass.java:

package test;

public class MyClass {
    public void myMethod(){
        System.out.println("My Method Called")
        // Semi-colon removed here purposefully.
    }
}[/code]

Now, running the SimpleCompileTest.java leads to the following output,

[code]test\MyClass.java:5: ';' expected
System.out.println("My Method Called")
^
1 error
Compilation Failed[/code]

This is the error message one normally sees when compiling a Java file with javac.exe at the command prompt. Since we passed null for the error output stream (which therefore defaults to System.err, the console), the error messages appear in the console. If instead we point the errorStream somewhere else, like this,

[code lang=”java”]FileOutputStream errorStream = new FileOutputStream("Errors.txt");
int compilationResult = compiler.run(null, null, errorStream, fileToCompile);[/code]

a new file called Errors.txt will be created in the current directory and the file would be populated with the error messages that we saw before.
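The error stream need not be a file; any OutputStream works. As a sketch (assuming the program runs under a JDK rather than a bare JRE, so that getSystemJavaCompiler() returns a compiler, and with hypothetical class and method names), the snippet below compiles a deliberately broken source written to a temporary file and captures the messages in memory:

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileWriter;
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

public class ErrorCapture {
    // Compiles a deliberately broken source file and returns the captured error text.
    static String compileBroken() {
        try {
            File src = File.createTempFile("Broken", ".java");
            FileWriter writer = new FileWriter(src);
            writer.write("class Broken { void m() { int x = } }"); // syntax error on purpose
            writer.close();

            JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
            ByteArrayOutputStream errors = new ByteArrayOutputStream();
            int result = compiler.run(null, null, errors, src.getPath()); // 0 means success
            src.delete();
            return result == 0 ? "" : errors.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(compileBroken());
    }
}
```

Capturing the messages in a ByteArrayOutputStream makes it easy to log them, show them in an editor pane, or parse them, instead of leaving them on the console.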

Compiling multiple files

One might be tempted to think that the following code will work for compiling multiple java files (assuming that the two files that are to be compiled are One.java and Two.java).

[code lang=”java”]String filesToCompile = "One.java Two.java";[/code]

But surprisingly, when you try this, you get a ‘Compilation Failed’ message in the console.

The reason is that the JavaCompiler needs to receive each option and argument as an individual string; options and arguments must not be joined together with spaces.

So, this won’t work at all.

[code lang=”java”]compiler.run(null, null, null, "One.java Two.java");[/code]

But, the below code will work nicely.

[code lang=”java”]compiler.run(null, null, null, "One.java", "Two.java");[/code]

One reason for this restriction is that complete Java file names (which include the directory path) can themselves contain white-space; in such cases it would be difficult for a parser to split a single string correctly into options and arguments.

Following is a sample code that compiles multiple java files.

[code lang=”java”]MyClass.java:

package test;

public class MyClass {
}

MyAnotherClass.java:

package test;

public class MyAnotherClass {
}

MultipleFilesCompileTest.java:

package test;

import javax.tools.*;

public class MultipleFilesCompileTest {
    public static void main(String[] args) throws Exception{
        String file1ToCompile = "test" + java.io.File.separator + "MyClass.java";
        String file2ToCompile = "test" + java.io.File.separator + "MyAnotherClass.java";
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        int compilationResult = compiler.run(null, null, null, file1ToCompile, file2ToCompile);
        if(compilationResult == 0){
            System.out.println("Compilation is successful");
        }else{
            System.out.println("Compilation Failed");
        }
    }
}[/code]

The above program compiles fine with the output message ‘Compilation is successful’.

Do remember, that the final argument is a variable argument and it can accept any number of arguments.

[Starting with Java 5.0, variable arguments are a feature whereby callers can pass any number of arguments to a method. To illustrate the concept, look at this sample code.

[code lang=”java”]public int addNumbers(int... numbers){
    int total = 0;
    for(int temp : numbers){
        total = total + temp;
    }
    return total;
}[/code]

A variable argument is denoted by an ellipsis (...) preceding the variable name, as in int... numbers. One can call the above method in different styles, like the ones below.

[code lang=”java”]addNumbers(10, 10, 30, 40); // This will work.
addNumbers(10, 10); // This also will work.[/code]

The variable-argument parameter must be the last one in the method's parameter list. Variable arguments are internally treated as arrays, so this is also possible:

[code lang=”java”]addNumbers(new int[]{10, 34, 54});[/code]

So, great care should be exercised when passing multiple options along with values to the run method. For example, the correct way to pass options along with their values is:

[code lang=”java”]compiler.run(null, null, null, "-classpath", "PathToClasses", "-sourcepath", "PathToSources", "One.java", "Two.java");[/code]

As one can see, even an option and its value must each be passed as a separate string.

As you may be aware, when the Java compiler is invoked with the -verbose option, javac outputs messages describing the compilation life-cycle (parsing the input, validating it, scanning the source and class paths, loading the necessary class files and finally creating the class files in the specified destination directory). Let us achieve the same effect through the following sample code.

[code lang=”java”]SimpleCompileTestWithVerboseOption.java:

package test;

import java.io.FileOutputStream;
import javax.tools.*;

public class SimpleCompileTestWithVerboseOption {
    public static void main(String[] args) throws Exception{
        String fileToCompile = "test" + java.io.File.separator + "MyClass.java";
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        FileOutputStream errorStream = new FileOutputStream("Errors.txt");
        int compilationResult = compiler.run(
            null, null, errorStream, "-verbose", fileToCompile);
        if(compilationResult == 0){
            System.out.println("Compilation is successful");
        }else{
            System.out.println("Compilation Failed");
        }
    }
}[/code]

Maybe a bug in Mustang

One might notice that the errorStream (third argument) has been pointed at a file to collect the output messages, instead of the outputStream (second argument).

The following code was tried for capturing the verbose output, but it failed: the output was still written to the standard console, even though the code appears to redirect it to a file.

[code lang=”java”]FileOutputStream outputStream = new FileOutputStream("Output.txt");
int compilationResult = compiler.run(
null, outputStream, null, "-verbose", fileToCompile);[/code]

The Java Compiler API treats these messages (in this case, those produced by the ‘verbose’ option) as error messages, so even though the output stream points to the file (‘Output.txt’), all the messages are still written to the console.

Also, the documentation for the run method is unclear; it says only that any diagnostics (errors, warnings or information) may be written either to the output stream or to the error stream.

Advanced Compilation

In the above section, we saw how to compile files using the JavaCompiler tool. For more advanced compilation tasks, the JavaCompiler depends on two further services: the file manager service and the diagnostics service, provided by the JavaFileManager and DiagnosticListener classes respectively.

JavaFileManager

The JavaFileManager (whose standard implementation is StandardJavaFileManager) manages all the file objects that tools usually work with. The JavaFileManager is not tied to JavaCompiler; it can work with any object that conforms to the standard Tool interface. To understand why the JavaFileManager is so important, consider what happens during a compilation:

[code lang=”java”]javac MyClass.java[/code]

When we issue this command at the command prompt, many things happen. First the compiler parses all the options specified by the user and validates them; then it scans the source and class paths for Java source files and jar files. It then has to deal with the input files (in this case MyClass.java) and output files (MyClass.class).

So the JavaFileManager, which can be associated with any kind of tool (normally every tool has some input and output files to process), deals with managing all the input and output files. By managing, we mean that the JavaFileManager is responsible for creating output files, scanning for input files and caching them for better performance. StandardJavaFileManager is one such implementation of JavaFileManager.

A file managed by the JavaFileManager is not necessarily a file on the hard disk. The contents of a managed file may come from a physical file on disk, from memory, or even from a remote socket. That is why the JavaFileManager does not deal with java.io.File objects (which usually refer to physical files in the operating system's file system). Rather, the JavaFileManager manages files and their contents in the form of FileObject and JavaFileObject, the abstract representations of any kind of file managed by the JavaFileManager.

FileObject and JavaFileObject are the abstract file representations managed by the JavaFileManager, with support for reading and writing their contents to the right destination. The only difference between a FileObject and a JavaFileObject is that a FileObject can represent any kind of file (a text file, a properties file, an image file and so on), whereas a JavaFileObject can represent only a Java source file (.java) or a class file (.class). If a FileObject represents a Java source or class file, implementations should take care to return a JavaFileObject instead.

Diagnostics

The second service the JavaCompiler depends on is the diagnostics service. A diagnostic usually refers to an error, warning or informative message that may arise in a program. To receive diagnostic messages during compilation, we can attach a listener to the compiler object. The listener is a DiagnosticListener object whose report() method is called with a Diagnostic object carrying plenty of information: the kind of diagnostic (error, warning, information or other), the source it came from, the line number in the source code, a descriptive message and more.

CompilationTask

Before going into the sample code that ties together the JavaFileManager, Diagnostic and DiagnosticListener classes, let us have a quick look at the CompilationTask class. As its name suggests, it represents an object that encapsulates the actual compilation operation. We initiate the compilation by calling the call() method on the CompilationTask object.

But how to get the CompilationTask object?

Since a CompilationTask is closely associated with a JavaCompiler object, one obtains it by calling JavaCompiler.getTask( arguments ). Let us now see the parameters that need to be passed to the getTask(…) method.

Almost all the arguments may be null (their defaults are discussed later), but the final argument, which represents the list of Java objects to be compiled (the compilation units), cannot be null. Its type is Iterable&lt;? extends JavaFileObject&gt;.

So, how can one populate this argument?

[Iterable is an interface added in JDK 5.0 that is used to iterate (traverse) over a collection of objects. It has one method, iterator(), which returns an Iterator object; using it one can traverse the collection with the usual combination of hasNext() and next() methods.

With the type safety introduced in Java 5.0, it is the developer's job to specify exactly which type of object is being iterated. Iterable&lt;? extends JavaFileObject&gt; means this argument is an Iterable over any class that implements the JavaFileObject interface.]
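The iterator() / hasNext() / next() trio described above can be sketched in isolation; the enhanced for loop is just sugar over it, as this small helper (the class and method names are illustrative) shows for an Iterable of strings:

```java
import java.util.Arrays;
import java.util.Iterator;

public class IterableDemo {
    // Counts the elements of any Iterable the way an enhanced for loop
    // does under the covers: iterator(), then hasNext()/next().
    static int count(Iterable<String> items) {
        int n = 0;
        Iterator<String> it = items.iterator();
        while (it.hasNext()) {
            it.next();
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(count(Arrays.asList("One.java", "Two.java"))); // prints 2
    }
}
```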

StandardJavaFileManager has four convenience methods, getJavaFileObjects(File… files), getJavaFileObjects(String… filenames), getJavaFileObjectsFromFiles(Iterable&lt;? extends File&gt; files) and getJavaFileObjectsFromStrings(Iterable&lt;String&gt; filenames), that return the compilation units as an Iterable&lt;? extends JavaFileObject&gt;.

So, we can construct the final argument using any of these convenience methods.

With a bit more theory behind us, let us see a sample program that incorporates all the classes and methods listed above.

We purposely create a Java file called MoreErrors.java that contains erroneous code. The listing for MoreErrors.java is shown below.

[code lang=”java”]MoreErrors.java:

package test;

public class MoreErrors {
    public void errorOne ()
    // No open braces here. Line a
    }
    public void errorTwo(){
        System.out.println("No semicolon") // Line b
        // No semicolon at the end of the statement.
    }
    public void errorThree(){
        System.out.prntln("No method name called prntln()"); // Line c
    }
}[/code]

As one can see, the statements at line a, line b and line c contain errors.

Let us look into AdvancedCompilationTest.java, which uses all the classes and methods discussed above.

[code lang=”java”]AdvancedCompilationTest.java:

package test;

import java.io.*;
import java.util.*;
import javax.tools.*;
import javax.tools.JavaCompiler.*;

public class AdvancedCompilationTest {
    public static void main(String[] args) throws Exception {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler(); // Line 1.
        MyDiagnosticListener listener = new MyDiagnosticListener(); // Line 2.
        StandardJavaFileManager fileManager =
            compiler.getStandardFileManager(listener, null, null); // Line 3.
        String fileToCompile = "test" + File.separator + "MoreErrors.java"; // Line 4
        Iterable<? extends JavaFileObject> fileObjects =
            fileManager.getJavaFileObjectsFromStrings(Arrays.asList(fileToCompile)); // Line 5
        CompilationTask task = compiler.getTask(
            null, fileManager, listener, null, null, fileObjects); // Line 6
        Boolean result = task.call(); // Line 7
        if(result == true){
            System.out.println("Compilation has succeeded");
        }
    }
}

class MyDiagnosticListener implements DiagnosticListener<JavaFileObject>{
    public void report(Diagnostic<? extends JavaFileObject> diagnostic) {
        System.out.println("Code->" + diagnostic.getCode());
        System.out.println("Column Number->" + diagnostic.getColumnNumber());
        System.out.println("End Position->" + diagnostic.getEndPosition());
        System.out.println("Kind->" + diagnostic.getKind());
        System.out.println("Line Number->" + diagnostic.getLineNumber());
        System.out.println("Message->" + diagnostic.getMessage(Locale.ENGLISH));
        System.out.println("Position->" + diagnostic.getPosition());
        System.out.println("Source->" + diagnostic.getSource());
        System.out.println("Start Position->" + diagnostic.getStartPosition());
        System.out.println("\n");
    }
}[/code]

Let us explore the above code in greater detail.

Line 1 is essentially creating an object of type JavaCompiler using the ToolProvider class. This is the entry point.

[code lang=”java”]JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();[/code]

Lines 2 and 3 make the compiler use the file manager and diagnostics services. To recap the theory: a JavaFileManager object manages the input and output files that a tool normally deals with, and Diagnostic objects describe the diagnostics (errors, warnings or information) that may occur during the compilation of a program.

[code lang=”java”]MyDiagnosticListener listener = new MyDiagnosticListener();[/code]

Line 2 creates an object of type DiagnosticListener, since we want to monitor the diagnostics that occur during compilation (diagnostics, in the form of error messages, will certainly occur in our case, as we purposely put errors in the Java code). We override the report(Diagnostic) method and extract all the available information. Since diagnostics can occur for any kind of file object, how can we specifically say that these diagnostics are for a Java file object?

The answer is Java 5.0 generics. Notice that MyDiagnosticListener is a generic class (it takes a type parameter), meaning it can act on any kind of object that has diagnostic properties; here we explicitly say that the diagnostics are for JavaFileObject, and no other file object, by mentioning JavaFileObject in the class declaration and the method declaration.

[code lang=”java”]class MyDiagnosticListener implements DiagnosticListener<JavaFileObject>{
    public void report(Diagnostic<? extends JavaFileObject> diagnostic) {
        System.out.println("Code->" + diagnostic.getCode());
        System.out.println("Column Number->" + diagnostic.getColumnNumber());
        ….
        ….
    }
}[/code]

In Line 3, we associate the diagnostics listener with the compiler object through the standard Java file manager by making this method call.

[code lang=”java”]StandardJavaFileManager fileManager =
compiler.getStandardFileManager(listener, null, null);[/code]

This call attaches the diagnostics listener to the compiler object, so whenever this compiler performs a compilation and any diagnostic errors or warnings occur in the program, the diagnostics encapsulated in the Diagnostic object are passed back to the report() method of the DiagnosticListener interface.

The last two arguments are the locale and charset used for formatting the diagnostic messages; both may be null.

Lines 4 and 5 populate the file objects to be compiled using the convenience methods of the StandardJavaFileManager class.

[code lang=”java”]String fileToCompile = "test" + File.separator + "MoreErrors.java";
Iterable<? extends JavaFileObject> fileObjects =
    fileManager.getJavaFileObjectsFromStrings(Arrays.asList(fileToCompile));[/code]

Line 6 gets an instance of the CompilationTask object by calling the getTask() method, passing the fileManager, listener and fileObjects objects. The null arguments refer to the Writer object (for getting the output from the compiler), the list of options (the options we usually pass to javac, such as -classpath classes or -sourcepath sources) and the classes (for processing custom annotations found in the source code).

Finally, the actual compilation is done by calling the call() method, which returns true if all the files (compilation units) compile successfully. If any of the files have errors, the call() method returns false. In our case MoreErrors.java contains erroneous code, so the report method is called, printing all the diagnostic information.

DiagnosticCollector

In the previous program, we wrote a customized class called MyDiagnosticListener whose sole purpose is to collect and print all the diagnostic messages to the console. This class can be completely eliminated, since Mustang already provides a class called DiagnosticCollector&lt;S&gt; that does the same thing. It has a method called getDiagnostics() which returns a list through which we can iterate and output the diagnostic messages to the console.

The following code achieves the same using the DiagnosticCollector class.

[code lang=”java”]AdvancedCompilationTest2.java:

package test;

import java.io.*;
import java.util.*;
import javax.tools.*;
import javax.tools.JavaCompiler.*;

public class AdvancedCompilationTest2 {
    public static void main(String[] args) throws Exception {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler(); // Line 1.
        DiagnosticCollector<JavaFileObject> diagnosticsCollector =
            new DiagnosticCollector<JavaFileObject>();
        StandardJavaFileManager fileManager =
            compiler.getStandardFileManager(diagnosticsCollector, null, null); // Line 3.
        String fileToCompile = "test" + File.separator + "MoreErrors.java"; // Line 4
        Iterable<? extends JavaFileObject> fileObjects =
            fileManager.getJavaFileObjectsFromStrings(Arrays.asList(fileToCompile)); // Line 5
        CompilationTask task = compiler.getTask(
            null, fileManager, diagnosticsCollector, null, null, fileObjects); // Line 6
        Boolean result = task.call(); // Line 7
        List<Diagnostic<? extends JavaFileObject>> diagnostics =
            diagnosticsCollector.getDiagnostics();
        for(Diagnostic<? extends JavaFileObject> d : diagnostics){
            // Print all the information here.
        }
        if(result == true){
            System.out.println("Compilation has succeeded");
        }else{
            System.out.println("Compilation fails.");
        }
    }
}[/code]

Compilation of Java Source from a String object

Having discussed the various ways of compiling Java source files, it is now time to look at how to compile a Java source encapsulated in a String object. As previously mentioned, the contents of a Java source need not reside on the hard disk; they can also reside in memory. Compiling a Java source from a String object implicitly means the source resides in memory, more specifically in RAM.

For this to happen, we have to write a class that represents a Java source obtained from a string. We can extend this class from SimpleJavaFileObject (a convenience class that overrides all the methods in JavaFileObject with default implementations). The only method to override is getCharContent(), which is called internally by the Java compiler to obtain the Java source contents.

[code lang=”java”]JavaObjectFromString.java:

package test;

import java.io.IOException;
import java.net.URI;
import javax.tools.SimpleJavaFileObject;

class JavaObjectFromString extends SimpleJavaFileObject{
    private String contents = null;
    public JavaObjectFromString(String className, String contents) throws Exception{
        super(URI.create("string:///" + className.replace('.', '/')
            + Kind.SOURCE.extension), Kind.SOURCE);
        this.contents = contents;
    }
    public CharSequence getCharContent(boolean ignoreEncodingErrors) throws IOException {
        return contents;
    }
}[/code]

SimpleJavaFileObject has a protected two-argument constructor that accepts a URI (the URI representation of the file object) and a Kind (a type that tells what kind of object this is: Kind.SOURCE, Kind.CLASS, Kind.HTML or Kind.OTHER; in our case Kind.SOURCE, since we are representing a Java source object). We therefore define a two-argument constructor in the JavaObjectFromString class and delegate control back to the base class. The getCharContent() method has to be overridden (since this is the method the JavaCompiler calls to get the actual Java source contents) to return the string (remember, String implements CharSequence) representing the entire Java source, which was previously saved in the constructor.

The code that uses this JavaObjectFromString object looks like this.

[code lang=”java”]AdvancedCompilationTest3.java:

package test;

import java.io.*;
import java.util.*;
import javax.tools.*;
import javax.tools.JavaCompiler.*;

public class AdvancedCompilationTest3 {
    public static void main(String[] args) throws Exception {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        DiagnosticCollector<JavaFileObject> diagnosticsCollector =
            new DiagnosticCollector<JavaFileObject>();
        StandardJavaFileManager fileManager =
            compiler.getStandardFileManager(diagnosticsCollector, null, null);
        JavaFileObject javaObjectFromString = getJavaFileContentsAsString();
        Iterable<? extends JavaFileObject> fileObjects = Arrays.asList(javaObjectFromString);
        CompilationTask task = compiler.getTask(
            null, fileManager, diagnosticsCollector, null, null, fileObjects);
        Boolean result = task.call();
        List<Diagnostic<? extends JavaFileObject>> diagnostics =
            diagnosticsCollector.getDiagnostics();
        for(Diagnostic<? extends JavaFileObject> d : diagnostics){
            // Print all the information here.
        }
        if(result == true){
            System.out.println("Compilation has succeeded");
        }else{
            System.out.println("Compilation fails.");
        }
    }

    static SimpleJavaFileObject getJavaFileContentsAsString(){
        StringBuilder javaFileContents = new StringBuilder(
            "class TestClass{" +
            "    public void testMethod(){" +
            "        System.out.println(\"test\");" +
            "    }" +
            "}");
        JavaObjectFromString javaFileObject = null;
        try{
            javaFileObject = new JavaObjectFromString("TestClass", javaFileContents.toString());
        }catch(Exception exception){
            exception.printStackTrace();
        }
        return javaFileObject;
    }
}[/code]

Conclusion

Before the release of Mustang, the compiler-related interfaces and classes lived in non-standard packages (inside com.sun.tools.javac). With Java 6.0, the designers of Java have standardized the Compiler API in the javax.tools package. As already mentioned, this API is not for everyone; web and application servers in particular can lean on it heavily to compile dynamically created Java source files. Although a number of bugs have been reported against the Compiler API, they are expected to be fixed in future builds.

Filed Under: Java Tagged With: Java 6.0, Java Compiler

What is new in Java 6.0 Collections API?

March 28, 2007 by Krishna Srinivasan

In this article I will write about the new Collections APIs introduced in Java 6.0. Mustang has a few interesting changes in the Collections API, one among them being Deque. A Deque supports bi-directional traversal and has several implementations, including BlockingDeque, ArrayDeque and others. I will talk about Deque and its various implementations, plus a few more changes in the Collections API in Java 6.0.

also read:

  • New Features in Java 6.0 – Part 1
  • New Features in Java 6.0 – Part 2
  • Java 6.0 Compiler API

Java 6.0 New Collection APIs an Overview

The following are the new collection APIs introduced in Java 6.0. I have listed them as interfaces and classes.

New Interfaces

  • Deque
  • BlockingDeque
  • NavigableSet
  • NavigableMap

New Classes

  • ArrayDeque
  • LinkedBlockingDeque
  • ConcurrentSkipListSet
  • ConcurrentSkipListMap
  • AbstractMap.SimpleEntry
  • AbstractMap.SimpleImmutableEntry

Updated Classes in Java 6.0

  • LinkedList
  • TreeSet
  • TreeMap
  • Collections

Deque and ArrayDeque

Deque is the abbreviation of “Double Ended Queue”: a collection that allows us to add or remove elements at both ends. A Deque implementation may be capacity-restricted or have no fixed size limit.

A Deque implementation can be used as a Stack (Last In, First Out) or a Queue (First In, First Out). For insertion, retrieval and removal of elements, a Deque offers each method in two flavours: one throws an exception if the operation fails, and the other returns a status or special value for each operation.

Operation          Special value method   Exception throwing method
Insertion at head  offerFirst(e)          addFirst(e)
Removal at head    pollFirst()            removeFirst()
Retrieval at head  peekFirst()            getFirst()
Insertion at tail  offerLast(e)           addLast(e)
Removal at tail    pollLast()             removeLast()
Retrieval at tail  peekLast()             getLast()

A Deque implementation is not required to prevent the insertion of null, but the special value methods return null to indicate that the collection is empty. It is therefore advisable not to insert null elements.
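The two flavours can be seen on an empty deque: the special value methods report failure with a return value, while the exception throwing methods fail loudly. A minimal sketch (the class name is illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.NoSuchElementException;

public class DequeFlavours {
    public static void main(String[] args) {
        Deque<String> deque = new ArrayDeque<String>();

        // Special value methods signal an empty deque with null.
        System.out.println(deque.pollFirst()); // null
        System.out.println(deque.peekLast());  // null

        // Exception throwing methods fail loudly on an empty deque.
        try {
            deque.removeFirst();
        } catch (NoSuchElementException e) {
            System.out.println("removeFirst() threw NoSuchElementException");
        }

        // ArrayDeque itself rejects null elements outright.
        try {
            deque.addFirst(null);
        } catch (NullPointerException e) {
            System.out.println("null elements are not permitted");
        }
    }
}
```

Because null doubles as the "empty" signal, allowing null elements would make the special value methods ambiguous, which is why the advice above applies.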

ArrayDeque is a class that implements Deque and has no capacity restrictions. It performs faster than Stack when used as a stack, and faster than LinkedList when used as a queue. ArrayDeque is not thread-safe. The following example shows how to write a program using ArrayDeque.

Example

[code lang="java"]import java.util.ArrayDeque;
import java.util.Iterator;

public class DequeExample
{
    public static void main(String as[])
    {
        ArrayDeque<String> adObj = new ArrayDeque<String>();

        //Insertion by using various methods
        adObj.add("Oracle");
        adObj.addFirst("DB2");
        adObj.offerFirst("MySQL");   //returns boolean - true or false
        adObj.offerLast("Postgres"); //returns boolean - true or false

        //Retrievals
        System.out.println("Retrieving First Element :" + adObj.peekFirst());
        System.out.println("Retrieving Last Element :" + adObj.peekLast());

        //Removals
        System.out.println("Removing First Element :" + adObj.pollFirst());
        System.out.println("Removing Last Element :" + adObj.pollLast());

        //Reverse traversal
        System.out.println("Remaining Elements :");
        Iterator<String> it = adObj.descendingIterator();
        while (it.hasNext())
        {
            System.out.println(it.next());
        }
    }
}[/code]

Output:

[code]Retrieving First Element :MySQL
Retrieving Last Element :Postgres
Removing First Element :MySQL
Removing Last Element :Postgres
Remaining Elements :
Oracle
DB2[/code]

BlockingDeque and LinkedBlockingDeque

A BlockingDeque is similar to a Deque but provides additional functionality. When we try to insert an element into a BlockingDeque that is already full, the operation can wait until space becomes available. We can also specify a time limit for waiting.

BlockingDeque methods are available in four flavours:

  • Methods that throw an exception
  • Methods that return a special value
  • Methods that block (wait indefinitely for space to become available)
  • Methods that time out (wait a given time for space to become available)

Operation           Throws exception  Special value   Blocks       Times out
Insertion at head   addFirst(e)       offerFirst(e)   putFirst(e)  offerFirst(e,time,unit)
Removal from head   removeFirst()     pollFirst()     takeFirst()  pollFirst(time,unit)
Retrieval from head getFirst()        peekFirst()     NA           NA
Insertion at tail   addLast(e)        offerLast(e)    putLast(e)   offerLast(e,time,unit)
Removal from tail   removeLast()      pollLast()      takeLast()   pollLast(time,unit)
Retrieval from tail getLast()         peekLast()      NA           NA

A LinkedBlockingDeque is a collection class that implements the BlockingDeque interface, with an optional maximum capacity. If we do not specify the capacity, it defaults to Integer.MAX_VALUE.

[code lang="java"]import java.util.concurrent.*;

class BlockingDequeExample implements Runnable
{
    LinkedBlockingDeque<String> lbd = new LinkedBlockingDeque<String>(1);
    volatile boolean b = true;
    public void run()
    {
        try
        {
            /* The first thread to enter this block sets the instance
               variable b to false, steering the second thread into
               the else branch */
            if (b)
            {
                b = false;
                Thread.sleep(5000); //Makes the thread sleep for 5 seconds
                System.out.println("Removing " + lbd.peek());
                lbd.poll(); //Removing an element from the collection
            }
            else
            {
                System.out.println("Waiting ");
                /* put() blocks until the first thread removes an element */
                lbd.put("B");
                System.out.println("Inserted " + lbd.peek());
            }
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
    }
    public static void main(String[] args) throws Exception
    {
        BlockingDequeExample bdeObj = new BlockingDequeExample();
        bdeObj.lbd.offer("A");
        System.out.println("Inserted " + bdeObj.lbd.peek());
        Thread tMainObj = new Thread(bdeObj);
        tMainObj.start();
        Thread tSubObj = new Thread(bdeObj);
        tSubObj.start();
    }
}[/code]

Output:

[code]Inserted A
Waiting
Removing A
Inserted B[/code]
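The timed flavours can be exercised even without a second thread: on a full (or empty) bounded deque, the timed methods simply give up once the timeout elapses. A small sketch, with the class name chosen for illustration:

```java
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.TimeUnit;

public class TimedDequeExample {
    public static void main(String[] args) throws InterruptedException {
        // Capacity of one, so a second insertion cannot succeed immediately.
        LinkedBlockingDeque<String> deque = new LinkedBlockingDeque<String>(1);

        System.out.println(deque.offerFirst("A")); // true: space available
        // Waits up to 100 ms for space, then gives up and returns false.
        System.out.println(deque.offerFirst("B", 100, TimeUnit.MILLISECONDS));

        System.out.println(deque.pollFirst()); // A
        // Deque is now empty; the timed poll waits, then returns null.
        System.out.println(deque.pollFirst(100, TimeUnit.MILLISECONDS));
    }
}
```

With no other thread making space (or supplying elements), both timed calls return their special value after the timeout instead of blocking forever.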

NavigableSet and ConcurrentSkipListSet

Suppose, from the sorted set [5,10,15,20], we want to do things like this:

  • Retrieve the element immediately greater than or lower than 15
  • Retrieve all elements greater than or lower than 10

With the existing methods we would have to write this logic ourselves, but with NavigableSet it becomes just a method call. NavigableSet methods return the closest matches for a given element in the collection. ConcurrentSkipListSet is one of the classes that implements NavigableSet.

Example

[code lang="java"]import java.util.concurrent.*;
import java.util.*;

class SkipListSetTest
{
    public static void main(String[] args)
    {
        ConcurrentSkipListSet<Integer> csls = new ConcurrentSkipListSet<Integer>();
        csls.add(15);
        csls.add(20);
        csls.add(5);
        csls.add(10);
        System.out.println("Elements in the collections are");
        for (Integer i : csls)
        {
            System.out.println(i);
        }
        /* Retrieve immediate element less than or equal to the given element */
        System.out.println("Floor " + csls.floor(12));
        /* Retrieve immediate element greater than or equal to the given element */
        System.out.println("Ceiling " + csls.ceiling(12));
        /* Retrieve immediate element less than the given element */
        System.out.println("Lower " + csls.lower(10));
        /* Retrieve immediate element greater than the given element */
        System.out.println("Higher " + csls.higher(10));
        System.out.println("Head Elements ");
        Set<Integer> cslsHeadView = csls.headSet(10);
        //headSet excludes the given element
        for (Integer i : cslsHeadView)
        {
            System.out.println(i);
        }
        Set<Integer> cslsTailView = csls.tailSet(10);
        //tailSet includes the given element
        System.out.println("Tail Elements");
        for (Integer i : cslsTailView)
        {
            System.out.println(i);
        }
    }
}[/code]

Output:

[code]Elements in the collections are
5
10
15
20
Floor 10
Ceiling 15
Lower 5
Higher 15
Head Elements
5
Tail Elements
10
15
20[/code]

NavigableMap and ConcurrentSkipListMap

NavigableMap is similar to NavigableSet. NavigableSet methods return values, while NavigableMap methods return key/value pairs. ConcurrentSkipListMap is one of the classes that implements NavigableMap.

[code lang="java"]import java.util.*;
import java.util.concurrent.*;

class NavigableMapExample
{
    public static void main(String[] args)
    {
        NavigableMap<Integer, String> nm = new ConcurrentSkipListMap<Integer, String>();
        nm.put(1, "One");
        nm.put(2, "Two");
        nm.put(3, "Three");
        nm.put(4, "Four");
        nm.put(5, "Five");
        /* Retrieves the key,value pair immediately lesser than the given key */
        Map.Entry<Integer, String> ae = nm.lowerEntry(5);
        /* Map.Entry is a static interface nested inside the Map
           interface, used to hold a key and a value */
        System.out.println("Key " + ae.getKey());
        System.out.println("Value " + ae.getValue());
        /* Retrieves key,value pairs equal to and greater than the given key */
        SortedMap<Integer, String> mm = nm.tailMap(3);
        Set<Integer> s = mm.keySet();
        System.out.println("Tail elements are");
        for (Integer i : s)
        {
            System.out.println("Key " + i + " Value " + mm.get(i));
        }
    }
}[/code]

output

[code]Key 4
Value Four
Tail elements are
Key 3 Value Three
Key 4 Value Four
Key 5 Value Five[/code]

[code]Notes :
floorEntry retrieves the entry less than or equal to the given key (or) null
lowerEntry retrieves the entry strictly less than the given key (or) null
headMap retrieves all entries less than the given key
tailMap retrieves all entries greater than or equal to the given key[/code]

AbstractMap.SimpleEntry and AbstractMap.SimpleImmutableEntry

AbstractMap.SimpleEntry and AbstractMap.SimpleImmutableEntry are static classes nested inside the AbstractMap class. An instance of either class holds the key/value pair of a single entry in a Map. The difference between the two is that the former allows us to set the value, while the latter throws UnsupportedOperationException if we try to set the value.
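A short sketch of the difference (the class name is illustrative):

```java
import java.util.AbstractMap;
import java.util.Map;

public class EntryExample {
    public static void main(String[] args) {
        // SimpleEntry is mutable: setValue replaces the stored value.
        Map.Entry<Integer, String> mutable =
                new AbstractMap.SimpleEntry<Integer, String>(1, "One");
        mutable.setValue("Uno");
        System.out.println(mutable.getKey() + "=" + mutable.getValue());

        // SimpleImmutableEntry rejects any attempt to change the value.
        Map.Entry<Integer, String> immutable =
                new AbstractMap.SimpleImmutableEntry<Integer, String>(2, "Two");
        try {
            immutable.setValue("Dos");
        } catch (UnsupportedOperationException e) {
            System.out.println("setValue on SimpleImmutableEntry threw "
                    + e.getClass().getSimpleName());
        }
    }
}
```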

Modified classes

Existing classes were modified to implement the new interfaces: LinkedList now implements Deque, TreeSet implements NavigableSet, and TreeMap implements NavigableMap. Two new methods, newSetFromMap and asLifoQueue, were added to the Collections class.
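The two new Collections methods can be sketched as follows (the class name is illustrative):

```java
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Queue;
import java.util.Set;
import java.util.WeakHashMap;

public class NewCollectionsMethods {
    public static void main(String[] args) {
        // newSetFromMap builds a Set view backed by any empty Map,
        // e.g. a weakly-referenced set backed by a WeakHashMap.
        Set<String> weakSet =
                Collections.newSetFromMap(new WeakHashMap<String, Boolean>());
        weakSet.add("cached");
        System.out.println(weakSet.contains("cached")); // true

        // asLifoQueue adapts a Deque to the Queue interface with
        // last-in-first-out ordering (a stack behind a Queue API).
        Queue<Integer> stack = Collections.asLifoQueue(new ArrayDeque<Integer>());
        stack.add(1);
        stack.add(2);
        System.out.println(stack.remove()); // 2: most recently added first
    }
}
```

asLifoQueue is handy when an API expects a Queue but the algorithm (depth-first traversal, for instance) needs stack order.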

Conclusion

With the Java 6.0 collections, bi-directional traversal becomes easier, and elements can be retrieved in the way we want. Some attention has also been given to concurrent operations on collections.

Filed Under: Java Tagged With: Java 6.0
