
Pentaho Reporting 3.5 for Java Developers

November 21, 2009 by itadmin

Pentaho Reporting lets you create, generate, and distribute rich and sophisticated report content from different data sources. Knowing how to use it quickly and efficiently gives you the edge in producing reports from your database. If you have been looking for a book that has plenty of easy-to-understand instructions and also contains lots of examples and screenshots, this is where your search ends.


This book shows you how to replace or build your enterprise reporting solution from scratch with Pentaho’s Reporting Suite. Through detailed examples, it dives deeply into all aspects of Pentaho’s reporting functionalities, providing you with the knowledge you need to master report creation.

What This Book Covers

Chapter 1—An Introduction to Pentaho Reporting provides a quick overview of Pentaho
Reporting, including a feature summary and architectural summary, as well as a history of the product.

Chapter 2—Pentaho Reporting Client and Enterprise Examples tells how to install and create reports, and how to embed reports in your J2EE and client Java applications.

Chapter 3—Pentaho Reporting Examples in the Real World tells how to connect to a JDBC data source and create realistic inventory, balance, and invoice reports, including charts and sub-reports.

Chapter 4—Design and Layout in Pentaho's Report Designer takes a deep dive into Pentaho's Report Designer, showing how to create great-looking reports.

Chapter 5—Working with Data Sources teaches the various ways to connect your report to live data, including JDBC, Hibernate, Java Beans, OLAP, and many other data sources.

Chapter 6—Including Charts and Graphics in Reports is about incorporating Pie, Bar, Line, and many other chart types in your reports, as well as including dynamic images in your report.

Chapter 7—Parameterization, Functions, Formulas, and Internationalization in Reports shows how to define parameters for dynamic report generation. It helps you write formulas and use the available functions for rich summary and calculated values in your reports, along with dynamically adjusting colors and styles using expressions.

Chapter 8—Adding Sub-Reports and Cross Tabs to Reports gives an overview of how to build reports that include side-by-side sub-reports and cross tabs.

Chapter 9—Building Interactive Reports teaches how to add dynamic interaction to HTML and Swing reports, for immediate feedback and dashboard-like functionality.

Chapter 10—API-based Report Generation is about building reports from XML and by using Pentaho Reporting's Java Bean API.

Chapter 11—Extending Pentaho Reporting teaches how to write custom functions and elements within Pentaho Reporting.

Chapter 12—Additional Pentaho Reporting Topics covers how to use Pentaho Reporting with the Pentaho BI Server, including Pentaho Metadata. It also looks at Pentaho Reporting's open source approach, and how you can contribute to the free software movement.

Including Charts and Graphics in Reports

In this chapter, you’ll learn how to incorporate charts and graphics into Pentaho Reports. You’ll learn about the different types of charts supported, and how to configure them in Pentaho Report Designer. You’ll also learn how to populate a chart with various types of data.
In addition to learning all about charts, this chapter also covers the various methods for including visual information, such as embedded images and Java graphics, in your report.

Supported charts

Pentaho Reporting relies on JFreeChart, an open source Java chart library, for charting visualization within reports. From within Report Designer, many chart types are supported. In the chart editor, two areas of properties appear when editing a chart. The first area of properties is related to chart rendering, and the second tabbed area of properties is related to the data that populates a chart.

Following is the screenshot of the chart editor within Pentaho Report Designer:

All chart types receive their data from three general types of datasets. The first type is known as a Category Dataset, where the dataset series and values are grouped by categories. A series is like a sub-group. If the same category and series combination appears more than once, the chart sums the values into a single result. The following table is a simple example of a category dataset:

Category      | Series | Sale Price
Store 1 Sales | Cash   | $14
Store 1 Sales | Credit | $12
Store 2 Sales | Cash   | $100
Store 2 Sales | Credit | $120
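
To make the structure concrete, here is a minimal, illustrative sketch of the table above expressed directly against the JFreeChart API that Pentaho Reporting uses under the covers. This is not the code the collector classes described next actually run; they assemble the equivalent dataset from your report's rows.

[code lang="java"]
import org.jfree.data.category.DefaultCategoryDataset;

public class CategoryDatasetSketch {
    public static void main(String[] args) {
        DefaultCategoryDataset dataset = new DefaultCategoryDataset();
        // addValue(value, series, category) -- the series acts as a sub-group
        dataset.addValue(14, "Cash", "Store 1 Sales");
        dataset.addValue(12, "Credit", "Store 1 Sales");
        dataset.addValue(100, "Cash", "Store 2 Sales");
        dataset.addValue(120, "Credit", "Store 2 Sales");
        // Look up a single cell, e.g. the cash sales for Store 2
        System.out.println(dataset.getValue("Cash", "Store 2 Sales")); // 100.0
    }
}
[/code]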

Pentaho Reporting builds a Category Dataset using the CategorySetDataCollector. Also available is the PivotCategorySetCollector, which pivots the category and series data. Collector classes implement Pentaho Reporting's Function API.

The second type of dataset is known as an XY Series Dataset, which is a two-dimensional group of values that may be plotted in various forms. In this dataset, the series may be used to draw different lines, and so on. Here is a simple example of an XY series dataset:

Series | Cost of Goods (X) | Sale Price (Y)
Cash   | 10                | 14
Credit | 11                | 12
Cash   | 92                | 100
Credit | 105               | 120

Note that X is often referred to as the domain, and Y is referred to as the range.
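
For comparison, here is a minimal, illustrative JFreeChart sketch of the XY series table above; inside a report, the collector class described next builds the equivalent structure for you.

[code lang="java"]
import org.jfree.data.xy.XYSeries;
import org.jfree.data.xy.XYSeriesCollection;

public class XYDatasetSketch {
    public static void main(String[] args) {
        XYSeries cash = new XYSeries("Cash");
        cash.add(10, 14);    // (cost of goods, sale price)
        cash.add(92, 100);
        XYSeries credit = new XYSeries("Credit");
        credit.add(11, 12);
        credit.add(105, 120);

        // Each series may be drawn as its own line, point set, and so on
        XYSeriesCollection dataset = new XYSeriesCollection();
        dataset.addSeries(cash);
        dataset.addSeries(credit);
        System.out.println(dataset.getSeriesCount()); // 2
    }
}
[/code]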

Pentaho Reporting builds an XY Series Dataset using the XYSeriesCollector. The XYZSeriesCollector also exists for three-dimensional data.

The third type of dataset is known as a Time Series Dataset, which is a two-dimensional group of values that are plotted based on a time and date. The Time Series Dataset is more like an XY Series than a Category Dataset, as the time scale is displayed in a linear fashion with appropriate distances between the different time references.

Pentaho Reporting builds a Time Series Dataset using the TimeSeriesCollector.
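
Again purely for illustration, here is a minimal JFreeChart sketch of a time series dataset, assuming daily sales figures; JFreeChart period classes such as Day, Week, or Month correspond to the grouping intervals you will meet later as the time-period-type property.

[code lang="java"]
import org.jfree.data.time.Day;
import org.jfree.data.time.TimeSeries;
import org.jfree.data.time.TimeSeriesCollection;

public class TimeSeriesSketch {
    public static void main(String[] args) {
        TimeSeries sales = new TimeSeries("Sales");
        // Day(dayOfMonth, month, year); values land on a linear time scale
        sales.add(new Day(20, 11, 2009), 240.0);
        sales.add(new Day(21, 11, 2009), 310.5);

        TimeSeriesCollection dataset = new TimeSeriesCollection();
        dataset.addSeries(sales);
        System.out.println(dataset.getSeriesCount()); // 1
    }
}
[/code]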

Common chart rendering properties

Most charts share a common set of properties. The following properties are common across most charts. Any exceptions are mentioned as part of the specific chart type.



Common category series rendering properties

The following properties appear in charts that render category information:



Common XY series rendering properties

The following properties appear in charts that render XY series information.

Common dataset properties

The following properties are common across all chart datasets:

Common category series dataset properties

The following properties are common across all charts that utilize category series dataset for populating the chart:

Common XY series dataset properties

The following properties are common across all charts that utilize the XY series dataset for populating a chart:

Now that you’ve reviewed the common set of properties for all charts, you’ll begin to explore the individual charts, including going through their configurable properties, as well as providing a quick example.

Area chart

The area chart displays a category dataset as a line, with the area underneath the line filled in. Multiple areas may appear depending on the number of series provided. The area chart is useful for visualizing the differences between two or more sets of data. It utilizes the common properties defined in the previous tables, including the category series common properties. The area chart defines no additional properties.

Area chart example

This example will demonstrate the area chart’s capabilities. First, you’ll need a rich enough dataset to demonstrate this and all the other charts in this chapter. You’ll reuse the ElectroBarn HSQLDB data source configured in Chapter 3. To begin, launch Pentaho Report Designer and create a new report.

Now, select the Data tab. Right-click on the Data Sets tree element, and select the JDBC data source. If ElectroBarn is not already configured as a connection type, click the add button next to the Connections list and fill in the following values, customizing the database location for your particular environment:

Click the Test button to verify your connection, and then click OK when you are done.

You need to define a SQL statement to populate your chart. You’ll define a simple query that takes a look at the inventory data. Add a new query with the following SQL code:
[code]
SELECT
"INVENTORY"."ITEMCATEGORY",
"INVENTORY"."SALEPRICE",
"INVENTORY"."COST"
FROM
"INVENTORY"
ORDER BY
"INVENTORY"."ITEMCATEGORY" ASC
[/code]
Click OK. You’re now ready to add a chart to your empty report. For this example, select the Chart report element from the palette and drag it into the Report Header. Double-click on the chart, or right-click on the chart and select Chart….

Once the Edit Chart dialog appears, select the Area Chart.

In the Primary DataSource tab, select the ITEMCATEGORY data field as your category-column. For your value-columns, select SALEPRICE and COST. Enter the strings Sale Price and Cost as the series-by-value values. When rendering an area chart, the order of value columns is important: if a larger value is rendered after a smaller value, the smaller value will not appear on the chart.

Once you have configured the data for the chart, you can also customize the rendering. Set horizontal to True, and specify the bg-color as yellow. Finally, set the show-legend property to True. Click the OK button and then preview your report to see the results!

Bar chart

The bar chart displays individual bars broken out into individual categories and series. Bar charts are useful for comparing relative sizes of data across categories. The bar chart utilizes the common properties defined earlier, including the category series common properties.

The bar chart defines the following additional rendering properties:

Bar chart example

You’ll now build an example bar chart. Create a new report with the ElectroBarn data source, and use the following SQL query, which investigates purchase quantity and payment type:
[code]
SELECT
"INVENTORY"."ITEMCATEGORY",
"PURCHASES"."PAYMENTTYPE",
"PURCHASEITEMS"."QUANTITY"
FROM
"PURCHASES" INNER JOIN "PURCHASEITEMS" ON
"PURCHASES"."PURCHASEID" = "PURCHASEITEMS"."PURCHASEID"
INNER JOIN "INVENTORY" ON "PURCHASEITEMS"."ITEMID" =
"INVENTORY"."ITEMID"
ORDER BY
"INVENTORY"."ITEMCATEGORY" ASC,
"PURCHASES"."PAYMENTTYPE" ASC
[/code]
Place a Chart element in the Report Header of the report, selecting bar as its type.

To begin, configure the dataset properties for your bar chart. Set category-column to ITEMCATEGORY, value-columns to QUANTITY, and series-by-field to PAYMENTTYPE. By setting the series-by-field property, the chart will create a series for each PAYMENTTYPE in the dataset.
Now, you’ll customize the look of your chart. First, set the X-Axis show-labels property to True and text-format to {2}. This will display the value of each bar at the top of the bar. Then set max-label-width to 2.0, so that you can easily see all the category names in the chart. Finally, set the show-legend to True, in order to see what types of payments map to which bar color. You’re now ready to preview your chart!

Line chart

The line chart displays connected lines between categories for each series provided. This chart is useful for visualizing trends. The line chart utilizes the common properties defined in the previous tables, including the category series common properties. The line chart defines the following additional rendering properties:

Note that the stacked and stacked-percent properties do not apply to the line chart type.

Line chart example

In this example, reuse the SQL query and dataset sections from your area chart example. Select the Line chart type, and customize the chart with show-markers set to True as well as line-size set to 4.0. The result should look like this:

Pie chart

The pie chart displays a sliced, multi-colored pie with individual slices consisting of individual series information. The pie chart uses its own dataset type, versus using a category or XY series dataset. The pie chart utilizes the common properties defined above, but does not utilize the category or XY dataset properties. Instead, it defines its own properties for providing chart data using the PieDataSetCollector. The pie chart defines the following rendering properties:

Note that the pie chart does not share the common properties horizontal, series-color, stacked, or series-names. The pie chart defines the following dataset properties:

Pie chart example

For the pie chart example, you’ll compare the various costs of inventory items to one another by category. First, you’ll need to define an SQL query as shown next:
[code]
SELECT
"INVENTORY"."ITEMCATEGORY",
"INVENTORY"."ITEMNAME",
"INVENTORY"."COST"
FROM
"INVENTORY"
ORDER BY
"INVENTORY"."ITEMCATEGORY" ASC,
"INVENTORY"."ITEMNAME" ASC
[/code]
You’ll then need to define a Group Header for your report. Right-click on the Groups section within the report structure and edit the root group, naming the group Item Category. Select the ITEMCATEGORY field as the only field in the Selected Items list. Expand the Group node in the structure tree, and select the Group Header. Now, uncheck the hide-on-canvas property, so you can view the Group Header in the canvas.

Drag-and-drop the ITEMCATEGORY field at the top of the Group Header. Place a chart below the text field, and click Edit Chart…
Select the Pie chart type. You'll start configuring the chart by selecting the correct dataset. For the value-column, select the COST field. For the series-by-field property, select the ITEMNAME field. You'll also need to tell the chart collector to reset the data after each group. Set the reset-group property to the already defined Item Category group.

Finally, you’ll want to customize some of the rendering properties. Set the explode-slice to maxValue, and set the explode-pct to 0.5. This will highlight the most expensive item in each category. Also set show-legend to False to hide the legend and show-labels to True to display the individual pie slice labels.

Click the OK button and preview the report. You should see a group of charts as shown in the following figure:

Ring chart

The ring chart is identical to the pie chart, except that it renders as a ring rather than a complete pie. In addition to sharing all of the pie chart's properties, it also defines the following rendering property:

Ring chart example

For this example, simply open the defined pie chart example and select the Ring chart type. Also, set the section-depth to 0.1, in order to generate the following effect:

Multi pie chart

The multi pie chart renders a group of pie charts, based on a category dataset. This meta-chart renders each series as its own pie chart, broken into the individual categories. The multi pie chart utilizes the common properties defined above, including the category dataset properties. In addition to the standard set of properties, it also defines the following two properties:

Note that the horizontal, series-color, stacked and stacked-percent properties do not apply to this chart type.

Multi pie chart example

This example demonstrates the distribution of purchased item types, based on payment type. To begin, create a new report. You’ll reuse the bar chart’s SQL query.

Now, place a new Chart element into the Report Header. Edit the chart, selecting Multi Pie as the chart type. To configure the dataset for this chart, select ITEMCATEGORY as the category-column. Set the value-columns property to QUANTITY and the series-by-field to PAYMENTTYPE.

Waterfall chart

The waterfall chart displays a unique stacked bar chart that spans categories. This chart is useful when comparing categories to one another. For the chart to render appropriately, the last category should normally equal the total of all the other categories, but this depends on the dataset, not the chart rendering. The waterfall chart utilizes the common properties defined above, including the category dataset properties. The stacked property is not available for this chart. There are no additional properties defined for the waterfall chart.

Waterfall chart example

In this example, you’ll compare by type, the quantity of items in your inventory. Normally, the last category would be used to display the total values. The chart will render the data provided with or without a summary series, so you’ll just use the example SQL query from the bar chart example. Add a Chart element to the Report Header and select Waterfall as the chart type. Set the category-column to ITEMCATEGORY, the value-columns to QUANTITY, and the series-by-value property to Quantity. Now, apply your changes and preview the results.

Bar line chart

The bar line chart combines the bar and line charts, allowing visualization of trends with categories, along with comparisons. The bar line chart is unique in that it requires two category datasets to populate the chart. The first dataset populates the bar chart, and the second dataset populates the line chart. The bar line chart utilizes the common properties defined above, including the category dataset properties.

This chart also inherits the properties of both the bar chart and the line chart, and has certain additional properties, which are listed in the following table:


As part of the bar line chart, a second y-axis is defined for the lines. The property group Y2-Axis is available, with properties similar to those of the standard y-axis.

Bar line chart example

To demonstrate the bar line chart, you’ll reuse the SQL query from the area chart example. Create a new report, and add a Chart element to the Report Header. Edit the chart, and select Bar Line as the chart type.
You’ll begin by configuring the first dataset. Set the category-column to ITEMCATEGORY, the value-columns to COST, and the series-by-value property to Cost. To configure the second dataset, set the category-column to ITEMCATEGORY, the value-columns to SALEPRICE, and the series-by-value property to Sale Price.

Set the x-axis-label-width to 2.0, and reduce the x-font size to 7. Also, set show-legend to True.

You’re now ready to preview the bar line chart.

Bubble chart

The bubble chart allows you to view three dimensions of data. The first two dimensions are your traditional X and Y dimensions, also known as domain and range. The third dimension is expressed by the size of the individual bubbles rendered.

The bubble chart utilizes the common properties defined above, including the XY series dataset properties. The bubble chart also defines the following properties:

The bubble chart defines the following additional dataset property:

Bubble chart example

In this example, you need to define a three-dimensional SQL query to populate the chart. You'll use inventory information, and calculate the Item Category Margin:
[code]
SELECT
"INVENTORY"."ITEMCATEGORY",
"INVENTORY"."ONHAND",
"INVENTORY"."ONORDER",
"INVENTORY"."COST",
"INVENTORY"."SALEPRICE",
"INVENTORY"."SALEPRICE" – "INVENTORY"."COST" MARGIN
FROM
"INVENTORY"
ORDER BY
"INVENTORY"."ITEMCATEGORY" ASC
[/code]
Now that you have a SQL query to work with, add a Chart element to the Report Header and select Bubble as the chart type. First, you’ll populate the correct dataset fields. Set the series-by-field property to ITEMCATEGORY. Now, set the X, Y, and Z value columns to ONHAND, SALEPRICE, and MARGIN.
You’re now ready to customize the chart rendering. Set the x-title to On Hand, the y-title to Sales Price, the max-bubble-size to 100, and the show-legend property to True. The final result should look like this:

Scatter chart

The scatter chart renders all items in a series as points within a chart. This chart type utilizes the common properties defined above, including the XY series dataset properties. The scatter chart also defines the following two properties:

Scatter chart example

For this example, you’ll reuse the SQL query defined in your bubble chart example, as well as the default rendering properties configured. Simply select the Scatter chart type in the chart editor . The chart below shows Sales Price and On Hand values:

XY Area, XY Bar and XY Line charts

The XY Area, XY Bar, and XY Line charts graph an XY series dataset as an area, bar, or a simple line chart. These chart types utilize the common properties defined above, including the XY series dataset properties. The XY Bar chart also uses the property show-bar-borders, which is defined earlier in the bar chart. The XY Area and XY Line charts share the properties line-style, line-size, and show-markers, defined earlier in the line chart.

In addition to the standard XY Series Dataset, XY charts may use a Time Series Dataset to render data. To use the TimeSeriesCollector, you can select it in the Primary DataSource drop-down list. The Time Series Dataset is similar to the Category Dataset, but instead of a category it defines a category-time-column. The field selected for the category-time-column must be of the type java.util.Date.

Also defined is the time-period-type, which defines the interval of time at which results are grouped together. Valid values for this property include Millisecond, Second, Minute, Hour, Day, Week, Month, Quarter, and Year.

XY charts example

In this example, you’ll reuse the SQL query defined in the bubble chart example, as well as the default rendering properties configured for each of the individual charts, XY Area , XY Bar , and XY Line Chart . You’ll also reuse the X and Y dataset configuration specified for the scatter chart.

Extended XY Line chart

The Extended XY Line chart allows the rendering of three additional chart types—StepChart, StepAreaChart, and DifferenceChart. The Step chart types display an XY series dataset as a set of steps, and the Difference Chart renders two XY series and highlights the differences between the two. The Extended XY Line chart utilizes the common properties defined above, including the XY series dataset properties. The Extended XY Line chart also defines the following property:

Extended XY Line chart example

In this example, you’ll reuse the SQL query defined in the bubble chart example, as well as the default rendering properties configured for each of the individual charts. Select Extended XY Line as the chart type and specify StepChart, StepAreaChart and DifferenceChart as the Chart Type to see the different renderings:

You’ve now worked with all the major chart types within Pentaho Reporting. Under the covers, charts are simply dynamic images that are generated and included in your reports. You’ll now learn more about including images within reports.

Radar chart

The Radar chart renders a web-like chart that displays a categorical dataset. The Radar chart utilizes the common properties defined above, including the category series common properties. The Radar chart also defines the following properties:

Radar chart example

In this example, reuse the SQL query and dataset sections from your area chart example. The result should look like this:

Including static images in your report

To include static images in your report, select the image report element
from the report designer palette and place it in your report. Double-click on the element, or right-click on the element and select Edit Content to select the static image. This brings up a resource dialog, where you can browse to the specific file location. You may Link to or Embed the image in the PRPT file. An example of a static image, with the ElectroBarn logo, is provided in Chapter 3.
The image report element uses Pentaho Reporting’s ResourceManager API to load the image. The ResourceManager interface is located in the org.pentaho.reporting.libraries.resourceloader package.
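
As a rough sketch of what the element does internally, you can exercise the same API yourself. This is a simplified, hand-written example, assuming LibLoader's ResourceManager calls as they existed around the 3.5 release, and a placeholder file name; it is not the exact code the element runs:

[code lang="java"]
import java.awt.Image;
import java.io.File;

import org.pentaho.reporting.libraries.resourceloader.Resource;
import org.pentaho.reporting.libraries.resourceloader.ResourceManager;

public class StaticImageSketch {
    public static void main(String[] args) throws Exception {
        ResourceManager manager = new ResourceManager();
        manager.registerDefaults(); // install the default resource loaders
        // "logo.png" is a placeholder path for your image file
        Resource resource = manager.createDirectly(new File("logo.png"), Image.class);
        Image image = (Image) resource.getResource();
        System.out.println("Loaded: " + image);
    }
}
[/code]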

Including dynamic images in your report

To add dynamic images to your report, use the content-field report element. The content field accepts different types of image inputs for rendering. The first approach is dynamically changing the image location within your dataset. If you have a field that contains a URL or file system location for your image, the content-field element will render the specified image.

The second approach is to populate the content-field with an object of type java.awt.Image for rendering. This approach would require a custom-implemented TableModel (as described in Chapter 5), or a custom function that returns an Image object. The third approach is to populate the content-field with an object that contains the following method, which is determined through Java introspection:
[code lang="java"]
void draw(Graphics2D g2, Rectangle2D area);
[/code]
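
The following is a minimal, hypothetical class satisfying that contract; the class name and drawing logic are invented for illustration, but any object exposing this method signature can be assigned to the content-field:

[code lang="java"]
import java.awt.BasicStroke;
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.geom.Ellipse2D;
import java.awt.geom.Rectangle2D;

public class CircleDrawable {
    // Discovered through introspection by the reporting engine.
    public void draw(Graphics2D g2, Rectangle2D area) {
        g2.setColor(Color.BLUE);
        g2.setStroke(new BasicStroke(2.0f));
        // Scale the drawing to whatever area the layouter hands us.
        g2.draw(new Ellipse2D.Double(area.getX(), area.getY(),
                area.getWidth(), area.getHeight()));
    }
}
[/code]
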
In addition to this API, Pentaho Reporting also defines an extended org.pentaho.reporting.engine.classic.core.util.ReportDrawable API with the following methods, for more detailed access into the report rendering process:

  • void draw(Graphics2D g2, Rectangle2D area);
  • void setConfiguration(Configuration config);
  • void setStyleSheet(StyleSheet style);
  • void setResourceBundleFactory(ResourceBundleFactory
    bundleFactory);
  • ImageMap getImageMap(final Rectangle2D bounds);

A custom TableModel implementation or custom function would also be required to make this object available to the Reporting engine.
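
A skeletal implementation of that extended API might look like the following sketch. The ReportDrawable package is the one named above; the import locations for Configuration, StyleSheet, ResourceBundleFactory, and ImageMap are assumptions and may differ between engine versions:

[code lang="java"]
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.geom.Rectangle2D;

import org.pentaho.reporting.engine.classic.core.ResourceBundleFactory; // assumed package
import org.pentaho.reporting.engine.classic.core.imagemap.ImageMap;     // assumed package
import org.pentaho.reporting.engine.classic.core.style.StyleSheet;      // assumed package
import org.pentaho.reporting.engine.classic.core.util.ReportDrawable;
import org.pentaho.reporting.libraries.base.config.Configuration;       // assumed package

public class BorderDrawable implements ReportDrawable {
    private Configuration configuration;
    private StyleSheet styleSheet;
    private ResourceBundleFactory bundleFactory;

    public void draw(Graphics2D g2, Rectangle2D area) {
        g2.setColor(Color.RED);
        g2.draw(area); // draw a simple border around the assigned area
    }

    public void setConfiguration(Configuration config) {
        this.configuration = config;
    }

    public void setStyleSheet(StyleSheet style) {
        this.styleSheet = style;
    }

    public void setResourceBundleFactory(ResourceBundleFactory bundleFactory) {
        this.bundleFactory = bundleFactory;
    }

    public ImageMap getImageMap(Rectangle2D bounds) {
        return null; // no clickable regions in this sketch
    }
}
[/code]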

Summary

In this chapter, you learned how to incorporate many chart types into your reports in many different ways. You learned how to configure a chart’s dataset as well as customizing how each chart type looks in a report. You learned how to populate a category series dataset, as well as an XY series dataset, and make that data available to the various types of charts that render in your report. You also learned how to include static and dynamic images, as well as graphics, in your reports.

Filed Under: Java Tagged With: Java Reports

Book Joomla! E-Commerce with VirtueMart

November 12, 2009 by itadmin

Joomla! is an award-winning content management system, which can be used to build many types of websites, including, but not limited to, e-commerce sites. Joomla!'s power comes from its extensibility through different types of extensions, namely components, modules, plug-ins, and templates. There is a vast repository of over 4,500 Joomla! extensions, most of which are available free of cost and come with open source licensing. VirtueMart is one such extension, which helps you build an online shop in conjunction with Joomla!. Being an extension of Joomla!, VirtueMart integrates seamlessly with a Joomla! site, using the same security and look and feel, and a convenient framework for extending the e-commerce application. Web developers can easily build a Joomla! and VirtueMart-based e-commerce website without the need for custom coding. Even ordinary people, with little knowledge of HTML, CSS, and PHP, can build a functional online store using Joomla! and VirtueMart. This book teaches how to build a Joomla! and VirtueMart online shop without delving into extensive coding.


What This Book Covers

Chapter 1, Introduction to Joomla! and E-Commerce, introduces Joomla! and VirtueMart along with some other components similar to VirtueMart. This chapter describes Joomla!, its main features, and the e-commerce options in Joomla!. It also elaborates on VirtueMart and its features, and lists alternatives to VirtueMart and the other shopping carts that can be used with Joomla!
Chapter 2, Installation and Basic Configuration of Joomla! and VirtueMart, explains the installation of Joomla! and Virtuemart. First, it shows the basic requirements for installing Joomla! and VirtueMart. It then proceeds to show the installation procedures for Joomla! and VirtueMart. This chapter also describes installing and uninstalling Joomla! components, plug-ins, modules, and templates. It also explains setting up the basic configurations for a Joomla! site, installing the VirtueMart component and modules, and configuring the basic options for a VirtueMart shop. At the end of this chapter, you will get a Joomla! site with the VirtueMart shopping cart installed.
Chapter 3, Configuring the VirtueMart Store, explains how to configure a VirtueMart shop. First, this chapter explains configuring the shop, creating and using appropriate zones, currencies, and locales, and installing, uninstalling, and configuring appropriate modules, followed by configuring the payment methods, shipping methods, and taxes for the shop. The configuration options discussed in this chapter are specific to VirtueMart, and give a basis for further configuring and customizing the shop.
Chapter 4, Managing the Product Catalogue, explains details about building a product catalogue and managing the catalogue for a VirtueMart store. This chapter teaches managing manufacturers and vendors, managing the product categories and products, creating and using product attributes, and creating and using product types. In this chapter, you are going to add and edit a lot of information about manufacturers, vendors, product categories, and products. In this chapter, the VirtueMart shop will take shape with the products you want to sell.
Chapter 5, Managing Customers and Orders, discusses managing customers and orders. Specifically, it teaches configuring the user registration settings for VirtueMart, managing users for the VirtueMart shop, creating and managing fields for the customer registration form, creating and managing user groups, and creating and using order status types. This is followed by viewing order statistics, viewing details of an order, updating an order, and managing inventory. The skills taught in this chapter are invaluable for any shop administrator.
Chapter 6, Customizing the Look and Feel, discusses customizing the look and feel of the shop. This chapter teaches installing and applying a new Joomla! template to the site. It then shows how to customize the look and feel of the VirtueMart store. It also explains
VirtueMart theming and layouts. Later, this chapter shows how to customize the look and feel of the VirtueMart store as a whole, and how to use search engine friendly (SEF) URLs for your shop.
Chapter 7, Promotion and Public Relations, describes the promotion and public relations tools available in VirtueMart. This chapter teaches you to use Joomla!’s and VirtueMart’s promotional tools like banner ads, specials, and featured products, and also how to use
coupons to attract more customers. Later, this chapter explains how to use newsletters and product notifications to keep continuous communication with your customers. You will also learn how to use VirtueMart’s product review feature to express customer experiences.

Managing Customers and Orders

So far, we have seen how to configure a store and build a product catalog. When our product catalog is ready, it is time to test the user registration and order management functionalities. In this chapter, we are going to discuss how to manage customers and orders. On completion of this chapter, you will be able to:

  • Configure the user registration settings for VirtueMart
  • Manage users for a VirtueMart shop
  • Create and manage fields for a customer registration form
  • Create and manage user groups
  • Create and use order status types
  • View order statistics
  • View details of an order
  • Update an order
  • Manage inventory

Note that all VirtueMart customers must be registered with Joomla!. However, not all Joomla! users need to be VirtueMart customers. Within the first few sections of this chapter, you will get a clear concept of user management in Joomla! and VirtueMart.

Customer management

Customer management in VirtueMart includes registering customers to the VirtueMart shop, assigning them to user groups for appropriate permission levels, managing fields in the registration form, viewing and editing customer information, and managing the user groups. Let's dive into these activities in the following sections.

Registration/Authentication of customers

Joomla! has a very strong user registration and authentication system. One core component in Joomla! is com_users, which manages user registration and authentication in Joomla!. However, VirtueMart needs some extra information for customers. VirtueMart collects this information through its own customer registration process, and stores the information in separate tables in the database. The extra information required by VirtueMart is stored in a table named jos_vm_user_info, which is related to the jos_users table by the user id field. Usually, when a user registers to the Joomla! site, they also register with VirtueMart. This depends on some global settings. In the following sections, we are going to learn how to enable the user registration and authentication for VirtueMart.
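
As a rough illustration of that relationship, the following query joins the two tables on the user ID. The join column user_id is the one described above; the other column names are assumptions for illustration and may differ between VirtueMart versions:

[code]
SELECT u.id, u.username, ui.address_type
FROM jos_users u
INNER JOIN jos_vm_user_info ui
        ON ui.user_id = u.id;
[/code]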

Revisiting registration settings

If you remember, we discussed the global settings for user registration in VirtueMart in Chapter 3, Configuring the VirtueMart Store. For convenience, we are going to recap the global configuration settings for user registration in the VirtueMart store. We configure them from the VirtueMart administration panel's Admin | Configuration | Global screen. There is a section titled User Registration Settings, which defines how user registration will be handled:

Ensure that your VirtueMart shop has been configured as shown in the screenshot above. The first field to configure is the User Registration Type. Selecting Normal Account Creation in this field creates both a Joomla! and VirtueMart account during user registration. For our example shop, we will be using this setting. In Chapter 3, we also warned that Joomla!’s new user activation should be disabled when we are using VirtueMart. That means the Joomla! New account activation necessary? field should read No.

Enabling VirtueMart login module

There is a default module in Joomla! which is used for user registrations and login. When we are using this default Login Form (mod_login module), it does not collect information required by VirtueMart, and does not create customers in VirtueMart. By default, when published, the mod_login module looks like the following screenshot.

As you see, registered users can log in to Joomla! through this form, recover their forgotten password by clicking on the Forgot your password? link, and create a new user account by clicking on the Create an account link. When a user clicks on the Create an account link, they get the form as shown in the following screenshot:

We see that normal registration in Joomla! only requires four pieces of information:
Name, Username, Email, and Password. It does not collect information needed in VirtueMart, such as billing and shipping address, to be a customer. Therefore, we need to disable the mod_login module and enable the mod_virtuemart_login
module. We have already learned how to enable and disable a module in Joomla!. We have also learned how to install modules. If you followed the instructions from Chapter 2 and installed all of the VirtueMart modules, you will find it from Joomla! control panel by clicking on Extensions | Module Manager:

By default, the mod_virtuemart_login module’s title is VirtueMart Login. You may prefer to show this title as Login only. In that case, click on the VirtueMart Login link in the Module Name column. This brings the Module: [Edit]
screen:

In the Title field, type Login (or any other text you want to show as the title of this module). Make sure the module is enabled and position is set to left or right. Click on the Save icon to save your settings. Now, browse to your site’s front-page
(for example, http://localhost/bdosn/), and you will see the login form as shown in the following screenshot:

As you can see, this module has the same functionalities as we saw in the mod_login module of Joomla!. Let us test the account creation in this module. Click on the Register link. It brings the following screen:

The registration form has three main sections: Customer Information, Bill To Information, and Send Registration. At the end, there is the Send Registration
button for submitting the form data. In the Customer Information section, type your email address, the desired username, and password. Confirm the password by typing it again in the Confirm password field. In the Bill To Information section, type
the address details where bills are to be sent. In the entire form, required fields are marked with an asterisk (*). You must provide information for these required fields.
In the Send Registration section, you need to agree to the Terms of Service. Click on the Terms of Service link to read it. Then, check the I agree to the Terms of Service
checkbox and click on the Send Registration button to submit the form data:

If you have provided all of the required information and submitted a unique email address, the registration will be successful. On successful completion of registration, you get the following screen notification, and will be logged in to the shop automatically:

If you scroll down to the Login module, you will see that you are logged in and greeted by the store. You also see the User Menu in this screen:

Both the User Menu and the Login modules contain a Logout button. Click on either of these buttons to log out from the Joomla! site. In fact, links in the User Menu module are for Joomla! only. Let us try the link Your Details. Click on the Your Details link, and you will see the information shown in the following screenshot:

As you see in the screenshot above, you can change your full name, email, password, frontend language, and time zone. You cannot view any information regarding billing address, or other information of the customer. In fact, this information is for regular Joomla! users. We can only get full customer information by clicking on the Account Maintenance link in the Login module. Let us try it. Click on the Account Maintenance link, and it shows the following screenshot:

The Account Maintenance screen has three sections: Account Information, Shipping Information, and Order Information. Click on the Account Information link to see what happens. It shows the following screen:

This shows Customer Information and Bill To Information, which have been entered during user registration. The last section on this screen is the Bank Information, from where the customer can add bank account information. This section looks like the following screenshot:

As you can see, from the Bank Account Info section, customers can enter their bank account information, including the account holder's name, account number, bank's sorting code number, bank's name, account type, and IBAN (International Bank Account Number). Entering this information is important when you are using a Bank Account Debit payment method.
Now, let us go back to the Account Maintenance screen and see the other sections. Click on the Shipping Information link, and you get the following screen:

There is one default shipping address, which is the same as the billing address. The customers can create additional shipping addresses. For creating a new shipping address, click on the Add Address link. It shows the following screen:

As you see in the above screenshot, customers can add shipping address information. Mandatory fields are marked with an asterisk (*), and must be filled in. The customer also needs to provide a nickname for the address, which will be displayed for selecting the shipping address during checkout. After filling in the form, save it by clicking on the Save button.
Now, let us again move to the Account Maintenance page. For a new customer, the order information section will not show any orders. When the customer places some orders, this section will look like the following screenshot:

To see the details of a particular order, click on the View link. This opens up details of the purchase order. The first part of the purchase order looks like the following screenshot:

The first part of the Purchase Order contains the store’s address, order information like order number, order date, and its status. It also contains the customer’s information including the Bill To and Ship To addresses. The second part of the Purchase order contains shipping information, a list of order items, total price, shipping and handling fee, taxes, and payment information. This
part looks like the following screenshot:

Customers can view purchase orders they have placed, but cannot
modify those purchase orders.

When you enable the VirtueMart Login module, it is wise to disable the User Menu module of Joomla!. We have seen that account details provided by the link in the User Menu do not show customer information. Therefore, it is recommended that you disable the User Menu and the Login modules of Joomla! and keep the VirtueMart Login module enabled.

Managing fields for user registration form

In the previous section, we saw how customers can register to a VirtueMart shop. To enable registration and login of customers, we have disabled Joomla!'s Login Form module and enabled the VirtueMart Login module. When registering through the Register link provided by the VirtueMart Login module, customers get some extra fields used for the shop's purposes, such as billing and shipping addresses. VirtueMart gives us the flexibility to define additional fields for the form, and also to decide which fields will be shown on which page—registration, account information, and so on.
For managing the fields in user registration form, go to the VirtueMart administration panel and click on Admin | Manage User Fields. This shows the list of user fields currently used:

The Manage User Fields screen lists the available fields for the registration form. This list indicates what type each field is, whether it is required or not, its published or unpublished status, and in which forms it will be displayed. Note the Show in registration form, Show in shipping form, and Show in account maintenance columns. A checkbox in these columns against a field indicates that the field will be available in that form (registration, shipping, or account maintenance). You can also reorder the fields from the Reorder column by clicking the up or down arrow icons. Another way to reorder the fields is to type the order number and then save it by clicking on the Save icon. Clicking the a-z icon reorders the fields alphabetically. Also note that the trash icon in the Remove column is available only for non-system fields, that is, either a delimiter or a custom field.

Adding a new field

As an administrator of the VirtueMart shop, you can add a new field to the customer registration form from the Manage User Fields screen. To add a new field, click on the New button. This shows the Add/Edit User Fields screen:

The first field in the Add/Edit User Fields screen is Field type. You need to specify what type of field you are going to add. Then, provide a name for the field in the Field name text box. This name is for internal use only and will not be displayed. Type the label for this field in the Field title box, which will be displayed in the form. In the Description, field-tip text area, type the description of the field, which will be shown as a tooltip in the form. Select Yes or No in the Required? field to indicate whether the user must provide a value for this field. As you can see, you can also select in which forms (for example, registration, account maintenance, and shipping) the field will be displayed. When you select Yes in the Read-Only field, users cannot change the value of that field. In the Published field, select Yes to publish the field. For the Text Field type, you can specify a Field Size, which will be the size of the text box. As you can see from the Manage User Fields screen, most of the fields necessary to collect customer information are available by default. However, sometimes you may need to add some extra fields. Let us see in the following sections how we can create different types of fields.

Text field

This type of field allows up to 255 characters to be added. This is suitable for short text information, such as a username, first name, last name, and so on. Most of the fields available in the VirtueMart user registration form are of this type. For adding such fields, click on the New button in the Manage User Fields screen. This brings the Add/Edit User Fields screen. Select Text Field in the Field type drop-down list. Then, fill in the other fields as shown in the following screenshot:

When finished providing all information, click on the Save icon, and go back to the Manage User Fields screen. Now, reorder the fields and position the field where you want it to show. To see how this field looks, go to store frontend and click on the Register link in the Login module. That will show the registration form and in that form, you see the field as shown in the following screenshot:

As you can see, the Pager field is added to the form. Hover your mouse pointer over the info icon beside the field. It shows the text you typed in the Description, field-tip field during the creation of this field.

Checkbox (Single)

This type of field shows a single checkbox, which can be checked or unchecked by the users. Use this type for fields such as terms of agreement, where users need to agree by checking the checkbox. For creating such fields, follow the same procedure as creating a Text Field, but choose the Checkbox (Single) in the Field type drop-down list. Fields of this type look like the one shown in the following screenshot:

Checkbox (Multiple)

Fields of this type show multiple checkboxes from which users can check multiple options. Use this type for fields where you want to collect some preferences. For example, you may create a field to learn the customer's preferences for product categories. For creating the Checkbox (Multiple) field, select it from the Field type drop-down list on the Add/Edit User Fields screen:

All other fields are the same as for adding the Text Field. However, at the end of the form, you need to define the options and values. Click on the Add a Value button to add a new option title and value. This will show two columns, where you can type a Title and Value for the option. Add as many options as you want:

When you have entered values for all fields, click on the Save button. Then, go back to the Manage User Fields screen, and reorder the field to show it in your preferred order. The field you have created will look like the screenshot below:

Date

This type shows a field for entering a date, with a date picker. As with the other field types, you can create this type of field by choosing Date in the Field type drop-down list in the Add/Edit User Fields screen. All other information is the same as for other types of fields. For example, say we want to collect a customer's date of birth. In that case, we need to add a field of the Date type. Let us configure the field as shown in the following screenshot:

Save the field by clicking on the Save icon in the toolbar. Then, go back to the Manage User Fields screen and reorder the fields so that our new field shows after the password confirmation field. Now, go to the user registration form to see the result. It will look like following screenshot:

The Date of Birth field is marked with an asterisk (*) to indicate that users must enter a value for this field. This happened because we selected Yes in the Required drop-down list while creating the field.

Age verification (date select fields)

Fields of this type provide drop-down lists for selecting a month, day, and year to indicate a date of birth. While creating a field of this type, the administrator can set a minimum age for registration. Selecting the date from a field of this type and submitting the form will automatically calculate the user's age and indicate whether he or she is eligible for registration. For some sites, registration is restricted to adults only (for example, 18+ years old). Adding a field of this type can help implement such a restriction policy. To enforce one, let us create a field of this type with the configuration shown in the following screenshot:

As you can see, we have made this field mandatory by selecting Yes in the Required drop-down list. The minimum age for registration is set to 18 years in the Specify the minimum age drop-down list. In the registration form, this field will look like the following:

As per the condition of this field, anyone who wants to register must be aged 18 years or above. Let us see how it works. In the registration form, fill in all the required fields and select “10 September 2008” in the Select your date of birth field. Then, submit the form for registration. What do you see? It throws a JavaScript error message as shown below:

Click on the OK button, and you see the registration form with the information you provided. Scroll down and you will find that the Select your date of birth field is marked in red to indicate an error in the value provided for this field:

Now, select the birth date as "10 September 1985", and click on the Send Registration button. Voila! It works! You are now registered, because your date of birth indicates that you are more than 18 years old.

Drop Down (Single Select)

Fields of this type show a drop-down list with some options, from which users can select only one. For example, say you want to collect information on the user's sex (male or female). In that case, you can create a field with the configuration shown in the following screenshot:

Save the field, and from the Manage User Fields screen, reorder the field to show after the Date of Birth field. Now, on the frontend, click the Register link in the Login module. That shows the registration form. In the registration form, the
drop-down field we have created will look like the following screenshot:

Drop Down (Multiple Select)

Fields of this type show a multiple-select combo box from which users can select multiple options. In the previous example of creating the Checkbox (Multiple) field, we saw that users can select multiple options. Let us convert that into the Drop Down (Multiple Select) field. Create the field following the same process, but select Drop Down (Multiple Select) in the Field type drop-down list in the Add/Edit User Fields screen. At the bottom, add the same option-value pairs. In the registration form, this field will look like the following:

In fields of this type, you can select multiple options by holding down the Ctrl key and clicking on the options.

Email Address

Fields of this type are similar to text fields. The difference between the Text Field and Email Address types is that the latter has built-in validation criteria for ensuring an email address pattern. By default, there is one email address type field in the user
registration form. You may want to add another email address field, for collecting an alternative email address, using this type.

EU VAT ID

While doing business with European Union (EU) countries, you need a valid Value Added Tax (VAT) ID. Customers who are from EU countries may use their EU VAT ID, if you add a field of this type and collect that information. When you define a field of this type, you can also configure which shopper group the customer will be moved to after successful validation of his or her VAT ID. For example, we may create a shopper group named EU Wholesale, and add all customers to this shopper group upon successful validation of their EU VAT ID.

For creating the EU VAT ID field, follow steps similar to those for the other types of fields. In the Add/Edit User Fields screen, configure the fields as shown in the following screenshot:

As you can see, we have selected the EU Wholesale shopper group, to which customers will be moved upon successful validation of their EU VAT ID. The field we have created will be displayed in the user registration and account maintenance forms in the same way as a text input field:

When customers enter their EU VAT ID in the European Union VAT ID field and submit the form for registration along with the other information, VirtueMart connects to the online database at http://ec.europa.eu/taxation_customs/vies/api/checkVatPort?wsdl and verifies the validity of the VAT ID provided. If it finds the VAT ID invalid, the customer will not be registered, or the VAT ID information will not be saved, and an error message will be displayed. This type of field should remain optional, as not all customers will have EU VAT IDs.

Editor text area

Fields of this type are in fact a text area with the rich text editor enabled. Creating such fields may help you collect descriptive information with rich text. For example, we can create a text area with the rich text editor where customers may write something about themselves, with fancy formatting, color, bullets, and links. For creating a field of this type, just select Editor Text Area from the Field type drop-down list in the Add/Edit User Fields screen. Once created and published, the field will look like the following:

Text area

Fields of this type are a simple text area where customers can enter ample descriptive information. This type does not show the rich text editor in the text area. For creating such a field, select Text Area as the field type and specify the other information. At the end of the Add/Edit User Fields screen, specify the Columns and Rows (for example, 40 and 10, respectively). Once saved and published, the field will look like the following screenshot:

You can make this text area smaller or larger by changing the values in the Columns and Rows fields in the Add/Edit User Fields screen.

Radio button

Fields of this type show radio buttons with the options you provide, allowing customers to check only one radio button. For example, we can add the Sex field using radio buttons. For creating such a field, select Radio Button from the Field type drop-down list in the Add/Edit User Fields screen. Enter the other information and, at the end, add the option title and value pairs by clicking on the Add a value button. When saved and published, it looks like the following in the registration or account maintenance forms.

Sometimes, it is better to use a Radio Button instead of a Drop Down (Single Select), especially when the options are limited. The benefit is that the user can see all the options without clicking on the field. However, if there are many options (for example, a country field), then it is better to use a drop-down; otherwise it will be difficult to show all the options as radio buttons.

Web address

Fields of this type allow web addresses to be entered, and validate the input to ensure that it is in URL format. In the Add/Edit User Fields screen, select Web Address as the field type and, at the bottom, select either URL Only or Hypertext and URL from the drop-down list.

Fieldset delimiter

Fields of this type are used to group several fields and label that group. We have already seen this in use: in the registration form, there are three groups of fields: Customer Information, Bill To Information, and Send Registration. You can create such a delimiter by selecting ===Fieldset Delimiter=== from the Field type drop-down list in the Add/Edit User Fields screen. You just need to provide a name and title for this field type:

Once you have saved the field, go back to the Manage User Fields screen and reorder the field. Fields that are going to be under this group should be placed under the delimiter. As we have created some additional fields, we can group these under this delimiter. Then, the list looks like the following:

From here, you can see that under our new delimiter vm_customgroup, there are three fields. Now, go to the user registration page in the frontend, and you will see the group as shown:

So far, we have discussed all the available field types. If you have
installed components like Letterman, YANC, ANJEL, or ccNewsletter,
another field type for subscribing to newsletters will be available.
We will discuss more on implementing newsletters in Chapter 7,
Promotion and Public Relations.

Editing a field

From the Manage User Fields screen, you can edit a field. Just click on the Field name in the list and that opens the Add/Edit User Fields screen:

Although you can edit all of the information provided in the screen, you cannot change the field type. For example, a field created as a checkbox cannot be changed into a drop-down list or a text box. However, you may delete the field and create another field of your desired type. In that case, any data collected through the deleted field will be lost.

In creating additional fields, we have typed plain English in the Field title text box, which is displayed in the frontend as a label for that particular field. If you look into the built-in or system fields, you will see that the values in the Field title field are something like PHPSHOP_***. These language constants are defined in the language files for VirtueMart, and are required for the localization of VirtueMart. Since we have not yet discussed language files or localization, we have just typed English words. We are going to see the details of VirtueMart localization and language files in Chapter 8, Localization of VirtueMart.

User manager

In Joomla!, there is a User Manager component from where you can manage the users of the site. However, for the VirtueMart component, there is another user manager, which should be used for the VirtueMart shop. To be clear about the differences between these two user managers, let us look into both.

Joomla! user manager

Let us first try Joomla!'s user manager. Go to the Joomla! control panel and click on the User Manager icon, or click on Site | User Manager. This brings up the User Manager screen of Joomla!:

We see that the users registered to the Joomla! site are listed in this screen. This screen shows the username, full name, enabled status, the group that the user is assigned to, the user's email address, the date and time of their last visit, and the user ID. From this screen, you can enable or disable any user by clicking on the icon in the Enabled column. Enabled user accounts show a green tick mark in that column.
For viewing the details of any user, click on that user's name in the Name column. That brings up the User: [Edit] screen:

As you can see, the User Details section shows some important information about the user, including Name, Username, E-mail, Group, and so on. You can edit and change these settings, including the password. In the Group selection box, you must select one level; the deepest level gets the highest permissions in the system. From this section, you can also block a user and decide whether they will receive system emails or not.
In the Parameters section, you can choose the Front-end Language and Time Zone for that user. If you have created contact items using Joomla!'s Contacts component, you may assign one contact to this user in the Contact Information section.

VirtueMart user manager

Let us now look into VirtueMart's user manager. From the Joomla! control panel, select Components | VirtueMart to reach the VirtueMart Administration Panel. To view the list of the users registered to the VirtueMart store, click on Admin | Users.
This brings up the User List screen:

As you can see, the User List screen shows the list of users registered to the shop. The screen shows their username, full name, the group the user is assigned to, and their shopper group. In the Group column, note that two groups are mentioned: one without brackets and another inside brackets. The group name inside brackets is Joomla!'s standard user group, whereas the one without brackets is VirtueMart's user group. We are going to learn about these user groups in the next section.
For viewing the details of a user, click on the user's name in the Username column. That brings up the Add/Update User Information screen:

The screen has three tabs: General User Information, Shopper Information, and Order List. The General User Information tab contains the same information that was shown in Joomla!'s user manager's User: [Edit] screen. The Shopper Information tab contains shop-related information for the user:

The Shopper Information section contains:

  • a vendor to which the user is registered
  • the user group the user belongs to
  • a customer number/ID
  • the shopper group

Other sections in this tab are Shipping Addresses, Bill To Information, Bank Account, and any other section you have added to the user registration or account maintenance form. These sections contain the fields that are available on the registration or account maintenance forms. If the user has placed some orders, the Order List tab will list the orders placed by that user. If no order has been placed, the Order List tab will not be visible.

Which user manager should we use?

As we can see, there is a difference between Joomla!'s user manager and VirtueMart's user manager. VirtueMart's user manager shows some additional information fields, which are necessary for the operation of the shop. Therefore, whenever you are managing users for your shop, use the user manager in the VirtueMart component, not Joomla!'s user manager. Otherwise, not all customer information will be added or updated, which may create problems in operating the VirtueMart store.

User groups

Do you want to decide who can do what in your shop? There is a very good way for doing that in Joomla! and VirtueMart. Both Joomla! and VirtueMart have some predefined user groups. In both cases, you can create additional groups and assign permission levels to these groups. When users register to your site, you assign them to one of the user groups.

Joomla! user groups

Let us first look into Joomla! user groups. Predefined groups in Joomla! are described below:


As you can see, most of the users registering to your site should be assigned to the Registered group. By default, Joomla! assigns all newly registered users to the Registered group. You need to add some users to the Editor or Publisher group if they need to add or publish content on the site. The persons who manage the shop should be assigned to one of the Public Backend groups, such as Manager, Administrator, or Super Administrator.

VirtueMart user groups

Let us now look into the user groups in VirtueMart. To see the user groups, go to VirtueMart’s administration panel and click on Admin | User Groups. This shows the User Group List screen:

By default, you will see four user groups: admin, storeadmin, shopper, and demo. These groups are used for assigning permissions to users. Also, note the values in the User Group Level column: the higher the value in this field, the lower the permissions assumed for the group. The admin group has a level value of 0, which means it has all of the permissions, and of course more than the next group, storeadmin. Similarly, the storeadmin group has more permissions than the shopper group. These predefined groups are key groups in VirtueMart, and you cannot modify or delete them. These groups have the following permissions:

For most shops, these four predefined groups will be enough to implement appropriate permissions. However, in some cases, you may need to create a new user group and assign separate permissions to that group. For example, you may want to employ some people as store managers who will add products to the catalog and manage the orders. They should not be able to add or edit payment methods, shipping methods, or any settings other than products and orders. If you add these people to the storeadmin group, they get more permissions than required. In such situations, a good solution is to create a new group, add the selected user accounts to that group, and assign permissions to that group.

Creating a new user group

For creating a new user group, click on the New button in the toolbar on the User Group List screen. This brings up the Add/Edit a User Group screen:

In the Add/Edit a User Group screen, enter the group's name and group level. You must type a value higher than those of the existing groups (for example, 1000). Click on the Save icon to save the user group. You will now see the newly created user group in the User Group List screen.
Are you wondering how this group will control a user's permissions? There is still something more to do. Creating a new group and adding users to that group will not assign any permissions to those users. We have to set the permissions for each group that we create, and then the users in those groups will get those permissions. We are going to learn about viewing and setting group permissions in the next section.

Group permissions

Each user group has permissions associated with it. Although there is no simple way to view all of the permissions a single user group has, we can still view the associated permissions for all user groups. To view the permissions associated with the user groups, click on Admin | List Modules. This brings up the Module List screen:

The Module List screen shows the modules and the groups' permissions to access those modules. As you can see, our newly created storemanager user group is also in the list.

Assigning permissions to user groups

We must now assign appropriate permissions to the storemanager group. First, select the store module. This module allows us to see store-wide configurations and store information. We don't want to allow the storemanager group to change the store information. However, we are selecting this store module because it is necessary for displaying the VirtueMart Administration Panel. Click on the Function List link against the store module. That shows the Function List: store screen:

In the Function List: store screen, you can see the main functions available in the store module. From here, you can select the functions that will be available to the storemanager group. To know what each function does, click on the function name to see the Function Information screen:

The Function Information screen shows the function name, class name, class method, the groups which have permission to use that function, and a description of the function. This will help you understand where the function comes from and what purpose it serves.

Are you pondering the fields in this screen? We are going to explain the fields available in this screen later in this chapter, under the Adding a new function section.

As our store managers will not change any settings regarding credit cards, payment and shipping methods, or export modules, we need to uncheck all other modules for the storemanager group in the Module List screen. For the storemanager group, select the store, product, order, reportbasic, account, and help modules. Then, click on the Save Permissions link.
After giving access to these modules, we can assign permissions to specific functions under those modules. Click on the Function List link against each module and select the functions you want to allow for store managers. For example, we want store managers to add new products, but not to delete products once added to the catalog. To implement this rule, click on the Function List link against the product module. You get the Function List: product screen:

In the Function List: product screen, you may select all of the functions for the storemanager group except the productDelete function. After checking and unchecking the checkboxes under the storemanager column for the different functions, click on the Save Permissions link to save the permissions you have set.

When you see the none column checked, it means no restriction is applied for that function or module. Also note that, in both the Module List and Function List screens, there is a New button in the toolbar. You can add a new module or function by clicking on this New button.

Adding a new module

Why do you need to add a new module while assigning permissions to groups? Generally, the default modules listed in the Module List screen are enough for assigning permissions to most of the functions. However, in some cases, you may like to assign permissions to a group of functions which have not been explicitly assigned to any group. For example, by default, functions related to managing payment methods are listed under the store module. Someone may like to create another module named payment and put the related functions under it. This will make assigning permissions to the payment functions easier. Therefore, the first step will be to create a module named payment.
For creating a module, go to the Module List screen by clicking on Admin | List Modules. In the Module List screen, click on the New icon in the toolbar. This opens up the Module Information screen:

In the Module Information screen, we need to provide the name of the module and some additional information. In the Module Name field, type payment (or any other name which is not already used as a module name). In the Module Perms list, select the groups to which you want to give permission to access this module. Select Yes in the Show Module in Admin menu? drop-down list; this will show a section named Payment in the admin menu. Assign the display order, say 7, in the Display Order field. Finally, give a description of what the module does. Click on the Save icon to save the module. You can now see this module in the Module List screen.

Adding a new function

After adding the module, we need to add functions to it. In the Module List screen, go to the payment module and click on the Function List link. The Function List: payment screen will show no functions, because we have not yet added any function to the payment module. For adding a function, click on the New icon in the toolbar on the Function List screen. This shows the Function Information screen:

From the Function Information screen, you need to configure the following fields:

  • Function Name: Provide a function name. If you are adding the function for allowing the group to add a payment method, the function name will be paymentMethodAdd.
  • Class Name: From this drop-down list, select an appropriate class file. As we are adding functions for payment methods, select the ps_payment_method class file here.
  • Class Method: When you select a class file in the Class Name field, you will see the available functions from that class in this drop-down list. You will notice that, in the ps_payment_method class, there are add, update, delete, list_method, and some other functions. The functions named here are usable by user groups. Other functions, such as validate_add, validate_delete, validate_update, and so on, are automatically executed upon use of the add, delete, or update functions. For the time being, select the add function from the drop-down list.
  • Function Perms: Select the user groups who will be able to use this function. You can select multiple groups from the list.
  • Function Description: Provide a description of the function to help administrators understand what it is for. As the paymentMethodAdd function will add a payment method, type Adds a payment method in this text area.

When you have entered all of this information, click on the Save icon in the toolbar. That adds the function to the payment module. Similarly, add three more functions named paymentMethodUpdate, paymentMethodDelete, and paymentMethodList. All of these will use the same ps_payment_method class, with the update, delete, and list_method class methods respectively.

Warning:

You may get an error message while adding new functions. This happens if another function already exists with the same name. As paymentMethodAdd and the other functions we have just added are already part of the store module, you will first need to delete those functions from the store module.

After adding all of the functions, go back to the Function List: payment screen, and you will see the functions listed there:

From the Function List: payment screen, you can see the permissions assigned to the different user groups. If you want to change some of these permissions, do so, and click on the Save Permissions link to save the settings. In principle, the Function Name field should accept any string that is not already used by another function. However, you may be surprised if you name the function that uses the update() method updatePaymentMethod instead of paymentMethodUpdate: you will get a message saying that the function is not registered:

Let us investigate why this happens. Open the file ../administrator/components/com_virtuemart/html/store.payment_method_form.php and go to line #186. The variable $funcname specifies which function will be used. The line looks like the following:
[code lang="php"]
$funcname = !empty($payment_method_id) ? "paymentMethodUpdate" :
"paymentMethodAdd";
[/code]
As you can see, the function names are hard-coded in this file. Therefore, whenever you are adding such a function, make sure the name you provide matches the one mentioned in the $funcname variable.

Assigning users to groups

We have already seen how to view a user’s information in VirtueMart. For viewing and updating user information, go to Admin | Users. Then, click on the username whose details you want to view. That brings up the Add/Update User Information screen. Go to the Shopper Information tab in this screen:

In the Shopper Information tab, you can assign appropriate permissions to the user. Select the user group from the Permissions drop-down list. For example, we assign the user to the storemanager user group, which we created earlier. When the user group is selected from the Permissions drop-down list, click on the Save icon in the toolbar. Now, the user is a member of the storemanager group and will have the permissions that are assigned to the storemanager group.

Checking how these work

We will now check how our user groups and permissions work. We have created a user group named storemanager, given it permissions to manage products and orders, and finally added a user to it. Now, to see the effect, we need to log in as that user and see whether we can add products and manage orders. Before testing, we need to publish the mod_virtuemart module, because a link to the administration section is visible in this module when the user has the necessary permissions. Let's try it first! Go to the shop frontend and log in using that username and password. After logging in, search for the Admin link in the VirtueMart module.
Is it there? No, you can't see it yet:

For getting the Admin link in the VirtueMart module, and also to get some administrative permissions, we have to apply a little hack. We need to edit two files. First, open the file ../components/com_virtuemart/virtuemart.php. At line #96, you will find the following code block:
[code lang="php"]
if ( vmIsAdminMode()
<b>&& $perm->check("admin,storeadmin")</b>
&& ((!stristr($my->usertype, "admin") ^
PSHOP_ALLOW_FRONTENDADMIN_FOR_NOBACKENDERS == '' )
|| stristr($my->usertype, "admin")
)
&& !stristr($page, "shop.")
) {
[/code]
As you will notice, in the second line of the code above, two user groups are mentioned. If we want to give other groups access to the administration panel, we must add that group’s name here. So, we change the above code block as follows:
[code lang="php"]
if ( vmIsAdminMode()
<b>&& $perm->check("admin,storeadmin,storemanager")</b>
&& ((!stristr($my->usertype, "admin") ^
PSHOP_ALLOW_FRONTENDADMIN_FOR_NOBACKENDERS == '' )
|| stristr($my->usertype, "admin")
<b>|| stristr($my->usertype, "storemanager")</b>
)
&& !stristr($page, "shop.")
) {
[/code]
The changed lines are highlighted in the above code block. We have added the storemanager group in the second line, and also added another line after || stristr($my->usertype, "admin"). With these changes, the user will get the assigned permissions and have access to the administration panel. However, you still will not see the Admin link in the VirtueMart module. For getting that, open the ../modules/mod_virtuemart/mod_virtuemart.php file. At line #139, you will see the following code block:
[code lang="php"]
<?php
}
$perm = new ps_perm;
// Show the Frontend ADMINISTRATION Link
<b>if ($perm->check("admin,storeadmin")</b>
&& ((!stristr($my->usertype, "admin") ^
PSHOP_ALLOW_FRONTENDADMIN_FOR_NOBACKENDERS == '' )
|| stristr($my->usertype, "admin")
)
&& $show_adminlink == 'yes'
) { ?>
[/code]
In plain language, the above code block says that if the user is of type admin or storeadmin, then show the Admin link. Therefore, to show the Admin link to other groups, we need to add the group's name here. Change the above code block as follows:
[code lang="php"]
<?php
}
$perm = new ps_perm;
// Show the Frontend ADMINISTRATION Link
<b>if ($perm->check("admin,storeadmin,storemanager")</b>
&& ((!stristr($my->usertype, "admin") ^
PSHOP_ALLOW_FRONTENDADMIN_FOR_NOBACKENDERS == '' )
|| stristr($my->usertype, "admin")
<b>|| stristr($my->usertype, "storemanager")</b>
)
&& $show_adminlink == 'yes'
) { ?>
[/code]
The changed lines are highlighted above. As in the previous code block, we have added the storemanager group to the list.

Warning:

While listing the group names, do not use spaces. Using spaces will prevent the Admin link from showing. For example, admin,storeadmin,storemanager will work fine, but admin, storeadmin, storemanager will not. Be careful when applying this hack.
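To see why the spaces matter, consider a sketch of the likely logic behind such a check (illustrative only; this is not VirtueMart's actual code): the group list is split on commas, and each piece is compared as-is, so a leading space changes the group name.

[code lang="php"]
<?php
// Sketch of a comma-separated group check. Because there is no trim(),
// " storemanager" does not match "storemanager".
function check_groups($allowed, $usergroup) {
    foreach (explode(',', $allowed) as $group) {
        if ($group === $usergroup) {
            return true;
        }
    }
    return false;
}

var_dump(check_groups('admin,storeadmin,storemanager', 'storemanager'));   // bool(true)
var_dump(check_groups('admin, storeadmin, storemanager', 'storemanager')); // bool(false)
[/code]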

Now, log in again with the same username and password and see what happens. Wow! We got our Admin link on the VirtueMart module:

To access the VirtueMart administration panel, and manage products and orders, click on the Admin link. You will get the VirtueMart Administration panel (in Standard Layout):

As you can see, there is a Back button for going back to the frontend. You also get the list of modules in the left sidebar. Clicking on a module brings out the available functions. Remember that we assigned the storemanager group permissions to manage products and orders only: they can add new products, but cannot delete any product. Click on the Products module, and then on List Products. This shows the list of products available in the catalog. Try deleting a product by clicking on the trash icon in the Remove column. You get a message like the following:

Also, try managing the orders. Click on the Orders module and then on List Orders. You will see the list of orders placed so far. Try deleting an order from the list by clicking on the trash icon in the Remove column. As we have not given the storemanager group permission to delete an order, you will get the following message:

Try to do something else for which the group has no permission, and you will get messages like these. From this, we understand that the permissions we have given to users are in effect. This is a wonderful way of giving a frontend user access to manage the shop's specific tasks.
What other changes did we make? We created a module named payment and added four functions to it: paymentMethodAdd, paymentMethodUpdate, paymentMethodDelete, and paymentMethodList. The storemanager group can use all of these functions except paymentMethodDelete. Let's try that.

But where is the payment module in the left sidebar? All of the other modules are there; only our newly created payment module is missing. Then how do you try to add, update, and list the payment methods? During the creation of the payment module, we indicated that this module should be displayed in the administration panel. However, it is not showing there. To show the module, and other links to that module, we need to edit a file. We will be looking at this issue later, in Chapter 9, Extending VirtueMart's Functionalities.

If you click on the Store module, you get two payment method related links: List Payment Methods and Add Payment Method. As the storemanager group has permission to do both, you may try them and see what happens. Surely, you will be able to add a payment method, edit a payment method, and see the list of payment methods. However, you will not be able to delete a payment method, as you have no permission to do so.

also read:

  • DOJO Tutorials
  • jQuery Tutorials


Choosing an Open Source CMS

October 28, 2009 by itadmin


There are many powerful Open Source Content Management Systems (CMSs) available to take the pain away from managing a web site. These systems are feature-rich, often easy to use, and free. Unfortunately, there are so many choices that it’s tough to be sure which CMS is the right one for your needs. How can you be sure that you are selecting and working with the right tool?

also read:

  • HTML Tutorials
  • CSS Tutorials
  • JavaScript Tutorials

This book will guide you through choosing the right CMS for your needs, so that you can be confident your choice will serve your project well. It will also help you make a start using the CMS, and give you a feel for what it's like to use it—even before you install it yourself.
Are you bewildered by the many open source CMSs available online? Open source CMSs are the best way to create and manage sophisticated web sites. You can create a site that precisely meets your business goals, and keep it up-to-date easily, because these systems give you full control over every aspect of your site. Because open source CMSs are free to download, you have a vast choice among the various systems. There are many open source CMSs to choose from, each with unique strengths—and occasionally limitations too. Choosing among the bewildering number of options can be tough.
Making the wrong choice early on may lead to a lot of wasted work because you’ll have a half-finished site that doesn’t meet your initial requirements, and you may have to restart from scratch.
This book will show you how to avoid choosing the wrong CMS. It will guide you through assessing your site requirements, and then using that assessment to identify the CMS that will best fit your needs. It contains discussions of the major CMSs and the issues that you should consider when choosing: their complexity to use, their features, and the power they offer. It discusses technical considerations such as programming languages and compliance with best practice standards in a clear and friendly way that non-technical readers can understand.
The book also contains quick-start guidelines and examples for the most popular CMSs such as WordPress, Joomla!, and Drupal. You can experiment with these CMSs, get a feel for how they work, and start using them to build your site.
After reading this book, you can be confident that your CMS choice will support your web site’s needs because you have carefully assessed your requirements and explored the available options.
The author has created a special website for this book—http://www.cmsbook.info/. You can communicate with other readers and get additional insights and support from there.

What This Book Covers

Section I: Opening up to Open Source CMSs
Chapter 1 Do I even want an Open Source CMS?—When and how a content management system is useful. Why open source? Readymade or custom-built?
Chapter 2 Evaluating your Options—Different CMS types, their purposes, and different CMS technologies
Section II: Thinking your choices through
Chapter 3 Understanding your Requirements—brainstorm and clarify your requirements, standard compliance, scale of the site, and key features
Chapter 4 Building the Site—trying out CMSs, technical requirements, downloading and installation, configuration, and creating navigation
Chapter 5 Content Editing and Management—using WYSIWYG editors, adding pictures, publishing content, and creating links
Chapter 6 Templates and Plug-ins—adding a photo gallery and customizing design via templates
Chapter 7 Extending and Customizing—understand a CMS’s code quality, and make code-level changes to understand their complexity
Section III: CMSs by breed
Chapter 8 Blog CMSs—perform typical tasks with the top three blog choices and evaluate features
Chapter 9 Web CMSs—using top Web CMSs, customizing them, and gaining key CMS skills
Chapter 10 CMSs for E-Commerce—managing product/service-based e-commerce sites with CMSs, and knowing which would be best for you
Chapter 11 Team Collaboration CMSs—internal sites for collaboration and communication, workflow, access privileges, and version tracking; Alfresco
Chapter 12 Specialized CMSs—CMSs that serve niches—e-learning, wiki, photo galleries, discussion forums, and so on
Section IV: Open source CMS tips
Chapter 13 Hosting your CMS-Powered Site—selecting and working with a web host
Chapter 14 Getting Involved in the Community—asking questions, learning from documentation, and getting help
Chapter 15 Working with a Specialist—finding experts, evaluating them, tips for project management, and outsourced teams
Chapter 16 Packt Open Source CMS Awards—Best CMSs voted by the community and experts

Web CMSs


After understanding our requirements and learning the basics of using CMSs, we evaluated the top Blog CMSs in the last chapter. We are now ready to look at Web Content Management Systems (commonly known as WCMS, Web CMS, or WCM Systems). Web CMSs allow you to manage your web content easily. They are generic in nature and perform a variety of operations. If you ask someone about a CMS, they will most probably recommend you one of the systems we cover in this chapter. It’s important to learn the features of the top web CMSs to make the right choice for your project.
In this chapter, we will take a look at the top general-purpose Web CMSs. In the process, we will:

  • Cover a variety of top Web CMSs
  • Perform customizations and content management operations
  • Discover interesting features in CMSs
  • Examine which CMS could be right for you

Let’s get started.

Do you want a CMS or a portal?

We are evaluating a CMS for our Yoga Site. But you may want to build something else. Take a look again at the requirements you drafted in Chapter 3. Do you need a lot of dynamic modules such as an event calendar, shopping cart, collaboration module, file downloads, social networking, and so on? Or do you need modules for publishing and organizing content such as news, information, articles, and so on? Today's top-of-the-line Web CMSs can easily work as a portal. They either have a lot of built-in functionality or a wide range of plug-ins that extend their core features. Yet, there are solutions specifically made for web portals. You should evaluate them along with CMS software if your needs are more like a portal. On the other hand, if you want a simple corporate or personal web site with some basic needs, you don't require a mammoth CMS. You can use a simple CMS that will not only fulfill your needs, but will also be easier to learn and maintain.
We have used Joomla! in our examples in Chapters 4 through 7. Joomla! is a solid CMS. But it requires some experience to get used to it. For this chapter, let’s first evaluate a simpler CMS. How do we know which CMS is simple? I think we can’t go wrong with a CMS that’s named “CMS Made Simple”.

Evaluating CMS Made Simple

As the name suggests, CMS Made Simple (http://www.cmsmadesimple.org/) is an easy-to-learn and easy-to-maintain CMS. Here’s an excerpt from its home page:


If you are an experienced web developer, and know how to do the things you need to do, to get a site up with CMS Made Simple is just that, simple. For those with more advanced ambitions there are plenty of addons to download. And there is an excellent community always at your service. It’s very easy to add content and addons wherever you want them to appear on the site. Design your website in whatever way or style you want and just load it into CMSMS to get it in the air. Easy as that!

That makes things very clear. CMSMS seems to be simple for first-time users, and extensible for developers. Let's take CMSMS for a test drive.

Time for action-managing content with CMS Made Simple

  1. Download and install CMS Made Simple. Alternatively, go to the demo at http://www.opensourcecms.com/.
  2. Log in to the administration section.
  3. Click on Content | Image Manager. Using the Upload File option, upload the Yoga Site logo.
  4. Click on Content | Pages option from the menu. You will see a hierarchical listing of current pages on the site.
  5. The list is easy to understand. Let’s add a new page by clicking on the Add New Content link above the list.
  6. The content addition screen is similar to a lot of other CMSs we have seen so far. There are options to enter page title, category, and so on. You can add page content using a large WYSIWYG editor.
  7. Notice that we can select a template for the page. We can also select a parent page. Since we want this page to appear at the root level, keep the Parent as none.
  8. Add some Yoga background information text. Format it using the editor as you see fit.
  9. There are two new options on this editor, which are indicated by the orange palm tree icons. These are two special options that CMSMS has added: first, to insert a menu; and second, to add a link to another page on the site. This is excellent. It saves us the hassle of remembering, or copying, links.
  10. Select a portion of text in the editor. Click on the orange palm icon with the link symbol on it. Select any page from the flyout menu. For now, we will link to the Home page.
  11. Click on the Insert/edit Image icon. Then click on the Browse icon next to the Image URL field in the new window that appears.
  12. Select the logo we uploaded and insert it into content.
  13. Click on Submit to save the page.
  14. The Current Pages listing now shows our Background page. Let's bring it higher in the menu hierarchy. Click on the up arrow in the Move column for our page to push it higher. Do this until it is at the second position—just after Home.
  15. That’s all. We can click on the magnifying glass icon at the main menu bar’s right side to preview our site. Here’s how it looks.

What just happened?

We set up CMSMS and added some content to it. We wanted to use an image in our content page. To make things simpler, we first uploaded an image. Then we went to the current pages listing. CMSMS shows all pages in the site in a hierarchical display. It's a simple feature that makes a content administrator's life very easy. From there, we went on to create a new page. CMSMS has a WYSIWYG editor, like so many other CMSs we have seen till now. The content addition process is almost the same in most CMSs. Enter the page title and related information, type in content, and you can easily format it using a WYSIWYG editor. We inserted the logo image uploaded earlier using this editor. CMSMS features extensions to the default WYSIWYG editor. These features demonstrate all of the thinking that's gone into making this software. The orange palm tree icon appearing on the WYSIWYG editor toolbar allowed us to insert a link to another page with a simple click. We could also insert a dynamic menu from within the editor if needed. Saving and previewing our site was equally easy. Notice how intuitive it is to add and manage content. CMS Made Simple lives up to its name in this process. It uses simple terms and workflow to accomplish the tasks at hand. Check out the content administration process while you evaluate a CMS. After all, it's going to be your most commonly used feature!

Hierarchies: How deep do you need them?
What level of content hierarchies do you need? Are you happy with two levels? Do you like Joomla!'s sections → categories → content flow? Or do you need to go even deeper? Most users will find two levels sufficient. But if you need more, find out if the CMS supports it. (Spoiler: Joomla! is only two levels deep by default.)

Now that we have learned about the content management aspect of CMSMS, let’s see how easily we can customize it. It has some interesting features we can use.

Time for action-exploring customization options

  1. Look around the admin section. There are some interesting options.
  2. The third item in the Content menu is Global Content Blocks. Click on it.
  3. The name suggests that we can add content that appears on all pages of the site from there. A footer block is already defined.
  4. Our Yoga Site can get some revenue by selling interesting products. Let’s create a block to promote some products on our site. Click on the Add Global Content Block link at the bottom.
  5. Let’s use product as the name.
  6. Enter some text using the editor.
  7. Click on Submit to save.
  8. Our new content block will appear in the list. Select and copy Tag to Use this Block.
  9. Logically, we need to add this tag in a template. Select Layout | Templates from the main menu. If you recall, we are using the Left simple navigation + 1 column template. Click on the template name.
  10. This shows a template editor. Looking at this code we can make out the structure of a content page. Let’s add the new content block tag after the main page content.
  11. Paste the tag just after the {* End relational links *} text. The tag looks something like the snippet shown after this list.
  12. Save the template. Now preview the site. Our content block shows up after main page content as we wanted. Job done!
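The exact tag to use is shown against the block on the Global Content Blocks screen. Assuming we named our block product, the pasted-in template section should look something like the following (the tag name is illustrative; copy the actual tag from your own screen):

[code lang="html"]
{* End relational links *}
{global_content name='product'}
[/code]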

What just happened?

We used the global content block feature of CMSMS to insert a product promotion throughout our site. In the process, we learned about templates and also how we could modify them.
Creating a global content block was similar to adding a new content page. We used the WYSIWYG editor to enter content block text. This gave us a special tag. If you know about PHP templates, you will have guessed that CMSMS uses Smarty templates and the tag was simply a custom tag in Smarty.

Smarty Template Engine
Smarty (http://www.smarty.net/) is the most popular template engine for the PHP programming language. Smarty allows keeping core PHP code and presentation/HTML code separate. Special tags are inserted in template files as placeholders for dynamic content. Visit http://www.smarty.net/crashcourse.php and http://www.packtpub.com/smarty/book for more.
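To make the idea concrete, here is a minimal Smarty sketch (the file names and variables are illustrative, and it assumes Smarty is installed from its standard distribution):

[code lang="php"]
<?php
// Logic stays in PHP; presentation lives in the template file.
require_once 'Smarty.class.php';

$smarty = new Smarty();
$smarty->assign('title', 'Yoga Site');
$smarty->assign('products', array('Mat', 'Block', 'Strap'));
// index.tpl holds placeholder tags such as {$title} and {foreach} loops.
$smarty->display('index.tpl');
[/code]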

Next, we found the template our site was using. We could tell it by name, since the template shows up in a drop-down in the add new pages screen as well. We opened the template and reviewed it. It was simple to understand—much like HTML. We inserted our product content block tag after the main content display. Then we saved it and previewed our site. Just as expected, the product promotion content showed up after the main content of all pages. This shows how easy it is to add global content using CMSMS. We also learned that global content blocks can help us manage promotions or commonly used content. Even if you don't go for CMS Made Simple, you can find a similar feature in the CMS of your choice.

Simple features can make life easier
CMS Made Simple’s Global Content Block feature made it easy to run product promotions throughout a site. A simple feature like that can make the content administrator’s life easier. Look out for such simple things that could make your job faster and easier in the CMS you evaluate.

It's a good time now to dive deeper into CMSMS. Go ahead and see whether it's the right choice for you.

Have a go hero-is it right for you?

CMS Made Simple (CMSMS) looks very promising. If we wanted to build a standard website with a photo gallery, newsletter, and so on, it would be a perfect fit. Its code structure is understandable, and extending its functionality is not too difficult. The default templates could be more appealing, but you can always create your own.

The gentle learning curve of CMSMS is very impressive. The hierarchical display of pages, easy reordering, and simplistic content management approach are excellent. It’s simple to figure out how things work. Yet CMSMS is a powerful system—remember how easily we could add a global content block? Doing something like that may need writing a plug-in or hacking source code in most other systems. It’s the right time for you to see how it fits your needs. Take a while and evaluate the following:

  • Does it meet your feature requirements?
  • Does it have enough modules and extensions for your future needs?
  • What does its web site say? Does it align with your vision and philosophy?
  • Does it look good enough?
  • Check out the forums and support structure. Do you see an active community?
  • What are its system requirements? Do you have it all taken care of?
  • If you are going to need customizations, do you (or your team) comfortably understand the code?

We are done evaluating a simple CMS. Let us now look at the top two heavyweights in the Web CMS world—Drupal and Joomla!.

Diving into Drupal

Drupal (http://www.drupal.org) is a top open source Web CMS. Drupal has been around for years and has excellent architecture, code quality, and community support. The Drupal terminology can take time to sink in, but it can serve the most complicated content management needs.
FastCompany and AOL's corporate site run on Drupal:

Here is the About Drupal section on the Drupal web site. As you can see, Drupal can be used for almost all types of content management needs. The goal is to allow easy publishing and management of a wide variety of content.

Let’s try out Drupal. Let’s understand how steep the learning curve really is, and why so many people swear by Drupal.

Time for action-putting Drupal to the test

  1. Download and install Drupal.

    Installing Drupal involves downloading the latest stable release, extracting and uploading files to your server, setting up a database, and then following the instructions in a web installer. Refer to http://drupal.org/gettingstarted/ if you need help.

  2. Log in as the administrator. As you log in, you see a link to Create Content. This tells you that you can create either a page (a simple content page) or a story (content with comments). We want to create a simple content page without any comments, so click on Page.

    In Drupal, viewing a page and editing a page are almost the same. You log in to Drupal and see site content in a preview mode. Depending on your rights, you will see links to edit content and manage other options.

  3. This shows the Create Page screen. There is a title field, but no WYSIWYG editor. Yes, Drupal does not come with a WYSIWYG text editor by default. You have to install an extension module for this.
  4. Let’s go ahead and do that first.
  5. Go to the Drupal web site. Search for WYSIWYG in downloads.
  6. Find TinyMCE in the list. TinyMCE is the WYSIWYG editor we have seen in most other CMSs.
  7. Download the latest TinyMCE module for Drupal—compatible with your version of Drupal.
  8. The download does not include the actual TinyMCE editor. It only includes hooks to make the editor work with Drupal.
  9. Go to the TinyMCE web site (http://tinymce.moxiecode.com/download.php). Download the latest version.
  10. Create a new folder called modules in the sites/all/ folder of Drupal. This is the place to store all custom modules.
  11. Extract the TinyMCE Drupal module here. It should create a folder named tinymce within the modules folder.
  12. Extract the TinyMCE editor within this folder. This creates a subfolder called tinymce within sites/all/modules/tinymce.
  13. Make sure the files are in the correct folders; a sketch of the folder structure appears after this list.
  14. Log in to Drupal if you are not already logged in. Go to
    Administer | Site building | Modules.
  15. If all went well so far, at the end of the list of modules, you will find TinyMCE. Check the box next to it and click on Save Configuration to enable it.
  16. We need to perform two more steps before we can test this. Go to Administer | Site configuration | TinyMCE. It will prompt you that you don’t have any profiles created. Create a new profile. Keep it enabled by default.
  17. Go to Administer | User management | Permissions. You will get this link from the TinyMCE configuration page too. Allow authenticated users to access tinymce. Then save permissions.
  18. We are now ready to test. Go to the Create Content | Page link.
  19. Super! The shiny WYSIWYG editor is now functional! It shows editing controls below the text area (all the other CMSs we saw so far show the controls above).
  20. Go ahead and add some content. Make sure to check Full HTML in Input Format.
    Save the page.
  21. You will see the content we entered right after you save it. Congratulations!
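For reference, the folder layout from steps 10 to 12 should look something like this (a sketch based on the steps above; the exact contents vary with the module and editor versions):

sites/
  all/
    modules/
      tinymce/        (the TinyMCE Drupal module; hooks only)
        tinymce/      (the TinyMCE editor itself)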

What just happened?

We deserve congratulations. After installing Drupal, we spotted that it did not come with a WYSIWYG editor. That's a bit of a setback. Drupal claims to be lightweight, but it should come with a nice editor, right? There are reasons for not including an editor by default. Drupal can be used for a variety of needs, and different WYSIWYG editors provide different features. The reason for not including any editor is to allow you to use the one that you feel is the best. Drupal is about a strong core and flexibility.
At the same time, not getting a WYSIWYG editor by default was an opportunity. It was our opportunity to see how easy it was to add a plug-in to Drupal. We went to the Drupal site and found the TinyMCE module. The description of the module mentioned that the module is only a hook to TinyMCE. We need to download TinyMCE separately. We did that too.
Hooks are another strength of Drupal. They are an easy way to develop extensions for Drupal. An additional precaution with modules is to ensure that we download a version compatible with our version of Drupal; mismatched Drupal and module versions create problems. We created a new directory within sites/all. This is the directory in which all custom modules/extensions should be stored. We extracted the module and the TinyMCE ZIP files. We then logged on to the Drupal administration panel. Drupal had detected the module. We enabled it and configured it. The configuration process was multistep. Drupal has a very good access privilege system, but that made the configuration process longer. We not only had to enable the module, but also enable it for users. We also configured how it should show up, and in which sections. These are superb features for power users.
Once all this was done, we could see a WYSIWYG editor in the content creation page. We used it and created a new page in Drupal. Here are the lessons we learned:

  • Don't assume a feature is in the CMS. Verify that the CMS has what you need.
  • Drupal’s module installation and configuration process is multistep and may require some looking around.
  • Read the installation instructions of the plug-in. You will make fewer mistakes that way.
  • Drupal is lightweight and is packed with a lot of power. But it has a learning curve of its own.

With those important lessons in our mind, let’s look around Drupal and figure out our way.

Have a go hero-figure out your way with Drupal

We just saw what it takes to get a WYSIWYG editor working with Drupal. This was obviously not a simple plug-and-play setup! Drupal has its own way of doing things. If you are planning to use Drupal, now is a good time to go deeper and figure out your way with Drupal. Try out the following:

  • Create a book with three chapters.
  • Create a mailing list and send out one newsletter.
  • Configure permissions and users according to your requirements.
  • What if you wanted to customize the homepage? How easily can you do this?
    (Warning: It’s not a simple operation with most CMSs.)

Choosing a CMS is very confusing!
Evaluating and choosing a CMS can be very confusing. Don't worry if you feel lost and confused among all the CMSs and their features. The guiding factors should always be your requirements, not the CMS's features. Figure out who's going to use the CMS—developers or end users. Find out all you need: Do you need to allow customizing the homepage? Know your technology platform. Check the code quality of the CMS—bad code can gag you. Does your site need so many features? Is the CMS only good looking, or is it beauty with brains? Consider all this in your evaluation.

Drupal code quality

Drupal's code is very well-structured. It's easy to understand and extend via the hooks mechanism. The Drupal team takes extreme care to produce good code; the sketch below gives a taste of the style. If you like looking around code, go ahead and peek into Drupal. Even if you don't use Drupal as a CMS, you can learn more about programming best practices.
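As a flavor of the hooks mechanism, here is a minimal sketch of a Drupal 6-style module (the module name yoga and the path are hypothetical, and a real module also needs a .info file):

[code lang="php"]
<?php
// Sketch of hook_menu() for a hypothetical module named "yoga".
// Drupal discovers and calls modulename_hookname() functions automatically.
function yoga_menu() {
  $items = array();
  $items['yoga/hello'] = array(
    'title' => 'Hello Yoga',
    'page callback' => 'yoga_hello_page',
    'access arguments' => array('access content'),
  );
  return $items;
}

// Page callback referenced above.
function yoga_hello_page() {
  return t('Welcome to the Yoga Site!');
}
[/code]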

You may be wondering why we haven’t covered Joomla! so far. After all, we used Joomla! for the examples in the initial chapters. Since we have gained a good understanding of how Joomla! can meet our needs, let’s do a quick review and see some interesting Joomla! features.

Is Joomla! the best choice?

Joomla! (http://joomla.org/) is the most popular open source Web CMS. It’s been more than three years since Joomla! was born as a fork of Mambo (http://mambofoundation.org/). Today Joomla! has an active community of more than 200,000 users and contributors. Joomla! has around 4,000 extensions and many themes. Numerous high-profile sites use Joomla!. The code quality is good enough, but there is a steep learning curve. Many users complain about its template system. Also, the backend administration system could be simpler.
The Harvard University web site and MTV's Quizilla web site are both Joomla!-based.


But is Joomla! the best choice? Consider the following:

  • Joomla! has the reach and size.
  • It satisfies the content management needs of most typical sites—either out of the box or with some extension.
  • Since there are so many choices in Joomla!, it can get confusing. Selecting a template can be arduous. Selecting the best extension for your needs may be a complete guess.
  • Joomla! does not score too well on usability. But that’s the case with most CMSs.
  • Joomla! is also known to be demanding on the server.
  • If you are looking for additional modules such as e-commerce, communities, and so on, you won’t go wrong with Joomla!.

With this overall feedback, let me show a few useful out-of-the-box features of Joomla!.

Joomla! gives you more

Here are some useful features in the default installation of Joomla!. We did not cover them earlier, since we concentrated on the core content management features.

  • The Frontpage Manager controls what shows up on your home page. This gets really important as your site grows.
  • Menus control navigation around the site. You can manage them the way you want. You can order items in the priority you wish, and even control access levels.
  • Banners let you run advertisements and promotions. They support both text and image ads. This means you can display Google AdSense-like ads on your own.
  • News Feeds make it easy to syndicate content from other sites. You can even categorize feeds.
  • Polls make it easy to carry out surveys.
  • Joomla! even has an internal messaging system. You can easily communicate with all users of your site.



If you use these features creatively, you can build a very powerful site.

Have a go hero-set up a full site with Joomla!

This is a great time to explore Joomla! further. Here’s something you can try out:

  • Set up a full site with Joomla! along with sample content, images, menus, and homepage.
  • Create users and understand how a workflow can be established.
  • Install an SEO extension for Joomla! and learn how it can help your site.

I did not answer the question we started with: Is Joomla! the best choice? You are the person who has to decide that, because everyone's needs are different. Continue to evaluate other CMSs, and then you can make your final decision.

A small requirement can jeopardize your development. Keep a watch on your requirements and carefully evaluate whether the CMS you choose can either fulfill each one by default or allow doing it with custom code. A small requirement (especially one that is not clear at the start) can derail your CMS project if the CMS you select does not accomplish it easily.

SilverStripe—easy and extensive

SilverStripe (http://www.silverstripe.com/) is an up-and-coming CMS. When you see it, you will be impressed. When you try the demo, you will be further impressed! Take a look at the following screen, which shows up right after you log in:

Notable features

Here are the noticeable things in this screen:

  • You start in content editing. You don't have to click around menus to get to the content editing screen.
  • All the content is easily available on the left-hand side. Click on an item and its content loads on the right side without reloading the page.
  • Apart from the standard WYSIWYG editor, you get all the other options for the content right here. This includes metadata and page behavior (standard page, forum, blog, e-commerce page, and so on).
  • You can control access and translations of content right here.
  • When invoked, the image manager shows on the right-hand side with thumbnails and quick insertion.
  • SilverStripe has a built-in image editor. It allows you to resize and crop images.
  • On the left, you also get options for Page Versions and Site Usage Reports.
  • The CMS also has other powerful features such as Newsletter, Files, Comments, Reports, and Statistics.

The things that you don't see onscreen, but are worth a mention for SilverStripe, are:

  • An RoR-like PHP framework, Sapphire, at its core
  • An easy template system
  • Additional modules for e-commerce, blog, forum, Flickr, and Google Maps
  • Thorough documentation for both users and developers

Is it for you?

SilverStripe is a strong contender for any site that needs core content management features. If you don't need all the extensions and overhead, it makes perfect sense. The efforts spent on making the software usable are evident. The terminology is simple, and the workflow even better. Anyone can get started with SilverStripe in minutes. The bottom line is: evaluate SilverStripe before you make your decision.

ezPublish—enterprise CMS

If you are looking for an enterprise-class CMS, you should consider ezPublish (http://ez.no/). High-profile sites such as MySQL (http://www.mysql.com) and Zend (http://www.zend.com), and even NASA, National Geographic, and MIT, run on ezPublish. The software has more than 2.5 million downloads, at least 230 official partners across the world, and approximately 30,000 community users.
So what makes ezPublish an enterprise-class CMS? Let's review some of its notable features.

Notable features

  • Complete workflow control, covering adding, editing, and publishing content
  • An extensive user access and privilege system
  • Multilingual support from the ground up
  • Content versioning
  • Publishes to multiple sites easily
  • Strong SEO features
  • Strong controls for media or news publishing
  • Imports Word or OpenOffice documents, and even supports WebDAV for uploads
  • Supports different content types such as text, images, videos, and so on
  • Extensive documentation and partner support
  • Many extensions available

The following image shows the different categories of setting options in ezPublish:

Is it for you?

If you want a strong workflow, ezPublish is one of the best. It comes with all the standard CMS features. However, the variety of extensions available is not as good as Joomla!'s, and the product has a strong corporate feel to it. If you are looking for a quick solution, this may not be your best bet. But if you are deploying something for a large organization, ezPublish can top the list. All the CMSs we have seen up to this point use PHP as the backend programming language. PHP is available on most web servers. But what if you want to use some other environment?
Let's quickly review some non-PHP CMSs.

Umbraco—rising high

Umbraco (http://www.umbraco.org/) is a simple and easy CMS written for .Net. It's gaining popularity because of its simplicity. The management interface is simple and allows developers to customize the design and functionality.
Hasselblad (http://www.hasselblad.se/), a high-end photography equipment site, runs on Umbraco.

Notable features

  • Written in C#, and can be used with any .Net language
  • Convenient for custom design, with full HTML support
  • APIs for easy integration with your own applications
  • IntelliSense and tight Visual Studio integration
  • Outputs content as XML—easily integrates with Flex/Silverlight rich Internet applications
  • Out-of-the-box XSLT and AJAX support
  • Versioning of content, and integration with MS Word
  • Multilingual, with easy content translation support
  • A simple and easy system that focuses on web site building and content, not on endless extensions
  • Professional support and extensions available

Is it for you?

Umbraco is pretty impressive. You will love its simplicity and integration features. But the documentation needs improvement, and you can't run it without SQL Server. If your site needs core CMS features, Umbraco is the best .Net system today. Go check it out.

    DotNetNuke—the first you may notice

If you are on Windows and want a .Net-based CMS, DotNetNuke (DNN) (http://www.dotnetnuke.com/) is the first CMS you will notice. DNN was inspired by phpNuke—a once very popular CMS—and derived from sample web site code that Microsoft opened up. DNN is advertised as a web application framework. It has well-rounded core features and modules that extend it.

    Notable features

• One of the first open source .Net CMSs; DNN has been around for ages
    • A good base system, allows extensions via modules
    • Many free and commercial modules available
    • Feature-rich, extensive support available

    Is it for you?

If you want a well-known, well-rounded .Net CMS, DNN is a very good choice. It’s not the best when it comes to usability or quality, but it’s popular, and it’s easy to find developers for it!

    Plone—for Python lovers

If you are into Python, you must have heard of Zope and Plone (http://plone.org/). As a matter of fact, you may have heard of Python because of Plone. Plone is a high-profile (sometimes over-hyped) CMS built on Zope. Zope (http://www.zope.org/) is an application server written in Python—with a built-in web server and database—for building CMSs, intranets, portals, and community sites. The magazine Discover and the Free Software Foundation web site are prime advocates of Plone.

    Notable features

    • Solid and extensible system
• Enterprise features—workflow engine, security, LDAP, and so on
    • Used by many high-profile sites
    • Easy to install, powerful template language
    • Ability to define its own content types
    • Can be used for intranets, community sites, and so on—Plone is not just a CMS!
    • Based on Zope, uses ZODB as database—this is also a limitation

    Is it for you?

Plone has some great features and some big advocates. It also has an equally great learning curve. If you are new to Python, Plone will take significant effort to learn. If you don’t have a programming background, you may find yourself stuck when you want to enhance the core system. Python is easy to learn, but getting around Zope and Plone can take a few weeks even for an experienced programmer. If you are already using Python, Plone is a natural choice for your CMS. It has the elegance and features to satisfy demanding users. Go for Plone if you’ve got a team to manage it.

    dotCMS—enterprise and Java

DotCMS (http://www.dotcms.org/) is a J2EE Web CMS. It’s packed with features and is in constant development. It’s not just a CMS: it also offers many portal-like components. It has an interesting history, coming from the same company that produced dotProject—an open source project management system.

    Notable features

    • Excellent core features that match and top similar PHP solutions
    • Structured content
• Enterprise features such as caching, rules support, clustering, Amazon EC2 support, WebDAV support, task-based workflow, and so on
    • Built-in systems such as calendars, events, CRM, newsletter, and so on
    • AJAX used to make things faster and simpler

    Is it for you?

If you have a J2EE infrastructure running, dotCMS is a very good choice as a CMS. There are only a handful of Java CMSs, and dotCMS is one of the best. Although setting up dotCMS is not as easy as setting up a PHP CMS, we must remember that they are in different leagues altogether. There are some other popular Java CMSs as well, and most of them are more than just Web CMSs.

    Where to find more?

    We covered most of the top web CMSs here. If you are still looking for more, here is a quick list:

    • XOOPS: http://xoops.org/
• TYPOlight: http://www.typolight.org/
    • Apache Lenya: http://lenya.apache.org/
    • Alfresco: http://www.alfresco.com/ (we will cover this later in the book)
    • OpenCMS: http://www.opencms.org/
    • mojoPortal: http://www.mojoportal.com/
    • ImpressCMS: http://www.impresscms.org/
    • miaCMS: http://www.miacms.org/
    • MemHT: http://www.memht.com/
• Wikipedia’s list of CMSs: http://en.wikipedia.org/wiki/List_of_content_management_systems

    That should satisfy anyone’s need for a list of CMSs! We have seen enough CMSs in this chapter. Let’s summarize what we learned.

    Summary

We reviewed a whole lot of Web CMSs in this chapter. We covered details of only a few, since most have common features and workflow. Having done all these evaluations, we can see that most CMSs are similar. Which one to pick depends a lot on factors other than features: ease of use, platform, integration with other systems, and so on weigh a lot more than features alone. At the same time, most CMSs are under constant development and keep improving on their limitations. Always keep your requirements and situation as the top priority while selecting a CMS. In this chapter, we specifically looked at:

    • Creating structure and content with CMS Made Simple
    • Adding a WYSIWYG editor to Drupal
    • Using Drupal administration and content addition features
    • Drupal’s code quality
    • Built-in Joomla! features that we can use
    • Easy-to-use SilverStripe CMS
    • Enterprise features of CMSs
    • ezPublish, Plone, Umbraco, DNN, dotCMS— an overview and notable features
    • The CMS that could be right for you

We accomplished a lot in this chapter. There is a lot for you to review and think through. Once you are through with that, let’s go on to e-commerce CMSs in the next chapter.


    Domain Name System

    October 22, 2009 by itadmin Leave a Comment

    DNS in Action: A detailed and practical guide to DNS implementation, configuration, and administration

Recently, while driving to work, I was listening to the radio as usual. Because of the establishment of the new EU (European Union) domain, there was an interview with a representative of one of the Internet Service Providers. For some time the interview went on, boringly similar to other common radio interviews, but suddenly the interviewer started to improvise and asked, “But isn’t the DNS too vulnerable? Is it prepared for terrorist attacks?” The ISP representative enthusiastically answered, “The whole Internet arose more than 30 years ago, initiated by the American Department of Defense. From the very beginning, the Internet architecture took into account that it should be able to keep communication functional even if a part of the infrastructure of the USA were destroyed, i.e., it must be able to do without the destroyed area.”

He went on enthusiastically, “We have 13 root name servers in total. Theoretically, only one is enough to provide the complete DNS function.” At this point, we must stop our radio interview for a moment to remind you that the role and usage of root name servers are described in the first chapter of this book. Now, let’s go back to our interview. The interviewer, not satisfied with the answer, asked, “All these root name servers are in the USA, aren’t they? What will happen if someone or something cuts off the international connectivity and I am not able to reach any root name server?” The specialist, caught out by the interviewer’s questions, replied, “This would be a catastrophe. In such a case, the whole Internet would be out of order.”

At the time, it did not immediately occur to me that an area cut off in this way is, by nature, similar to an intranet. In such a case, it would be enough to create a national (or continental) recovery plan and put into operation a fake national (or continental) name server, exactly according to the description in Chapter 9, which covers closed company networks. The result would be that the Internet would be limited to our national (or continental) network; however, it would be at least partially functional. In fact, at the time, the specialist’s answer made me angry. “So what?”, I thought, “Only DNS would be out of order; i.e., names could not be translated to IP addresses. If we did not use names but used IP addresses instead, we could still communicate. The whole network infrastructure would be intact in that case!”

But working my way, with bare IP addresses, would be tedious, and I kept thinking about it over and over. After some time I realized that the present Internet is not the same as it was in the early 1990s. At that time, the handful of academics involved with the Internet would have remembered those few IP addresses. But in the present scenario, the number of IP addresses runs into millions, and the number of people using the Internet is much higher still. Most of them are not IT experts and know nothing about IP addresses or DNS. For such people, the Internet is either functional or not—similar to, for example, an automatic washing machine. From this point of view, the Internet without functional DNS would really be out of order (in fact, it would still be functional, but only IT experts would be able to use it).

The goal of this publication is to illustrate to readers the principles on which the DNS is based.
This publication is generously filled with examples. Some are from a UNIX environment, some from Microsoft. The concrete examples mostly illustrate a particular described problem. The publication is not a textbook of DNS implementation for a concrete operating system; it always tries to get to the base of the problem. The reader is encouraged to create similar examples according to his or her own concrete needs.

The goal of this book is to give the reader a deep understanding of DNS, independent of any concrete DNS implementation. After studying this book, the reader should be able to study the DNS standards directly from the countless Requests for Comments (RFCs). Links to particular RFCs are listed in the text. In fact, it is quite demanding to study the unfriendly RFCs directly without any preliminary training. For a beginner, even finding the right RFC can be a problem.

    Before studying this book, the reader should know the IP principles covered in the Understanding TCP/IP book published by Packt Publishing (ISBN: 1-904811-71-X) because this publication is a logical follow-on from that book.

    The authors wish you good luck and hope that you get a lot of useful information by reading this publication.

    What This Book Covers

    Chapter 1 begins to explain basic DNS principles. It introduces essential names, for example, domain and zone, explaining the difference between them. It describes the iteration principle by which the DNS translates names to IP addresses. It presents a configuration of a resolver both for UNIX and for Windows. The end of the chapter explains name server principles and describes various name server types.
    Chapter 2 is fully focused on the most basic DNS procedure, the DNS query. Through this procedure, the DNS translates names to IP addresses. In the very beginning, however, this chapter describes in detail the Resource Record structure. At the end of this chapter, many practical examples of DNS exchanges are listed.
    Chapter 3 deals with other DNS procedures (DNS Extensions), i.e., DNS Update, DNS Notify, incremental zone transfer, negative caching, IPv6 Extensions, IPsec, and TSIG.
    Chapter 4 talks about the DNS implementation. It is derived from its historical evolution. From the historical point of view, the oldest DNS implementation that is still sometimes used is BIND version 4. This implementation is very simple so it is suitable to describe basic principles with it. Next, the new generations of BIND are discussed followed by the Windows 2000 implementation.
    Chapter 5 discusses the tools for debugging DNS such as nslookup, dnswalk, and dig, how to control a name server using the rndc program, and the common errors that might occur while configuring DNS.
    Chapter 6 deals with the creation of DNS domains (domain delegation) and with the procedure of domain registration.
    Chapter 7 also talks about domain delegation. In contrast to Chapter 6, here the domain registration relates not to forward domains but to reverse domains.
    Chapter 8 deals with international organizations, called Internet Registries, which are responsible for assigning IP addresses and domain registration.
    Chapter 9 describes the DNS architecture of closed intranets.
    Chapter 10 talks about the DNS architecture from the point of view of firewalls.

    Domain Name System

    All applications that provide communication between computers on the Internet use IP addresses to identify communicating hosts. However, IP addresses are difficult for human users to remember. That is why we use the name of a network interface instead of an IP address. For each IP address, there is a name of a network interface (computer)—or to be exact, a domain name. This domain name can be used in all commands where it is possible to use an IP address. (One exception, where only an IP address can be used, is the specification of an actual name server.) A single IP address can have several domain names affiliated with it.

The relationship between the name of a computer and an IP address is defined in the Domain Name System (DNS) database. The DNS database is distributed worldwide; it contains individual records that are called Resource Records (RRs). Individual parts of the DNS database, called zones, are placed on particular name servers. In short, the DNS is a worldwide distributed database.

    If you want to use an Internet browser to browse to www.google.com with the IP address 64.233.167.147 (Figure 1.1), you enter the website name www.google.com in the browser address field.

    Just before the connection with the www.google.com web server is made, the www.google.com DNS name is translated into an IP address and only then is the connection actually established.

    It is practical to use an IP address instead of a domain name whenever we suspect that the DNS on the computer is not working correctly. Although it seems unusual, in this case, we can write something like:
    [code]
    ping 64.233.167.147
    http://64.233.167.147
    [/code]
    or send email to
    [code]
    dostalek@[64.233.167.147]
    [/code]
However, the reaction can be unexpected, especially for the email, HTTP, and HTTPS protocols. Mail servers do not necessarily support transport to servers listed in brackets, HTTP will return the server’s default home page, and the HTTPS protocol will complain that the server name does not match the name in the server’s certificate.

    Domains and Subdomains

    The entire Internet is divided into domains, i.e., name groups that logically belong together. The domains specify whether the names belong to a particular company, country, and so forth. It is possible to create subgroups within a domain that are called subdomains. For example, it is possible to create department subdomains for a company domain. The domain name reflects a host’s membership in a group and subgroup. Each group has a name affiliated with it. The domain name of a host is composed from the individual group names. For example, the host named bob.company.com consists of a host named bob inside a subdomain called company, which is a subdomain of the domain com.

The domain name consists of strings separated by dots. The name is written with the most specific part on the left and is processed from right to left.
The highest competent authority is the root domain, expressed by a dot (.) on the very right (this dot is often left out). Top Level Domains (TLDs) are defined in the root domain. There are two kinds of TLDs: Generic Top Level Domains (gTLDs) and Country Code Top Level Domains (ccTLDs). Well-known gTLDs are edu, com, net, and mil, which are used mostly in the USA. According to ISO 3166, there are also two-letter ccTLDs for individual countries; for example, the us domain is affiliated with the USA. However, ccTLDs are used mostly outside the USA. A list of ccTLDs and their details is given in Appendix A.

    The TLD domains are divided into subdomains for particular organizations, for example, cocacola.com, mcdonalds.com, google.com. Generally, a company subdomain can be divided into lower levels of subdomains, for example, the company Company Ltd. can have its subdomain as company.com and lower levels like bill.company.com for its billing department, sec.company.com for its security department, and head.company.com for its headquarters.

    The names create a tree structure as shown in the figure:

    The following list contains some other registered gTLDs:

    • The .org domain is intended to serve the noncommercial community.
    • The .aero domain is reserved for members of the air transport industry.
    • The .biz domain is reserved for businesses.
    • The .coop domain is reserved for cooperative associations.
    • The .int domain is only used for registering organizations established by international treaties between governments.
    • The .museum domain is reserved for museums.
    • The .name domain is reserved for individuals.
    • The .pro domain is being established; it will be restricted to credited professionals and related entities.

    Name Syntax

    Names are listed in a dot notation (for example, abc.head.company.com). Names have the following general syntax:
    [code]
string.string.string ... string.
    [/code]
    where the first string is a computer name, followed by the name of the lowest inserted domain, then the name of a higher domain, and so on. For unambiguousness, a dot expressing the root domain is also listed at the end.

    The entire name can have a maximum of 255 characters. An individual string can have a maximum of 63 characters. The string can consist of letters, numbers, and hyphens. A hyphen cannot be at the beginning or at the end of a string. There are also extensions specifying a richer repertoire of characters that can be used to create names. However, we usually avoid these additional characters because they are not supported by all applications.

Both lower and upper case letters can be used, but the matter is not quite that simple. From the point of view of saving and processing in the DNS database, lower and upper case letters are not differentiated. In other words, the name newyork.com will be saved in the same place in the DNS database as NewYork.com or NEWYORK.com. Therefore, when translating a name to an IP address, it does not matter whether the user enters upper or lower case letters. However, the name is saved in the database in its original combination of upper and lower case letters; so if NewYork.com was saved in the database, then during a query the database will return “NewYork.com.”. The final dot is part of the name.

    In some cases, the part of the name on the right can be omitted. We can almost always leave out the last part of the domain name in application programs. In databases describing domains the situation is more complicated:

    • It is almost always possible to omit the last dot.
    • It is usually possible to omit the end of the name, which is identical to the name of the domain, on computers inside the domain. For example, inside the company.com domain it is possible to just write computer.abc instead of computer.abc.company.com. (However, you cannot write a dot at the end!) The domains that the computer belongs to are directly defined by the domain and search commands in the resolver configuration file. There can be several domains of this kind defined (see Section 1.9).

    Reverse Domains

    We have already said that communication between hosts is based on IP addresses, not domain names. On the other hand, some applications need to find a name for an IP address—in other words, find the reverse record. This process is the translation of an IP address into a domain name, which is often called reverse translation.

As with domains, IP addresses also create a tree structure (see Figure 1.2). Domains created by IP addresses are often called reverse domains. The pseudodomains in-addr.arpa for IPv4 and ip6.arpa for IPv6 were created for the purpose of reverse translation. The in-addr.arpa name has historical origins; it is an acronym for inverse addresses in the Arpanet.

Under the domain in-addr.arpa, there are domains with the same name as the first number of the network IP address. For example, the in-addr.arpa domain has subdomains 0 to 255, and each of these subdomains contains lower subdomains 0 to 255. For example, network 195.47.37.0/24 falls under the subdomain 195.in-addr.arpa, and within it under the subdomain 47.195.in-addr.arpa, and so forth. Note that the domains here are created from the network IP address written backwards.

    This whole mechanism works if the IP addresses of classes A, B, or C are affiliated. But what should you do if you only have a subnetwork of class C affiliated? Can you even run your own name server for reverse translation? The answer is yes. Even though the IP address only has four bytes and a classic reverse domain has a maximum of three numbers (the fourth numbers are already elements of the domain—IP addresses), the reverse domains for subnets of class C are created with four numbers. For example, for subnetwork 194.149.150.16/28 we will use domain 16.150.149.194.in-addr.arpa. It is as if the IP address suddenly has five bytes! This was originally a mistake in the implementation of DNS, but later this mistake proved to be very practical so it was standardized as an RFC. We will discuss this in more detail in Chapter 7. You will learn more about reverse domains for IPv6 in Section 3.5.3.
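Reverse translation can be exercised directly with the dig tool that ships with BIND; a small sketch using the Google address from the beginning of this chapter:
[code]
$ dig -x 64.233.167.147
# dig rewrites this query as a PTR lookup for the name
# 147.167.233.64.in-addr.arpa. (the address written backwards
# under the in-addr.arpa pseudodomain)
[/code]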

    Domain 0.0.127.in-addr.arpa

The IP address 127.0.0.1 presents an interesting complication. Network 127 is reserved for loopback, i.e., a software loop on each computer. While other IP addresses are unambiguous within the Internet, the address 127.0.0.1 occurs on every computer. Each name server is not only an authority for common domains, but also an authority (primary name server) for the domain 0.0.127.in-addr.arpa. We will consider this as given and will not list it in the charts, but be careful not to forget about it. For example, even a caching-only server is a primary server for this domain. Windows 2000 appears to be the only exception to this rule, but it would not hurt even for Windows 2000 to establish a name server for the zone 0.0.127.in-addr.arpa.

    Zone

    We often come across the questions: What is a zone? What is the relation between a domain and a zone? Let us explain the relationship of these terms using the company.com domain.

    As we have already said, a domain is a group of computers that share a common right side of their domain name. For example, a domain is a group of computers whose names end with company.com.
However, the domain company.com is large. It is further divided into the subdomains bill.company.com, sec.company.com, sales.company.com, xyz.company.com, etc. We can administer the entire company.com domain on one name server, or we can create independent name servers for some subdomains. (In Figure 1.3, we have created subordinate name servers for the subdomains bill.company.com and head.company.com.) The original name server serves the domain company.com and the subdomains sec.company.com, sales.company.com, and xyz.company.com—in other words, the original name server administers the company.com zone. The zone is the part of the domain namespace that is administered by a particular name server.

    Special Zones

    Besides classic zones, which contain data about parts of the domains or subdomains, special zones are also used for DNS implementation. Specifically, the following zones are used:

• Zone stub: A stub zone is actually a subordinate zone that only contains information about which name servers administer the particular subdomain (it contains the NS records for the zone). The stub zone therefore does not contain the entire zone.
• Zone cache/hint: A hint zone contains the list of root name servers (non-authoritative data read into memory during the start of the name server). Only BIND version 8 and later use the name hint for this type of zone; in previous versions, a cache zone was used. Remember that the root name servers are the authority for the root domain, marked as a dot (.).
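In BIND 8/9 configuration syntax, these two special zone types might be declared as follows. This is a minimal sketch; the file names and the master’s address are hypothetical:
[code]
// hint zone: the list of root name servers read at startup
zone "." {
    type hint;
    file "root.hints";
};

// stub zone: only the NS records of the subordinate zone are kept
zone "bill.company.com" {
    type stub;
    masters { 195.70.130.1; };
    file "stub.bill.company.com";
};
[/code]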

    Reserved Domains and Pseudodomains

It was later decided that other domains could also be used as TLDs. Some TLDs were reserved in RFC 2606:

    • The test domain for testing
    • The example domain for creating documentation and examples
    • The invalid domain for evoking error states
    • The localhost domain for software loops

Domains that are not directly connected to the Internet can also exist, i.e., domains of computers that do not even use the TCP/IP network protocol and therefore do not have IP addresses. These domains are sometimes called pseudodomains. They are meaningful especially for electronic mail: with the help of a pseudodomain, it is possible to send email into other networks (like DECnet or MS Exchange) and from there into the Internet.

In its internal network, a company can use TCP/IP in one part and the DECnet protocol in another. A user working over TCP/IP in the internal network (for example, Daniel@computer.company.com) can be addressed from the Internet in the usual way. But how do you address a user on computers running the DECnet protocol?

To solve this, we insert the fictive dnet pseudodomain into the address. The user Daniel is therefore addressed as Daniel@computer.dnet.company.com. With the help of DNS, all email addressed to the dnet.company.com domain is redirected to a gateway into the DECnet network (the gateway of the company.com domain), which performs the transformation from TCP/IP (SMTP) into DECnet (Mail-11).

    Queries (Translations)

The most common queries are translations of hostnames to IP addresses, but it is also possible to request other information from the DNS. Queries are mediated by a resolver. The resolver is a DNS client that asks the name server. Because the database is distributed worldwide, the nearest name server does not need to know the final response and can ask other name servers for help. The name server then returns the acquired translation to the resolver, or returns a negative answer. All communication consists of queries and answers.

The name server loads the data for the zones it administers into its cache memory during its start. The primary name server reads the data from the local disk; the secondary name server acquires the data from the primary name server by a query called a zone transfer and also saves it into its cache memory. The data stored within the primary and secondary name servers is called authoritative data. Furthermore, the name server reads from its cache/hint zone the data that is not part of its administered zones (local disk) but that enables it to connect to the root name servers. This data is called nonauthoritative data. In the terminology of BIND versions 8 and 9, we sometimes do not speak of primary and secondary servers, but of master servers and slave servers.

Name servers save into their cache memory the positive (and sometimes even negative) answers to queries that they had to ask other name servers for. From the point of view of our name server, this data acquired from other name servers is also nonauthoritative; caching it saves time when processing repeated queries.

Requirements for translations originate in user programs. The user program asks a component of the operating system, called a resolver, for a translation. The resolver transfers the query for translation to a name server. A resolver without cache memory is called a stub resolver. In smaller systems, there is usually only a stub resolver; in such cases, the resolver transfers all requests via the DNS protocol to a name server running on another computer (see Figure 1.5). It is possible to establish cache memory for a resolver even in Windows 2000, Windows XP, etc. This service in Windows is called DNS Client. (I think this is a little misleading, as a stub resolver is not a proper DNS client!)

    Some computers run only a resolver (either stub or caching); others run both a resolver and a name server. Nowadays, a wide range of combinations are possible (see Figure 1.6) but the principle remains the same:

    1. The user inserts a command, then the hostname needs to be translated into an IP address (in Figure 1.6, number 1).
    2. If the resolver has its own cache, it will attempt to find the result within it directly (2).
    3. If the answer is not found in the resolver cache (or it is a stub), the resolver transfers the request to a name server (3).
    4. The name server will look for the answer in its cache memory.
    5. If the name server does not find the answer in its cache memory, it looks for help from other name servers.
6. The name server can contact other name servers by a process referred to as iteration. By iteration, the name server can reach the name server that is an authority for the answer. The authoritative name server then gives the definitive answer (negative if there is no information in the DNS corresponding to the inserted name).
7. If the process described above does not return a result fast enough, the resolver repeats its query. If there are several name servers listed in the resolver configuration, it sends the next query to the next name server in the list. The list of name servers is processed cyclically; for a particular query, the cycle starts from the name server listed in the first position.


DNS uses both the UDP and TCP protocols for the transport of its queries and answers. It uses port 53 for both protocols (i.e., ports 53/UDP and 53/TCP). Common queries, such as the translation of a name to an IP address and vice versa, are performed over the UDP protocol. The length of data transported by UDP is implicitly limited to 512 B (a truncation flag signals that the answer did not fit into 512 B and that the query therefore needs to be repeated over the TCP protocol). The length of UDP packets is limited to 512 B because fragmentation could occur for larger IP datagrams, and DNS does not consider fragmentation of UDP as sensible. Queries transporting zone data (zone transfers) occur between the primary and secondary name servers and are transported by the TCP protocol.
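The transport behavior can be observed with the dig tool; a short sketch (ns.company.com is a hypothetical authoritative server):
[code]
$ dig www.google.com                     # ordinary query over 53/UDP
$ dig +tcp www.google.com                # the same query forced over 53/TCP
$ dig @ns.company.com company.com axfr   # zone transfer, always over TCP
[/code]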

    Common queries (such as the translation of a name to an IP address and vice versa) are performed with the help of datagrams in UDP protocol. The translations are required by a client (resolver) on the name server. If the name server does not know what to do, it can ask for translation (help) from other name servers. Name servers solve questions among themselves by iteration, which always starts from the root name server. More details are available in Section 1.10.

    There is a rule in the Internet that a database with data needed for translations is always saved on at least two independent computers (independent name servers). If one is unavailable, the translation can be performed by the other computer.

    In general, we cannot expect that all name servers are accessible all the time. If the TCP protocol is used for a translation, attempts to establish a connection with an inaccessible name server would cause long time intervals while the TCP protocol is trying to connect. Only when this time interval is over is it possible to connect to the next name server.

The solution in the UDP protocol is more elegant: a datagram containing the request for the translation is sent to the first server. If the answer does not come back within a short time-out interval, a datagram with the request is sent to another name server; if the answer again does not come back, it is sent to the next one, and so on. When all possible name servers have been tried, it starts again from the first one, and the whole cycle repeats until the answer comes back or the set interval times out.

    Round Robin

Round Robin is a technique that can be used to load several machines equally (load balancing). It is possible to use this technique with the majority of name servers (including Windows 2000/2003). It applies to the situation where we have more than one IP address for one name in DNS. For example, we may operate a busy web server, and because the performance of one machine is not sufficient, we buy one or two more. We run the web server on all three of them (for example, www.company.com). The first one has the IP address 195.1.1.1, the second one 195.1.1.2, and the third one 195.1.1.3. There will be three records in DNS for www.company.com, each with a different IP address. The Round Robin technique ensures that:

1. The answer to the first query (for the first user) returns the web server addresses in the order 195.1.1.1, 195.1.1.2, 195.1.1.3.
2. The answer to the next query (for the second user) returns the addresses in the order 195.1.1.2, 195.1.1.3, 195.1.1.1.
3. The answer to the next query (for the third user) returns the addresses in the order 195.1.1.3, 195.1.1.1, 195.1.1.2.
4. The procedure repeats from point 1 again and again.
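Behind the scenes, Round Robin is nothing more than several A records for the same name; a minimal zone file sketch (TTLs and other records omitted):
[code]
www.company.com.    IN  A   195.1.1.1
www.company.com.    IN  A   195.1.1.2
www.company.com.    IN  A   195.1.1.3
[/code]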

    Resolvers

A resolver is the component of the system dealing with the translation of names to IP addresses. A resolver is a client, but it is not a particular program. It is a set of library functions that are linked with application programs requiring name services, such as Telnet, FTP, browsers, and so on. For example, if Telnet needs to translate the name of a computer to its IP address, it calls the particular library function.

The client (in this case, the aforementioned Telnet) calls the library function (gethostbyname), which formulates the query and sends it to the name server.

    Time limitations must also be considered. It is possible that a resolver does not receive an answer to its first query, while the next one with the same content is answered correctly (while the server is waiting for the first query, it manages to obtain the answer for the second query from another name server, so the first query was not answered, because the response of its name server took too long). From the user’s point of view, it seems that the translation was not managed on the first try, but was completed by processing it again. The use of the UDP protocol causes a similar effect. Note that it can also happen that the server did not receive the request for the translation at all, because the network is overloaded, and the UDP datagram has been lost somewhere along the way.

    Resolver Configuration in UNIX

The configuration file for a resolver in the UNIX operating system is /etc/resolv.conf. It usually contains two types of lines (the second command can be repeated several times):
    [code]
    domain the name of the local domain
    nameserver IP address of name server
    [/code]
    If the user inserted the name without a dot at the end, the resolver will add the domain name from the domain command after the inserted name, and will try to transfer it to the name server for translation. If the translation is not performed (a negative answer has been received from the name server), the resolver will try to translate the actual name without the suffix from the domain command.

    Some resolvers enable the search command. This command allows us to specify more names of local domains.

The IP address of a name server that the resolver should contact is specified by the nameserver command. It is recommended to state several nameserver commands in case one of the name servers becomes unavailable.
Note that the IP address of a name server always has to be stated in the configuration file of the resolver, never the domain name of the name server!
    When configuring the resolver and name server on the same machine, the nameserver command can be directed to a local name server 127.0.0.1 (but this is not necessary).
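Putting it together, a hypothetical /etc/resolv.conf for a host inside the company.com domain could look like this (the name server addresses are the example addresses used later in this chapter):
[code]
# /etc/resolv.conf (hypothetical)
domain company.com
nameserver 195.70.130.1
nameserver 195.70.130.10
[/code]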

Other parameters of the resolver (for example, the maximum number of nameserver commands) can be set in the system include file, often called /usr/include/resolv.h. Afterwards, of course, a new compilation must follow.

Generally, it is also possible to configure all computers without the use of DNS. Then all requests for address translations are performed locally with the help of the /etc/hosts file (in Windows, %SystemRoot%/System32/Drivers/etc/hosts). It is possible to combine both methods (the most typical variant); however, we need to be careful about the content of the /etc/hosts database. Usually, it is also possible to set the order in which the databases are searched; typically, the /etc/hosts file is searched first and the DNS afterwards.
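A minimal sketch of such a local database (the entries are hypothetical):
[code]
# /etc/hosts
127.0.0.1    localhost
195.1.1.1    www.company.com www
[/code]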

    Resolver Configuration in Windows

There is an interesting situation in Windows 2000 and higher. Here we have the previously mentioned DNS Client service, which is an implementation of a caching resolver. This service is started implicitly, and the documentation strictly recommends not stopping it. However, according to my tests, Windows acts like a station with a stub resolver after this service is stopped.

The content of the resolver cache can be written out with the ipconfig /displaydns command and deleted with the ipconfig /flushdns command.

The content of the %SystemRoot%/System32/Drivers/etc/hosts file, which is not affected by the ipconfig /flushdns command, is also part of the resolver cache. The resolver cache can be parameterized by inserting or changing keys in the Windows registry under HKEY_LOCAL_MACHINE/SYSTEM/CurrentControlSet/Services/Dnscache/Parameters, for example, the NegativeCacheTime key, which specifies the time period for which negative answers are kept in the resolver cache.
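From the command prompt, the cache handling described above looks like this (a sketch; the reg command is available from Windows XP onwards, and the NegativeCacheTime value of 0, which disables negative caching, is only an illustrative choice):
[code]
C:\> ipconfig /displaydns
C:\> ipconfig /flushdns
C:\> reg add HKLM\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters ^
       /v NegativeCacheTime /t REG_DWORD /d 0
[/code]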

In older Windows versions, the configuration of a resolver was as simple as in UNIX. The only difference was that the configuration was not created with a text editor; the values were inserted into a particular window. In Windows XP, this configuration window of the resolver (Figure 1.8) contains a lot more information.

It is necessary to look at Windows XP and higher from a historical point of view. The LAN Manager system, based on the NetBIOS protocol, was the predecessor of the Windows network. The NetBIOS protocol also uses names of computers, which it needs to translate to network addresses in the network layer. When Windows uses TCP/IP as its network protocol, it needs to translate the names of computers to IP addresses and vice versa.

LAN Manager implemented its own system of names. Names with their IP addresses were saved locally in the %SystemRoot%/System32/Drivers/etc/lmhosts file. Later, Windows implemented a DNS analogy, a database called WINS (Windows Internet Name Service).

The translation of names is an interesting problem in Windows. When a translation is not found either in the lmhosts file or on a WINS server, a broadcast is sent asking whether the searched-for computer is present on the LAN. The implementation of DNS into Windows extended this entire mechanism with DNS lookups. Programs in Windows 2000 that have the LAN Manager system as a precursor search for the translation:

1. In the LAN Manager cache of the local computer (the nbtstat –c command lists this cache). It is a cache of the NetBIOS protocol. Rows of the lmhosts file having the #PRE string as the last parameter are loaded into this cache when the computer starts. If the lmhosts file is changed, we can force these rows to be reloaded into the cache with the nbtstat –R command.
2. On WINS servers.
3. By a broadcast or multicast on the LAN.
4. In the lmhosts file.
5. In the resolver cache (the content of the hosts file is also read into it).
6. On DNS servers.

Internet-oriented programs (for example, the ping command) search for the translation:

    1. In the resolver cache (even the content of hosts file is read into it).
    2. On DNS Servers.
    3. On WINS servers.
    4. By a broadcast or multicast packet of NetBIOS protocol.
    5. In the lmhosts file.

So if you make a mistake in the name of the computer in the ping command, then in a capture taken with the MS Network Monitor program or with the Ethereal program (visit http://www.ethereal.com for additional information), you will also be able to see the packets of the NetBIOS protocol and even the search conducted by broadcast.

Now to the configuration of the resolver in Windows XP in Figure 1.8. First, we insert the IP addresses of name servers into the upper window (DNS server addresses, in order of use). It is not necessary to insert them if we receive them during the startup of the computer, for example, from a DHCP server or during the establishment of a dial-up connection with the help of the PPP protocol. Furthermore, there are two options here:

1. Select Append primary and connection specific DNS suffixes in the DNS tab (this option is not selected in Figure 1.8); the translation is performed as follows:
  • If the required name contains a dot, the resolver tries to translate the name without adding a suffix.
  • If the name does not contain a dot, it tries to translate the inserted name with a dot and the domain name of the Windows domain appended (configured in Properties on the Computer Name tab).
  • It then tries to translate the inserted name with a dot and the string from the DNS suffix for this connection field appended.
2. Click Append these DNS suffixes (in order); the translation is performed as follows:
  • If the required name contains a dot, the resolver tries to translate the name without adding a suffix.
  • It tries to append the particular suffixes, in order, from the list in the window below this option.

So if you make a mistake in the name of the computer and request a nonexistent name xxx, then, because you have selected the second option, the resolver will first try to translate the name xxx.bill.company.com and then the name xxx.sec.company.com. In both cases, it generates a query to the name server 195.70.130.1 for each of these translations; if it does not receive the answer in time, it repeats the question to the server 195.70.130.10, and the whole cycle is repeated until the time limit is exceeded.

    Name Server

A name server keeps information for the translation of computer names to IP addresses (and for reverse translations). The name server takes care of a certain part of the space of names of all computers; this part is called a zone (at minimum, it takes care of the zone 0.0.127.in-addr.arpa).

A domain, or a part of it, forms the zone. The name server can, with the help of an NS type record in its configuration, delegate the administration of a subdomain to a subordinate name server.

    The name server is a program that performs the translation at the request of a resolver or another name server. In UNIX, the name server is materialized by the named program. Also the name BIND (Berkeley Internet Name Domain) is used for this name server.

    Types of name servers differ according to the way in which they save data:

• Primary name server/primary master is the main data source for the zone. It is the authoritative server for the zone. This server acquires the data about its zone from databases saved on a local disk. The name of this type of server depends on the version of BIND used: while primary name server was the term used in version 4.x, primary master is used in version 8. The administrator manually creates the databases for this server. The primary server must be published as an authoritative name server for the domain in the SOA resource record, while the primary master server does not need to be published. There is only one server of this type for each zone.
• Master name server is an authoritative server for the zone. The master server is always published as an authoritative server for the domain in NS records. The master server is the source of zone data for the subordinate servers (slave/secondary servers). There can be several master servers. This type of server is used in BIND version 8 and later.
• Secondary name server/slave name server acquires the data about the zone by copying it from the primary name server (or from the master server) at regular time intervals. It makes no sense to edit these databases on the secondary name servers: although they are saved on the local server disk, they will be rewritten during the next copying. This type of name server is also an authority for its zones, i.e., its data for the particular zone is considered irrevocable (authoritative). The name of this type of server again depends on the version of BIND used. In version 4, only the name secondary was used; the term slave server was used for a completely different type of server. In version 8, you can come across both names.
• Caching-only name server is neither a primary nor a secondary name server (it is not an authority) for any zone. However, it uses the general characteristics of name servers, i.e., it saves the data that comes through its cache. This data is called nonauthoritative. Every server is a caching server, but by the term caching-only we understand that it is neither a primary nor a secondary name server for any zone. (Of course, even a caching-only server is a primary name server for the zone 0.0.127.in-addr.arpa, but that does not count.)
    • Root name server is an authoritative name server for the root domain (for the dot). Each root name server is a primary server, which differentiates it from other name servers.
    • Slave name server (in BIND version 4 terminology) transmits questions for a translation to other name servers; it does not perform any iteration itself.
    • Stealth name server is a secret server. This type of name server is not published anywhere. It is only known to the servers that have its IP address statically listed in their configuration. It is an authoritative server. It acquires the data for the zone with the help of a zone transfer. It can be the main server for the zone. Stealth servers can be used as a local backup if the local servers are unavailable.

    The architecture of a master/slave system is shown in the following figure:

    One name server can be a master (primary) server for one zone and a slave (secondary) server for another.
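In BIND 8/9 terms, these roles are declared per zone in named.conf. A minimal sketch of a server that is a master for company.com and a slave for sec.company.com (the file names and the master’s address are hypothetical):
[code]
// this server is the master (primary) for company.com
zone "company.com" {
    type master;
    file "db.company.com";
};

// and a slave (secondary) for sec.company.com
zone "sec.company.com" {
    type slave;
    masters { 195.70.130.1; };   // hypothetical master address
    file "db.sec.company.com";
};
[/code]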

    From the point of view of a client, there is no difference between master (primary) and slave (secondary) name servers. Both contain data of similar importance—both are authoritative for the particular zone. The client does not even need to know which server is the master (primary server) and which one is the slave (secondary). On the other hand, a caching server is not an authority, i.e., if it is not able to perform the translation, it contacts the authoritative server for the particular zone.

So if the hostmaster changes some information on the master server (i.e., adds another computer name into the database), then the databases on all slave servers are automatically corrected after a time interval set by a parameter in the SOA resource record. (If the hostmaster corrected the database manually only on a secondary name server, the correction would disappear with the next copying!) A problem occurs when a user receives the first answer from a slave server at a time when the slave server has not yet been updated. The answer is negative, i.e., such a computer is not in the database.

Even worse is the following case: the master server operates correctly, but there is no data for the zone on the slave server because a zone transfer failed. The clients receive authoritative answers from the master server or the slave server by chance. When a client receives an answer from the master server, the answer is correct. When it receives an answer from the slave server, the answer is negative. The user does not know which server answered and says, “The first time I receive a response to my query, and the second time I do not.”

Authoritative data comes from the databases stored on the primary master’s disk. Nonauthoritative data comes from other name servers (“from the network”). There is only one exception: the name server needs to know the root name servers to function properly. It is usually not an authority for them; still, each name server keeps its own nonauthoritative information about the root servers on its disk. This is implemented by the cache command in BIND version 4 or by the zone cache/hint in BIND version 8 and later.

The iteration process for the translation of the name abc.company.com to an IP address is shown in the following figure:

    The step-by-step process is as follows:

1. The resolver formulates its requirement to the name server and expects an unambiguous answer. If the name server is able to answer, it sends the answer immediately. It searches for the answer in its cache memory (5), which holds the authoritative data from the disk databases as well as nonauthoritative data acquired during previous translations. If the server does not find the answer in its cache memory, it contacts other servers, always beginning with a root name server. That is why each name server must know the IP addresses of the root name servers. If no root name server is available (as is, for example, the case on all closed intranets), then after several unsuccessful attempts, the entire translation process collapses.
2. The root name server finds that information about the .com domain was delegated by NS resource records to subordinate name servers, and it returns their IP addresses (the IP addresses of the authoritative name servers for the .com zone).
3. Our name server turns to an authoritative server for the .com domain and finds out that information about the company.com domain was delegated by NS resource records to subordinate name servers; their IP addresses are returned (the IP addresses of the authoritative name servers for the company.com zone).
4. Our name server then turns to an authoritative name server for the company.com domain, which resolves its query (or not). The answer from the authoritative name server for the relevant zone is marked as an authoritative answer. The result is transmitted to the client (1).
5. The information which the server has gradually received is also saved into the cache. The answer to the next similar question is looked up in the cache and returned directly from it, but such an answer is no longer marked as authoritative.
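This iteration can be watched step by step with the dig tool from the BIND distribution; a sketch using the www.google.com name from the beginning of this chapter:
[code]
$ dig +trace www.google.com
# dig starts with a root name server and follows the NS
# delegations through com. and google.com., printing
# each referral on the way to the authoritative answer
[/code]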

The name server also saves the answers acquired during the process described in the previous five points (the translation of abc.company.com) into its cache memory. It can then use these answers for subsequent translations to save time, which also relieves the root name servers. However, if you require the translation of a name from a TLD that is not in the cache, the root name server really is contacted. From this we can see that the root servers on the Internet are heavily burdened, and their unavailability would damage communication on the entire Internet.

A name server does not require a complete (recursive) answer for its own queries. Important name servers (for example, root name servers or TLD name servers) do not even have to answer recursive queries, and hence avoid overloading themselves and restricting their availability. It is therefore not possible to direct the resolver of your computer at them.

The nslookup program is a useful tool for the administrator of a name server. If you want to put questions to a name server with the nslookup program the way another name server would, forbid recursion and the appending of domain names from the resolver configuration file with the commands:
    [code]
    $ nslookup
    set norecurse
    set nosearch
    [/code]
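With recursion and suffix searching switched off, you can follow a delegation chain by hand; a short sketch (a.root-servers.net is one of the real root name servers):
[code]
$ nslookup
> set norecurse
> set nosearch
> server a.root-servers.net    # ask a root name server directly
> www.google.com               # returns a referral to the com. name servers
[/code]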

    Forwarder Servers

There is another type of server, called a forwarder server. The characteristics of this server are not connected with whether it is a primary or secondary server for some zone, but with the way in which it handles the translation of DNS queries.

So far we have said that the resolver transfers the request for a translation to a name server, i.e., it sends a recursive query and waits for the final answer. If the name server is not able to answer by itself, it performs the recursive translation via non-recursive queries. First it contacts the root name server, which tells it which name servers to ask next. It then contacts the recommended name servers. Such a name server sends many packets into the Internet.

    If a company network is connected to the Internet by a slow line, then the name server loads the line by its translations. In such a case, it is advantageous to configure some of the name servers as forwarder servers.

The local name server transmits its queries to the forwarder server, marking them as recursive. The forwarder server takes the request from the local name server and performs the translation via non-recursive queries on the Internet by itself. It then returns only the final result to our name server.

The local name server waits for the final result from the forwarder server. If the local name server does not get the answer within the set time-out limit, it contacts the root name servers and tries to resolve the query by iteration itself.

If the local name server is not supposed to contact the root name servers at all, but only to wait for the answer, then it is necessary to mark such a server in its configuration as forwarder-only. In BIND version 4.x, such a server is called slave. Forwarder-only (slave) servers are used on intranets (behind a firewall) where contact with the root name servers is not possible. Such a server then forwards its queries to a name server that is part of the firewall.

    The forwarder server can work as a caching-only server in both variants, and it can also be the primary or secondary name server for some zones.
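In BIND 8/9, this behavior maps to the forwarders and forward statements in named.conf; a minimal sketch (the addresses are hypothetical forwarder servers):
[code]
options {
    forwarders { 195.70.130.1; 195.70.130.10; };
    forward first;   // fall back to iteration on timeout
    // forward only; // forwarder-only ("slave" in BIND 4 terms)
};
[/code]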

    It is also possible to configure forwarder servers in Windows 2003 Server as shown in the figure below:

Run the DNS console from Administrative Tools. Right-click your DNS server and choose Properties, then select the Forwarders tab. Click New and enter the name of the domain you want to resolve via forwarders, and insert the IP addresses of the forwarder servers below. In the Number of seconds before forward queries time out box, you can set the time limit during which the server waits for an answer from a forwarder server. We can establish a slave (forwarder-only) server by selecting the Do not use recursion for this domain option.


    First Steps with Scalix Admin Console and Scalix Web Access

    October 13, 2009 by itadmin Leave a Comment

    Scalix

    Linux Administrator’s Guide

    Scalix email and calendaring, HP OpenMail, and Samsung Contact: these three names stand for some of the most powerful open-source-based groupware solutions available. This book sets out to explain their fundamentals to Linux administrators.
Since the early 90s, Hewlett-Packard earned many awards for its mail server, and OpenMail was said to be more scalable, reliable, and better performing than any other mail and groupware server. After only a few years, the product had managed to conquer the United States’ Fortune 1000 almost entirely. Scalix Inc., a member of the Xandros family, has continued this story in recent years: several reviewers claim that it has better Outlook support than MS Exchange.
    With the right know-how, Scalix can be easily managed. Several thousand mailboxes are possible on a single server; Web-GUIs and command line tools help the administrator; and Scalix integrates easily with other professional tools, be it OpenVPN, Nagios monitoring or others.
    During its history of almost 20 years, many tools and programs were developed for Scalix to help the admin in his/her daily work. While the official documentation has several thousand pages, which are not all up-to-date, this book tries to give a detailed overview from installation to advanced setups and configuration in big companies.
With this book, I want to provide both a concise description of Scalix's features and an easy-to-use introduction for the inexperienced. Admins, consultants, and teachers will all find this book a helpful base for daily work and training. Though there are many other possible routes to success in the described scenarios, the ones presented have been tested in many setups and have been selected for their simplicity.
High-end email and groupware is a domain where only a few vendors can provide solutions. This is not the realm of Microsoft, and it has never been. It is where companies like HP, Novell, or Scalix offer reliable and scalable products. And Scalix is the only one of them that has licenced parts under a free and open-source licence. The software is free for up to 10 users, easy to use, and offers a wide range of features, from CalDAV and SyncML to clusters.

    What This Book Covers

Chapter 1 will cover how email became a communication standard, what RFCs are, and where you can find the relevant ones. After a short glance at how email works, the related protocols SMTP, POP, IMAP, and MAPI are explained in brief, as are LDAP, X.500, MIME, and SOAP. An overview of the groupware market, including the various definitions of the term by different vendors, closes the chapter.
    Chapter 2 will start with the history of Scalix groupware. We’ll see what a mail node is and where to get more information on Scalix terms like the indexing server, daemons, and services. The chapter will also deal with the protocols supported by Scalix, the
    license involved, and the packages offered by Scalix.
    Chapter 3 describes the standard installation of Scalix software on OpenSUSE 10.2 and Fedora Core 5.
    Chapter 4 deals with advanced installation techniques. First, you will learn about how to get the graphical installation on Windows systems by using NoMachine NX Terminal software. The second part of this chapter shows a typical text-based installation. As an example, we show how the graphical installer is used to correctly uninstall a Scalix server. The last example shows upgrading and reconfiguration of the Scalix server.
    Chapter 5 deals with the Scalix Administration Console (SAC). We will take a short tour through the interface, add a first user, and have a closer look at the available configuration options.
Chapter 6 will cover how to deploy Scalix Connect for Microsoft Outlook to your Windows clients. After that, the integration of the supported Scalix groupware client Evolution and other IMAP mail clients is shown.
    Chapter 7 covers the most important configuration files and commands of Scalix.
    Chapter 8 deals with standard Scalix monitoring tools and the integration of Scalix in your centralized Nagios monitoring. After some details on Scalix administration programs like omstat and omlimit, we see how Outlook clients can be monitored. In the end, some of our Nagios scripts and configuration files serve to add another host to an existing Nagios configuration.
Chapter 9 will deal with several recommendations that make your Scalix server safe—like minimizing the number of services running and listening. We will set up a firewall that allows Scalix users to connect. After that we will set up Stunnel to provide SSL-encrypted Scalix services. Then, we will use OpenVPN to protect the server. Last but not least, we will have a look at the services running and discuss advanced possibilities of securing the server.
Chapter 10 will discuss how to back up and restore a Scalix mail server—for small and large environments.
Chapter 11 will cover how to administer Scalix in sync with data stored in remote directories. This chapter starts with an explanation of how Scalix delivers its information LDAP-style and rounds off with a guide on how to integrate Scalix with an external Microsoft Active Directory.
Chapter 12 starts with questions that you have to ask yourself before you set up any multi-server environment with Scalix. After that, we see two examples of what a High Availability (HA) setup might look like.
    Chapter 13 will cover how to integrate measures against spam and viruses in Scalix.
Bibliography contains a comprehensive list of all the links used throughout the book.

    First Steps with Scalix Admin Console and Scalix Web Access

    This chapter deals with the Scalix Administration Console (SAC). This web interface is the central point of administration for the Scalix server. User, group, and resource management are done here as well as controlling services and settings. In this chapter, we will take a short tour through the interface, add a first user, and have a closer look at the configuration options available for him/her. Towards the
    end, we will test the account by logging into the web client, and sending (and receiving) emails.

    SAC at a Glance

Point your browser to the URL of your Scalix server, following this syntax: http://<servername>/sac. A pop-up window with the Administration Console login opens. If you are using Firefox or another browser with pop-up suppression, you may need to adjust its configuration to allow the Scalix server to open pop-ups. In Firefox, you can do this easily by clicking the yellow bar on top of the displayed page; other browsers may require editing the preferences. Otherwise, Scalix will provide a web page with a link that opens the Admin Console in the same browser window.

    Logging In

    On Scalix 11, the Scalix Administration Login looks like this:

Enter the administrator's name in the Login ID field, exactly as configured during installation. Acknowledge that you are connected via http rather than https by checking the option Not using a secure https connection. Once we have configured https for Scalix, the login dialog will no longer offer this option. However, enabling https is not that easy, and it is therefore not standard in Scalix, except for installations on Red Hat Enterprise. We will deal with this topic later, in the chapter on security.
Click on the Login button to start the Administration Console.

    A First Look Around

The Scalix Administration Console is a web application provided by a Tomcat application server. The only requirement for using it is a modern browser supporting JavaScript. Firefox and Internet Explorer do fine; Konqueror may work soon. The Admin Console window is split into three parts:

    • A menu with icons called Toolbar
    • A list view on the lower left named Contents Pane and
    • The main window on the right, called Display Pane

    The icons in the menu bar let you choose the administration task you want to accomplish, the content pane lists the possible entries that can be edited, and the options and parameters of a selected entry are presented in the display pane.

    By clicking on one of the icons on the Toolbar, you can access the different sections of the Scalix Administration Console. The first three sections are about users, groups, and resources, and will be used in daily administration for adding, deleting or modifying these objects. The section Plugins offers a management GUI for your own or third-party Scalix plug-ins. The Server Info icon leads to a concise list of running services, where the administrator can set the log level of these services and browse through the services’ log files. The Settings Icon allows you to set preferences for the server and new users. A concise online help is available, and the icons Refresh and Logout complete the menu bar’s icons.

    Navigating in the Admin Console

A nice gadget in SAC is the little icon on the top left of the main window. Surrounded by four arrows, this icon displays the icon of the current section and enables the administrator to navigate quickly and easily through the administration console.

    Clicking the up or down arrows will select and activate the next entry upwards or downwards in the list view to the left, and the left/right arrows navigate you back and forth in a browser-like fashion.

    Users, Groups, Resources…

    Now click on the Users icon in order to switch to the user management dialog. Click on the entry of the only user present at this time, sxadmin.

For every user, there are six tabs where the user information is stored. The General tab holds the most important information: Username, Display Name, and Email address. This information is all that is necessary to add a user and use the new account. The other tabs contain contact information, group memberships, and administrative delegations. The mailbox quota, that is, the amount of storage that the user's account may sum up to, is configured in the Mail dialog. On the Advanced tab, the administrator can add a role to the user, decide whether this user is a Standard or a Premium user, and give him a different authentication ID.

    Changing Passwords

There are other features in the Admin Console that you will use frequently once you are in charge of some Scalix users. One of them is probably the Change Password button in the lower right corner, which leads directly to the password dialog. This button is present in every user's configuration dialog.

    Filtering the List

In a large environment, the list view can be very long, and it may be tricky to find a particular user, group, or resource quickly. Thus, Scalix offers filters that can be combined and configured to reduce the displayed objects to a manageable number. In the standard setup, a drop-down menu allows you to select the displayed user type, with special filters like Logged in Users. Specifying a part of the username in the Name field will automatically reduce the list to the usernames matching this mask.

The Edit filter button on the top right edge of the list pane is an especially useful helper in large environments. Normally, Scalix only returns the first 100 entries, but this can be configured. Here, the administrator may define extended filter criteria to avoid long listings, for example of users or groups. Click on it to receive the following dialog:

Because a typical Scalix environment may consist of several thousand users, the Admin Console can manage a scenario consisting of multiple Scalix servers and mailnodes. Each check mark that you set in this tiny dialog adds a drop-down menu or entry field to the list of available filters in the list view. This convenient feature enables the administrator to search for and find a user much faster than in any other groupware solution I know.

    Adding a User

Let's add a first user now. Click on the Users icon in the menu bar, and then on the Create User(s) button in the lower half of the list view. Again, a pop-up window appears. It is called Create New User and offers several fields where the administrator can enter the user data. All that is needed for a new user is a name, an email address, and a password. The email address is generated automatically from the user name and the domain name, so all we need to enter here is our name and a password:

Nevertheless, the administrator can choose several interesting settings here. One of them is the user type: whereas a Scalix Premium user has full access to the groupware (including MS Outlook), the Standard user only has groupware functionality in the Scalix web client. An Internet mail user is merely an entry in the global address book plus an email account for SMTP, POP, and IMAP.
Four options in the lower half can be either checked or unchecked. Locking new users or forcing them to change their passwords on first login are features that may be useful for security-aware administrators. If you do not want the new user to access the Scalix web client SWA (Scalix Web Access), deselect the corresponding check mark.
Like some other groupware servers, Scalix supports delegating email features to a colleague while the user is on holiday. Identifying the sender in a delegate's outgoing mail may be tricky, and thus there is a feature enabling special headers in the email that contain information on the sender. If you check the setting Add Sender header to delegate's outgoing messages, any mail sent from this user on behalf of someone else will contain a header identifying him.
Click on the Next button to proceed. The contact information dialog holds eighteen fields where you can enter administrative user data such as telephone number, department, or address.

    If the option Display in address book is checked, the data entered here will be displayed in the Scalix address book and is thus available to other users. Click on the Next button again.

    In the last dialog, during creation of a user, the administrator may choose the groups that the new user is a member of. After installation, there are only four groups available with different functions. The members of these groups have special administrative rights, which our standard user does not need.
Click on the Finish button to complete the process of adding a new user to the Scalix system. By the way, you can click this button at any time; once you have entered a user name and a password, you do not need to enter any address data.
The Scalix administrator can access all user data at any time later via the Scalix Admin Console. All dialogs are present, identically, in the user management. An admin is allowed to edit the user name and user data, and there are some small but useful features.

    Playing with Filters

This might be a good time to play with the filters: in the Name field in the list view, enter one or more letters different from those your user's name starts with, and the user will disappear from the list. In the example above, if I type a letter that sxadmin does not start with, the user sxadmin vanishes from the list, and after having typed Mart, my list is empty.
Do you notice the little crown on the head of the new user? Scalix Premium users can be identified by this cap and a green shirt. Standard users like the admin account sxadmin are dressed in blue.
The Scalix user management offers some more features worth mentioning. Clicking on the Add Address button adds additional email addresses to this user account, so you can collect the email for several addresses in one particular account. Simply select the real name, user part, and domain part of the email address. The drop-down menu shows that Scalix is capable of administrating multiple domains on one server.

    In the dialogs Member of and Manager of, this user can be assigned as a member or manager of Scalix groups. Click on the Advanced tab to edit the user’s login name.
In the standard setup, Scalix uses the full email address as the login name for all access to the Scalix system. This makes perfect sense for most users, because they only have to remember their email address and password. However, being lazy, I prefer a handy, short login name like “mfeilner” in addition to the email address markus.feilner@scalixbook.org, especially since the Scalix login is case sensitive.

    Enter the login name for this user in the field Authentication ID. There are three other interesting options on this page:

• Under some circumstances, for example if a user has reached the maximum number of failed logins, his account will be locked. This is marked in the Scalix Admin Console by a check mark in the Is locked checkbox. Un-checking this checkbox may be a regular administrative task for users with a bad memory, but if you ever want to lock out a user, this is also the right place to do so.
• With Smart Cache, a copy of the mailbox is stored on the user's client. Smart Cache can be enabled or disabled globally or on a per-user basis. Enabling the Smart Cache is a task that may take some time for large mailboxes, but it is worth it. However, if you decide to let some users have caching settings other than the server default, please note that this cannot be reversed anywhere other than from the command line.
    • Indexing speeds up most of Scalix groupware actions. The index contains meta information on mail, contacts, and appointments helpful for searches. However, such an index needs to be built before it can be used. The Scalix Indexing Service (SIS) builds this index automatically. This dialog allows the administrator to deactivate the Indexing Service for a single user. The Recreate SIS index button helps if you receive error messages about a corrupt index.

    Testing the New Account—Logging into SWA


Immediately after clicking on the Save button in SAC, the user can log in to the web client (or connect through Outlook) using his short ID. The URL of the webmailer is simply http://<servername>/webmail; in our example setup, it is http://scalixbook.org/webmail.

The Scalix Web Access (SWA) is a full-featured standard web client. It supports drag-and-drop actions in Ajax style and has a front end that is very similar to Outlook, which makes it easy for newbies. Again, a menu bar is accompanied by a list view and a main window, and a calendar view at the bottom rounds off this groupware client. The proprietary versions, SBE and EE, contain some features that are very helpful to admins of larger companies. Perhaps the most valuable option is the Recovery folder that every user has by default. This folder contains all emails deleted during the last week, which may significantly reduce the number of calls from your users.

    Sending the First Email

Our server is configured, the user account has a mail address, and the user is logged in. All that is left to do is to check whether the user can send and receive emails. Click on the New button to start editing your first email. A pop-up window with the title New Message will appear. As you can see, the editor window is kept as close as possible to the Outlook look and feel. By the way, both HTML and plain text email are supported.

In the first step, local delivery is checked: enter your own email address in the To: field, some text in the subject and the body of the mail, and click on the Send button. Don't hesitate to click on the Send/Receive button in SWA. The mail is delivered locally, so it should be in the Inbox instantaneously. Unread messages are displayed in bold characters.

In the second step, test the email functionality from and to the outside world. Send an email from either of the configured mail addresses to an external recipient and confirm the success. Reply to the emails and check your Inbox. In most cases, Scalix simply works after installation.

    Summary

In this chapter, we learned how to start and use the Scalix Administration Console. We added a user, looked at advanced filter and search criteria, and changed some advanced settings for this user. After that, we logged in as the new user and tested the Scalix server by sending a local email.

    Filed Under: Misc Tagged With: Scalix

    Building SOA-Based Composite Applications Using NetBeans IDE 6

    October 8, 2009 by itadmin Leave a Comment

    Building SOA-Based Composite Applications Using NetBeans IDE 6

Composite applications aid businesses by stitching together various componentized business capabilities. In the current enterprise scenario, empowering business users to react quickly to the rapidly changing business environment is the topmost priority. With the advent of composite applications, the 'reuse' paradigm has moved from the technical to the business level: you no longer reuse a service, you reuse a business process. Enterprises can now define their own behaviors, optimized for their businesses, through metadata and flows. This business process composition has become increasingly important for constructing business logic.

    also read:

    • What is UDDI?
    • Apache Axis 2 Web Services
    • RESTFul Java Web Services

    The ability of composite applications to share components between them nullifies the distinction between actual applications. Business users should be able to move between the activities they need to do without any actual awareness that they are moving from one domain to another.

    The composite application design enables your company to combine multiple heterogeneous technologies into a single application, bringing key application capability within reach of your business user. Enterprises creating richer composite applications by leveraging existing interoperable components increase the development organization’s ability to respond quickly and cost-effectively to
    emerging business requirements. While there are many vendors offering various graphical tools to create composite applications, this book will focus on OpenESB and NetBeans IDE for designing and building composite applications.

This book introduces basic SOA concepts and shows how you can use NetBeans and OpenESB tools to design and deploy a composite application. After introducing the SOA concepts, you are introduced to the various NetBeans editors and aids that you need to understand and work with to design a composite application. The last part of the book contains a full-fledged incremental example of how you can build a complex composite application, with the necessary screenshots, accompanied by the source code available on the website.

    What This Book Covers

    Chapter 1 introduces SOA and BPEL to the readers with simple examples and gives an overview of the JBI components and runtime required to build composite applications. This chapter also gives you an overview of the need for SOA-based applications in companies by depicting an example of an imaginary AirlinesAlliance system.
Chapter 2 shows you how to quickly set up the NetBeans IDE and other runtime environments, including the OpenESB runtime and the BPEL engine. There are many software tools mentioned in this chapter that you need to download and configure to get started building composite applications using NetBeans.
    Chapter 3 provides an overview of Java Business Integration (JBI) and the Enterprise Service Bus (ESB). You will learn about JBI Service Engines and how they are supported within the NetBeans IDE.
    Chapter 4 introduces JBI Binding Components and how they provide protocol independent communication between JBI components. You will also learn about the support that the NetBeans IDE provides for Binding Components.
    Chapter 5 introduces the NetBeans BPEL Designer that comes bundled with the NetBeans IDE. You will also be introduced to the graphical tools/wizards and palettes available for creating BPEL files.
    Chapter 6 provides an overview of WSDL and how WSDL documents are formed. You will learn about the use of WSDL in enterprise applications and the WSDL editor within the NetBeans IDE.
    Chapter 7 covers the XML schema designer and shows how it aids rapid development and testing of XML schema documents.
    Chapter 8 provides you an overview of the Intelligent Event Processor (IEP) module and the IEP Service Engine that can be acquired from the OpenESB software bundle. This chapter also shows the need for an event processing tool through simple composite application examples.
    Chapter 9 provides details of fault handling within a BPEL process and shows how these can be managed within the NetBeans IDE by using graphical tools.
    Chapter 10 shows you how you can build simple to complex composite applications and BPEL processes using the NetBeans IDE. The examples in this chapter are divided into several parts and the source code for all parts is available in the code bundle.
Chapter 11 gives you the overall picture of composite applications and explains why you need a composite application to deploy your BPEL processes. The composite application support provided in the NetBeans IDE comes with a visual editor for adding and configuring WSDL ports and JBI modules.

    Service Engines

    In Chapter 1, we introduced the concept of SOA applications, and introduced BPEL processes and JBI applications. To gain a greater understanding of these concepts and to enable us to develop enterprise level SOA applications, we need to understand JBI in further depth, and how JBI components can be linked together. This chapter will introduce the JBI Service Engine and how it is supported within the NetBeans Enterprise Pack.
    In this chapter, we will discuss the following topics:

    • Need for Java Business Integration (JBI)
    • Enterprise Service Bus
    • Normalized Message Router
    • Introduction to Service Engines
    • NetBeans Support for Service Engines
    • BPEL Service Engine
    • Java EE Service Engine
    • SQL Service Engine
    • IEP Service Engine
    • XSLT Service Engine

    Need for Java Business Integration (JBI)

    To have a good understanding of Service Engines (a specific type of JBI component), we need to first understand the reason for Java Business Integration.

In the business world, not all systems talk the same language. They use different protocols and different forms of communication. Legacy systems in particular can use proprietary protocols for external communication. The advent and acceptance of XML has been greatly beneficial in allowing systems to be easily integrated, but XML itself is not the complete solution.

When some systems were first developed, they were not envisioned to communicate with many other systems; they were developed with closed interfaces using closed protocols. This, of course, is fine for the system developer, but makes system integration very difficult. This closed and proprietary nature of enterprise systems makes integration between enterprise applications very hard. To allow enterprise systems to communicate effectively with each other, system integrators would use vendor-supplied APIs and data formats or agree on common exchange mechanisms between their systems. This is fine for small, short-term integration efforts, but quickly becomes unproductive as the number of enterprise applications to integrate grows. The following figure shows the problems with traditional integration.

    As we can see in the figure, each third party system that we want to integrate with uses a different protocol. As a system integrator, we potentially have to learn new technologies and new APIs for each system we wish to integrate with. If there are only two or three systems to integrate with, this is not really too much of a problem. However, the more systems we wish to integrate with, the more proprietary code we have to learn and integration with other systems quickly becomes a large problem.

To try and overcome these problems, the Enterprise Application Integration (EAI) server was introduced. In this concept, an integration server acts as a central hub. The EAI server traditionally has proprietary links to third-party systems, so the application integrator only has to learn one API (the EAI server vendor's). With this architecture, however, there are still several drawbacks: the central hub can quickly become a bottleneck, and because of the hub-and-spoke architecture, any problems at the hub are rapidly manifested at all the clients.

    Enterprise Service Bus

    To help solve this problem, leading companies in the integration community (led by Sun Microsystems) proposed the Java Business Integration Specification Request (JSR 208) (Full details of the JSR can be found at http://jcp.org/en/jsr/detail?id=208). JSR 208 proposed a standard framework for business integration by providing a standard set of service provider interfaces (SPIs) to help alleviate the
    problems experienced with Enterprise Application Integration.

    The standard framework described in JSR 208 allows pluggable components to be added into a standard architecture and provides a standard common mechanism for each of these components to communicate with each other based upon WSDL. The pluggable nature of the framework described by JSR 208 is depicted in the following figure. It shows us the concept of an Enterprise Service Bus and introduces us to the Service Engine (SE) component:

    JSR 208 describes a service engine as a component, which provides business logic and transformation services to other components, as well as consuming such services. SEs can integrate Java-based applications (and other resources), or applications with available Java APIs.

A Service Engine is a component which provides (and consumes) business logic and transformation services to other components. There are various Service Engines available, such as the BPEL service engine for orchestrating business processes, or the Java EE service engine for consuming Java EE Web Services. We will discuss some of the more common Service Engines later in this chapter.

    The Normalized Message Router

As we can see from the previous figure, SEs don't communicate directly with each other or with clients; instead, they communicate via the NMR. This is one of the key concepts of JBI, in that it promotes loose coupling of services.

So, what is the NMR and what is its purpose? The NMR is responsible for taking messages from clients and routing them to the appropriate Service Engines for processing. (This is not strictly true, as there is another standard JBI component, the Binding Component, which is responsible for receiving client messages; Binding Components are discussed in Chapter 4. Again, this further enhances the support for loose coupling within JBI, as Service Engines are decoupled from their transport infrastructure.)

The NMR is responsible for passing normalized (that is, WSDL-based) messages between JBI components. A message typically consists of a payload and a message header, which contains any other data required for the Service Engine to understand and process the message (for example, security information). Again, we can see that this provides a loosely coupled model in which Service Engines have no prior knowledge of other Service Engines. This allows the JBI architecture to be flexible, and allows different component vendors to develop standards-based components.

The Normalized Message Router is the enabling technology that allows messages to be passed between loosely coupled services such as Service Engines.

    The figure below gives an overview of the message routing between a client application and two service engines, in this case the EE and SQL service engines.

In this figure, a request is made from the client to the JBI container. This request is passed via the NMR to the EE Service Engine. The EE Service Engine then makes a request to the SQL Service Engine via the NMR, and the SQL Service Engine returns a message to the EE Service Engine, again via the NMR. Finally, the message is routed back to the client through the NMR and the JBI framework. The important concept here is that the NMR is a message routing hub, not only between clients and service engines, but also for intra-communication between different service engines.

    The entire architecture we have discussed is typically referred to as an Enterprise Service Bus.

An Enterprise Service Bus (ESB) is a standards-based middleware architecture that allows pluggable components to communicate with each other via a messaging subsystem.

Now that we have a basic understanding of what a Service Engine is, and of how communication takes place between application clients and Service Engines as well as between Service Engines themselves, let's take a look at the support the NetBeans IDE gives us for interacting with Service Engines.

    Service Engine Life Cycle

Each Service Engine can exist in one of a set of predefined states; this set of states is called the Service Engine life cycle. The states are:

    • Started
    • Stopped
    • Shutdown
    • Uninstalled

    The figure below gives an overview of the life cycle of Service Engines:

Service Engines can be managed with the command-line utility asadmin that is supplied as part of the Sun Java System Application Server. Some of the common commands that can be used to manage Service Engines are illustrated below:
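As an illustration, the JBI component life cycle can typically be driven with commands of the following form; note that the exact command names and the component name (bpelserviceengine here is a placeholder) depend on your application server release:

[code]
# Start, stop, and shut down a Service Engine (component name is an example)
asadmin start-jbi-component bpelserviceengine
asadmin stop-jbi-component bpelserviceengine
asadmin shut-down-jbi-component bpelserviceengine

# Remove the component from the server entirely
asadmin uninstall-jbi-component bpelserviceengine

# List the Service Engines currently installed on the server
asadmin list-jbi-service-engines
[/code]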


Service Engines can also be managed from within the NetBeans IDE instead of using the asadmin utility. We will look at that in the next section.

    Service Engines in NetBeans

    As we discussed in Chapter 2, the NetBeans Enterprise Pack provides a version of the Sun Java System Application Server 9.0 which includes several Service Engines from the Open ESB project.

    All of these Service Engines can be administered from within the NetBeans IDE from the Services explorer panel. Within this panel, expand the Servers | Sun Java System Application Server 9 | JBI | Service Engines node to get a complete list of Service Engines deployed to the server.

    The NetBeans Enterprise Pack 5.5 and the NetBeans 6.0 IDE have different Service Engines installed. The following table lists which Service Engines are installed in which version of the NetBeans Enterprise Pack:

    In the previous section, we discussed the life cycle of Service Engines and how this can be managed using the asadmin application. Using the NetBeans IDE, it is easy to manage the state of a Service Engine. Right-clicking on any of the Service Engines within the Services explorer shows a menu allowing the life cycle to be managed as shown in the figure below:

    To illustrate the different states in a Service Engine life cycle, a different icon is displayed:

    Now that we have a good understanding of what Service Engines are, and what support the NetBeans IDE provides, let’s take a closer look at some of the more common Service Engines provided with the NetBeans Enterprise Pack.

    BPEL Service Engine

Like all the other Service Engines deployed to the JBI container within the Sun Java System Application Server and accessible through NetBeans, the BPEL Service Engine is a standard JBI-compliant component as defined by JSR 208.

The BPEL Service Engine enables orchestration of WS-BPEL 2.0 business processes. This allows a workflow of different business services to be built, as shown in the following figure:

Within NetBeans, we can create BPEL modules, which consist of one or more BPEL processes. BPEL modules are built into standard JBI components and then deployed to the JBI container, where the BPEL Service Engine allows the processes within the module to be executed. In JBI terms, this is called a Service Unit.

    A Service Unit is a deployable component (jar file) that can be deployed to a Service Engine.

    New BPEL modules are created in NetBeans by selecting the File | New Project menu option and then selecting BPEL Module from the SOA category as shown in the following figure:

    Within a BPEL module project, we add BPEL Processes. These processes describe the orchestration of different services.

    All the standard operations specified by WS-BPEL 2.0 Specification (like Providing and Consuming Web Services, Structuring the processing logic, and performing basic activities such as assignments and waiting) are available within the BPEL Service Engine. The NetBeans designer provides simple drag-and-drop support for all of these activities.

    Consider, for example, a service for generating license keys for a piece of software. In a Service Oriented Architecture, our system may consist of two services:

    1. A Customer Service: this service would be responsible for ensuring that license requests are only made by valid customers.
    2. A License Generation Service: this service would be responsible for generating valid license keys.

    Within NetBeans, we can create a BPEL process that ties these services together allowing us to return valid license keys to our customers and details of purchasing options to non-customers.
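To make the orchestration concrete, a heavily abbreviated WS-BPEL 2.0 sketch of such a process might look like the following. The partner link, variable, and message details are omitted, and all names are invented for illustration; the NetBeans BPEL Designer generates the full document for you:

[code lang="xml"]
<process name="LicenseKeyProcess"
         targetNamespace="http://example.org/license"
         xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable">
  <sequence>
    <!-- Accept the license request from the client -->
    <receive partnerLink="client" operation="requestLicense"
             createInstance="yes"/>
    <!-- Ask the Customer Service whether the requester is a valid customer -->
    <invoke partnerLink="customerService" operation="validateCustomer"/>
    <!-- Ask the License Generation Service for a key -->
    <invoke partnerLink="licenseService" operation="generateKey"/>
    <!-- Return the license key (or purchasing options) to the caller -->
    <reply partnerLink="client" operation="requestLicense"/>
  </sequence>
</process>
[/code]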

    Java EE Service Engine

The Java EE Service Engine acts as a bridge between the JBI container and the Java EE container, allowing Java EE web services to be consumed from within JBI components. Without the Java EE Service Engine, JBI components would have to invoke Java EE web services via remote calls instead of via in-process communication. The Java EE Service Engine allows both servlet- and EJB-based web services to be consumed from within JBI components.

    The Java EE Service Engine provides several benefits when executing Java EE Web Services.

    • Increased performance
    • Transaction support
    • Security support

    These are explained in the following subsections.

    Increased Performance

Using the Java EE Service Engine enables Java EE web services to be invoked in-process, within the same JVM in which the services are running. This eliminates the need for any wire-based transmission protocols and provides increased performance.

    Transaction Support

Using an in-process communication model between the Java EE application server and the JBI container allows both web services and JBI modules to use the same transaction model: multiple web service calls and calls to other JBI modules (for example, BPEL processes) can all use the same transaction.

    Security Support

    When executing Java EE Web Services from within the JBI container, the Java EE Service Engine allows security contexts to propagate between components. This removes the need to authenticate against each service.

    SQL Service Engine

The SQL Service Engine allows SQL statements to be executed against relational databases, and allows the results of those SQL statements to be returned to the client application or to other Service Engines for further processing.

The SQL Service Engine allows SQL DDL (Data Definition Language), SQL DML (Data Manipulation Language), and stored procedures to be executed against a database. This allows different scenarios to be executed against the database, for example, obtaining a customer's address or the number of outstanding invoices a customer may have.

Within NetBeans, the SQL module is used to interact with the SQL Service Engine. A SQL module project consists of three artifacts:

• a configuration XML file (connectivityInfo.xml)
• one or more SQL files containing distinct SQL statements
• a WSDL file describing the SQL operations

    SQL Modules are created by choosing File | New Project and then selecting the SQL Module option from within the SOA projects category.

    Within a SQL Module, there is a configuration file called connectivityInfo.xml which contains connection details for the database. This can either be specified as a driver connection or as a JNDI name for a data source.
[code lang="xml"]
<?xml version="1.0" encoding="UTF-8"?>
<connection>
  <database-url value="jdbc:derby://localhost:1527/db_name"/>
  <jndi-name value=""/>
</connection>
[/code]
Each SQL statement that is to be presented to client applications as a new operation must be stored in a separate SQL file. Using the example scenarios above, we would have two SQL files with contents along the lines shown below:
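A minimal sketch of what these two files might contain; the table and column names are invented for illustration and would follow your own database schema:

[code]
-- customer_address.sql: return the address of a given customer
SELECT address FROM customer WHERE customer_id = ?

-- outstanding_invoices.sql: count a customer's unpaid invoices
SELECT COUNT(*) FROM invoice WHERE customer_id = ? AND paid = 0
[/code]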

    In order for other JBI components to be able to access our SQL module, we must have a WSDL file which describes the operations we have defined (customer_address.sql and outstanding_invoices.sql). NetBeans will generate this file for us when we select the Generate WSDL option from right-clicking on the project in the Projects explorer.

SQL Service Units cannot be executed directly from within the JBI container. To execute a SQL Service Unit, it needs to be added to a composite application; the deployable result is called a Service Assembly. Composite applications are further discussed in Chapter 4.

A Service Assembly is a deployable component (jar file) that consists of a collection of Service Units.

    IEP Service Engine

The Intelligent Event Processing (IEP) Service Engine allows data to be read from an input source and then processed into a format that can be used for a variety of purposes, such as reporting or business intelligence.

    For example, an IEP project could be created that takes sales information from a retail system, collects all information made over the last hour, and then outputs it to a database table for reporting purposes. This would enable fast reporting based upon a periodically updated subset of the business data. Any reporting queries performed would therefore be “off-line” to the business database. This way different reporting queries could be performed as and when necessary without any performance impact on the business database.

    Depending on the version of NetBeans that you have installed, you may not automatically have support for creating and editing IEP projects. If you do not have IEP project support within NetBeans, both the IEP service engine and NetBeans editor support for IEP projects can be downloaded from http://www.glassfishwiki.org/jbiwiki/attach/IEPSE/InstallationGuide.zip.

    New IEP modules can be created within NetBeans by selecting the File | New Project menu option and then selecting the Intelligent Event Processing Module
    option within the SOA category as shown in the following figure:

    After making the above selections, the second stage of the New Project wizard allows the Project Name and the Project Location to be specified.

    Finally, after creating the new IEP module, new Intelligent Event Processors can be added to the project. This is achieved by right-clicking on the newly created IEP project within the NetBeans Project pane and selecting the New | Intelligent Event Processor menu option. Selecting this option displays the New Intelligent Event Processor wizard which includes one page allowing the IEP File Name and Folder to be specified.

    The IEP Process Editor within NetBeans allows many different processing actions to be performed on data. IEP Processes are defined using a drag-and-drop editor. The Palette, which shows all of the operations that can be performed on data, is shown in the following figure:

IEP Processes, too, cannot be executed directly from within the JBI container. To execute IEP Processes, they need to be deployed into a Service Assembly and added as part of a composite application. Composite applications are further discussed in Chapter 4.

    XSLT Service Engine

The XSLT Service Engine enables transformations of XML documents from one format to another using XSL stylesheets. The service engine allows XSL transformations to be deployed as web services, which can then be used by external clients.
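As a reminder of what such a stylesheet looks like, here is a minimal sketch that reshapes one XML format into another; the order and invoice element names are invented for illustration:

[code lang="xml"]
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Match the root of the incoming document and emit the target format -->
  <xsl:template match="/order">
    <invoice>
      <!-- Copy the customer's name across into the new structure -->
      <customer><xsl:value-of select="customer/name"/></customer>
      <!-- Total the price attributes of all item elements -->
      <total><xsl:value-of select="sum(item/@price)"/></total>
    </invoice>
  </xsl:template>
</xsl:stylesheet>
[/code]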

    New XSLT modules can be built to run against the XSLT service engine by selecting the File | New Project menu option and then selecting the XSLT Module option from within the SOA category as shown in the following figure:

Several different types of files can be created within an XSLT Module to allow the service engine to transform XML files from one format to another. XML Schema files can be used to define the XML used within the transformation process, and WSDL files are used to define the operations that the service engine performs. We won't discuss how WSDL files and XML Schema files are created and maintained in this chapter; however, we will discuss them in full detail later in this book.

The final type of file that can be specified within an XSLT Module is an XSLT Service. These files can be created by right-clicking on the XSLT Module within the Project explorer in NetBeans and selecting the New | XSLT Service menu option. The result is shown in the next screenshot. When creating an XSLT Service Unit, two different processing modes (Service types) are available:

    • Request-Reply Service
    • Service Bridge

    The Request-Reply Service mode enables an XML message to be received from a client, transformed, and then sent back to the original client.

    The Service Bridge mode enables an XML message to be received from a client and transformed into a different format. The transformed message is then used as an input for invoking a service. The output of this service is then transformed using a second XSL stylesheet and returned to the original caller. The Service Bridge mode is therefore acting as a bridge between two services. This is an implementation of the Adapter Pattern as defined in Design Patterns—Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides.

    When creating a Request-Reply Service, the New XSLT Service wizard allows the web service for the XSLT transformation to be specified including details of the port, the operation being executed and the input and output types of the operation as shown in the following two screenshots:


    When creating a Service Bridge service, the two web services to be bridged are specified by first selecting the WSDL for the implemented web service and then for the invoked web service.

    Having selected the web services to bridge, the wizard allows the implemented and invoked web services to be fully specified. Here we need to specify the operation from our implemented service and the operation to call on the invoked service.

    Summary

    In this chapter, we have introduced the concept of a Service Engine and given an overview of the Service Engines installed with the NetBeans Enterprise Pack (the BPEL, Java EE, SQL, IEP, and XSLT Service Engines). We’ve learned that Service Engines:

    • provide business logic functionality to their clients
    • can be consumers and/or providers
    • run within a Java Business Integration (JBI) Server
    • expose their interfaces via WSDL
    • communicate within an Enterprise Service Bus via messaging

We've also discussed some basic JBI concepts such as the Normalized Message Router, Service Assemblies, and Service Units. We now have a good understanding of JBI, of some of the problems with enterprise application integration, and of why JBI is useful. In the next chapter, we extend our knowledge of JBI and SOA application development with NetBeans by describing another standard JBI component: the Binding Component.

    Filed Under: WebServices Tagged With: NetBeans, SOA

    Ruby on Rails Web Mashup Projects

    October 6, 2009 by itadmin Leave a Comment

    Ruby on Rails Web Mashup Projects

    A step-by-step tutorial to building web mashups

    A web mashup is a new type of web application that uses data and services from one or more external sources to build entirely new and different web applications. Web mashups usually mash up data and services that are available on the Internet—freely, commercially, or through other partnership agreements. The external sources that a mashup uses are known as mashup APIs.

    This book shows you how to write web mashups using Ruby on Rails—the new web application development framework. The book has seven real-world projects—the format of each project is similar, with a statement of the project, discussion of the main protocols involved, an overview of the API, and then complete code for building the project. You will be led methodically through concrete steps to build the mashup, with asides to explain the theory behind the code.

    What This Book Covers

    The first chapter introduces the concepts of web mashups to the reader and provides a general introduction to the benefits and pitfalls of using web mashups as standalone applications or as part of existing web applications.

    The first project is a mashup plugin into an existing web application that allows users to find the location of the closest facility from a particular geographic location based on a specified search radius. The location is mapped and displayed on Google Maps.

    The second project is another mashup plugin. This plugin allows users to send messages to their own list of recipients, people who are previously unknown to the website, on behalf of the website. The project uses Google Spreadsheets and EditGrid to aggregate the information, and Clickatell and Interfax to send SMS messages and faxes respectively.

    The third project describes a mashup plugin that allows you to track the sales ranking and customer reviews of a particular product from Amazon.com. The main API used is the Amazon E-Commerce Service (ECS).

The fourth project shows you how to create a full-fledged Facebook application that allows a user to perform some of the functions and features of a job board. This mashup uses the Facebook, Google Maps, Daylife, Technorati, and Indeed.com APIs.

    The fifth project shows you how to create a full web mashup application that allows users to view information on a location. This is the chapter that uses the most mashup APIs, including Google Maps, FUTEF, WebserviceX, Yahoo! Geocoding services, WeatherBug, Kayak, GeoNames, Flickr, and Hostip.info.

    The sixth project describes a mashup plugin that allows an online event ticketing application to receive payment through Paypal, send SMS receipts, and add event records in the customer’s Google Calendar account. The APIs used are Google Calendar, PayPal, and Clickatell.

The final project shows a complex mashup plugin used for making corporate expense claims. It allows an employee to submit expense claims in Google Docs and Spreadsheets, attaching the claims form and the supporting receipts. His or her manager, also using Google Docs and Spreadsheets, then approves the expense claims; the approved claims are retrieved by the mashup and used to reimburse the employee through PayPal. It uses the PayPal APIs and various Google APIs.

    ‘Find closest’ mashup plugin

    What does it do?

    This mashup plugin allows your Rails website or application to have an additional feature that allows your users to find the location of the closest facility from a particular geographic location based on a specified search radius. This mashup plugin integrates with your existing website that has a database of locations of the facilities.

    Building a kiosk locator feature for your site

Your company has just deployed 500 multi-purpose payment kiosks around the country, cash cows for the milking. Another 500 are on the way, promising to bring in the big bucks for all the hardworking employees in the company. Naturally, your boss wants as many people as possible to know about them and use them. The problem is that while the marketing machine churns away on the marvels and benefits of the kiosks, the customers need to know where the kiosks are located in order to use them. He commands you:


    “Find a way to show our users where the nearest kiosks to him are, and directions to reach them!”

    What you have is a database of all the 500 locations where the kiosks are located, by their full address. What can you do?

    Requirements overview

    Quickly gathering your wits, you penned down the following quick requirements:

    1. Each customer who comes to your site needs to be able to find the closest kiosk to his or her current location.
    2. He or she might also want to know the closest kiosk to any location.
    3. You want to let the users determine the radius of the search.
4. Having found the locations of the closest kiosks, you need to show the user how to reach them.
5. You have 500 kiosks now (and you need to show where they are), but another 500 will be coming, in tens and twenties, so each kiosk's location needs to be specified as it is entered. You want to put all of these on some kind of map.

    Sounds difficult? Only if you didn’t know about web mashups!

    Design

The design for this first project is rather simple. We will build a simple database application using Rails and create a main Kiosk class in which to store the kiosk information, including its address, longitude, and latitude. After populating the database with the kiosk information and addresses, we will use a geolocation service to discover each kiosk's longitude and latitude and store them in the same table. Next, we will take the kiosk information and mash it up with Google Maps, displaying the kiosks as pushpins on the online map and placing their information inside an info box attached to each pushpin.
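At the heart of this design is a plain ActiveRecord model. Assuming GeoKit's conventions, which are introduced in the rest of this chapter, a sketch of the model might look like this; the column names lat and lng follow GeoKit's defaults:

[code]
# app/models/kiosk.rb
class Kiosk < ActiveRecord::Base
  # acts_as_mappable comes from the GeoKit plugin. It expects float
  # columns named lat and lng on the kiosks table and adds
  # distance-aware finders, e.g.:
  #   Kiosk.find(:all, :origin => '100 Main St, Anytown',
  #              :within => 5)   # kiosks within a 5-mile radius
  acts_as_mappable
end
[/code]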

    Mashup APIs on the menu

    In this chapter we will be using the following services to create a ‘find closest’ mashup plugin:

    • Google Maps APIs including geocoding services
    • Yahoo geocoding services (part of Yahoo Maps APIs)
    • Geocoder.us geocoding services
    • Geocoder.ca geocoding services
    • Hostip.info

    Google Maps

Google Maps is a free web-based mapping service provided by Google. It provides a map that can be navigated by dragging the mouse across it and zoomed in and out using the mouse wheel or a zoom bar. It has three forms of views—map, satellite, and a hybrid of map and satellite. Google Maps is coded almost entirely in JavaScript and XML, and Google provides a free JavaScript API library that allows developers to integrate Google Maps into their own applications. The Google Maps APIs also provide geocoding capabilities, that is, they are able to convert addresses to longitude and latitude coordinates.

    We will be using two parts of Google Maps:

    • Firstly to geocode addresses as part of GeoKit’s APIs
    • Secondly to display the found kiosk on a customized Google Maps map

    Yahoo Maps

    Yahoo Maps is a free mapping service provided by Yahoo. Much like Google Maps it also provides a map that is navigable in a similar way and also provides an extensive set of APIs. Yahoo’s mapping APIs range from simply including the map directly from the Yahoo Maps website, to Flash APIs and JavaScript APIs. Yahoo Maps also provides geocoding services. We will be using Yahoo Maps geocoding services as part of GeoKit’s API to geocode addresses.

    Geocoder.us

    Geocoder.us is a website that provides free geocoding of addresses and intersections in the United States. It relies on Geo::Coder::US, a Perl module available for download from the CPAN and derives its data from the TIGER/Line data set, public-domain data from the US Census Bureau. Its reliability is higher in urban areas but lower in the other parts of the country. We will be using Geocoder.us as part of GeoKit’s API to geocode addresses.

    Geocoder.ca

Geocoder.ca is a website that provides free geocoding of addresses in the United States and Canada. Like Geocoder.us, it uses data from TIGER/Line but, in addition, draws data from GeoBase, the Canadian government-related initiative that provides geospatial information on Canadian territories. We will be using Geocoder.ca as part of GeoKit's API to geocode addresses.

    Hostip.info

    Hostip.info is a website that provides free geocoding of IP addresses. Hostip.info offers an HTTP-based API as well as its entire database for integration at no cost. We will be using Hostip.info as part of GeoKit’s API to geocode IP addresses.

    GeoKit

    GeoKit is a Rails plugin that enables you to build location-based applications. For this chapter we will be using GeoKit for its geocoding capabilities in two ways:

    • To determine the longitude and latitude coordinates of the kiosk from its given address
    • To determine the longitude and latitude coordinates of the user from his or her IP address

GeoKit is a plugin to your Rails application, so installing it means more or less copying the source files from the GeoKit Subversion repository and running an installation script that adds certain default parameters to your environment.rb file.

To install GeoKit, go to your Rails application folder and execute this at the command line:

    [code]
    $./script/plugin install svn://rubyforge.org/var/svn/geokit/trunk
    [/code]

    This will copy the necessary files to your RAILS_ROOT/vendor/plugins folder and run the install.rb script.

    Configuring GeoKit

    After installing GeoKit you will need to configure it properly to allow it to work. GeoKit allows you to use a few sets of geocoding APIs, including Yahoo, Google, Geocoder.us, and Geocoder.ca.

These geocoding providers can be used directly or through a cascading failover sequence. Using Yahoo or Google requires you to register for an API key, but the keys are free. Geocoder.us is also free under certain terms and conditions, and both Geocoder.us and Geocoder.ca offer commercial accounts. In this chapter I will briefly go through how to get an application ID from Yahoo and a Google Maps API key from Google.

    Getting an application ID from Yahoo

Yahoo's application ID is needed for any Yahoo web service API calls. You can use the same application ID for all services in one or more applications, or use a separate application ID per service.

    To get the Yahoo application ID, go to https://developer.yahoo.com/wsregapp/index.php and provide the necessary information. Note that for this application you don’t need user authentication. Once you click on submit, you will be provided an application ID.

    Getting a Google Maps API key from Google

    To use Google Maps you will need to have a Google Maps API key. Go to http://www.google.com/apis/maps/signup.html. After reading the terms and conditions you will be asked to give a website URL that will use the Google Maps API.

For geocoding purposes the URL is not important (anything will do), but for displaying Google Maps on a website it matters, because the map will not display if the URL doesn't match. However, all is not lost if you provided the wrong URL at first; you can create any number of API keys from Google.

Configuring environment.rb

    Now that you have a Yahoo application ID and a Google Maps API key, go to environment.rb under the RAILS_ROOT/config folder. Installing GeoKit should have added the following to your environment.rb file:

[code]
# Include your application configuration below
# These defaults are used in GeoKit::Mappable.distance_to and in acts_as_mappable
GeoKit::default_units = :miles
GeoKit::default_formula = :sphere
# This is the timeout value in seconds to be used for calls to the geocoder
# web services. For no timeout at all, comment out the setting. The timeout
# unit is in seconds.
# GeoKit::Geocoders::timeout = 3
# These settings are used if web service calls must be routed through a proxy.
# These settings can be nil if not needed, otherwise, addr and port must be
# filled in at a minimum. If the proxy requires authentication, the username
# and password can be provided as well.
GeoKit::Geocoders::proxy_addr = nil
GeoKit::Geocoders::proxy_port = nil
GeoKit::Geocoders::proxy_user = nil
GeoKit::Geocoders::proxy_pass = nil
# This is your Yahoo application key for the Yahoo Geocoder
# See http://developer.yahoo.com/faq/index.html#appid and
# http://developer.yahoo.com/maps/rest/V1/geocode.html
GeoKit::Geocoders::yahoo = <YOUR YAHOO APP ID>
# This is your Google Maps geocoder key.
# See http://www.google.com/apis/maps/signup.html and
# http://www.google.com/apis/maps/documentation/#Geocoding_Examples
GeoKit::Geocoders::google = <YOUR GOOGLE MAPS KEY>
# This is your username and password for geocoder.us
# To use the free service, the value can be set to nil or false. For usage
# tied to an account, the value should be set to username:password.
# See http://geocoder.us and http://geocoder.us/user/signup
GeoKit::Geocoders::geocoder_us = false
# This is your authorization key for geocoder.ca.
# To use the free service, the value can be set to nil or false. For usage
# tied to an account, set the value to the key obtained from Geocoder.ca.
# See http://geocoder.ca and http://geocoder.ca/?register=1
GeoKit::Geocoders::geocoder_ca = false
# This is the order in which the geocoders are called in a failover scenario
# If you only want to use a single geocoder, put a single symbol in the array.
# Valid symbols are :google, :yahoo, :us, and :ca
# Be aware that there are Terms of Use restrictions on how you can use the
# various geocoders. Make sure you read up on relevant Terms of Use for each
# geocoder you are going to use.
GeoKit::Geocoders::provider_order = [:google, :yahoo]
[/code]

Go to the lines where you are asked to put in the Yahoo and Google keys and change the values accordingly. Make sure the keys are enclosed in quotes, since they are strings.
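For example, with made-up key values (yours will be the strings you obtained above):

[code]
GeoKit::Geocoders::yahoo = 'my-yahoo-app-id'
GeoKit::Geocoders::google = 'ABQIAAAA-my-google-maps-key'
[/code]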

    Then go to the provider order and put in the order you want (the first will be tried; if that fails it will go to the next until all are exhausted):

    [code]
    GeoKit::Geocoders::provider_order = [:google,:yahoo]
    [/code]

    This completes the configuration of GeoKit.

    YM4R/GM

    YM4R/GM is another Rails plugin, one that facilitates the use of Google Maps APIs. We will be using YM4R/GM to display the kiosk locations on a customized Google Map. This API essentially wraps around the Google Maps APIs but also provides additional features to make it easier to use from Ruby. To install it, go to your Rails application folder and execute this at the command line:

    [code]
    $./script/plugin install svn://rubyforge.org/var/svn/ym4r/Plugins/GM/trunk/ym4r_gm
    [/code]

During the installation, the JavaScript files found in the plugin's javascript folder (under RAILS_ROOT/vendor/plugins) will be copied to the RAILS_ROOT/public/javascripts folder.

A gmaps_api_key.yml file is also created in the RAILS_ROOT/config folder. This file is a YAML representation of a hash, like the database.yml file, in which you can set up test, development, and production environments. This is where you will put your Google Maps API key (in addition to the environment.rb you changed earlier).

    For your local testing you will not need to change the values but once you deploy this in production on an Internet site you will need to put in a real value according to your domain.
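As a rough sketch, the file maps each Rails environment to a key; the exact layout may vary between YM4R/GM versions, and the keys shown here are placeholders:

[code]
development:
  'ABQIAAAA-dev-key'     # key registered for http://localhost:3000
test:
  'ABQIAAAA-dev-key'
production:
  'ABQIAAAA-prod-key'    # key registered for your production domain
[/code]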

    What we will be doing

As this project is a mashup plugin, normally you would already have an existing Rails application to add it to. However, for the purposes of this chapter, I will show how the mashup can be created in a fresh project. This is what we will be doing:

    • Create a new Rails project
    • Install the Rails plugins (GeoKit and YM4R/GM) that will use the various mashup APIs
    • Configure the database access and create the database
    • Create the standard scaffolding
    • Populate the longitude and latitude of the kiosks
    • Create the find feature
    • Display the found kiosk locations on Google Maps

    Creating a new Rails project

    This is the easiest part:

    [code]
    $rails Chapter2
    [/code]

    This will create a new blank Rails project.

    Installing the Rails plugins that will use the various mashup APIs

    In this mashup plugin we’ll need to use GeoKit, a Ruby geocoding library created by Bill Eisenhauer and Andre Lewis, and YM4R/GM—a Ruby Google Maps mapping API created by Guilhem Vellut. Install them according to the instructions given in the section above.

    Next, we need to create the database that we will be using.

    Configuring database access and creating the database

    Assuming that you already know how database migration works in Rails, generate a migration using the migration generator:

    [code]
    $./script/generate migration create_kiosks
    [/code]

This will create a file named 001_create_kiosks.rb in the RAILS_ROOT/db/migrate folder. Ensure the file has the following information:

[code]
class CreateKiosks < ActiveRecord::Migration
  def self.up
    create_table :kiosks do |t|
      t.column :name,    :string
      t.column :street,  :string
      t.column :city,    :string
      t.column :state,   :string
      t.column :zipcode, :string
      t.column :lng,     :float
      t.column :lat,     :float
    end
  end

  def self.down
    drop_table :kiosks
  end
end
[/code]

    GeoKit specifies that the two columns must be named lat and lng. These two columns are critical to calculating the closest kiosks to a specific location.
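Incidentally, if your schema already uses different column names, GeoKit's acts_as_mappable can be pointed at them instead. This is a minimal sketch, assuming the :lat_column_name and :lng_column_name options of your GeoKit version:

[code]
class Kiosk < ActiveRecord::Base
  # Tell GeoKit which columns hold the coordinates
  acts_as_mappable :lat_column_name => 'latitude',
                   :lng_column_name => 'longitude'
end
[/code]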

Now that you have the migration script, run it from your RAILS_ROOT folder to create the kiosks table:

    [code]
    $rake db:migrate
    [/code]

This should create the kiosks table in your database (the sample data comes in a later step). If it doesn't work, check that you have created a database schema with your favorite relational database. The database schema should be named chapter2_development. If this name displeases you somehow, you can change it in the RAILS_ROOT/config/database.yml file.

    Creating scaffolding for the project

    You should have the tables and data set up by now so the next step is to create a simple scaffold for the project. Run the following in your RAILS_ROOT folder:

    [code]
    $./script/generate scaffold Kiosk
    [/code]

This will generate the Kiosk controller and views, as well as the Kiosk model. The data model for Kiosk is in the kiosk.rb file, found in RAILS_ROOT/app/models/:

[code]
class Kiosk < ActiveRecord::Base
  def address
    "#{self.street}, #{self.city}, #{self.state}, #{self.zipcode}"
  end
end
[/code]

Just add in the address convenience method shown above to get quick access to the full address of the kiosk. This will be used later for the display in the info box.
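For example, from script/console (the kiosk data shown here is hypothetical):

[code]
>> kiosk = Kiosk.find(:first)
>> kiosk.address
=> "100 Main Street, San Francisco, CA, 94105"
[/code]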

    Populating kiosk locations with longitude and latitude information

    Before we begin geolocating the kiosks, we need to put physical addresses to them. We need to put in the street, city, state, and zipcode information for each of the kiosks. After this, we will need to geolocate them and add their longitude and latitude information. This information is the crux of the entire plugin as it allows you to find the closest kiosks.

In addition, you will need to modify the kiosk creation screens to add in the longitude and latitude information when the database entry is created.

    Populate the database with sample data

In the source code bundle you will find a migration file named 002_populate_kiosks.rb that will populate some test data (admittedly fewer than 500 kiosks) into the system. We will use this data to test our plugin. Place the file in RAILS_ROOT/db/migrate and then run:

    [code]
    $rake db:migrate
    [/code]

    Alternatively you can have some fun entering your own kiosk addresses into the database directly, or find a nice list of addresses you can use to populate the database by any other means.

Note that we need to create the scaffolding first, before populating the database using the migration script above, because the migration script uses the Kiosk model class to create the records in the database. You should realize by now that migration scripts are also Ruby scripts; a sketch of such a populate migration is shown below.
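A minimal sketch of what such a populate migration might look like (the bundled 002_populate_kiosks.rb contains many more rows; the address below is made up):

[code]
class PopulateKiosks < ActiveRecord::Migration
  def self.up
    # Uses the Kiosk model, which is why the scaffolding must exist first
    Kiosk.create(:name    => 'Downtown Kiosk',
                 :street  => '100 Main Street',
                 :city    => 'San Francisco',
                 :state   => 'CA',
                 :zipcode => '94105')
    # ... more kiosks ...
  end

  def self.down
    Kiosk.delete_all
  end
end
[/code]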

    Bulk adding of longitude and latitude

One of the very useful tools in Ruby, used frequently in Rails, is rake. Rake is a build utility, similar to make, whose build scripts are written entirely in Ruby. Rails ships with a number of rake tasks, which you can list using this command:

    [code]
$rake --tasks
    [/code]

    Rails rake tasks are very useful because you can access the Rails environment, including libraries and ActiveRecord objects directly in the rake script. You can create your own customized rake task by putting your rake script into the RAILS_ROOT/lib/tasks folder.

    We will use rake to add longitude and latitude information to the kiosks records that are already created in the database.

    Create an add_kiosk_coordinates.rake file with the following code:

[code]
namespace :Chapter2 do
  desc 'Update kiosks with longitude and latitude information'
  task :add_kiosk_coordinates => :environment do
    include GeoKit::Geocoders

    kiosks = Kiosk.find(:all)
    begin
      kiosks.each { |kiosk|
        loc = MultiGeocoder.geocode(kiosk.address)
        kiosk.lat = loc.lat
        kiosk.lng = loc.lng
        kiosk.update
        puts "updated kiosk #{kiosk.name} #{kiosk.address} => [#{loc.lat}, #{loc.lng}]"
      }
    rescue
      puts $!
    end
  end
end
[/code]

In this rake script, you first include the Geocoders module, which provides the geocoders used to discover coordinate information. Then, for each kiosk, you find its longitude and latitude and update the kiosk record.

    Run the script from the console in the RAILS_ROOT folder:

    [code]
    $rake Chapter2:add_kiosk_coordinates
    [/code]

    Depending on your network connection (running this rake script will of course require you to be connected to the Internet) it might take some time. Run it over a long lunch break or overnight and check the next day to make sure all records have a longitude and latitude entry. This should provide your mashup with the longitude and latitude coordinates of each kiosk. However your mileage may differ depending on the location of the kiosk and the ability of the geocoding API to derive the coordinates from the addresses.

    Adding longitude and latitude during kiosk creation entry

    Assuming that you have a kiosks_controller.rb already in place (it would be generated automatically along with the rest of the scaffolding), you need to add in a few lines very similar to the ones above to allow the kiosk created to have longitude and latitude information.

First, make the geocoders available by including GeoKit::Geocoders just after the controller class declaration in kiosks_controller.rb:

[code]
class KiosksController < ApplicationController
  include GeoKit::Geocoders
[/code]

Next, add the geocoding lines shown below to the create method of the controller.

[code]
def create
  @kiosk = Kiosk.new(params[:kiosk])
  loc = MultiGeocoder.geocode(@kiosk.address)
  @kiosk.lat = loc.lat
  @kiosk.lng = loc.lng

  if @kiosk.save
    flash[:notice] = 'Kiosk was successfully created.'
    redirect_to :action => 'list'
  else
    render :action => 'new'
  end
end
[/code]

    Finally, modify the update method in the controller to update the correct longitude and latitude information if the kiosk location changes.

[code]
def update
  @kiosk = Kiosk.find(params[:id])
  address = "#{params[:kiosk][:street]}, #{params[:kiosk][:city]}, #{params[:kiosk][:state]}"
  loc = MultiGeocoder.geocode(address)
  params[:kiosk][:lat] = loc.lat
  params[:kiosk][:lng] = loc.lng
  if @kiosk.update_attributes(params[:kiosk])
    flash[:notice] = 'Kiosk was successfully updated.'
    redirect_to :action => 'show', :id => @kiosk
  else
    render :action => 'edit'
  end
end
[/code]

    Creating the find closest feature

Now that you have the kiosk data ready, it's time to get down to the meat of the code. What you'll be creating is a search page with a text field where the user enters a location; the kiosks closest to that location will then be displayed. To be user-friendly, the initial location of the user is guessed and pre-filled into the text field.

Create a search action in your kiosks controller to find the user's current location from the IP address of the incoming request. (Its view template, search.rhtml, goes in RAILS_ROOT/app/views/kiosks/ and is created below.)

[code]
def search
  loc = IpGeocoder.geocode(request.remote_ip)
  @location = []
  @location << loc.street_address << loc.city << loc.country_code
end
[/code]

The remote_ip method of the Rails-provided request object returns the originating IP address, which GeoKit uses to guess the location via Hostip.info. The location is then used by search.rhtml to display the guessed location.
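For instance, from script/console (the IP address and the result shown are illustrative only; Hostip.info's data determines what actually comes back):

[code]
>> loc = GeoKit::Geocoders::IpGeocoder.geocode('64.233.160.0')
>> [loc.city, loc.state, loc.country_code]
=> ["Mountain View", "CA", "US"]
[/code]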

Note that if you're running this locally, that is, browsing the application from your PC to a server running on the same PC, you will not get anything, because the originating address will not be a public Internet address. To overcome this, you can use a dynamic DNS service to point an Internet domain name to the public IP address that is assigned to your PC by your ISP. You will usually need to install a small application on your PC that will automatically update the DNS entry whenever your ISP-assigned IP address changes. There are many freely available dynamic DNS services on the Internet.

    When accessing this application, use the hostname given by the dynamic DNS service instead of using localhost. Remember that if you’re running through an internal firewall you need to open up the port you’re starting up your server with. If you have a router to your ISP you might need to allow port forwarding.

    This is a technique you will use subsequently in Chapters 5 and 6.

    Create a search.rhtml file and place it in the RAILS_ROOT/app/view/kiosks folder with the following code:

[code]
<h1>Enter source location</h1>
Enter a source location and a radius to search for the closest kiosk.
<% form_tag :action => 'find_closest' do %>
  <%= text_field_tag 'location', @location.compact.join(',') %>
  <%= select_tag 'radius', options_for_select({'5 miles' => 5, '10 miles' => 10, '15 miles' => 15}, 5) %>
  <%= submit_tag 'find' %>
<% end %>
[/code]

    Here you’re asking for the kiosks closest to a specific location that are within a certain mile radius. We will be using this information later on to limit the search radius.

    After that, mix-in the ActsAsMappable module into the Kiosk model in kiosk.rb.

[code]
class Kiosk < ActiveRecord::Base
  acts_as_mappable
end
[/code]

This will add a calculated column called (by default) distance, which you can use in your :conditions and :order options. One thing to note here is that the ActsAsMappable module uses database-specific code for some of its functions, which is available only in MySQL and PostgreSQL.

    Next, create the find_closest action to determine the location of nearest kiosks.

[code]
def find_closest
  @location = MultiGeocoder.geocode(params[:location])
  if @location.success
    @kiosks = Kiosk.find(:all,
                         :origin => [@location.lat, @location.lng],
                         :conditions => "distance < #{params[:radius]}",
                         :order => 'distance')
  end
end
[/code]

The mixed-in ActsAsMappable module also overrides the find method to accept an originating location, either a geocode-able string or a two-element array containing latitude/longitude information. The returned result is a collection of the kiosks found with the given parameters.

    Finally create a simple find_closest.rhtml view template (and place it in the RAILS_ROOT/app/view/kiosks/ folder) to display the kiosks that are retrieved. We’ll add in the complex stuff later on.

[code]
<h1><%= h @kiosks.size %> kiosks found within your search radius</h1>
<ol>
<% @kiosks.each do |kiosk| %>
  <li><%= kiosk.name %><br/></li>
<% end %>
</ol>
[/code]

    Do a quick trial run and see if it works.

    [code]
    $./script/server
    [/code]

Then go to http://localhost:3000/kiosks/search. If you have some data, put in a nearby location (for example, from our source data: San Francisco) and click on find. You should be able to retrieve some nearby kiosks.

    Displaying kiosks on Google Maps

    Now that you know where the kiosks are located, it’s time to show them on Google Maps. For this we’ll be using the YM4R/GM plugin. If you haven’t installed this plugin yet, it’s time to go back and install it.

To add the Google Maps display, you will need to change the find_closest action as well as the find_closest view template. First, modify the find_closest action in kiosks_controller.rb:

[code]
def find_closest
  @location = MultiGeocoder.geocode(params[:location])
  if @location.success
    @kiosks = Kiosk.find(:all,
                         :origin => [@location.lat, @location.lng],
                         :conditions => ["distance < ?", params[:radius]],
                         :order => 'distance')
    @map = GMap.new("map_div")
    @map.control_init(:large_map => true, :map_type => true)
    # create a marker for the source location
    @map.icon_global_init(
      GIcon.new(:image => "http://www.google.com/mapfiles/ms/icons/red-pushpin.png",
                :shadow => "http://www.google.com/mapfiles/shadow50.png",
                :icon_size => GSize.new(32,32),
                :shadow_size => GSize.new(37,32),
                :icon_anchor => GPoint.new(9,32),
                :info_window_anchor => GPoint.new(9,2),
                :info_shadow_anchor => GPoint.new(18,25)),
      "icon_source")
    icon_source = Variable.new("icon_source")
    source = GMarker.new([@location.lat, @location.lng],
                         :title => 'Source',
                         :info_window => "You searched for kiosks<br>#{params[:radius]} miles around this source",
                         :icon => icon_source)
    @map.overlay_init(source)
    # create one marker for each kiosk found
    markers = []
    @kiosks.each { |kiosk|
      info = <<EOS
<em>#{kiosk.name}</em><br/>
#{kiosk.distance_from(@location).round} miles away<br/>
<a href="http://maps.google.com/maps?saddr=#{u(@location.to_geocodeable_s)}&daddr=#{u(kiosk.address)}">directions here from source</a>
EOS
      markers << GMarker.new([kiosk.lat, kiosk.lng], :title => kiosk.name,
                             :info_window => info)
    }
    @map.overlay_global_init(GMarkerGroup.new(true, markers), "kiosk_markers")
    # zoom to the source
    @map.center_zoom_init([@location.lat, @location.lng], 12)
  end
end
[/code]

The Google Maps API is a JavaScript library, and YM4R/GM is a Ruby library that generates the JavaScript to interact with and manipulate the Google Maps API. Almost all classes in the YM4R/GM library correspond to an equivalent Google Maps API class, so it is important that you are also familiar with the Google Maps API. The online documentation comes in very useful here, so you might want to open up the Google Maps reference documentation (http://www.google.com/apis/maps/documentation/reference.html) as you are coding.

    Let’s go over the code closely.

The first line creates a GMap object that will be placed inside a <div> tag with the id map_div, while the second line sets some control options.

    [code]
    @map = GMap.new("map_div")
    @map.control_init(:large_map => true, :map_type => true)
    [/code]

The next few lines then create a GMarker object for the source location that the user entered, using a specific icon to show it, and overlay it on the map. There are several options you can play around with here involving setting the image to be shown as the marker. For this chapter I used a red-colored pushpin from Google Maps itself, but you can use any image instead. You can also set the text information window that is displayed when you click on the marker. The text can be HTML, so you can add in other information including images, formatting, and so on.

[code]
# create a marker for the source location
@map.icon_global_init(
  GIcon.new(:image => "http://www.google.com/mapfiles/ms/icons/red-pushpin.png",
            :shadow => "http://www.google.com/mapfiles/shadow50.png",
            :icon_size => GSize.new(32,32),
            :shadow_size => GSize.new(37,32),
            :icon_anchor => GPoint.new(9,32),
            :info_window_anchor => GPoint.new(9,2),
            :info_shadow_anchor => GPoint.new(18,25)),
  "icon_source")
icon_source = Variable.new("icon_source")
source = GMarker.new([@location.lat, @location.lng],
                     :title => 'Source',
                     :info_window => "You searched for kiosks<br>#{params[:radius]} miles around this source",
                     :icon => icon_source)
@map.overlay_init(source)
[/code]

The lines of code after that go through each of the located kiosks, create a GMarker object for each, and overlay them on the map too. For each kiosk location, we put in an info window that describes the distance from the source location and gives a link showing directions to get from the source to this kiosk. This link goes back to Google and will provide the user with instructions to navigate from the source location to the marked location.

Note that you need to URL-encode the location/address strings of the source and kiosks; this is what the u() method does, and it requires you to include ERB::Util as well (along with GeoKit::Geocoders). In kiosks_controller.rb, add:

    [code]
    include ERB::Util
    [/code]

    then add the following (beneath the code entered above):

[code]
# create one marker for each kiosk found
markers = []
@kiosks.each { |kiosk|
  info = <<EOS
<em>#{kiosk.name}</em><br/>
#{kiosk.distance_from(@location).round} miles away<br/>
<a href="http://maps.google.com/maps?saddr=#{u(@location.to_geocodeable_s)}&daddr=#{u(kiosk.address)}">directions here from source</a>
EOS
  markers << GMarker.new([kiosk.lat, kiosk.lng],
                         :title => kiosk.name, :info_window => info)
}
@map.overlay_global_init(GMarkerGroup.new(true, markers), "kiosk_markers")
[/code]

    Finally the last line zooms in and centers on the source location.

    [code]
    # zoom to the source
    @map.center_zoom_init([@location.lat, @location.lng], 12)
    [/code]

    Now let’s look at how the view template is modified to display Google Maps. The bulk of the work has already been done by YM4R/GM so you need only to include a few lines.

[code lang="html"]
<h1><%= h @kiosks.size %> kiosks found within your search radius</h1>
<ol>
<% @kiosks.each do |kiosk| %>
  <li><%= kiosk.name %><br/></li>
<% end %>
</ol>
<%= GMap.header %>
<%= javascript_include_tag("markerGroup") %>
<%= @map.to_html %>
<%= @map.div(:width => 500, :height => 450) %>
[/code]

GMap.header generates the header information for the map, including the YM4R/GM and Google Maps API JavaScript includes. We are also using GMarkerGroups, so we need to include the markerGroup JavaScript library. Next, we initialize the map by calling @map.to_html. Finally, we need a div tag with the same id as the one passed to the GMap constructor in the controller (map_div); this is generated by calling the div method of the GMap object. To size the map correctly, we also pass its dimensions (height and width).

    And you’re ready to roll! Although the page doesn’t display the best layout, you can spice things up by adding the necessary stylesheets to make the view more presentable.

    Summary

What we've learned in this chapter is how to create a mashup with Ruby on Rails on a number of mapping and geocoding providers, including Yahoo, Google, Geocoder.us, Geocoder.ca, and Hostip.info. We learned to create a mashup that gives us a map of the closest kiosks to a particular location, given an existing database of kiosks with location addresses. This is just an introduction to the synergistic value that mashups bring to the table, creating value that was not available in the individual APIs. When they are all put together, you have a useful feature for your website.


    WebSphere Messaging

    October 3, 2009 by itadmin Leave a Comment

    WebSphere Application Server 7.0 Administration Guide

As a J2EE (Java 2 Enterprise Edition) administrator, you require a secure, scalable, and resilient infrastructure to support and manage your J2EE applications and service-oriented architecture services.

    The WebSphere suite of products from IBM provides many different industry solutions and WebSphere Application Server is the core of the WebSphere product range from IBM.

    WebSphere is optimized to ease administration and improve runtime performance. It runs your applications and services in a reliable, secure, and high-performance environment to ensure that your core business opportunities are not lost due to application or infrastructure downtime.

    Whether you are experienced or new to WebSphere, this book will provide you with a cross-section of WebSphere Application Server features and how to configure these features for optimal use. This book will provide you with the knowledge to build and manage performance-based J2EE applications and service-oriented architecture (SOA) services, offering the highest level of reliability, security, and scalability.

Taking you through examples, this book shows you the different methods for installing WebSphere Application Server and how to configure and prepare WebSphere resources for your application deployments. The facets of data-aware and message-aware applications are explained and demonstrated, giving the reader real-world examples of manual and automated deployments.

    WebSphere security is covered in detail showing the various methods of implementing federated user and group repositories. Key administration features and tools are introduced, which will help WebSphere administrators manage and tune their WebSphere implementation and applications. You will also be shown how to administer your WebSphere server standalone or use the new administrative agent, which provides the ability to administer multiple installations of WebSphere Application Server using one single administration console.

    also read:

    • WebLogic Interview Questions
    • JBoss Portal Server Development
    • Tomcat Interview Questions

    What This Book Covers

    Chapter 1, Installing WebSphere Application Server covers how to plan and prepare your WebSphere installation and shows how to manually install WebSphere using the graphical installer and how to use a response file for automated silent installation. The fundamentals of application server profiles are described and the administrative console is introduced.
    Chapter 2, Deploying your Applications explains the make-up of Enterprise Archive (EAR) files, how to manually deploy applications, and how Java Naming and Directory Interface (JNDI) is used in the configuration of resources. Connecting to databases is explained via the configuration of Java database connectivity (JDBC) drivers and data sources used in the deployment of a data-aware application.
    Chapter 3, Security demonstrates the implementation of global security and how to federate lightweight directory access protocol (LDAP) and file-based registries for managing WebSphere security. Roles are explained where users and groups can be assigned different administrative capabilities.
Chapter 4, Administrative Scripting introduces ws_ant, a utility for using Apache Ant build scripts to deploy and configure applications. Advanced administrative scripting is demonstrated by using the wsadmin tool with Jython scripts, covering how WebSphere deployment and configuration can be automated using the extensive WebSphere Jython scripting objects.
    Chapter 5, WebSphere Configuration explains the WebSphere installation structure and key XML files, which make up the underlying WebSphere configuration repository. WebSphere logging is covered showing the types of log and log settings that are vital for administration. Application Server JVM settings and class loading are explained.
    Chapter 6, WebSphere Messaging explains basic Java message service (JMS) messaging concepts and demonstrates both JMS messaging using the default messaging provider and WebSphere Message Queuing (MQ) along with explanations of message types. Use of Queue Connection Factories, Queues, and Queue Destinations are demonstrated via a sample application.
Chapter 7, Monitoring and Tuning shows how to use the Tivoli Performance Monitor, request metrics, and JVM tuning settings to help you improve WebSphere performance and monitor the running state of your deployed applications.
    Chapter 8, Administrative Features covers how to enable the administrative agent for
    administering multiple application servers with a central administrative console. IBM HTTP Server and the WebSphere plug-in are explained.
    Chapter 9, Administration Tools demonstrates some of the shell-script-based utilities vital to the WebSphere administrator for debugging and problem resolution.
    Chapter 10, Product Maintenance shows how to maintain your WebSphere Application Server by keeping it up-to-date with the latest fix packs and feature packs.

    WebSphere Messaging

Messaging in a large enterprise is common, and a WebSphere administrator needs to understand what WebSphere Application Server can do for Java messaging and/or WebSphere Message Queuing (WMQ) based messaging. Here, we will learn how to create Queue Connection Factories (QCF) and Queue Destinations (QD), which we will use in a demonstration application showing the Java Message Service (JMS) at work, and we will also see how WMQ can be used as part of a messaging implementation.

    In this chapter, we will cover the following topics:

    • Java messaging
    • Java Messaging Service (JMS)
    • WebSphere messaging
    • Service integration bus (SIB)
    • WebSphere MQ
    • Message providers
    • Queue connection factories
    • Queue destinations

    Java messaging

Messaging is a method of communication between software components or applications. A messaging system is often peer-to-peer, meaning that a messaging client can send messages to, and receive messages from, any other client. Each client connects to a messaging service that provides a system for creating, sending, receiving, and reading messages. So why do we have Java messaging? Messaging enables distributed communication that is loosely coupled: a client sends a message to a destination, and the recipient retrieves the message from that destination. A key point of Java messaging is that the sender and the receiver do not have to be available at the same time in order to communicate.

Here, communication can be understood as an exchange of messages between software components. In fact, the sender does not need to know anything about the receiver, nor does the receiver need to know anything about the sender; each side needs to know only what message format and what destination to use. Messaging also differs from electronic mail (email), which is a method of communication between people or between software applications and people; messaging is used for communication between software applications or software components. Java messaging relaxes tightly-coupled communication (such as TCP network sockets, CORBA, or RMI), allowing software components to communicate indirectly with each other.

    Java Message Service

Java Message Service (JMS) is an application program interface (API) from Sun. JMS provides a common interface to standard messaging protocols and also to special messaging services in support of Java programs. Messages can involve the exchange of crucial data between systems and contain information such as event notifications and service requests. Messaging is often used to coordinate programs in dissimilar systems or written in different programming languages. By using the JMS interface, a programmer can invoke messaging services like IBM's WebSphere MQ (WMQ), formerly known as MQSeries, and other popular messaging products. In addition, JMS supports messages that contain serialized Java objects and messages that contain XML-based data.

    A JMS application is made up of the following parts, as shown in the following diagram:

    • A JMS provider is a messaging system that implements the JMS interfaces and provides administrative and control features.
    • JMS clients are the programs or components, written in the Java programming language, that produce and consume messages.
    • Messages are the objects that communicate information between JMS clients.
    • Administered objects are preconfigured JMS objects created by an administrator for the use of clients. The two kinds of objects are destinations and Connection Factories (CF).


As shown in the diagram above, administrative tools allow you to create destination and connection factory resources and bind them into a Java Naming and Directory Interface (JNDI) API namespace. A JMS client can then look up the administered objects in the namespace and establish a logical connection to the same objects through the JMS provider.

    JMS features

    Application clients, Enterprise Java Beans (EJB), and Web components can send or synchronously receive JMS messages. Application clients can, in addition, receive JMS messages asynchronously. A special kind of enterprise bean, the message-driven bean, enables the asynchronous consumption of messages. A JMS message can also participate in distributed transactions.

    JMS concepts

    The JMS API supports two models:

    Point-to-point or queuing model

As shown below, in the point-to-point or queuing model, the sender posts messages to a particular queue and a receiver reads messages from the queue. Here, the sender knows the destination of the message and posts the message directly to the receiver's queue. Only one consumer gets the message. The producer does not have to be running at the time the consumer consumes the message, nor does the consumer need to be running at the time the message is sent. Every message successfully processed is acknowledged by the consumer. Multiple queue senders and queue receivers can be associated with a single queue, but an individual message can be delivered to only one queue receiver. If multiple queue receivers are listening for messages on a queue, the Java Message Service determines which one will receive the next message on a first-come, first-served basis. If no queue receivers are listening on the queue, messages remain in the queue until a queue receiver attaches to the queue.

    Publish and subscribe model

As shown by the above diagram, the publish/subscribe model supports publishing messages to a particular message topic. Unlike the point-to-point messaging model, the publish/subscribe messaging model allows multiple topic subscribers to receive the same message. JMS retains the message until all topic subscribers have received it. The publish/subscribe messaging model also supports durable subscribers, allowing you to assign a name to a topic subscriber and associate it with a user or application. Subscribers may register interest in receiving messages on a particular message topic. In this model, neither the publisher nor the subscriber knows about the other.

JMS provides a way of separating the application from the transport layer that carries the data. The same Java classes can be used to communicate with different JMS providers by using the JNDI information for the desired provider. The classes first use a connection factory to connect to the queue or topic, and then populate and send or publish the messages. On the receiving side, the clients then receive or subscribe to the messages.

    JMS API

The JMS API is provided in the Java package javax.jms. The main interfaces provided in the javax.jms package are:

• ConnectionFactory (QueueConnectionFactory, TopicConnectionFactory): the administered object a client uses to create a connection to a provider
• Connection: an active connection between a client and its JMS provider
• Session: a single-threaded context for producing and consuming messages
• Destination (Queue, Topic): encapsulates the source or target of messages
• MessageProducer (QueueSender, TopicPublisher): sends messages to a destination
• MessageConsumer (QueueReceiver, TopicSubscriber): receives messages from a destination
• Message: the object that carries the data exchanged between clients

Messaging applications use these interfaces in the Java code to implement JMS. The demo JMS Test Tool application contains code which you can look into to see how these interfaces are used. We will cover the JMS Test Tool later in the chapter when we demonstrate how to deploy an application which uses messaging.
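To make this concrete, here is a minimal point-to-point sketch using the interfaces above. It is illustrative only: the class name JmsSketch is hypothetical, the JNDI names jms/QCF.Test and jms/Q.Test are the ones we configure later in this chapter, and error handling is reduced to a bare minimum.

[code]
import javax.jms.*;
import javax.naming.InitialContext;

public class JmsSketch {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // Administered objects are looked up from JNDI, never constructed directly
        QueueConnectionFactory qcf =
                (QueueConnectionFactory) ctx.lookup("jms/QCF.Test");
        Queue queue = (Queue) ctx.lookup("jms/Q.Test");

        QueueConnection connection = qcf.createQueueConnection();
        try {
            QueueSession session =
                    connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);

            // Producer side: put a text message on the queue
            QueueSender sender = session.createSender(queue);
            sender.send(session.createTextMessage("Hello from JMS"));

            // Consumer side: read the message back (waits up to 5 seconds)
            connection.start(); // delivery does not begin until start() is called
            QueueReceiver receiver = session.createReceiver(queue);
            TextMessage reply = (TextMessage) receiver.receive(5000);
            System.out.println("Received: "
                    + (reply == null ? "<none>" : reply.getText()));
        } finally {
            connection.close();
        }
    }
}
[/code]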

    WebSphere messaging

WebSphere Application Server implements two main messaging sub-systems: the default messaging provider, which is internal to WebSphere, and the WebSphere MQ messaging provider, which uses WebSphere MQ. First, we will cover the default messaging provider, which is implemented using a SIB. Then we will move on to the WebSphere MQ messaging provider. To demonstrate the use of the SIB and the default messaging provider, we will deploy an application which uses JMS via the SIB. Before we deploy the application, we will need to set up the JMS resources required for the application to implement Java messaging using the Java Message Service (JMS).

    Default JMS provider

    WebSphere Application Server comes with a default JMS provider as part of its installation and supports messaging through the use of the JMS. The default JMS provider allows applications deployed to WAS to perform asynchronous messaging without the need to install a third-party JMS provider. This is a very useful feature which runs as part of the WebSphere Application Server. The default JMS provider is utilized via the SIB and you can use the Administrative console to configure the SIB and JMS resources.

    Enterprise applications use JMS CF to connect to a service integration bus. Applications use queues within the SIB to send and receive messages. An application sends messages to a specific queue and those messages are retrieved and processed by another application listening to that queue. In WebSphere, JMS queues are assigned to queue destinations on a given SIB. A queue destination is where messages can be persisted over time within the SIB. Applications can also use topics for messages. Applications publish messages to the topics. To receive messages, applications subscribe to topics. JMS topics are assigned to topic spaces on the bus. The JMS topics are persisted in the SIB and accessed via appropriate connection factories which applications use to gain access to the bus.

The Default JMS provider running in the SIB offers several types of configurable JMS resources: connection factories (unified, queue, and topic), queues, topics, and activation specifications.

    WebSphere SIB

Before our applications can be installed and set up to use the default messaging provider, we must create a service integration bus. In a way, the SIB provides the backbone for JMS messaging when you are using the default provider. The default provider is internal to WebSphere Application Server, and no third-party software is required to utilize it.

    A service integration bus supports applications using message-based and service-oriented architectures. A bus is a group of interconnected servers and clusters that have been added as members of the bus. Applications connect to a bus at one of the messaging engines associated with its bus members.

    Creating a SIB

To create a Service Integration Bus (SIB), log into the admin console, navigate to the Service integration section in the left-hand side panel, and click on Buses, as shown in the following screenshot:

    Click New to enter the Create a new Service Integration Bus page where we will begin our SIB creation. Type InternalJMS in the Enter the name of your new bus field and uncheck the Bus security checkbox as shown below and then click Next.

    On the next screen, you will be prompted to confirm your SIB settings. Click Finish to complete the creation of the SIB. Once the wizard has completed, click Save to retain your configuration change. You will be returned to a screen which lists the available SIBs installed in your WebSphere configuration. Now that the SIB has been created, you can click on the SIB name to configure settings and operation of the SIB. We will not be covering managing a SIB in this book as it is beyond our scope. All we need to do is create a SIB so we can demonstrate an application using the default JMS provider which requires a SIB to operate.

To complete the configuration, we must add an existing server as a member of the SIB so that we have a facility for message persistence. The SIB itself is almost like a connecting conduit; we also need actual bus members, in our case our application server called server1, which contain the actual implementation of the message store.

    To add a server as a bus member, click on the bus name called InternalJMS in the SIB list and then navigate to the Topology section and click Bus members as shown below.

    You will now be presented with a screen where you can add bus members. Click Add and you will be able to select the server you wish to add as a member to the bus. You will notice that the server is already pre-selected as shown below.

    Click Next to the final screen, where you will select the File store option from the option group field labeled Choose type of message store for the persistence of message state. Click Next to view the next configuration page where we will use the page defaults. Click Next to enter the Tune performance parameters page where we will also use the defaults. Clicking Next again will take you to the final summary page where you will click Finish to finalize adding the application server as a bus member. Click Save to retain the changes. You will now see the application server called server1 listed as a bus member. Now we can move on to configure the JMS resources.

    Configuring JMS

    Once we have created a SIB, we can configure JMS resources. The types of resources we need to create depend entirely upon the application you are deploying. In our demo JMS application, we are going to demonstrate putting a message on a queue using a sending Servlet which places messages on a queue, known as the sender, and then demonstrate receiving a message on the receiving Servlet, known as the receiver. This exercise will give you a detailed enough overview of a simple implementation of JMS. To continue, we will need to set up a queue connection factory which the application will use to connect to a message queue and an actual queue which the application will send messages to and receive messages from.

    Creating queue connection factories

    To create a queue connection factory, navigate to the Resources section of the left-hand-side panel in the Administrative console and click Queue connection factories from the JMS category as shown below.

Select a scope of cell from the cell-scope pick-list, and then click New to create a new QCF. In the Select JMS resource provider screen, as shown below, select Default messaging provider from the available provider options and click OK.

On the next page, you will be asked to fill in configuration settings for the QCF. We will only need to fill in a few fields. As shown below, type QCF.Test in the Name field, jms/QCF.Test in the JNDI name field, and select the bus called InternalJMS from the Bus name field.

Click Apply, and then Save when prompted to do so, in order to retain the changes. You will now see the QCF listed in the list of configured queue connection factories.

    Creating queue destinations

    To create a queue, we will follow a similar process to creating a QCF. Select Queues from the JMS category located in the Resources section found in the left-hand-side panel of the Admin console.

    Select Default messaging provider from the list of messaging providers and then click on OK to enter the queue configuration page.

    On the queue configuration page, enter Q.Test in the Name field and jms/Q.Test in the JNDI name field.

    Select InternalJMS from the Bus name field found in the Connection section and select Create Service Bus destination from the Queue name field and click Apply. You will then be prompted to create a queue destination.

    In the Create a new queue for point-to-point messaging screen, type QD.Test in the identifier field and click Next.

In the following screen of the wizard, labelled Assign the queue to a bus member, you will see that server1 is already pre-selected in the field called Bus member. The bus member mentioned in this field is where the actual queue destination will be created. Clicking Next will present you with the final step, a summary screen, where you can click Finish and then Save to retain your queue configuration.

To view your queue destination, you need to select the bus called InternalJMS from the list of buses, found by navigating to the Service integration section of the left-hand side panel in the Admin console and then clicking Buses. You will recognize this screen as the main bus configuration page we used when we created the SIB. Click the Destinations link located in the Destination resources section, as shown in the screenshot below.

    You will then be presented with a list of queue destinations in the SIB.

To create Topic Connection Factories (TCF) and Topic Destinations (TD) for publish/subscribe messaging, you can follow a similar process. Publish/subscribe messaging will not be demonstrated in this book; however, you can use the process defined for creating QCF and QD as an example of how to create TCF and TD.

    Installing the JMS demo application

    To demonstrate the use of QCF and QD in the SIB, we will manually deploy an EAR file which contains two servlets that can be used to test JMS configurations.

The JMS Test Tool application is a web application that provides a controller Servlet to process requests from an input page, allowing a user to put a simple message on a queue and then get the message back. The application is not industrial strength; however, it goes a long way towards demonstrating the basics of JMS. The application can be downloaded from www.packtpub.com, and it also contains all the source code, so you can look into the mechanics of simple JMS programming. We will not explain the code in this chapter as it detracts from administration; however, feel free to change the code and experiment in your learning of JMS.

    After you have downloaded the JMSTester.ear file to your local machine, use the Admin console to deploy it using the instructions in Chapter 2 as a guide. We will take you through some screens to ensure you correctly configure the appropriate resources as part of the installation.

    When you start the installation (deployment) of the EAR file, ensure you select the option called Detailed from the How do you want to install the application? section on the Preparing for the application installation screen as shown below to expose the configuration steps required by the EAR file, otherwise you will be given the default JMS configuration and you might not understand how JMS has been configured in the application. Another good reason for selecting the Detailed option is that the wizard will present extra screens which will allow you to optionally override the JNDI mappings for resource references.

    On the Install New Application screen, change the application name to JMS Test Tool, and then keep clicking Next until you come to Step 6, the Bind message destination references to administered objects page. When you get to this page, type jms/Q.Test in the Target Resource JNDI Name field, which means you want to bind the application’s internal resource reference called jms/Queue to the WebSphere configured JMS queue destination called jms/Q.Test (which we created earlier) as shown below.

Using this level of JNDI abstraction means that the application does not need to know the actual JMS implementation technology, which in this case happens to be the internal WebSphere default JMS provider. Click Next to proceed to the next step of the wizard. The next screen of the wizard will be the Map resource references to resources screen, where you will be given the option of binding the application's JNDI resource declarations to the actual JNDI implementation as configured in WebSphere. In the image below, you can see that the application has been configured to point to a QCF called jms/QCF; however, in our configuration of WebSphere we have called our connection factory jms/QCF.Test. Type jms/QCF.Test into the Target Resource JNDI Name field.

This abstraction, which WebSphere offers to J2EE applications that utilize indirect JNDI naming, is a very powerful and important part of configuring enterprise applications. Using indirect JNDI allows for the decoupling of the application from the application server's actual implementation of JMS. The application is pointed at the JNDI names it will use to look up the actual resource references that have been configured in WebSphere. In simple words, the administrator decides which messaging sub-system the application will use, and this choice is transparent to the application.
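As a sketch of what an indirect JNDI lookup looks like from the application side (illustrative only; the local names jms/QCF and jms/Queue come from the application's deployment descriptor, and WebSphere maps them onto jms/QCF.Test and jms/Q.Test at deployment time):

[code]
// Inside a servlet or EJB of the JMS Test Tool
InitialContext ctx = new InitialContext();
// java:comp/env is the component's private JNDI namespace; the container
// resolves these references to the real WebSphere-configured resources
QueueConnectionFactory qcf =
        (QueueConnectionFactory) ctx.lookup("java:comp/env/jms/QCF");
Queue queue = (Queue) ctx.lookup("java:comp/env/jms/Queue");
[/code]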

    We have now completed the configuration elements that require user intervention, so we can keep clicking Next until the application wizard is complete. If you get any warning as shown below, you can ignore it; the warnings come up due to WebSphere telling you that you have configured the QCF and queue destinations at cell level, and that other applications could be referencing them as well. Just click Continue to move on to the next steps.

    When you come to the Context root page, take note that the EAR file has been configured to use JMSTester as the web applications context root. We will leave this as defaulted for our demonstration; however, you could override it by typing in another context root. When you get to the Summary page of the wizard, click on Finish and Save to retain the applications deployment.

    JMS Test Tool application

    The JMS Test Tool application provides a simple test harness to send and receive messages to and from queues. The application can be downloaded from http://www.packtpub.com. To launch the deployed application, you can use the following URL:
    [code]
http://<host_name>:9080/JMSTester/
    [/code]

    If the application is deployed and has started error-free, you will be presented with the JMS Test Tool interface, which is a set of three HTML frames, as shown below.

    The main frame is the top-most frame where you enter a test message as shown below. The left-hand-side bottom frame provides help on how to use the tool, and the right-hand-side frame will show the results of a send or get message action.

    If you click Put Message, you will see that the left-hand-side bottom frame displays the status of the message being sent as shown below. Each time you click Put Message, a new message will be put on the queue.

    If you click Get Message, you will see that the left-hand-side bottom frame displays the contents of a given message retrieved from the queue as shown below.

    Each time you click Get Message, the next message will be read from the queue until there are no more messages.

    You can use this application to test both JMS and local MQ queue managers. This concludes our overview of using JMS and the default messaging provider.

    WebSphere MQ overview

WebSphere MQ (WMQ), formerly known as MQSeries, is IBM's enterprise messaging solution. In a nutshell, MQ provides the mechanisms for messaging, both point-to-point and publish-subscribe, and it guarantees to deliver a message once and only once. This is important for critical business applications which implement messaging. An example of a critical system could be a banking payments system, where messages contain instructions to transfer money between banking systems, so guaranteeing delivery of a debit/credit is paramount in this context. Aside from guaranteed delivery, WMQ is often used for messaging between dissimilar systems, and the WMQ software provides programming interfaces in most of the common languages, such as Java, C, C++, and so on. It is common to find WMQ used alongside WebSphere when WebSphere is hosting message-enabled applications. It is important that the WebSphere administrator understands how to configure WebSphere resources so that applications can be coupled to MQ queues.

    Overview of WebSphere MQ example

    To demonstrate messaging using WebSphere MQ, we are going to re-configure the previously deployed JMS Tester application so that it will use a connection factory which communicates with a queue on a WMQ queue manager as opposed to using the default provider which we demonstrated earlier.

    Installing WebSphere MQ

    Before we can install our demo messaging application, we will need to download and install WebSphere MQ 7.0. A free 90-day trial can be found at the following URL:
    [code]
http://www.ibm.com/developerworks/downloads/ws/wmq/
    [/code]

    Click the download link as shown below.

Similar to Chapter 1, you will be prompted to register as an IBM website user before you can download the WebSphere MQ trial. Once you have registered and logged in, the download link above will take you to a page which lists downloads for different operating systems. Select WebSphere MQ 7.0 90-day trial from the list of available options, as shown below.

Click continue to go to the download page. You may be asked to fill out a questionnaire detailing why you are evaluating WebSphere MQ (WMQ). Fill out the questionnaire as you see fit and submit it to move to the download page.

As shown above, make sure you use the IBM HTTP Download director, as it will ensure that your download resumes even if your Internet connection drops.

If you do not have a high-speed Internet connection, you can try downloading the free 90-day trial of WebSphere MQ 7.0 overnight while you are asleep.

Download the trial to a temp folder, for example c:\temp, on your local machine. The screenshot above shows how the IBM HTTP Downloader will prompt for a location to download to. Once the WMQ install file has been downloaded, you can upload it using an appropriate secure copy utility, like WinSCP, to a folder such as /apps/wmq_install on your Linux machine. Once you have the file uploaded to Linux, you can decompress it and run the installer to install WebSphere MQ.

    Running the WMQ installer

    Now that you have uploaded the WMQv700Trial-x86_linux.tar.gz file to your Linux machine, follow these steps:

    1. You can decompress the file using the following command:
      gunzip ./WMQv700Trial-x86_linux.tar.gz
    2. Then run the un-tar command:
      tar -xvf ./WMQv700Trial-x86_linux.tar
    3. Before we can run the WMQ installations, we need to accept the license agreement by running the following command:
      ./mqlicense.sh -accept
    4. To run the WebSphere MQ installation, type the following commands:
      rpm -ivh MQSeriesRuntime-7.0.0-0.i386.rpm
      rpm -ivh MQSeriesServer-7.0.0-0.i386.rpm
      rpm -ivh MQSeriesSamples-7.0.0-0.i386.rpm
    5. As a result of running the MQSeriesServer installation, a new user called mqm was created. Before running any WMQ command, we need to switch to this user using the following command:
      su – mqm
    6. Then we can run the dspmqver command to check that WMQ was installed correctly:
      /opt/mqm/bin/dspmqver

    The result will be the following message as shown in the screenshot below:

    Creating a queue manager

    Before we can complete our WebSphere configuration, we need to create a WMQ queue manager and a queue, then we will use some MQ command line tools to put a test message on an MQ queue and get a message from an MQ queue.

    1. To create a new queue manager called TSTDADQ1, use the following command:
      crtmqm TSTDADQ1
    2. The result will be as shown in the image below.
    3. We can now type the following command to list queue managers:
      dspmq
    4. The result of running the dspmq command is shown in the image below.
    5. To start the queue manager (QM), type the following command:
      strmqm TSTDADQ1
    6. The result of starting the QM will be similar to the image below.
    7. Now that we have successfully created a QM, we need to add a queue called LQ.TEST where we can put and get messages.
    8. To create a local queue on the TSTDADQ1 QM, type the following commands in order:
      runmqsc TSTDADQ1
    9. You are now running the MQ scripting command line, where you can issue MQ commands to configure the QM.
    10. To create the queue, type the following command and hit Enter:
      define qlocal(LQ.TEST)
    11. Then immediately type the following command:
      end
    12. Hit Enter to complete the QM configuration, as shown by the following screenshot.


    You can use the following command to see if your LQ.TEST queue exists.
    [code]
    echo "dis QLOCAL(*)" | runmqsc TSTDADQ1 | grep -i test
    [/code]
    You have now added a local queue called LQ.TEST to the TSTDADQ1 queue manager. Next, define and start a TCP listener for the queue manager on port 1414 by running the following commands:
    [code]
    runmqsc TSTDADQ1
    DEFINE LISTENER(TSTDADQ1.listener) TRPTYPE (TCP) PORT(1414)
    START LISTENER(TSTDADQ1.listener)
    end
    [/code]

    You can type the following command to ensure that your QM listener is running.
    [code]
    ps -ef | grep mqlsr
    [/code]
    The result will be similar to the image below.

    To create a default server-connection channel, you can run the following commands.
    [code]
    runmqsc TSTDADQ1
    DEFINE CHANNEL(SYSTEM.ADMIN.SVRCONN) CHLTYPE(SVRCONN)
    end
    [/code]

    We can now use the sample MQ programs amqsput and amqsget to put and get a test message from a queue, to ensure that our MQ configuration is working before we continue configuring WebSphere.

    Type the following command to put a test message on the LQ.TEST queue:
    [code]
    /opt/mqm/samp/bin/amqsput LQ.TEST TSTDADQ1
    [/code]
    Then you can type a test message, for example Test Message, and hit Enter; this puts the message on the LQ.TEST queue. Hitting Enter again on a blank line exits the amqsput tool.

    Now that we have put a message on the queue, we can read it back using the sample MQ tool called amqsget. Type the following command to get the message you posted earlier:
    [code]
    /opt/mqm/samp/bin/amqsget LQ.TEST TSTDADQ1
    [/code]
    The result will be that all messages on the LQ.TEST queue are listed, and then the tool will time out after a few seconds, as shown below.

    Two final steps remain; the first is to add the root user to the mqm group. This is not standard practice in an enterprise, but we have to do it because our WebSphere installation is running as root. If we did not, we would have to reconfigure the user that the WebSphere process runs under and then add that new user to MQ security. To keep things simple, ensure that root is a member of the mqm group by typing the following command:
    [code]
    usermod -a -G mqm root
    [/code]
    We also need to change WMQ security to ensure that all users in the mqm group have access to all the objects of the TSTDADQ1 queue manager. To do so, type the following command:
    [code]
    setmqaut -m TSTDADQ1 -t qmgr -g mqm +all
    [/code]
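    Note that granting +all to the entire queue manager is a convenience for this sandbox. A more granular alternative, sketched below, grants only object-level authorities; the exact authority list shown is an assumption about what a simple put/get application needs:
    [code]
    setmqaut -m TSTDADQ1 -t qmgr -g mqm +connect +inq
    setmqaut -m TSTDADQ1 -n LQ.TEST -t queue -g mqm +put +get +browse +inq
    [/code]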
    Now we are ready to continue configuring WebSphere and create the appropriate QCF and queue destination to access WMQ from WebSphere.

    Creating a WMQ connection factory

    Creating a WMQ connection factory is very similar to creating a JMS QCF. However, there are a few differences which will be explained in the following steps. To create a WMQ QCF, log in to the Admin console and navigate to the JMS category of the Resources section found in the left-hand-side panel of the Admin console and click on Queue connection factories. Select the Cell scope and click on New. You will be presented with an option to select a message provider. Select WebSphere MQ messaging provider as shown below and click OK.

    You will then be presented with a wizard which will first ask you for the name of the QCF. Type QCF.LQTest in the Name field and jms/QCF.LQTest in the JNDI name field, as shown below.

    Click on Next to progress to the next step of the wizard, where you will decide on how to connect to WMQ. As shown in the following screenshot, select the Enter all the required information into this wizard option and then click on Next.

    In the Supply queue connection details screen, you will need to type TSTDADQ1 into the Queue manager or queue sharing group name field and click on Next.

    On the next screen of the wizard, you will be asked to fill in some connection details.
    Ensure that the Transport field is set to Bindings, then client. Type localhost in the Hostname field, add the value 1414 to the Port field, and type SYSTEM.ADMIN.SVRCONN into the Server connection channel field, as shown below. Then click on Next to move on to the next step of the wizard.

    On the next page, you will be presented with a button to test your connection to WMQ. If you have set up WMQ correctly, then you will be able to connect and a results page will be displayed confirming a successful connection to WMQ. If you cannot connect at this stage, then you will need to check your MQ setup. Most often it is security that is the problem. If you find you have an issue with security, you can search Google for answers on how to change WMQ security. Once your test is successful, click on Next to move on to the final Summary page which will list your QCF configuration. On the final page of the wizard, click Finish to complete the WMQ QCF configuration and click Save to retain your changes. You will now see two QCF configurations, one for JMS and one for WMQ, as shown below:

    Creating a WMQ queue destination

    The next step after creating a QCF is to create a queue destination. We will use the queue named LQ.Test which we created on the TSTDADQ1 queue manager. To create a new queue, navigate to the JMS category of the Resources section in the left-hand-side panel of the admin console and click Queues. Click on New to start the queue creation wizard. In the provider selector screen, select WebSphere MQ messaging provider and click on Next. You will then be presented with a page that allows you to specify settings for the queue. In the Name field, type LQ.Test and then type jms/LQ.Test in the JNDI name field. In the Queue name field, type LQ.TEST which is the actual name for the underlying queue, as shown below.
    [code]
    Useful tip: Optionally, you can type the queue manager name, for
    example, TSTDADQ1, into the Queue manager or queue sharing group
    name field. However, if you ever use WMQ clustering, leave it blank:
    it is not required, and hard-coding it will stop MQ clustering from
    working correctly.
    [/code]

    Click on Apply to submit the changes, and then click on Save to retain the changes to the WebSphere configuration repository. You will then be presented with a list of queues as shown below:

    We have now configured a WebSphere MQ queue connection factory and a WebSphere MQ queue destination which our test application will use to send and receive messages from WMQ.
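    To see what this configuration buys us on the application side, the following is a minimal sketch of JMS send code. It is an illustration, not the JMS Test Tool's actual source; the direct jms/QCF.LQTest and jms/LQ.Test lookups assume the global JNDI names configured above.
    [code lang="java"]
    import javax.jms.Queue;
    import javax.jms.QueueConnection;
    import javax.jms.QueueConnectionFactory;
    import javax.jms.QueueSender;
    import javax.jms.QueueSession;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    public class WmqSendExample {
        public void sendTestMessage() throws Exception {
            // Look up the WMQ-backed resources by the JNDI names we configured.
            InitialContext ctx = new InitialContext();
            QueueConnectionFactory qcf =
                    (QueueConnectionFactory) ctx.lookup("jms/QCF.LQTest");
            Queue queue = (Queue) ctx.lookup("jms/LQ.Test");

            QueueConnection connection = qcf.createQueueConnection();
            try {
                QueueSession session = connection.createQueueSession(
                        false, Session.AUTO_ACKNOWLEDGE);
                QueueSender sender = session.createSender(queue);
                // The message lands on LQ.TEST on the TSTDADQ1 queue manager.
                sender.send(session.createTextMessage("Test Message"));
            } finally {
                connection.close();
            }
        }
    }
    [/code]
    Because the code only ever sees the JNDI names, switching the provider from the SIB to WMQ (or back) requires no code change at all, which is exactly the abstraction we exploit in the next section.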

    Reconfiguring the JMS demo application

    Now that we have created a QCF and queue destination using WMQ as the message provider, we need to reconfigure the JMS Test Tool application to point to the WMQ JNDI names instead of the Default Provider JNDI names. When we deployed the application, the installation wizard gave us the option of re-pointing the JNDI names. This was because the application's deployment descriptor declared resource references, which the installation wizard picked up and presented as configurable options. Even after a deployment is complete, it is possible to reconfigure an application at any time by drilling down into the application configuration. We want to change the JNDI names the application uses for the QCF and queue destination: jms/QCF.Test becomes jms/QCF.LQTest, and jms/Q.Test becomes jms/LQ.Test. This re-mapping of the application's JNDI references will allow the application to use WMQ instead of JMS via the SIB. To change the application's resource references, click Applications in the left-hand-side panel of the Admin console, expand the Application Types section, and click WebSphere enterprise applications. Click on JMS Test Tool in the application list. You will then be presented with the main application configuration panel. Look for a section called References, as shown in the following screenshot:

    Click on the Resource references link and change the Target Resource JNDI Name field to jms/QCF.LQTest, as shown below, and then click on OK to return to the previous page.

    Click on Continue if you get any warnings. We have now re-pointed the application's QCF reference to the new WMQ QCF configuration.
    To change the queue destination, click on the Message destination references link and change the Target Resource JNDI Name field to jms/LQ.Test, as shown below.

    We have now completed the re-mapping of resources. Click on Save to make the changes permanent, and restart the application server. The next time you use the JMS Test Tool application, messages will be sent and received using WMQ instead of the Default Messaging Provider.
    [code]
    You can use the following command to show the messages sitting
    on the LQ.TEST queue if you wish to see the queue depth (how many
    messages are on the queue):

    echo "dis ql(*) curdepth where (curdepth gt 0)" | runmqsc TSTDADQ1
    [/code]

    Summary

    In this chapter, we learned that WebSphere provides a level of abstraction to messaging configuration by allowing resources to be referenced by JNDI. We deployed a message-enabled application which required a queue connection factory and a queue destination to send and receive messages. We configured two different implementations of JMS: one used the internal Default Messaging Provider, which required a SIB to be created, and we covered how to create the QCF and queue destination and how to bind the application's resource references to those configured in WebSphere.

    We then covered how to install WebSphere MQ and learned how to create a queue manager and a queue. Then, in WebSphere, we created a QCF and queue destination using the WebSphere MQ provider and demonstrated how to re-map our application's resource references to point the application at the WMQ messaging subsystem as opposed to the internal messaging subsystem.

    There are many uses of messaging in enterprise applications and we have essentially covered the key areas for configuring WebSphere to facilitate resources for message-enabled applications.

    Filed Under: Servers Tagged With: Messaging, WebSphere

    JasperReports 3.5 for Java developers

    October 1, 2009 by itadmin


    If you want to create easily understood, professional, and powerful reports from disordered, scattered data using a free, open source Java class library, this book on JasperReports is what you are looking for. JasperReports is the world’s most popular embeddable Java open source reporting library, providing Java developers with the power to create rich print and web reports easily.

    also read:

    • Java Tutorials
    • Java EE Tutorials
    • Design Patterns Tutorials
    • Java File IO Tutorials

    JasperReports allows you to create better-looking reports with formatting and grouping, as well as adding graphical elements to your reports. You can also export your reports to a range of different formats, including PDF and XML. Creating reports becomes easier with the iReport Designer visual designing tool. To round things off, you can integrate your reports with other Java frameworks, using Spring or Hibernate to get data for the report, and JavaServer Faces or Struts for presenting the report.
    This book shows you how to get started and develop the skills to get the most from JasperReports. The book has been fully updated to use JasperReports 3.5, the latest version of JasperReports. The previously accepted techniques that have now been deprecated have been replaced with their modern counterparts in this latest version.
    All the examples in this book have been updated to use XML schemas for report templates. Coverage of new datasources that JasperReports now supports has been added to the book. Additionally, JasperReports can now export reports to even more formats than before, and exporting reports to these new formats is covered in this new edition of the book.

    The book steers you through each point of report setup, to creating, designing, formatting, and exporting reports with data from a wide range of datasources, and integrating JasperReports with other Java frameworks.

    What This Book Covers

    Chapter 1, An Overview of JasperReports, introduces you to JasperReports and how it came to be. It gives you an insight to JasperReports’ capabilities and features, and also an overview of the steps involved in generating reports using JasperReports.

    Chapter 2, Adding Reporting Capabilities to Java Applications, teaches you how to add reporting capabilities to your Java applications. You will have your development and execution environment set up to successfully add reporting capabilities to your Java applications by the end of this chapter.

    Chapter 3, Creating Your First Report, shows you how to create, compile, and preview your first report in both JasperReports’ native format and web browser. It also briefs you about the JRXML elements corresponding to different report sections.

    Chapter 4, Creating Dynamic Reports from Databases, continues with report creation, exploring how to create a report from the data obtained from a database. It also teaches you to generate reports that are displayed in your web browser in the PDF format.

    Chapter 5, Working with Other Datasources, uses datasources other than databases, such as empty datasources, arrays or collections of Java objects, Maps, TableModels, XML, CSV files, and custom datasources to create reports, enabling you to create your own datasources as well.

    Chapter 6, Report Layout and Design, gets you creating elaborate layouts, by controlling report-wide layout properties and styles, dividing the report data into logical groups, adding images, background text, and dynamic data to the reports, conditionally printing the report data, and creating subreports.

    Chapter 7, Adding Charts and Graphics to Reports, takes you to more appealing reports by showing how to take advantage of JasperReports’ graphical features and create reports with graphical data like geometric shapes, images, and 2D and 3D charts.

    Chapter 8, Other JasperReports Features, discusses the JasperReports features that let you create elaborate reports, such as displaying report text in different languages, executing Java code snippets using scriptlets, creating crosstab reports, running a query with the results of a different query, and adding anchors, hyperlinks, and bookmarks to reports.

    Chapter 9, Exporting to Other Formats, demonstrates how to export reports to the formats supported by JasperReports, such as PDF, RTF, ODT, Excel, HTML, CSV, XML, and plain text and how to direct the exported reports to a browser.

    Chapter 10, Graphical Report Design with iReport, helps you get your hands on a graphical report designer called iReport, so that you can design reports graphically using iReport's graphical user interface.

    Chapter 11, Integrating JasperReports with Other Frameworks, explains how to integrate JasperReports with several popular web application frameworks and ORM tools, such as Hibernate, JPA, Spring, JSF, and Struts.

    Graphical Report Design with iReport

    So far, we have been creating all our reports by writing JRXML templates by hand. JasperSoft, the company behind JasperReports, offers a graphical report designer called iReport. iReport allows us to design reports graphically by dragging report elements into a report template and by using its graphical user interface to set report attributes.

    iReport started as an independent project by Giulio Toffoli. JasperSoft recognized the popularity of iReport and, in October 2005, hired Giulio Toffoli and made iReport the official report designer for JasperSoft. Like JasperReports, iReport is open source. It is licensed under the GNU General Public License (GPL).
    In 2008, iReport was rewritten to take advantage of the NetBeans platform. It is freely available both as a standalone product and as a plugin to the NetBeans IDE.

    In this chapter, we will be covering the standalone version of
    iReport; however, the material is also applicable to the iReport
    NetBeans plugin.

    By the end of this chapter, you will be able to:

    • Obtain and set up iReport
    • Quickly create database reports by taking advantage of iReport’s Report Wizard
    • Design reports graphically with iReport
    • Add multiple columns to a report
    • Group report data
    • Add images and charts to a report

    Obtaining iReport

    iReport can be downloaded from its home page at http://jasperforge.org/projects/ireport by clicking on the Download iReport image slightly above the center of the page.

    Once we click on the image, we are directed to an intermediate page where we can either log in with our JasperForge account or go straight to the download page.

    Either logging in or clicking on the No Thanks, Download Now button takes us to the iReport download page.

    The standalone iReport product is in the first row of the table on the page. To download it, we simply click on the Download link in the last column. Other downloads on the page are for older versions of JasperReports, iReport NetBeans plugin, and other JasperSoft products.

    iReport can be downloaded as a DMG file for Macintosh computers, as a Windows installer for Windows PCs, as a source file, as a ZIP file, or as a gzipped TAR file.

    To install iReport, simply follow the usual application installation method for your platform. If you chose to download the ZIP or gzipped TAR file, simply extract it into any directory. A subdirectory called something like iReport-nb-3.5.1 will be created. (The exact name will depend on the version of iReport that was downloaded.) Inside this directory, you will find a bin subdirectory containing an executable shell script called ireport and a couple of Windows executables, ireport.exe and ireport_w.exe. On Windows systems, either EXE file will start iReport.

    The difference between the two Windows executables is that the
    ireport.exe will display a command-line window when iReport
    is executed, and ireport_w.exe won’t. Both versions provide
    exactly the same functionality.

    On Unix and Unix-like systems, such as Linux and Mac OS, iReport can be started by executing the ireport shell script. The following screenshot illustrates how iReport looks when it is opened for the first time:

    Setting up iReport

    iReport can help us quickly generate database reports. To do so, we need to provide it with the JDBC driver and connection information for our database. iReport comes bundled with JDBC drivers for several open source relational database systems, such as MySQL, PostgreSQL, HSQLDB, and others. If we want to connect to a different database, we need to add the JDBC driver to iReport’s CLASSPATH. This can be done by clicking on Tools | Options and then selecting
    the Classpath tab.

    To add the JDBC driver to the CLASSPATH, click on the Add JAR button, and then navigate to the location of the JAR file containing the JDBC driver. Select the JAR file and click on the OK button at the bottom of the window.

    We won’t actually add a JDBC driver, as we are using MySQL for
    our examples, which is one of the RDBMS systems supported out of
    the box by iReport. The information just provided is for the benefit of
    readers using an RDBMS system that is not supported out of the box.

    Before we can create reports that use an RDBMS as a datasource, we need to create a database connection. In order to do so, we need to click on the Report Datasources icon in the toolbar:

    After doing so, the Connections / Datasources configuration window should
    pop up.

    To add the connection, we need to click on the New button, select Database JDBC connection, and then click on the Next> button.

    We then need to select the appropriate JDBC driver, fill in the connection information, and click on the Save button.

    Before saving the database connection properties, it is a good idea to click on the Test button to make sure we can connect to the database. If we can, we should see a pop-up window like the following:

    After verifying that we can successfully connect to the database, we are ready to create some database reports.

    Creating a database report in record time

    iReport contains a wizard that allows us to quickly generate database reports (very useful if the boss asks for a report 15 minutes before quitting time on a Friday!). The wizard allows us to use one of the predefined templates that are included with iReport. The included report templates are divided into two groups: templates laid out in a "columnar" manner and templates laid out in a "tabular" manner. Columnar templates generate reports that are laid out in columns, and tabular templates generate reports that are laid out like a table.

    In this section, we will create a report displaying all the aircraft with a horsepower of 1000 or more. To quickly create a database report, we need to go to File | New | Report Wizard.

    We should then enter an appropriate name and location for our report and click on Next>.

    Next, we need to select the datasource or database connection to use for our report. For our example, we will use the JDBC connection we configured in the previous section. We can then enter the database query we will use to create the report.

    Alternatively, we can use the iReport query designer to design the query.

    For individuals with SQL experience, in many cases it is easier
    to come up with the database query in a separate database client
    tool and then paste it in the Query text area than using the
    query designer.


    The complete query for the report is:
    [code]
    select
        a.tail_num,
        a.aircraft_serial,
        am.model as aircraft_model,
        ae.model as engine_model
    from aircraft a, aircraft_models am, aircraft_engines ae
    where a.aircraft_model_code = am.aircraft_model_code
        and a.aircraft_engine_code = ae.aircraft_engine_code
        and ae.horsepower >= 1000
    [/code]
    The following window shows a list of all the columns selected in the query, allowing us to select which ones we would like to use as report fields:

    In this case, we want the data for all columns in the query to be displayed in the report. Therefore, we select all columns by clicking on the second button.
    We then select how we want to group the data and click on Next>. This creates a report group. (Refer to the Grouping Report Data section in Chapter 6, Report Layout and Design for details.)

    In this example, we will not group the report data. The screenshot illustrates how the drop-down box contains the report fields selected in the previous step. We then select the report layout (Columnar or Tabular). In this example, we will use the Tabular Layout.

    After selecting the layout, we click on Next> to be presented with the last step.
    We then click on Finish to generate the report’s JRXML template.

    While the template is automatically saved when it is created, the
    report generated by the Preview button is not automatically saved.

    We can then preview our report by clicking on Preview.

    That’s it! We have created a report by simply entering a query and selecting a few options from a wizard.

    Tweaking the generated report

    Admittedly, the report title and column headers of our report need some tweaking. To modify the report title so that it actually reflects the report contents, we can either double-click on the report title in iReport's main window and type an appropriate title, or modify the value of the Text property for the title static text in the Properties window at the lower right-hand side.

    Double-clicking on the title is certainly the fastest way to modify it. However, the Properties window allows us to modify not only the text, but also the font, borders, and several other properties.
    We can follow the same procedure for each column header. The following screenshot shows the resulting template as displayed in iReport’s main window:

    We’ll preview the report one more time to see the final version.

    There you have it! The boss can have his or her report, and we can leave work and enjoy the weekend!

    Creating a report from scratch

    In the previous section, we discussed how to quickly generate a database report by using iReport's Report Wizard. The wizard is very convenient because it allows us to create a report very quickly. However, its disadvantage is that it is not very flexible.

    In this section, we will learn how to create a report from scratch in iReport. Our report will show the tail number, serial number, and model of every aircraft in the FlightStats database.
    To create a new report, we need to go to the File | New | Empty report menu item.

    At this point, we should enter a Report name and Location.

    In this example, we will set the report name to iReportDemo and accept all the other default values. After clicking on the OK button, iReport’s main window should look like this:

    The horizontal lines divide the different report sections. Any item we insert between any two horizontal lines will be placed in the appropriate report section’s band. Horizontal lines can be dragged to resize the appropriate section(s). The vertical lines represent the left and right report margins. It is not possible to drag the vertical lines. To modify the left and right margins, we must select the report in the Report Inspector window at the top left.

    Then, we need to modify the margins from the Properties window at the bottom right.

    Properties for all the report sections and elements, such as variables,
    scriptlets, title, background, detail, and so on, can be modified by
    following the approach described here.

    Going back to our empty report template, let’s add a report title. For this, we will use the static text Aircraft Report. To add the static text, we need to use the Static Text component in the Palette.

    We then need to drag the Static Text component to the Title area of the report. iReport, by default, inserts the text Static text inside this field. To modify this default text, we can double-click anywhere inside the field and type in a more appropriate title. Alternatively, we can modify the Text property for the static text field in the Properties window at the lower right-hand side.

    In the Properties window, we can modify other properties for our text. In the above screenshot, we modified the text size to be 18 pixels, and we made it bold by clicking on the checkbox next to the Bold property. We can center the report title within the Title band by right-clicking on it, selecting Position, and then Center.

    After following all of these steps, our report should now look like this:

    Applying the same techniques used for adding the report title, we can add some more static text fields in the page header. After adding the page header, our report now looks like this:

    We modified the Vertical Alignment of all three text fields in the page header by selecting the appropriate values in the Properties window for each one of them.

    Now it is time to add some dynamic data to the report. We can enter a report query by selecting the report node in the Report Inspector window and then selecting Edit Query.

    As we type the report query, by default iReport retrieves report fields from it. This query will retrieve the tail number, serial number, and model of every aircraft in the database.

    Now that we have a query and report fields, we can add text fields to the report. We can do so by dragging the fields in the Report Inspector window to the appropriate location in the report template.

    After aligning each text field with the corresponding header, our report should now look like this:

    To avoid extra vertical space between records, we resized the Detail band by dragging its bottom margin up. The same effect can be achieved by double-clicking on the bottom margin.
    Notice that we have an empty Column Header band in the report template. This empty band will result in having some whitespace between each header and the first row in the Detail band. To avoid having this whitespace in our report, we can easily delete this band by right-clicking on it in the Report Inspector window and selecting Delete Band.

    We now have a simple but complete report. We can view it by clicking on Preview.

    That’s it! We have created a simple report graphically with iReport.

    Creating more elaborate reports

    In the previous section, we created a fairly simple database report. In this section, we will modify that report to illustrate how to add images, charts, and multiple columns to a report. We will also see how to group report data. We will perform all of these tasks graphically with iReport.

    Adding images to a report

    Adding static images to a report is very simple with iReport. Just drag the Image component from the Palette to the band where it will be rendered in the report.

    When we drop the image component into the appropriate band, a window pops up asking us to specify the location of the image file to display.

    After we select the image, we can drag it to its exact location where it will be rendered.
    As we can see, adding images to a report using iReport couldn’t be any simpler.

    Adding multiple columns to a report

    The report we’ve been creating so far in this chapter contains over 11,000 records. It spans over 300 pages. As we can see, there is a lot of space between the text fields. Perhaps it would be a good idea to place the text fields closer together and add an additional column. This would cut the number of pages in the report by half. To change the number of columns in the report, we simply need to select the root report node in the Report Inspector window at the top left and then modify its Columns property in the Properties window at the bottom right.

    When we modify the Columns property, iReport automatically modifies the Column Width property to an appropriate value. We are free, of course, to modify this value if it doesn’t meet our needs.

    As our report now contains more than one column, it makes sense to re-add the Column Header band we deleted earlier. This can be done by right-clicking on the band in the Report Inspector window and selecting Add Band.

    Next, we need to move the static text in the page header to the Column Header band. To move any element from one band to another, all we need to do is drag it to the appropriate band in the Report Inspector window.

    Next, we need to resize and reposition the text fields in the Detail band and the static text elements in the Column Header band so that they fit in the new, narrower width of the columns. Also, resize the Column Header band to avoid having too much whitespace between the elements of the Column Header and Detail bands. Our report now looks like this:

    We can see the resulting report by clicking on Preview.

    Grouping report data

    Suppose we are asked to modify our report so that data is divided by the state where the aircraft is registered. This is a perfect situation to apply report groups. Recall from Chapter 6, Report Layout and Design, that report groups allow us to divide report data when a report expression changes. Recall that our report query limits the result set to aircraft registered in the United States, and one of the columns it retrieves is the state where the aircraft is registered.

    To define a report group, we need to right-click on the root report node in the Report Inspector window, and then select Add Report Group.

    Then, enter the Group name and indicate whether we want to group by a field or by a report expression. In our case, we want to group the data by state field.

    After clicking on Next>, we need to indicate whether we want to add a group header and/or footer to our report.

    For aesthetic purposes, we move the static text fields in the Column Header band to the Group Header band, remove the column and page header bands, and add additional information to the Group Header band. After making all of these changes, our report preview will look like this:

    We can preview the report by clicking Preview.

    Adding charts to a report

    To add a chart to a report, we need to drag the Chart component from the Palette into the approximate location where the chart will be rendered in the report.

    When dropping the chart component into the report, the following window will pop up, allowing us to select the type of chart we want to add to the report:

    For this example, we will add a 3D bar chart to the report. All that needs to be done is to click on the appropriate chart type, and then click on the OK button.

    Our chart will graphically illustrate the number of aircraft registered in each state of the United States. (We will explain how to have the chart display the appropriate data later in this section.) We will place the chart in the Summary band at the end of the report. As the chart will illustrate a lot of data, we need to resize the Summary band so that our chart can fit. After resizing the Summary band, outlining the area of the report to be covered by the chart, and selecting the chart type, the Summary section of our report preview looks like this:

    To fine-tune the appearance of the chart, we can select it in the Report Inspector window and then modify its properties as necessary in the Properties window.

    To specify the data that will be displayed in the chart, we need to right-click on the chart in the Report Inspector window and select Chart Data. We then need to click on the Details tab in the resulting pop-up window.

    We then need to click on the Add button to add a new Category series.

    The Series expression field is the name of the series. Its value can be any object that implements java.lang.Comparable. In most cases, the value of this field is a string. The Category expression field is the label of each value in the chart. The value of this field is typically a string. In our example, each state is a different category, so we will use the state field ($F{state}) as our category expression.

    The Value expression field is a numeric value representing the value to be charted for a particular category. In our example, the number of aircraft in a particular state is the value we want to chart. Therefore, we use the implicit stateGroup_COUNT variable ($V{stateGroup_COUNT}) as our value expression.

    The optional Label Expression field allows us to customize item labels in the chart.

    Every time we create a group in a report template, an implicit variable named groupName_COUNT is created, where groupName is the name of the group.

    We can either type in a value for the Series expression, Category expression, and Value expression fields, or we can click on the icon to be able to graphically select the appropriate expression using iReport’s Expression editor.

    Using the Expression editor, we can select any parameter, field, or variable as our expression. We can also use user-defined expressions to fill out any of the fields that require a valid JasperReports expression.

    After selecting the appropriate expressions for each of the fields, our chart details are as follows:

    After clicking on OK and closing the Chart details window, we are ready to view our chart in action, which can be done simply by clicking on Preview.
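    For comparison with the JRXML templates we wrote by hand in earlier chapters, the chart data configured above corresponds roughly to a category series like the following in the generated template. This is a sketch: the series name string is an arbitrary example, and the exact nesting around it depends on the chart element and the JasperReports version:
    [code lang="xml"]
    <categorySeries>
        <seriesExpression><![CDATA["Aircraft by state"]]></seriesExpression>
        <categoryExpression><![CDATA[$F{state}]]></categoryExpression>
        <valueExpression><![CDATA[$V{stateGroup_COUNT}]]></valueExpression>
    </categorySeries>
    [/code]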

    Help and support

    Although this chapter didn't discuss every iReport feature, I'm confident that iReport is intuitive enough to explore on your own once you get comfortable with it. Some of the iReport features not covered in this chapter include subreport creation and adding crosstabs, lines, ellipses, and rectangles to a report. However, we have already learned all of these features the "hard" way by creating JRXML templates by hand. For someone familiar with JasperReports, adding these features to a report created with iReport should be trivial. If more help is needed, JasperSoft provides additional documentation for iReport, and lots of knowledgeable people frequent the iReport forums at http://jasperforge.org/plugins/espforum/browse.php?group_id=83&forumid=101.

    Summary

    This chapter taught us how to install and set up iReport, use iReport’s Report Wizard to quickly generate a report, and graphically design custom reports. Moreover, we learned how to group report data graphically with iReport, to add multiple columns to a report, and to add images and charts to a report graphically with iReport. iReport is a very powerful tool that can significantly reduce report design time. To use all of the features of iReport effectively, however, an iReport user must be familiar with basic JasperReports concepts, such as bands, report variables, report fields, and so on.

    Filed Under: Java Tagged With: Jasper Reports

    Django 1.0 Web Site Development

    September 30, 2009 by itadmin


    Django is a high-level Python web application framework designed to support the development of dynamic web sites, web applications, and web services. It is designed to promote rapid development and clean, pragmatic design. Therefore, it lets you build high-performing and elegant web applications quickly.

    also read:

    • HTML Tutorials
    • CSS Tutorials
    • JavaScript Tutorials

    In this book you will learn about employing this MVC web framework, which is written in Python, a powerful and popular programming language. The book emphasizes utilizing Django and Python to create a Web 2.0 bookmark-sharing application, with many common features found in Web 2.0 sites these days. The book follows a tutorial style to introduce concepts and explain solutions to problems. It is not meant to be a reference manual for Python or Django. Django will be explained as we build features throughout the chapters, until we realize our goal of having a working Web 2.0 application for storing and sharing bookmarks.

    I sincerely hope that you will enjoy reading the book as much as I enjoyed writing it. And I am sure that by its end, you will appreciate the benefits of using Python and Django for your next project. Both Python and Django are powerful and simple, and provide a robust environment for rapid development of your dynamic web applications.

    What This Book Covers

    Chapter 1 gives you an introduction to MVC web development frameworks, and explains why Python and Django are the best tools to achieve the aim of this book.
    Chapter 2 provides a step-by-step guide to installing Python, Django, and an appropriate database system so that you can create an empty project and set up the development server.
    Chapter 3 creates the main page so that we have an initial view and a URL. You will learn how to create templates for both the main page and the user page. You will also write a basic set of data models to store your application’s data.
    Chapter 4 is where the application really starts to take shape, as user management is implemented. Learn how to log users in and out, create a registration form, and allow users to manage their own accounts by changing email or password details.
    Chapter 5 explores how to manage your growing bank of content. Create tags, tag clouds, and a bookmark submission form, all of which interact with your database. Security features also come into play as you learn how to restrict access to certain pages and protect them against malicious input.
    Chapter 6 enables you to enhance your application with AJAX and jQuery, since users can now edit entries in place and do live searching. Data entry is also made easier with the introduction of auto-completion.
    Chapter 7 shows you how to enable users to vote and comment on their bookmark entries. You will also build a popular bookmarks page.
    Chapter 8 focuses on the administration interface. You will learn how to create and customize the interface, which allows you to manage content and set permissions for users and groups.
    Chapter 9 will give your application a much more professional feel through the implementation of RSS feeds and pagination.
    Chapter 10 tackles social networks providing the “social” element of your application. Users will be able to build a friend network, browse the bookmarks of their friends, and invite their friends to join the web site.
    Chapter 11 covers extending and deploying your application. You will also learn about advanced features, including offering the site in multiple languages, managing the site during high traffic, and configuring the site for a production environment.
    Chapter 12 takes a brief look at the additional Django features that have not been covered elsewhere in the book. You will gain the knowledge required to further develop your application and build on the basic skills that you have learned throughout the book.

    Building User Networks

    Our application is about “social” bookmarking. Running a social web application means having a community of users who have common interests, and who use the application to share their interests and findings with each other. We will want to enhance the social experience of our users. In this chapter we will introduce two features that will enable us to do this. We will let our users maintain lists of friends, see what their friends are bookmarking, and invite new friends to try out our application. We will also utilize a Django API to make our application more user-friendly and responsive by displaying feedback messages to users. So let’s get started!

    In this chapter you will learn how to:

    • Build a friend network feature
    • Let users browse bookmarks of friends
    • Enable users to invite friends to your web site
    • Improve the interface with status messages

    Building friend networks

    An important aspect of socializing in our application is letting users maintain their friend lists and browse through the bookmarks of their friends. So, in this section, we will build a data model to maintain user relationships, and then write two views to enable users to manage their friends and browse their friends' bookmarks.

    Creating the friendship data model

    Let’s start with the data model for the friends feature. When a user adds another user as a friend, we need to maintain both users in one object. Therefore, the Friendship data model will consist of two references to the User objects involved in the friendship. Create this model by opening the bookmarks/models.py file and inserting the following code in it:
    [code lang="python"]
    class Friendship(models.Model):
        from_friend = models.ForeignKey(
            User, related_name='friend_set'
        )
        to_friend = models.ForeignKey(
            User, related_name='to_friend_set'
        )

        def __unicode__(self):
            return u'%s, %s' % (
                self.from_friend.username,
                self.to_friend.username
            )

        class Meta:
            unique_together = (('to_friend', 'from_friend'), )
    [/code]
    The Friendship data model starts by defining two fields that are User objects: from_friend and to_friend. from_friend is the user who added to_friend as a friend. As you can see, we passed a keyword argument called related_name to both fields. The reason for this is that both fields are foreign keys that refer back to the User data model. This would cause Django to try to create two attributes called friendship_set on each User object, resulting in a name conflict.
    To avoid this problem, we provide a specific name for each attribute. Consequently, each User object will contain two new attributes: user.friend_set, which contains the friends of this user, and user.to_friend_set, which contains the users who added this user as a friend. Throughout this chapter, we will only use the friend_set attribute, but the other one is there in case you need it.

    Next, we defined a __unicode__ method in our data model. As you already know, this method is useful for debugging.
    Finally, we defined a class called Meta. This class may be used to specify various options related to the data model. Some of the commonly used options are:

    • db_table: This is the name of the table to use for the model. This is useful when the table name generated by Django is a reserved keyword in SQL, or when you want to avoid conflicts if a table with the same name already exists in the database (see the sketch after this list).
    • ordering: This is a list of field names. It declares how objects are ordered when retrieving a list of objects. A column name may be preceded by a minus sign to change the sorting order from ascending to descending.
    • permissions: This lets you declare custom permissions for the data model in addition to add, change, and delete permissions that we learned about in Chapter 7. Permissions should be a list of two-tuples, where each two-tuple should consist of a permission codename and a human-readable name for that permission. For example, you can define a new permission for listing friend bookmarks by using the following Meta class:
      [code lang="python"]
      class Meta:
          permissions = (
              ('can_list_friend_bookmarks',
               'Can list friend bookmarks'),
          )
      [/code]
    • unique_together: A list of field names that must be unique together.
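    For illustration, a Meta class combining the first two options might look like the following sketch; the values here are hypothetical and are not used by our application:
    [code lang="python"]
    class Meta:
        # Hypothetical values, for illustration only.
        db_table = 'friendships'   # explicit table name
        ordering = ['-id']         # newest relationships first
    [/code]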

    We used the unique_together option here to ensure that a Friendship object is added only once for a particular relationship. There cannot be two Friendship objects with equal to_friend and from_friend fields. This is equivalent to the following SQL declaration:
    [code]
    UNIQUE ("from_friend", "to_friend")
    [/code]
    If you check the SQL generated by Django for this model, you will find a similar constraint in it.
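    If you would like to inspect that generated SQL yourself, Django can print the statements for an application without touching the database. Assuming the app is named bookmarks, run:
    [code]
    $ python manage.py sql bookmarks
    [/code]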
    After entering the data model code into the bookmarks/models.py file, run the following command to create its corresponding table in the database:
    [code]
    $ python manage.py syncdb
    [/code]
    Now let’s experiment with the new model and see how to store and retrieve relations of friendship. Run the interactive console using the following command:
    [code]
    $ python manage.py shell
    [/code]
    Next, retrieve some User objects and build relationships between them (but make sure that you have at least three users in the database):
    [code lang="python"]
    >>> from bookmarks.models import *
    >>> from django.contrib.auth.models import User
    >>> user1 = User.objects.get(id=1)
    >>> user2 = User.objects.get(id=2)
    >>> user3 = User.objects.get(id=3)
    >>> friendship1 = Friendship(from_friend=user1, to_friend=user2)
    >>> friendship1.save()
    >>> friendship2 = Friendship(from_friend=user1, to_friend=user3)
    >>> friendship2.save()
    [/code]
    Now, user2 and user3 are both friends of user1. To retrieve the list of Friendship objects associated with user1, use:
    [code lang="python"]
    >>> user1.friend_set.all()
    [<Friendship: user1, user2>, <Friendship: user1, user3>]
    [/code]
    (The actual usernames in the output have been replaced with user1, user2, and user3 for clarity.)
    As you may have already noticed, the attribute is named friend_set because we called it so using the related_name option when we created the Friendship model.

    Next, let’s see one way to retrieve the User objects of user1’s friends:
    [code lang="python"]
    >>> [friendship.to_friend for friendship in user1.friend_set.all()]
    [<User: user2>, <User: user3>]
    [/code]
    The last line of code uses a Python feature called list comprehension to build the list of User objects. This feature allows us to build a list by iterating over another list. Here, we built the User list by iterating over a list of Friendship objects. If this syntax looks unfamiliar, the equivalent explicit loop is shown below; you can also refer to the List Comprehensions section in the Python tutorial.
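    For reference, the comprehension above is simply shorthand for the following loop:
    [code lang="python"]
    # Equivalent to the list comprehension above.
    friends = []
    for friendship in user1.friend_set.all():
        friends.append(friendship.to_friend)
    [/code]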

    Notice that user1 has user2 as a friend, but the opposite is not true.
    [code lang="python"]
    >>> user2.friend_set.all()
    []
    [/code]
    In other words, the Friendship model works only in one direction. To add user1 as a friend of user2, we need to construct another Friendship object.
    [code lang="python"]
    >>> friendship3 = Friendship(from_friend=user2, to_friend=user1)
    >>> friendship3.save()
    >>> user2.friend_set.all()
    [<Friendship: user2, user1>]
    [/code]
    By reversing the arguments passed to the Friendship constructor, we built a relationship in the other way. Now user1 is a friend of user2 and vice-versa.

    Experiment more with the model to make sure that you understand how it works. Once you feel comfortable with it, move to the next section, where we will write views to utilize the data model. Things will only get more exciting from now on!

    Writing views to manage friends

    Now that we are able to store and retrieve user relationships, it’s time to create views for these features. In this section we will build two views: one for adding a friend, and another for listing friends and their bookmarks.

    We will use the following URL scheme for friend-related views:

    • If the view is for managing friends (adding a friend, removing a friend, and so on), its URL should start with /friend/. For example, the URL of the view that adds a friend will be /friend/add/.
    • If the view is for viewing friends and their bookmarks, its URL should start with /friends/. For example, /friends/username/ will be used to display the friends of username.

    This convention is necessary to avoid conflicts. If we used the prefix /friend/ for all views, what would happen if a user registered the username add? The Friends page for this user would be /friend/add/, just like the view that adds a friend. The first URL mapping in the URL table always wins, so the second view would become inaccessible, which is obviously a bug.

    Now that we have a URL scheme in mind, let’s start with writing the friends list view.

    The friends list view

    This view will receive a username in the URL, and will display this user’s friends and their bookmarks. To create the view, open the bookmarks/views.py file and add the following code to it:
    [code lang="python"]
    def friends_page(request, username):
        user = get_object_or_404(User, username=username)
        friends = [friendship.to_friend
                   for friendship in user.friend_set.all()]
        friend_bookmarks = Bookmark.objects.filter(
            user__in=friends
        ).order_by('-id')
        variables = RequestContext(request, {
            'username': username,
            'friends': friends,
            'bookmarks': friend_bookmarks[:10],
            'show_tags': True,
            'show_user': True
        })
        return render_to_response('friends_page.html', variables)
    [/code]
    This view is pretty simple. It receives a username and operates upon it as follows:

    • The User object that corresponds to the username is retrieved using the shortcut method get_object_or_404.
    • The friends of this user are retrieved using the list comprehension syntax mentioned in the previous section.
    • After that, the bookmarks of the user's friends are retrieved using the filter method. The user__in keyword argument is passed to filter in order to retrieve the bookmarks of every user in the friends list. order_by is chained to filter to sort the bookmarks by id in descending order.
    • Finally, the variables are put into a RequestContext object and sent to a template named friends_page.html. We used the slicing syntax on friend_bookmarks to get only the latest ten bookmarks.

    Let’s write the view’s template next. Create a file called friends_page.html in the templates folder with the following code in it:
    [code lang="xml"]
    {% extends "base.html" %}

    {% block title %}Friends for {{ username }}{% endblock %}
    {% block head %}Friends for {{ username }}{% endblock %}

    {% block content %}
    <h2>Friend List</h2>
    {% if friends %}
    <ul class="friends">
    {% for friend in friends %}
    <li><a href="/user/{{ friend.username }}/">
    {{ friend.username }}</a></li>
    {% endfor %}
    </ul>
    {% else %}
    <p>No friends found.</p>
    {% endif %}
    <h2>Latest Friend Bookmarks</h2>
    {% include "bookmark_list.html" %}
    {% endblock %}
    [/code]
    The template should be self-explanatory; there is nothing new in it. We iterate over the friends list and create a link for each friend. Next, we create a list of friend bookmarks by including the bookmark_list.html template.
    Finally, we will add a URL entry for the view. Open the urls.py file and insert the following mapping into the urlpatterns list:
    [code lang="python"]
    urlpatterns = patterns('',
        […]
        # Friends
        (r'^friends/(\w+)/$', friends_page),
    )
    [/code]
    This URL entry captures the username portion in the URL using a regular expression, exactly the way we did in the user_page view.

    Although we haven't created a view for adding friends yet, you can still see this view by manually adding some friends to your account (if you haven't done so already). Use the interactive console to make sure that your account has friends, then start the development server and point your browser to http://127.0.0.1:8000/friends/your_username/ (replacing your_username with your actual username). The resulting page will look similar to the following screenshot:

    So, we now have a functional Friends page. It displays a list of friends along with their latest bookmarks. In the next section, we are going to create a view that allows users to add friends to this page.

    Creating the add friend view

    So far, we have been adding friends using the interactive console. The next step in building the friends feature is offering a way to add friends from within our web application.
    The friend_add view works like this: It receives the username of the friend in GET, and creates a Friendship object accordingly. Open the bookmarks/views.py file and add the following view:
    [code lang="python"]
    @login_required
    def friend_add(request):
        if 'username' in request.GET:
            friend = get_object_or_404(
                User, username=request.GET['username']
            )
            friendship = Friendship(
                from_friend=request.user,
                to_friend=friend
            )
            friendship.save()
            return HttpResponseRedirect(
                '/friends/%s/' % request.user.username
            )
        else:
            raise Http404
    [/code]
    Let’s go through the view line by line:

    • We apply the login_required decorator to the view. Anonymous users must log in before they can add friends.
    • We check whether a GET variable called username exists. If it does, we continue with creating a relationship. Otherwise, we raise a 404 page not found error.
    • We retrieve the user to be added as a friend using get_object_or_404.
    • We create a Friendship object with the currently logged-in user as the from_friend argument, and the requested username as the to_friend argument.
    • Finally, we redirect the user to their Friends page.

    After creating the view, we will add a URL entry for it. Open the urls.py file and add the highlighted line to it:
    [code lang="python"]
    urlpatterns = patterns('',
        [...]
        # Friends
        (r'^friends/(\w+)/$', friends_page),
        <b>(r'^friend/add/$', friend_add),</b>
    )
    [/code]
    The “add friend” view is now functional. However, there are no links to use it anywhere in our application, so let’s add these links. We will modify the user_page view to display a link for adding the current user as a friend, and a link for viewing the user’s friends. Of course, we will need to handle special cases; you don’t want an “add friend” link when you are viewing your own page, or when you are viewing the page of one of your friends.

    Adding these links will be done in the user_page.html template. But before doing so, we need to pass a Boolean flag from the user_page view to the template, indicating whether the owner of the user page is a friend of the currently logged-in user or not. So open the bookmarks/views.py file and add the highlighted lines into the user_page view:
    [code lang="python"]
    def user_page(request, username):
        user = get_object_or_404(User, username=username)
        query_set = user.bookmark_set.order_by('-id')
        paginator = Paginator(query_set, ITEMS_PER_PAGE)
        <b>if request.user.is_authenticated():
            is_friend = Friendship.objects.filter(
                from_friend=request.user,
                to_friend=user
            )
        else:
            is_friend = False</b>
        try:
            page_number = int(request.GET['page'])
        except (KeyError, ValueError):
            page_number = 1
        try:
            page = paginator.page(page_number)
        except InvalidPage:
            raise Http404
        bookmarks = page.object_list
        variables = RequestContext(request, {
            'username': username,
            'bookmarks': bookmarks,
            'show_tags': True,
            'show_edit': username == request.user.username,
            'show_paginator': paginator.num_pages > 1,
            'has_prev': page.has_previous(),
            'has_next': page.has_next(),
            'page': page_number,
            'pages': paginator.num_pages,
            'next_page': page_number + 1,
            'prev_page': page_number - 1,
            <b>'is_friend': is_friend,</b>
        })
        return render_to_response('user_page.html', variables)
    [/code]
    Next, open the templates/user_page.html file and add the following highlighted lines to it:
    [code lang="html"]
    [...]
    {% block content %}
    <b>{% ifequal user.username username %}
    <a href="/friends/{{ username }}/">view your friends</a>
    {% else %}
    {% if is_friend %}
    <a href="/friends/{{ user.username }}/">
    {{ username }} is a friend of yours</a>
    {% else %}
    <a href="/friend/add/?username={{ username }}">
    add {{ username }} to your friends</a>
    {% endif %}
    - <a href="/friends/{{ username }}/">
    view {{ username }}'s friends</a>
    {% endifequal %}</b>
    {% include "bookmark_list.html" %}
    {% endblock %}
    [/code]
    Let’s go through each conditional branch in the highlighted code:

    1. We check whether the user is viewing his or her page. This is done using a template tag called ifequal, which takes two variables to compare for equality. If the user is indeed viewing his or her page, we simply display a link to it.
    2. We check whether the user is viewing the page of one of their friends. If this is the case, we display a link to the current user’s Friends page instead of an “add friend” link. Otherwise, we construct an “add friend” link by passing the username as a GET variable.
    3. We display a link to the Friends page of the owner of the user page being viewed.

    And that’s it. Browse some user pages to see how the links at the top change, depending on your relationship with the owner of the user page. Try to add new friends to see your Friends page grow.

    Implementing the friends feature wasn't that hard, was it? You wrote one data model and two views, and the feature became functional. Interestingly, the more experience you gain with Django, the easier and faster such features become to implement.

    Our users are now able to add each other as friends and monitor their friends' bookmarks, but what about friends who are not members of our site? In the next section we will implement an "Invite a friend" feature that will allow users to invite their friends to join our site via email.

    Inviting friends via email

    Enabling our users to invite their friends carries many benefits. People are more likely to join our site if their friends are already using it. After they join, they will also invite their friends, and so on, which means an increasing number of users for our application. Therefore, it is a good idea to offer an "Invite a friend" feature. This is actually a common functionality found in many Web 2.0 applications.

    Building this feature requires the following components:

    • An Invitation data model to store invitations in the database
    • A form in which users can type the emails of their friends and send invitations
    • An invitation email with an activation link
    • A mechanism for processing activation links sent in email

    Throughout this section, we will implement each component. But because this section involves sending emails, we first need to configure Django to send emails by adding some options to the settings.py file. So, open the settings.py file and add the following lines to it:
    [code lang="python"]
    SITE_HOST = '127.0.0.1:8000'
    DEFAULT_FROM_EMAIL = \
        'Django Bookmarks <django.bookmarks@example.com>'
    EMAIL_HOST = 'mail.yourisp.com'
    EMAIL_PORT = ''
    EMAIL_HOST_USER = 'username'
    EMAIL_HOST_PASSWORD = 'password'
    [/code]
    Let’s see what each variable does.

    • SITE_HOST: This is the host name of your server. Leave it as 127.0.0.1:8000 for now. When we deploy our server in the next chapter, we will change this.
    • DEFAULT_FROM_EMAIL: This is the email address that appears in the From field of emails sent by Django.
    • EMAIL_HOST: This is the host name of your email server. If you are using a development machine that doesn’t run a mail server (which is most likely the case), then you need to put your ISP’s outgoing email server here.
      Contact your ISP for more information.
    • EMAIL_PORT: This refers to the port number of the outgoing email server. If you leave it empty, the default value (25) will be used. You also need to obtain this from your ISP.
    • EMAIL_HOST_USER and EMAIL_HOST_PASSWORD: These are the username and password of your account on the outgoing email server. Leave both fields empty if your ISP does not require authentication.

    To verify that your settings are correct, launch the interactive shell and enter
    the following:
    [code lang="python"]
    >>> from django.core.mail import send_mail
    >>> send_mail('Subject', 'Body of the message.',
    'from@example.com',
    ['your_email@example.com'])
    [/code]
    Replace your_email@example.com with your actual email address. If the above call to send_mail does not raise an exception and you receive the email, then all is set. Otherwise, you need to verify your settings with your ISP and try again.

    Once the settings are correct, sending an email in Django is a piece of cake! We will use send_mail to send the invitation email. But first, let’s create a data model for storing invitations.

    The invitation data model

    An invitation consists of the following information:

    • Recipient name
    • Recipient email
    • The User object of the sender

    We also need to store an activation code for the invitation. This code will be sent in the invitation email. The code will serve two purposes:

    • Before accepting the invitation, we can use the code to verify that the invitation actually exists in the database
    • After accepting the invitation, we can use the code to retrieve the invitation information from the database and create friendship relationships between the sender and the recipient

    With this in mind, let’s create the Invitation data model. Open the bookmarks/models.py file and append the following code to it:
    [code lang="python"]
    class Invitation(models.Model):
        name = models.CharField(max_length=50)
        email = models.EmailField()
        code = models.CharField(max_length=20)
        sender = models.ForeignKey(User)

        def __unicode__(self):
            return u'%s, %s' % (self.sender.username, self.email)
    [/code]
    There shouldn't be anything new or difficult to understand in this model. We simply defined fields for the recipient name, recipient email, activation code, and the sender of the invitation. We also created a __unicode__ method for debugging. Do not forget to run manage.py syncdb to create the new model's table in the database.
    Next, we will add a method for sending the invitation email. The method will use classes and methods from several packages. So, put the following import statements at the beginning of the bookmarks/models.py file, and append the send method to the Invitation data model in the same file:
    [code lang="python"]
    from django.core.mail import send_mail
    from django.template.loader import get_template
    from django.template import Context
    from django.conf import settings

    class Invitation(models.Model):
        [...]
        def send(self):
            subject = u'Invitation to join Django Bookmarks'
            link = 'http://%s/friend/accept/%s/' % (
                settings.SITE_HOST,
                self.code
            )
            template = get_template('invitation_email.txt')
            context = Context({
                'name': self.name,
                'link': link,
                'sender': self.sender.username,
            })
            message = template.render(context)
            send_mail(
                subject, message,
                settings.DEFAULT_FROM_EMAIL, [self.email]
            )
    [/code]
    The method works by loading a template called invitation_email.txt and passing the following variables to it: the name of the recipient, the activation link, and the sender's username. The template is then used to render the body of the invitation email. After that, we use send_mail to send the email, as we did during the interactive session in the previous section.
    There are several observations to make here:

    • The format of the activation link is http://SITE_HOST/friend/accept/CODE/. We will write a view to handle such URLs later in this section.
    • This is the first time we use a template to render something other than a web page. As you can see, the template system is quite flexible and allows us to build emails as well as web pages, or any other text.
    • We used the get_template and render methods to build the message body, as opposed to the usual render_to_response call. If you remember, this is how we rendered templates early in the book. We are doing this here because we are not rendering a web page.
    • The last parameter of send_mail is a list of recipient emails. Here we are passing only one email address, but if you want to send the same email to multiple users, you can pass all of their addresses in one list to send_mail, as in the sketch below.
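    For instance, here is a variation of the shell test from the previous section that sends one message to two recipients; the addresses are placeholders:
    [code lang="python"]
    >>> from django.core.mail import send_mail
    >>> send_mail('Subject', 'Body of the message.',
    'from@example.com',
    ['first@example.com', 'second@example.com'])
    [/code]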

    Since the send method loads a template called invitation_email.txt, create a file with this name in the templates folder and insert the following content into it:
    [code]
    Hi {{ name }},
    {{ sender }} invited you to join Django Bookmarks,
    a website where you can post and share your bookmarks with friends!
    To accept the invitation, please click the link below:
    {{ link }}
    -- Django Bookmarks Team
    [/code]
    Now that we have written the send method, our Invitation data model is ready. Next, we will create a form that allows users to send invitations.

    The Invite A Friend form and view

    The next step in implementing the "Invite a friend" feature is providing users with a form to enter their friends' details and invite them. We will create this form now. The task is quite similar to building the other forms we have created throughout this book.

    First, let’s create a Form class that represents our form. Open the bookmarks/forms.py file and add this class to it:
    [code lang="python"]
    class FriendInviteForm(forms.Form):
        name = forms.CharField(label=u'Friend\'s Name')
        email = forms.EmailField(label=u'Friend\'s Email')
    [/code]
    This form is simple. We only ask the user to enter the friend’s name and email. Let’s create a view to display and handle this form. Open the bookmarks/views.py file and append the following code to it:
    [code lang="python"]
    @login_required
    def friend_invite(request):
        if request.method == 'POST':
            form = FriendInviteForm(request.POST)
            if form.is_valid():
                invitation = Invitation(
                    name=form.cleaned_data['name'],
                    email=form.cleaned_data['email'],
                    code=User.objects.make_random_password(20),
                    sender=request.user
                )
                invitation.save()
                invitation.send()
                return HttpResponseRedirect('/friend/invite/')
        else:
            form = FriendInviteForm()

        variables = RequestContext(request, {
            'form': form
        })
        return render_to_response('friend_invite.html', variables)
    [/code]
    Again, the view is similar to the other form-processing views in our application. If a valid form is submitted, it creates an Invitation object and sends it. We used a method called make_random_password in User.objects to generate an activation code for the invitation. This method can be used to create random passwords. It takes the length of the password as a parameter and returns a random alphanumeric password.
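    A quick interactive-shell sketch; the output shown here is made up, and you will get a different random string on every call:
    [code lang="python"]
    >>> from django.contrib.auth.models import User
    >>> User.objects.make_random_password(20)
    'pXm3kQ9vRtL7wZa2cNb4'
    [/code]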

    After this, we will add a template for the view. Create a file called friend_invite.html in the templates folder with the following code:
    [code lang="html"]
    {% extends "base.html" %}
    {% block title %}Invite A Friend{% endblock %}
    {% block head %}Invite A Friend{% endblock %}
    {% block content %}
    Enter your friend's name and email below,
    and click "send invite" to invite your friend to join the site:
    <form method="post" action=".">
    {{ form.as_p }}
    <input type="submit" value="send invite" />
    </form>
    {% endblock %}
    [/code]
    As you can see, the template displays a help message and the form below it. Finally, we will add a URL entry for this view, so open the urls.py file and add the highlighted line to it:
    [code lang="python"]
    urlpatterns = patterns('',
        [...]
        # Friends
        (r'^friends/(\w+)/$', friends_page),
        (r'^friend/add/$', friend_add),
        <b>(r'^friend/invite/$', friend_invite),</b>
    )
    [/code]
    The Invite A Friend view is now ready. Open http://127.0.0.1:8000/friend/invite/ in your browser, and you will see a form similar to the following screenshot:

    Try to send an invitation to your email address. If everything is working correctly, you will receive an invitation with an activation link similar to the following screenshot:

    We are half-way through implementing the "Invite a friend" feature. At the moment, clicking the activation link produces a 404 page not found error, so we will now write a view to handle it.

    Handling activation links

    We have made good progress; users are now able to send invitations to their friends via email. The next step is building a mechanism for handling activation links in invitations. Here is an outline of what we are going to do:

    • We will build a view that handles activation links. This view verifies that the invitation code actually exists in the database, stores the invitation ID in the user’s session, and redirects to the registration page.
    • When the user registers an account, we check to see if they have an invitation ID in their session. If this is the case, we retrieve the Invitation object for this ID, and build friendship relationships between the user and the sender of the invitation.

    Let’s start by writing a URL entry for the view. Open the urls.py file and add the highlighted line from the following code to it:
    [code lang="python"]
    urlpatterns = patterns('',
        [...]
        # Friends
        (r'^friends/(\w+)/$', friends_page),
        (r'^friend/add/$', friend_add),
        (r'^friend/invite/$', friend_invite),
        <b>(r'^friend/accept/(\w+)/$', friend_accept),</b>
    )
    [/code]
    As you can see, the URL entry follows the activation-link format used in invitation emails. The activation code is captured from the URL using a regular expression, and it will be passed to the view as a parameter. Next, we will write the view. Open the bookmarks/views.py file and create the following view in it:
    [code lang="python"]
    def friend_accept(request, code):
        invitation = get_object_or_404(Invitation, code__exact=code)
        request.session['invitation'] = invitation.id
        return HttpResponseRedirect('/register/')
    [/code]
    The view is short and concise. It tries to retrieve the Invitation object that corresponds to the requested code (generating a 404 error if the code does not exist). After that, it stores the ID of the object in the user’s session. Lastly, it redirects to the registration page.

    This is the first time that we use sessions in our application. Django provides an easy-to-use session framework to store and retrieve data for each visitor. Data is stored on the server and can be accessed in views by using a dictionary-like object available at request.session.

    The session framework is enabled by default in the settings.py file. You can verify this by looking for 'django.contrib.sessions' in the INSTALLED_APPS variable.

    You can use request.session to do the following:

    • Store a key-value pair: request.session[key] = value
    • Retrieve a value by providing its key: value = request.session[key]. This raises KeyError if the key does not exist.
    • Check whether the session contains a particular key:
      if key in request.session:

    Each visitor has their own session dictionary. Sessions are useful for maintaining data across requests, especially for anonymous users. Unlike cookies, session data is stored on the server side, so it cannot be tampered with.
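    To illustrate, here is a minimal sketch of a hypothetical view (not part of our bookmarks application) that uses the session dictionary to count how many times a visitor has requested a page:
    [code lang="python"]
    from django.http import HttpResponse

    def visit_counter(request):
        # Read the current count; using get with a default avoids the
        # KeyError mentioned above for first-time visitors.
        count = request.session.get('visits', 0) + 1
        # Store the updated count back in the visitor's session.
        request.session['visits'] = count
        return HttpResponse('You have visited this page %d times.' % count)
    [/code]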

    All of these properties make sessions ideal for passing the invitation ID to the register_page view. After this quick overview of the session framework, let’s get back to our current task. Now that the friend_accept view is ready, we will modify the register_page view a little to make use of the invitation ID in the user’s session. If the ID exists, we will create friendship relations between the user and the sender, and delete the invitation to prevent reusing it. Open the bookmarks/views.py file and add the highlighted lines from the following code:
    [code lang="python"]
    def register_page(request):
        if request.method == 'POST':
            form = RegistrationForm(request.POST)
            if form.is_valid():
                user = User.objects.create_user(
                    username=form.cleaned_data['username'],
                    password=form.cleaned_data['password1'],
                    email=form.cleaned_data['email']
                )
                <b>if 'invitation' in request.session:
                    # Retrieve the invitation object.
                    invitation = Invitation.objects.get(
                        id=request.session['invitation']
                    )
                    # Create friendship from user to sender.
                    friendship = Friendship(
                        from_friend=user,
                        to_friend=invitation.sender
                    )
                    friendship.save()
                    # Create friendship from sender to user.
                    friendship = Friendship(
                        from_friend=invitation.sender,
                        to_friend=user
                    )
                    friendship.save()
                    # Delete the invitation from the database and session.
                    invitation.delete()
                    del request.session['invitation']</b>
                return HttpResponseRedirect('/register/success/')
        else:
            form = RegistrationForm()
        variables = RequestContext(request, {
            'form': form
        })
        return render_to_response('registration/register.html', variables)
    [/code]
    The highlighted code should be easy to understand. It starts by checking for an invitation ID in the user's session. If there is one, it creates friendship relationships in both directions between the sender of the invitation and the current user. After that, it deletes the invitation and removes its ID from the session.

    Feel free to create a link to the Invite A Friend page. The Friends list page is a good place to do so. Open the templates/friends_page.html file and add the highlighted line from the following code:
    [code lang="xml"]
    {% extends "base.html" %}
    {% block title %}Friends for {{ username }}{% endblock %}
    {% block head %}Friends for {{ username }}{% endblock %}
    {% block content %}
    <h2>Friend List</h2>
    {% if friends %}
    <ul class="friends">
    {% for friend in friends %}
    <li><a href="/user/{{ friend.username }}/">
    {{ friend.username }}</a></li>
    {% endfor %}
    </ul>
    {% else %}
    <p>No friends found.</p>
    {% endif %}
    <b><a href="/friend/invite/">Invite a friend!</a></b>
    <h2>Latest Friend Bookmarks</h2>
    {% include "bookmark_list.html" %}
    {% endblock %}
    [/code]
    This should be all that we need to do to implement the "Invite a friend" feature. It was a bit long, but we were able to put various areas of our Django knowledge to good use while implementing it. You can now click on the invitation link that you received via email to see what happens: you will be redirected to the registration page. Create a new account there, log in, and notice how the new account and your original one have become friends with each other.

    Improving the interface with messages

    Although our implementation of user networks is working correctly, there is something missing. The interface does not tell the user whether an operation succeeded or failed. After sending an invitation, for example, the user is redirected back to the invitation form, with no feedback on whether the operation was successful or not. In this section, we are going to improve our interface by providing status messages to the user after performing certain actions.

    Displaying messages to users is done using the message API, which is part of the authentication system. The API is simple. To create a message, you can use the following call:
    [code lang="python"]
    request.user.message_set.create(
        message=u'Message text goes here.'
    )
    [/code]
    This call will create a message and store it in the database. Available messages are accessible from within templates through the variable messages. The following code iterates over messages and displays them in a list:
    [code lang="xml"]
    {% if messages %}
    <ul>
    {% for message in messages %}
    <li>{{ message }}</li>
    {% endfor %}
    </ul>
    {% endif %}
    [/code]
    This information covers all that we need to utilize the message framework in our project. Let’s start by placing the above template code in the base template of our application. Open the templates/base.html file and add the highlighted section of the following code:
    [code lang="xml"]
    <body>
    <div id="nav">
    [...]
    </div>
    <h1>{% block head %}{% endblock %}</h1>
    <b>{% if messages %}
    <ul class="messages">
    {% for message in messages %}
    <li>{{ message }}</li>
    {% endfor %}
    </ul>
    {% endif %}</b>
    {% block content %}{% endblock %}
    </body>
    </html>
    [/code]
    We placed the code below the heading of the page. To give messages a distinctive look, add the following CSS code to the site_media/style.css file:
    [code lang="css"]
    ul.messages {
        border: 1px dashed #000;
        margin: 1em 4em;
        padding: 1em;
    }
    [/code]
    And that's about it. We can now create messages, and they will be displayed automatically. Let's start with sending invitations. Open the bookmarks/views.py file and modify the friend_invite view as follows:
    [code lang="python"]
    <b>import smtplib</b>

    @login_required
    def friend_invite(request):
        if request.method == 'POST':
            form = FriendInviteForm(request.POST)
            if form.is_valid():
                invitation = Invitation(
                    name=form.cleaned_data['name'],
                    email=form.cleaned_data['email'],
                    code=User.objects.make_random_password(20),
                    sender=request.user
                )
                invitation.save()
                <b>try:
                    invitation.send()
                    request.user.message_set.create(
                        message=u'An invitation was sent to %s.' %
                        invitation.email
                    )
                except smtplib.SMTPException:
                    request.user.message_set.create(
                        message=u'An error happened when '
                        u'sending the invitation.'
                    )</b>
                return HttpResponseRedirect('/friend/invite/')
        else:
            form = FriendInviteForm()
        variables = RequestContext(request, {
            'form': form
        })
        return render_to_response('friend_invite.html', variables)
    [/code]
    The highlighted code works as follows: send_mail raises an exception if it fails, so we wrap the call to invitation.send in a try/except block. The user is then notified accordingly.

    You can try the new message system now. First, send an invitation and notice how a message appears confirming the success of the operation. Next, change the EMAIL_HOST option in the settings.py file to an invalid value and try sending an invitation again. You should see a message indicating failure this time. Our interface is more responsive now. Users know exactly what’s going on.

    You can do the same for the friend_add view. Open the bookmarks/views.py file and modify the view like this:
    [code lang="python"]
    from django.db import IntegrityError

    @login_required
    def friend_add(request):
        if 'username' in request.GET:
            friend = get_object_or_404(
                User, username=request.GET['username']
            )
            friendship = Friendship(
                from_friend=request.user,
                to_friend=friend
            )
            <b>try:
                friendship.save()
                request.user.message_set.create(
                    message=u'%s was added to your friend list.' %
                    friend.username
                )
            except IntegrityError:
                # Saving a duplicate friendship violates the
                # unique_together constraint on the Friendship model.
                request.user.message_set.create(
                    message=u'%s is already a friend of yours.' %
                    friend.username
                )</b>
            return HttpResponseRedirect(
                '/friends/%s/' % request.user.username
            )
        else:
            raise Http404
    [/code]
    The highlighted code displays a success message if the call to friendship.save was successful. If the call raises an IntegrityError instead, the unique_together condition was violated, which means that the requested user is already a friend of the current user. An error message that says so is displayed.

    The message API is simple, yet effective. You can use it for all sorts of things, such as displaying status messages, errors, notifications, and so on. Try to utilize it in other parts of the application if you want, such as after adding or editing a bookmark.
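    For example, here is a one-line sketch of a confirmation message after saving a bookmark; the placement inside your bookmark-saving view is hypothetical:
    [code lang="python"]
    # Hypothetical: confirm a successful save to the user
    # (placed after the save call in the bookmark-saving view).
    request.user.message_set.create(
        message=u'Your bookmark was saved successfully.'
    )
    [/code]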

    Summary

    We developed an important set of features for our project in this chapter. Friend networks are very important in helping users to socialize and share interests. These features are common in Web 2.0 applications, and now you are able to incorporate them into any Django web site.

    Here is a quick summary of the Django features covered in this chapter:

    • To manually specify a name for the related attribute in a data model, pass a keyword argument called related_name to the field that creates the relationship between models (see the sketch after this list).
    • You can specify several options for data models by defining a class called Meta inside the data model. Some of the possible attributes in this class are: db_table, ordering, permissions, and unique_together.
    • To send an email in Django, use the send_mail function. It’s available from the django.core.mail package.
    • The Django session framework provides a convenient way to store and retrieve user data across requests. The request.session object provides a dictionary-like interface to interact with session data.
    • To create a message, use the following method call:
      request.user.message_set.create.
    • To display messages in a template, use the template variable messages.
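    To tie the first two points together, here is roughly what the Friendship model from earlier in the chapter looks like. This is a sketch, not the book's exact code; in particular, the related_name values are assumptions based on how the model is used in this chapter:
    [code lang="python"]
    class Friendship(models.Model):
        from_friend = models.ForeignKey(User, related_name='friend_set')
        to_friend = models.ForeignKey(User, related_name='to_friend_set')

        class Meta:
            # Each (from_friend, to_friend) pair may appear only once,
            # which is what makes duplicate saves raise IntegrityError.
            unique_together = (('from_friend', 'to_friend'),)
    [/code]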


    In the next chapter, we will learn about improving various aspects of our application, mainly performance and localization. We will also learn how to deploy our project on a production server. The next chapter comes with a lot of useful information, so keep reading!
