Channel: Suryakand's Blog

JAVA: How to capture java exception stack trace in String

Many a time you might need to capture a Java exception's stack trace in a String. Here is a small code snippet to do that:
StringWriter sw = new StringWriter();
PrintWriter pw = new PrintWriter(sw, true);
throwable.printStackTrace(pw);
String stackTrace = sw.getBuffer().toString();
The same technique can be used to capture output from other writers/streams and assign it to a String or StringBuffer.
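Here is the snippet wrapped into a small runnable helper class (the class and method names are my own, for illustration):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class StackTraceToString {
    // Render any Throwable's full stack trace into a String
    public static String stackTraceOf(Throwable t) {
        StringWriter sw = new StringWriter();
        PrintWriter pw = new PrintWriter(sw, true);
        t.printStackTrace(pw);
        return sw.getBuffer().toString();
    }

    public static void main(String[] args) {
        String trace = stackTraceOf(new IllegalStateException("boom"));
        // First line of the trace: java.lang.IllegalStateException: boom
        System.out.println(trace.split("\n")[0]);
    }
}
```

Once captured as a String, the trace can be logged, stored, or sent over the wire like any other text.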

Servlet Filter in OSGi enabled web application

Servlet filters are an important component of web applications and can be used in different scenarios, e.g.:

1) Authenticating requests (ServletRequest) before delegating them to your actual business class (Servlet/Controller).
2) Formatting request (ServletRequest) and response data (ServletResponse)
3) Global/Application level Error handling

There are many more use cases where we can use servlet filters.
A traditional web application is composed of business classes and data (model), JSPs (views), controllers/servlets, and a few other resources like CSS, images, etc. All these components are packaged as a WAR file and deployed on a web/application server. To add/configure a servlet filter in a traditional web application we use the web.xml configuration file and the filter tags, like this:


<filter>
    <filter-name>springSecurityFilterChain</filter-name>
    <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>

<filter-mapping>
    <filter-name>springSecurityFilterChain</filter-name>
    <url-pattern>/*</url-pattern>
    <dispatcher>REQUEST</dispatcher>
    <dispatcher>ERROR</dispatcher>
</filter-mapping>



An OSGi web application is different from a traditional web application. Below are a few major differences:

1) Traditional web applications are deployed on a web/application server and run under a servlet container, whereas OSGi web applications are deployed in an OSGi container that exposes the servlet container as an OSGi HTTP service.
2) In a traditional web application the required libraries/JARs are part of the WAR file, whereas in an OSGi web application JAR files are installed as OSGi bundles in the OSGi container.
3) In a traditional web application filters are configured in web.xml, whereas in an OSGi web application filters are configured/registered using the OSGi HTTP service.

As I mentioned, in an OSGi web application we don't configure filters in a web.xml file; instead we register them through the OSGi HTTP service. There are two different ways to do this:

1) Create a filter and register it with the OSGi HTTP service (ExtHttpService)
2) Create a filter and make it an OSGi component (whiteboard method)

In this post we are going to create a simple filter that encodes all request parameters using UTF-8 before passing the request on to a servlet/controller. We have used this concept in Adobe/Day CQ5, which uses Apache Felix as its OSGi framework for managing OSGi components and services.

1) Create a Filter and register it with the OSGi HTTP service
a) Create a class that implements javax.servlet.Filter and add your logic to the doFilter() method. If required, also implement init() and destroy() for initialization and cleanup activities.

public class CharacterEncodingFilter implements Filter {
    private String encoding = "UTF-8";
    private boolean forceEncoding = true;

    public void init(FilterConfig filterConfig) throws ServletException {
        String encoding = filterConfig.getInitParameter("init.charencoding.filter.encoding");
        if (encoding != null && encoding.trim().length() > 0) {
            this.encoding = encoding;
        }

        String force = filterConfig.getInitParameter("init.charencoding.filter.forceencoding");
        if (force != null) {
            this.forceEncoding = Boolean.parseBoolean(force);
        }
    }

    public void doFilter(ServletRequest request, ServletResponse response,
            FilterChain chain) throws IOException, ServletException {
        ExtraParamWrappedRequest reqwrapper = new ExtraParamWrappedRequest((HttpServletRequest) request, null);

        if (this.encoding != null && (this.forceEncoding || request.getCharacterEncoding() == null)) {
            Map<String, String[]> additionalParams = new HashMap<String, String[]>();
            additionalParams.put("_charset_", new String[] { encoding });
            reqwrapper = new ExtraParamWrappedRequest((HttpServletRequest) request, additionalParams);
            reqwrapper.setCharacterEncoding(this.encoding);

            if (this.forceEncoding) {
                response.setCharacterEncoding(this.encoding);
            }
        }

        chain.doFilter(reqwrapper, response);
    }

    public void destroy() {
        // nothing to clean up
    }

    class ExtraParamWrappedRequest extends HttpServletRequestWrapper {
        private final Map<String, String[]> modifiableParameters;
        private Map<String, String[]> allParameters = null;

        public ExtraParamWrappedRequest(final HttpServletRequest request, final Map<String, String[]> additionalParams) {
            super(request);
            modifiableParameters = new TreeMap<String, String[]>();
            if (additionalParams != null) {
                modifiableParameters.putAll(additionalParams);
            }
        }

        @Override
        public String getParameter(final String name) {
            return getRequest().getParameter(name);
        }

        @Override
        public Map<String, String[]> getParameterMap() {
            if (allParameters == null) {
                // merge the original parameters with the additional ones
                allParameters = new TreeMap<String, String[]>();
                allParameters.putAll(super.getParameterMap());
                allParameters.putAll(modifiableParameters);
            }
            return Collections.unmodifiableMap(allParameters);
        }

        @Override
        public Enumeration<String> getParameterNames() {
            return Collections.enumeration(getParameterMap().keySet());
        }

        @Override
        public String[] getParameterValues(final String name) {
            return getParameterMap().get(name);
        }
    }
}

b) Every OSGi bundle has an Activator class. In our Activator class we need to register the filter created in step (a) with the OSGi HTTP service, by getting a reference to the HTTP service (ExtHttpService).


public class Activator implements BundleActivator {
private static final Logger log = LoggerFactory.getLogger(Activator.class);

public void start(BundleContext context) throws Exception {
ServiceReference sRef = context.getServiceReference(ExtHttpService.class.getName());

if (sRef != null) {
Dictionary<String, String> properties = new Hashtable<String, String>();
properties.put("service.pid", CharacterEncodingFilter.class.getName());
properties.put("init.charencoding.filter.encoding", "UTF-8");
properties.put("init.charencoding.filter.forceencoding", "true");
ExtHttpService service = (ExtHttpService) context.getService(sRef);
service.registerFilter(new CharacterEncodingFilter(), "/.*", properties, 0, null);
}
}

public void stop(BundleContext context) throws Exception {
System.out.println(context.getBundle().getSymbolicName() + " stopped");
}

}


2) Create a Filter and make it an OSGi component

The second method of registering a filter with the OSGi container is to create it as an OSGi component/service. Below are the steps we need to follow:

a) Create a class that implements javax.servlet.Filter and add your logic to the doFilter() method; if required, implement init() and destroy() as well. Basically, we can reuse the same class (CharacterEncodingFilter) created above; the only difference in this method is the way the filter is registered.

b) Get hold of the OSGi bundle context and register the filter as a service, as shown below:

public class Activator implements BundleActivator {
    private ServiceRegistration registration;

    public void start(BundleContext context) throws Exception {
        Hashtable<String, String> props = new Hashtable<String, String>();
        props.put("pattern", "/.*");
        props.put("init.message", "Character encoding filter!");
        props.put("service.ranking", "1");
        this.registration = context.registerService(Filter.class.getName(), new CharacterEncodingFilter(), props);
    }

    public void stop(BundleContext context) throws Exception {
        this.registration.unregister();
    }
}

We have seen two different ways of registering a filter for a web application running under an OSGi container. So, what's the difference between these two methods?

An OSGi container may contain multiple OSGi web applications, and each web application runs under its own bundle context. The HTTP service manages HTTP requests/responses for all web applications deployed under an OSGi container. The first method registers the filter at the HTTP service level (global), so it applies to all requests/responses across all web applications deployed in the OSGi container, whereas the second method registers a filter at the bundle (individual web application) level, so it executes only when a request is for that particular application/bundle. When we have multiple filters, they can be ordered as needed (which one executes first, and so on) at the HTTP service level (the fourth argument of service.registerFilter) and at the individual application bundle level (the service.ranking property).
I hope this helps you understand how servlet filters work in OSGi enabled web applications.

References:
http://felix.apache.org/site/apache-felix-http-service.html
http://www.eclipse.org/equinox/server/


Thanks
-- Surya

OSGi and Modular Java Applications

What is Modular Application?
In simple words, a modular application is one divided into many independent/isolated functional or non-functional modules/components (user interface or business logic). Modules can be installed and uninstalled dynamically from the application's core framework. When a module is installed, the application serves the features/functionality available in that module. Similarly, when a module is uninstalled, its features/functionality are removed without affecting the rest of the application.

The Eclipse IDE is a well-known Java editor and a very good example of a modular application. Whenever we need a feature, we install the appropriate plug-in (i.e. module) and that feature becomes available in Eclipse. We can also uninstall a plug-in if we don't need a specific feature; uninstalling a plug-in only removes that feature without affecting other plug-ins or the editor itself.

Modular applications are more flexible and extensible than traditional applications, and the concept of modularization can be applied to a wide range of applications.

Benefits of Modular Application
You are probably already building a well-architected application using assemblies, interfaces, and classes, and employing good object-oriented design principles. Even so, unless great care is taken, your application design may still be "monolithic" (where all the functionality is implemented in a tightly coupled way within the application), which can make the application difficult to develop, test, extend, and maintain.

The modular application approach, on the other hand, can help you to identify the large scale functional areas of your application and allow you to develop and test that functionality independently. This can make development and testing easier, but it can also make your application more flexible and easier to extend in the future. The benefit of the modular approach is that it can make your overall application architecture more flexible and maintainable because it allows you to break your application into manageable pieces. Each piece encapsulates specific functionality, and each piece is integrated through clear but loosely coupled communication channels.

OSGi, JBoss Modules, JSF2 & CDI are some example technologies for developing modular Java applications. OSGi is the most popular, with a clear specification, and we can use it as the underlying framework for developing modular Java applications (web as well as desktop). Eclipse, GlassFish, Apache Sling, DataNucleus, Adobe CQ, and Atlassian Confluence and JIRA are some well-known examples that use the OSGi framework for modularization.

What is OSGi?
OSGi (Open Services Gateway initiative) is a specification. The core of the OSGi specification defines a component and service model for Java. The components and services can be dynamically activated, de-activated, updated and uninstalled.

A very practical advantage of OSGi is that every bundle must define its exported Java packages and its required dependencies. This way you can effectively control the provided API and the dependencies of your plug-ins/bundles.
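The exported packages and dependencies are declared in the bundle's manifest. As a rough sketch (bundle and package names below are illustrative, not from the article), the relevant headers might look like:

```text
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.banking.taxation
Bundle-Version: 1.0.0
Export-Package: com.example.banking.taxation.api
Import-Package: org.osgi.framework;version="[1.5,2)"
```

Only the packages listed in Export-Package are visible to other bundles; everything else stays internal.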

OSGi has a layered model, depicted in the following figure.

Figure 1: OSGi Building Blocks

The following list contains a short definition of the terms:
• Bundles - Bundles are the OSGi components made by developers. From a technical point of view, bundles are JAR files with a slightly extended META-INF/MANIFEST.MF file (and optionally OSGI-INF metadata).
• Services - The services layer connects bundles in a dynamic way by offering a publish-find-bind model for plain old Java objects.
• Life-Cycle - The API to install, start, stop, update, and uninstall bundles.
• Modules - The layer that defines how a bundle can import and export code.
• Security - The layer that handles the security aspects.
• Execution Environment - Defines what methods and classes are available in a specific platform.

OSGi has several implementations, for example Equinox (used by the Eclipse IDE), Knopflerfish OSGi, and Apache Felix. The core concepts of OSGi remain the same across all implementations; only certain services and features vary from one implementation to another.


We'll take an example use case to see how OSGi can be used to develop a modular Java application. A modular application is itself composed of various loosely coupled bundles/plug-ins; every bundle developed using OSGi and used by the application will show up in the "Bundles" section of Figure 1.

Benefits of OSGi
1. Reduced Complexity - Developing with OSGi technology means developing bundles: the OSGi components. Bundles are modules. They hide their internals from other bundles and communicate through well-defined services.
2. Reuse - The OSGi component model makes it very easy to use many third party components in an application.
3. Dynamic Updates - The OSGi component model is a dynamic model. Bundles can be installed, started, stopped, updated, and uninstalled without bringing down the whole system.
4. Adaptive - The OSGi component model is designed from the ground up to allow the mixing and matching of components. This requires that the dependencies of components need to be specified and it requires components to live in an environment where their optional dependencies are not always available. The OSGi service registry is a dynamic registry where bundles can register, get, and listen to services. This dynamic service model allows bundles to find out what capabilities are available on the system and adapt the functionality they can provide.
5. Transparency - Bundles and services are first class citizens in the OSGi environment. The management API provides access to the internal state of a bundle as well as how it is connected to other bundles.
6. Supported by Key Companies - OSGi counts some of the largest computing companies from a diverse set of industries as its members. Members are from: Oracle, IBM, Samsung, Nokia, Progress, Motorola, NTT, Siemens, Hitachi, Deutsche Telekom, Redhat, Ericsson, and many more.

Disadvantages of OSGi
1. Every dependency/JAR/bundle that we want to use in an OSGi container must include OSGi META information for exported services and packages. E.g., if we want to write a plug-in for accessing SOAP web services using Apache Axis, then we have to make sure that axis.jar is OSGi enabled (contains OSGi metadata along with META-INF); otherwise we cannot use it inside the OSGi container.
2. We cannot use traditional/existing J2EE infrastructure & concepts directly. Everything in OSGi is a module/plug-in; even a web application (WAR) is deployed inside the OSGi container (not on a web server) as a bundle and exposed to end users via the OSGi HTTP Service bridge.
3. OSGi is useful for building modular applications, but this does not come for free. It is a rather heavyweight standard that may disallow some things and force you to do things "the OSGi way".

Use Case 
Let's say we are developing a banking software product that will be used in various countries around the globe. Since it is a banking application, we have to consider some facts, e.g.:
1) Taxation rules vary from country to country.
2) Bank policies within a country may vary from bank to bank.
3) Customers from different countries may ask for a different look and feel.
4) Some paid features are only available in the premium version of the product.

If we develop this application using a traditional approach, we have to maintain a separate code base for individual customers/countries, and a lot of code gets replicated across code bases. Or we keep a single code base with lots of "ifs" and programmatic conditions to enable/disable features per customer, which is not a good practice.

How can OSGi help us develop this application? To develop an OSGi application we have to divide our application into two main parts:
1) Core Application Bundle - The core part of the application that manages loading/unloading of various plug-ins, provides access to the application's infrastructure (e.g. database), and glues the various bundles together. This part does not contain any business logic, user interface, or customer-specific implementation.
2) Modules (plug-ins) - Smaller building blocks that can be dynamically added to or removed from the application as and when required. Each module is responsible for specific features/functionality. In our case we'll have plug-ins/modules for country/customer-specific requirements:
a) Taxation
b) Banking policies
c) User interface
d) Premium features

The diagram below shows a very high-level structure of the banking application running inside an OSGi container, along with the bundles associated with it.


Figure 2: Banking Application inside OSGi Container

Bundles can come and go dynamically as and when required. E.g., if we want to remove a feature such as premium features from our banking application, we just need to uninstall the "premium features" bundle from the OSGi container, and the container will take care of removing all services, UI, and logic related to premium functionality. Similarly, if we want to upgrade the classic version of the product to the premium version, we just need to install the "premium features" bundle in the OSGi container. We don't need to rebuild the complete application; OSGi bundles can be installed and uninstalled dynamically.

Conclusion
OSGi is a great framework for developing modular applications, and we have witnessed many successful products around us developed using it. OSGi has been in the market for more than 10 years, but for a long time it was not popular among developers. Recently, OSGi has picked up momentum in the developer and business stakeholder communities.

Initially, OSGi may look complex and expensive because of the learning curve, the limited availability of experienced OSGi developers, and the time required during the design phase to break everything down into smaller functional units (plug-ins). But in the long term OSGi will pay off the initial investment. In today's fast-moving world of constantly changing business requirements, everyone wants to reduce time to market, and who would want to rewrite most of an application when it can adapt quickly by installing/uninstalling modules?

No single framework or technology can be set as the default development standard for all applications. Every application has different requirements; therefore, we need to think and glue various technologies together to come up with an optimized solution. OSGi has lots of potential and a great future, so it's worth considering for development.

Programming CSS Using LESS Preprocessor & Maven Integration


In this article I am going to introduce you to a couple of tools and development practices that are very cool from a UI development perspective, and I am sure you'll be tempted to adopt them for your next project.

You can download the full working example/source code for this article from my SVN repository on Google Code: http://suryakand.googlecode.com/svn/trunk/boilerplate/

For the last 7 years I have been working on web application development using Java/J2EE and various other frameworks. I am not a UI developer; my work is mainly focused on server-side development, which includes designing application frameworks and coding them in Java (web services, performance tuning, transaction management, etc.). Writing code takes less time, but designing stable and maintainable applications is something that needs more time. One of the many must-have qualities of software is reusability. The general rule of software development is "develop reusable components/code". It's very easy to achieve reusability when writing code/functionality in Java/.NET (or any other language that supports object-oriented programming, i.e. OOP), but how do we develop reusable and maintainable UI resources, mainly CSS and JS? This is the topic we are going to cover in this article. We'll mainly focus on CSS, but similar concepts and tools are available for JS as well.

CSS is the style sheet language used to format markup on web pages. CSS itself is extremely simple, consisting of rule sets and declaration blocks: what to style and how to style it. And it does pretty much everything you want, right? Well, not quite. As we all know, CSS is a static resource and has some limitations, like the inability to set variables or perform operations. This means we inevitably end up repeating the same pieces of styling in different places. That is not a good practice to follow, and it is difficult to maintain in the long run.

There is a solution out there to overcome some of these limitations: the CSS preprocessor. In simple terms, CSS preprocessing is a method of extending the feature set of CSS by first writing the style sheets in a new extended language, then compiling the code to classic CSS so that it can be read by web browsers. Several CSS preprocessors are available today, most notably Sass and LESS.

Ok, what's the deal and how is it useful to me? That might be your first question but, believe me, by the end of this tutorial you'll see the real benefit. Let's get our hands dirty. For better understanding, we'll integrate the LESS CSS preprocessor into a real application (developed using Spring MVC and Maven). Here is the focus of this article:

1) How to write CSS in the extended language that will be processed by the LESS preprocessor.
2) Integrate the LESS CSS preprocessor in a Maven project.
3) Use case & how the LESS preprocessor is beneficial to us.

Maven is a build tool that is used by a lot of developers/organizations for building and continuous integration of projects. If you want to read more about maven, please visit http://maven.apache.org/. Also, you can read more about LESS CSS preprocessor at http://lesscss.org/.

1.      How to write CSS in extended language that will be processed by LESS preprocessor.

LESS extends CSS with dynamic behavior such as variables, mixins, operations, and functions. We'll see a few examples in this section that will give us a better understanding of these features. It's like programming CSS, not just writing CSS (CSS developers, more fun is coming your way). The first thing you'll learn is that the file extension of a LESS-compatible CSS file is *.less (e.g. mysite.less). Let's see how to define variables and functions, and how to perform some operations:

a) Defining Variables: Here is an example that shows how to define variables in LESS. A variable name starts with “@” symbol.

/* ==== CSS LESS Variables ===*/
@default-font-family: Helvetica, Arial, sans-serif;
@default-radius: 8px;
@default-color: #5B83AD;



b) Defining Functions: Here is an example function definition in LESS. A function definition starts with a "." symbol. In the example below, "@top-left" and "@top-right" are function parameters.

/* Generic rounded corner function example */
.rounded-corners(@top-left: @default-radius, @top-right: @default-radius, @bottom-left: @default-radius, @bottom-right: @default-radius) {
-moz-border-radius: @top-left @top-right @bottom-right @bottom-left;
-webkit-border-radius: @top-left @top-right @bottom-right @bottom-left;
border-radius: @top-left @top-right @bottom-right @bottom-left;
}


c) Performing Operations: LESS allows us to perform operations like addition, subtraction, multiplication etc. Here are few examples that show how we can use variables/values to calculate CSS style attributes:

@base-height: 20%;
@body-height: (@base-height * 2);

@default-color: #5B83AD;
@dark-color: @default-color + 222;


d) Mixins: In LESS, it is possible to include properties from one CSS ruleset in another CSS ruleset, pretty much like extending a Java class from another Java class. So let's say we have a class ".general-text-color" and we want everything from this class, plus an additional style attribute, in a CSS rule for the "h4" tag. This is how we can do it using LESS:

.general-text-color {
color: @default-color;
}

h4 {
.general-text-color; /* Mixin example: including other CSS class in current style. See general-text-color style */
font-size: @heading;
}

Here is the example LESS CSS that I have developed for this article.

site.less (Before Preprocessing)

/* ============ CSS LESS Variables and Functions (START) ===========*/
@default-font-family: Helvetica, Arial, sans-serif;
@default-radius: 8px;
@default-color: #5B83AD;
@icon-pencil: 0 -144px;
@heading: 16px;
@base-height: 20%;
@body-height: (@base-height * 2);
@logo-height: 30px;
@logo-width: 30px;

/*Generic rounded corner function example*/
.rounded-corners(@top-left: @default-radius, @top-right: @default-radius, @bottom-left: @default-radius, @bottom-right: @default-radius) {
-moz-border-radius: @top-left @top-right @bottom-right @bottom-left;
-webkit-border-radius: @top-left @top-right @bottom-right @bottom-left;
border-radius: @top-left @top-right @bottom-right @bottom-left;
}

/* rounded corners function for well class */
.well-rounded-corners (@radius: @default-radius) {
.rounded-corners(@radius, @radius, 5px, 5px)
}
/* ================= END ===================*/


/* ================= Actual CSS Starts Here =============== */
body {
font-family: @default-font-family;
background-color: @default-color; /* background-color change example using css less. See variable default-color */
padding-bottom: 40px;
padding-top: 60px;
}

.icon-pencil {
background-position: @icon-pencil; /* background-position change example. See variable icon-pencil */
}

.well {
.well-rounded-corners; /* rounded corner change example using LESS function call. See function call well-rounded-corners */
}

.general-text-color {
color: @default-color;
}

h4 {
.general-text-color; /* Mixin example: including other CSS class in current style. See general-text-color style */
font-size: @heading;
}

.body-height {
height: @body-height; /* Operations example: body-height is calculated by multiplying base-height by 2 (base-height * 2) */
}

.logo {
height: @logo-height;
width: @logo-width;
}


This is what we get (in site.css) once site.less is preprocessed by LESS.

site.css (After Preprocessing)

body {
font-family: Helvetica, Arial, sans-serif;
background-color: #5b83ad;
padding-bottom: 40px;
padding-top: 60px;
}

.icon-pencil {
background-position: 0 -144px;
}

.well {
-moz-border-radius: 8px 8px 5px 5px;
-webkit-border-radius: 8px 8px 5px 5px;
border-radius: 8px 8px 5px 5px;
}

.general-text-color {
color: #5b83ad;
}

h4 {
color: #5b83ad;
font-size: 16px;
}

.body-height {
height: 40%;
}

.logo {
height: 30px;
width: 30px;
}


For the full set of features and examples please visit http://lesscss.org/#docs.


2. Integrate LESS CSS preprocessor in a Maven project

Ok, now you have a basic idea of what LESS is, how to write basic CSS using LESS, and what happens before and after CSS preprocessing. In this section we'll integrate the LESS preprocessor with a Maven project so that the CSS we have written (in the extended language) gets preprocessed automatically during the build to generate classic CSS understood by browsers.

We'll use the "lesscss-maven-plugin" Maven plugin for preprocessing our CSS. Here is the example configuration that you'll need to add to your pom.xml file to enable LESS CSS preprocessing:




<plugin>
    <groupId>org.lesscss</groupId>
    <artifactId>lesscss-maven-plugin</artifactId>
    <version>1.3.0</version>
    <configuration>
        <sourceDirectory>${project.basedir}/src/main/webapp/themes</sourceDirectory>
        <outputDirectory>${project.build.directory}/${project.build.finalName}/themes</outputDirectory>
        <compress>true</compress>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>compile</goal>
            </goals>
        </execution>
    </executions>
</plugin>


A few things to note:
i) "sourceDirectory" tag: the directory in our project where the .less files (CSS files coded in the LESS extended language) reside.
ii) "outputDirectory" tag: the destination directory where we want the final compiled CSS to be placed after the build finishes.
iii) "compress" tag: tells the LESS Maven plugin whether to compress/minify the preprocessed CSS.

For more configuration options visit https://github.com/marceloverdijk/lesscss-maven-plugin (the official site of the LESS Maven plugin).

Here is the Maven project structure for the example I have developed for this article. Download the full source code from http://suryakand.googlecode.com/svn/trunk/boilerplate.


Once the project is built, we'll get the preprocessed classic CSS in our target folder (as shown in the project structure above).

3) Use case & how LESS preprocessor is beneficial to us?

In the above example I have used LESS with Spring's THEME feature. Basically, I created a single LESS file for two themes, "default" and "olive", and defined variables that are replaced by the LESS preprocessor during the build to generate two theme-specific CSS files from a single source file (site.less). So one LESS file (written in LESS's extended language) gives us two CSS files just by preprocessing/replacing variables during the build.

Here are screen shots of both themes:

Fig. 3: Default Spring Theme


Fig. 4: Olive Spring Theme

Here are some situations that'll tempt you to use LESS in your next project:

i) Think of a situation where you have to add more themes to your project in the future. In the traditional way, you would copy an existing CSS file, manually find all the theme-specific places, and replace them with new values; with LESS you just change the variable values and the preprocessor does everything for us.

ii) Another use case: let's say a customer asks us to change the colors and fonts on all existing themes. If we have 20 themes (each CSS file 1000 lines long), manually finding and replacing is time-consuming and error-prone, but LESS reduces the time and leaves far less room for error.
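The theme setup described above can be sketched roughly as follows (the file names and the split into a shared rules file are hypothetical, not the article's actual layout). In LESS, variables declared before an @import are visible to the imported file, so each theme entry point can supply its own palette to the same shared rules:

```less
/* default.less — hypothetical entry point for the default theme */
@default-color: #5B83AD;      /* blue palette */
@import "site-core.less";     /* shared rules that reference @default-color */

/* olive.less — same shared rules, different palette */
@default-color: #808000;      /* olive palette */
@import "site-core.less";
```

Compiling each entry file then yields one classic CSS file per theme from a single set of rules.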

There are several other ways to integrate LESS with our projects. To read more about LESS and the different ways to use it, please visit: http://lesscss.org/#usage

Also, as I mentioned earlier, various preprocessors are available, but mainly LESS and Sass are used. What's the difference? Sass was designed to both simplify and extend CSS, so things like curly braces were removed from its syntax. LESS was designed to be as close to CSS as possible, so its syntax is identical to your current CSS code; this means you can use it right away with your existing code. Sass later introduced a CSS-like syntax called SCSS (Sassy CSS) to make migrating easier.

I hope this article helps you understand the basics of LESS and how to integrate the LESS preprocessor with Maven. If you have any questions or suggestions, please feel free to post them.

Thank you very much for reading. Happy Coding!

Working with Spring Embedded Database

Sometimes it is very useful to work with an in-memory database when you want to demonstrate certain database-centric features of an application during the development phase. Spring supports HSQL, H2, and Derby as default embedded databases, and you can also use an extensible API to plug in new embedded database types and "DataSource" implementations.

In this tutorial we’ll see how to configure and use an embedded database (HSQL) using Spring. Spring supports XML as well as programmatic configuration of beans. For simplicity, I’ll use XML-based configuration in this article. Here are the items that we’ll cover:

1.    Configuring an embedded database using Spring.
2.    Defining a simple domain object (User) and creating a simple DAO layer for accessing the underlying embedded database.
3.    Creating two Spring MVC controllers & views to create and display users.

I suggest you download the example code for this article from SVN before you start; this will help you follow the article and refer to the actual code. You can download the working code for this article from my SVN repository at Google: http://suryakand.googlecode.com/svn/trunk/boilerplate

I’ll be using Maven for this project. Before you start, add the following dependencies to your project’s pom.xml file:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-jdbc</artifactId>
    <version>${spring.version}</version>
</dependency>

<dependency>
    <groupId>org.hsqldb</groupId>
    <artifactId>hsqldb</artifactId>
    <version>2.2.9</version>
</dependency>
The above dependencies are specifically needed for HSQL DB and Spring’s JDBC template. You’ll also need to add other dependencies; for detailed configuration please refer to the actual pom.xml (it is self-explanatory) of the project (http://suryakand.googlecode.com/svn/trunk/boilerplate/pom.xml)

Once you are done with setting up pom.xml file, you’ll need to perform following steps to get up and running with an embedded database:

1.    Configure embedded database using Spring XML
Spring provides the “jdbc” namespace for easy and quick configuration of embedded JDBC DataSources. Here is an example configuration that you’ll need to add to the application context file:

<jdbc:embedded-database id="dataSource" type="HSQL">
    <jdbc:script location="classpath:schema.sql"/>
    <jdbc:script location="classpath:test-data.sql"/>
</jdbc:embedded-database>
•    “id”: the bean ID of our “dataSource” that we’ll reference from other bean definitions (the DAO) to get hold of the database.
•    “type”: the type of embedded database that we want to use. In this example we’ll be using HSQL DB.
•    “location”: the location where Spring will look for SQL script files to create the schema and insert sample data (if you want to). In this case we have stored the SQL scripts “schema.sql” (to create the database schema) and “test-data.sql” (to insert sample data) in Maven’s main/resources directory.

See "applicationContext.xml" file for more details.
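Since the embedded database is rebuilt from schema.sql on every startup, it helps to see what that script might contain. The DDL below is a hypothetical sketch inferred from the columns that the UserMapper class (shown later) reads; the actual script in the project may differ (the reset_pasword_code spelling mirrors the code):

```sql
-- Hypothetical schema.sql, inferred from the columns read in UserMapper
CREATE TABLE users (
  user_id            INTEGER IDENTITY PRIMARY KEY,
  group_id           INTEGER NOT NULL,
  username           VARCHAR(64) NOT NULL,
  password           VARCHAR(64) NOT NULL,
  first_name         VARCHAR(64),
  middle_name        VARCHAR(64),
  last_name          VARCHAR(64),
  phone_number       INTEGER,
  verification_code  VARCHAR(64),
  reset_pasword_code VARCHAR(64),
  password_question  VARCHAR(128),
  password_answer    VARCHAR(128)
);
```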

2.    Define a simple domain object & DAO
We’ll create a simple domain object (POJO) called as User.java and a DAO class UserDaoImpl.java that will facilitate representation of database rows as java objects and database access.

public class User {
private int userId;
private int groupId;
private String username;
private String password;
private String firstName;
private String middleName;
private String lastName;
private int phoneNumber;
private String verificationCode;
private String resetPaswordCode;
private String passwordQuestion;
private String passwordAnswer;

// Omitted getters and setter
}

UserDao Interface
public interface UserDao {
    public List<User> getAllUsers();
    public User getUserByUserName(String userName);
    public void createUser(User user);
}

UserDao Implementation
We’ll use the “dataSource” that we have defined in step 1 and spring’s “JdbcTemplate” for accessing database. For simplicity, I have used “JdbcTemplate” and have manually populated our domain object “User.java” but, you can use any ORM framework like Hibernate, MyBatis etc. to do this task in more elegant way.
public class UserDaoImpl implements UserDao {
    private DataSource dataSource;
    private JdbcTemplate jdbcTemplate;

    public void createUser(User user) {
        jdbcTemplate.update("insert into users (group_id,username,password,first_name," +
                "middle_name,last_name,phone_number) " +
                "values (?,?,?,?,?,?,?)",
                new Object[] {new Integer(1), user.getUsername(), user.getPassword(),
                        user.getFirstName(), user.getMiddleName(), user.getLastName(),
                        new Integer(user.getPhoneNumber())});
    }

    public List<User> getAllUsers() {
        return jdbcTemplate.query("SELECT * from users", new UserMapper());
    }

    public User getUserByUserName(String userName) {
        User user = null;

        if (StringUtils.isNotBlank(userName)) {
            List<User> users = jdbcTemplate.query("SELECT * from users where username = ?",
                    new UserMapper(), new Object[] {userName});

            if (users != null && users.size() > 0) {
                user = users.get(0);
            }
        }

        return user;
    }

    private static final class UserMapper implements RowMapper<User> {
        public User mapRow(ResultSet rs, int rowNum) throws SQLException {
            User user = new User();
            user.setUserId(rs.getInt("user_id"));
            user.setGroupId(rs.getInt("group_id"));
            user.setUsername(rs.getString("username"));
            user.setPassword(rs.getString("password"));
            user.setFirstName(rs.getString("first_name"));
            user.setMiddleName(rs.getString("middle_name"));
            user.setLastName(rs.getString("last_name"));
            user.setPhoneNumber(rs.getInt("phone_number"));
            user.setVerificationCode(rs.getString("verification_code"));
            user.setResetPaswordCode(rs.getString("reset_pasword_code"));
            user.setPasswordQuestion(rs.getString("password_question"));
            user.setPasswordAnswer(rs.getString("password_answer"));
            return user;
        }
    }

    public void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
        this.jdbcTemplate = new JdbcTemplate(this.dataSource);
    }

    public DataSource getDataSource() {
        return dataSource;
    }
}


Once the domain object and DAO is implemented, we’ll define a DAO in our bean definition file and will inject the “dataSource” dependency that we have defined in step 1.  Here is the DAO bean definition:
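A minimal sketch of such a bean definition (the package com.example.dao is a placeholder; use your project’s actual package for UserDaoImpl):

```xml
<bean id="userDao" class="com.example.dao.UserDaoImpl">
    <!-- injects the embedded database defined in step 1 -->
    <property name="dataSource" ref="dataSource"/>
</bean>
```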





3.    Define controllers and views for creating and displaying users

We’ll create two controllers “CreateUserController” and “UserListController” to create and display users respectively from a web page/browser.
@Controller
public class CreateUserController {
    private String viewName;
    private String createSucessView;
    private UserDao userDao;

    @RequestMapping(value = {"/user/create"}, method = RequestMethod.POST)
    public ModelAndView createUser(@ModelAttribute("userModel") User userModel) {
        userDao.createUser(userModel);
        ModelAndView mv = new ModelAndView(createSucessView);
        return mv;
    }

    @RequestMapping(value = {"/user/create"}, method = RequestMethod.GET)
    public ModelAndView createUserForm(User userModel) {
        ModelAndView mv = new ModelAndView(viewName);
        mv.addObject("userModel", new User());
        return mv;
    }

    @RequestMapping(value = {"/user/isavailbale"}, method = RequestMethod.POST)
    public @ResponseBody Boolean isUserNameAvailable(@RequestParam("username") String username) {
        User user = userDao.getUserByUserName(username);
        if (user != null) {
            return Boolean.FALSE;
        }
        return Boolean.TRUE;
    }
}


"Create User" Form
<%@ taglib uri="http://www.springframework.org/tags/form" prefix="form"%>
<%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c" %>
<%@ taglib uri="http://java.sun.com/jsp/jstl/fmt" prefix="fmt"%>
<html>
<head>
<title>Create New User</title>
</head>
<body>
<h1>Create New User</h1>
<form:form modelAttribute="userModel" method="POST">
    <form:label path="username">User Name</form:label> <form:input path="username"/><br/>
    <form:label path="password">Password</form:label> <form:password path="password"/><br/>
    <form:label path="firstName">First Name</form:label> <form:input path="firstName"/><br/>
    <form:label path="middleName">Middle Name</form:label> <form:input path="middleName"/><br/>
    <form:label path="lastName">Last Name</form:label> <form:input path="lastName"/><br/>
    <form:label path="phoneNumber">Phone Number</form:label> <form:input path="phoneNumber"/><br/>
    <input type="submit" value="Create User"/>
</form:form>
</body>
</html>

@Controller
public class UserListController {
    private String viewName;
    private UserDao userDao;

    @RequestMapping("/user/all")
    public ModelAndView showUserList() {
        List<User> userList = userDao.getAllUsers();

        ModelAndView mv = new ModelAndView(viewName);
        mv.addObject("userList", userList);

        return mv;
    }
}

User List View
<%@ taglib uri='http://java.sun.com/jsp/jstl/core' prefix='c'%>
<html>
<body>
<table>
    <tr>
        <th>User ID</th><th>First Name</th><th>User Name</th>
    </tr>
    <c:forEach var="userRow" items="${userList}">
    <tr>
        <td>${userRow.userId}</td><td>${userRow.firstName}</td><td>${userRow.username}</td>
    </tr>
    </c:forEach>
</table>
</body>
</html>

Fig. 1: Create User Form

Fig. 2: User List
I hope this article will help you understand and quickly get started with Spring’s embedded database feature. If you have any questions or suggestions, please feel free to post them.

Thank you very much for reading!

Adobe CQ/AEM Useful Links

  1. /crx/explorer/index.jsp  - CRX Explorer
  2. /crx/de/index.jsp – CRXDE Lite
  3. /damadmin     - DAMAdmin
  4. /libs/cq/search/content/querydebug.html – Query debug tool
  5. /libs/granite/security/content/admin.html – New user manager standalone ui  [5.6 only?]
  6. /libs/cq/contentsync/content/console.html – Content sync console
  7. /system/console/bundles – Felix web admin console
  8. /system/console/jmx/com.adobe.granite.workflow%3Atype%3DMaintenance - Felix web admin console JMX / Workflow maintenance tasks
  9. /system/console/jmx/com.adobe.granite%3Atype%3DRepository - Felix web admin console JMX / Repository maintenance tasks
  10. /system/console/depfinder – This new 5.6 tool will help you figure out which bundle exports a class and also prints a Maven dependency for the class.
  11. /libs/granite/ui/content/dumplibs.rebuild.html?rebuild=true – Helpful link for debugging caching problems. Wipes the clientlibs and designs and forces it to rebuild it. Thanks to Mark Ellis for this link.
  12. /system/console/adapters – This link shows you the adapters registered in the system. It helps you figure out what you can adaptTo() from resource to resource.
Params
  1. wcmmode=DISABLED - This handy publisher parameter turns off CQ authoring features so you can preview a page cleanly

CQ/AEM Dialog v/s Design Dialog

A component in CQ is the smallest unit that can be dropped on a page so that a content author can fill in content. Content can be of two types:
  1. Page specific (in this case the component dialog is used)
  2. Design/Global (in this case the design_dialog is used)
The way a dialog or design_dialog is defined is exactly the same; there are only two differences:
  1. The obvious difference is in their names, i.e. “dialog” and “design_dialog”
  2. The way properties (stored content/values) are accessed:
  • Retrieve values from a dialog (widget) in a JSP: String var = properties.get("propertyName", "defaultValue");
  • Retrieve values from a design_dialog in a JSP: String var = currentStyle.get("propertyName", "defaultValue");
Content/values stored via a dialog are stored at page level under the component’s node. On the other hand, content/values stored via a design dialog are stored at the design path of your template (see the cq:designPath property of the root node of your application/page), usually under /etc/designs/

Create an AEM (CQ) project using Maven


This article is mainly focused on setting up only project structure for CQ/AEM project using maven and guides you through how you can do your day to day development of AEM/CQ project with eclipse. In next article we’ll see how to develop templates, components and other things in details.

Before you go through this article it is highly recommended that you have fair knowledge of Maven and at least of the folder structure of an AEM/CQ project. If you are in a hurry, I’ll try to provide a very high-level explanation of these two things, but I recommend exploring them in detail.

1) Typical AEM/CQ project: any web application is mainly composed of views (HTML, JSP etc.), CSS, JavaScript and some server-side code. An AEM application is nothing different. A typical AEM application will have the following folder structure:
  • /apps (top-level folder that is the parent for all the code that you’ll develop)
  • /apps/[YOUR_APP_NAME]/components/[CQ templates and components] (contains mostly JSP or other scripts responsible for rendering)
  • /apps/[YOUR_APP_NAME]/config (contains configuration for the project)
  • /apps/[YOUR_APP_NAME]/install (contains bundles/jars for server-side logic)
  • /etc/designs/[design assets for your site] (typically all CSS, JavaScript etc. are stored here and accessed using CQ client libraries. More information about ClientLibs can be found here: http://suryakand-shinde.blogspot.in/2012/01/clientlib-cq-static-resource-management.html)
  • /content/dam/[images, docs, any other digital assets]
  • /content/[YOUR_SITE_ROOT]
2) Maven is a tool for managing project dependencies, build management and automation. There are many Maven plugins available to automate various tasks. In this article we’ll use a few Maven plug-ins to automate CQ package creation & installation.

Before we go into details, let’s see few benefits of configuring CQ project with maven and doing development in eclipse:
  1. You can break your project into smaller modules that can be managed individually.
  2. You can automate builds for Maven projects easily using Jenkins etc.
  3. Unit testing becomes easier. Writing test cases is simpler when you have broken your project into separate modules (components and templates in one module, Java code for bundles in another).
Before we start creating actual project, following installations should be there on your machine:
  1. Maven 
  2. CQ/AEM author environment 
  3. Eclipse with Vault Plug-in (VLT Plug-in for eclipse can be downloaded from http://sourceforge.net/projects/vaultclipse) 

Create AEM/CQ Maven Project

Source code for this project can be found here: https://suryakand.googlecode.com/svn/aem/simple-aem-maven-project

We have seen some benefits of maven for CQ projects and have basic knowledge of CQ project structure by now. Typically, a CQ/AEM maven project will be a modular maven project with multiple modules. For this article we’ll create a top level project that contains two modules “bundles” and “content”.

Follow these steps to create a maven project:
Step 1: Create a Maven project using Adobe AEM’s Maven archetype. When you run the command it’ll ask for project information on the command prompt, which you’ll need to provide:
mvn archetype:generate -DarchetypeGroupId=com.day.jcr.vault -DarchetypeArtifactId=multimodule-content-package-archetype -DarchetypeVersion=1.0.2 -DarchetypeRepository=adobe-public-releases

This will create a Maven project with two modules, one for the bundle and another for the content, as shown in Fig: AEM eclipse project. a) content: this is where you’ll put your templates, components etc. b) bundle: all Java code that goes into the OSGi bundle will live here.

Step 2: Verify the project. After the project is created, run the following command to make sure that everything was generated as expected. The build should complete without any errors:
mvn clean install

Step 3: Import the project into Eclipse. Once the build is successful, import the project into Eclipse using File -> Import -> Existing Maven Projects -> browse to the directory containing the parent pom.xml file -> Finish


 Fig. 1: Typical AEM/CQ project structure

Step 4: Install the project into CQ. Now that we have a project ready, the next thing we’ll do is install the project (content and bundle) into a local author instance (http://localhost:4502). Execute the following command from the directory containing the parent project’s pom.xml file:
mvn clean install -PautoInstallPackage

NOTE: “autoInstallPackage” is a Maven profile defined in the “content” project’s pom.xml file. Go to CRXDE Lite and check whether the apps folder contains your project.

Most of the work has been done for us by the Maven plug-ins, but it is important to understand what is going on behind the scenes before we start our day-to-day development. So, let’s see what Maven did for us. As evident from the screenshot above, Maven created a parent project with 2 module projects (bundle and content). “bundle” and “content” are also Maven projects and have their own pom.xml files. Let’s look at each pom.xml carefully:

1) pom.xml of parent project: See the comments/NOTE in below XML file for more information about various sections and their usage. I have omitted some part of XML to keep it short so that we can focus on important sections:

4.0.0
com.surya.aem
blog-sample
1.0-SNAPSHOT

pom

Blog Sample - Reactor Project
Maven Multimodule project for Blog Sample.


3.0.2




localhost
4502
admin
admin
localhost
4503
admin
admin
UTF-8
UTF-8










adobe
Adobe Public Repository
http://repo.adobe.com/nexus/content/groups/public/
default




adobe
Adobe Public Repository
http://repo.adobe.com/nexus/content/groups/public/
default









org.apache.sling
maven-sling-plugin
2.1.0

${crx.username}
${crx.password}




com.day.jcr.vault
content-package-maven-plugin
0.0.20
true

true
${crx.username}
${crx.password}



org.eclipse.m2e
lifecycle-mapping
1.0.0












autoInstallBundle



org.apache.sling
maven-sling-plugin


install-bundle

install











bundle
content



2) pom.xml of bundle module:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>com.surya.aem</groupId>
        <artifactId>blog-sample</artifactId>
        <version>1.0-SNAPSHOT</version>
    </parent>

    <artifactId>blog-sample-bundle</artifactId>
    <packaging>bundle</packaging>
    <name>Blog Sample Bundle</name>

    <build>
        <plugins>
            <!-- Generates the SCR component/service descriptors -->
            <plugin>
                <groupId>org.apache.felix</groupId>
                <artifactId>maven-scr-plugin</artifactId>
                <executions>
                    <execution>
                        <id>generate-scr-descriptor</id>
                        <goals>
                            <goal>scr</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <!-- Packages the jar as an OSGi bundle -->
            <plugin>
                <groupId>org.apache.felix</groupId>
                <artifactId>maven-bundle-plugin</artifactId>
                <extensions>true</extensions>
                <configuration>
                    <instructions>
                        <Bundle-SymbolicName>com.surya.aem.blog-sample-bundle</Bundle-SymbolicName>
                    </instructions>
                </configuration>
            </plugin>
            <!-- Deploys the bundle to the repository's install folder -->
            <plugin>
                <groupId>org.apache.sling</groupId>
                <artifactId>maven-sling-plugin</artifactId>
                <configuration>
                    <slingUrl>http://${crx.host}:${crx.port}/apps/blog/install</slingUrl>
                    <usePut>true</usePut>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>

3) pom.xml of content project:

4.0.0

com.surya.aem
blog-sample
1.0-SNAPSHOT


blog-sample-content
content-package
Blog Sample Package



${project.groupId}
blog-sample-bundle
${project.version}





src/main/content/jcr_root
false

**/.vlt
**/.vltignore






org.apache.maven.plugins
maven-resources-plugin

true




com.day.jcr.vault
content-package-maven-plugin
true

Blog Sample
src/main/content/META-INF/vault/filter.xml



${project.groupId}
blog-sample-bundle
/apps/blog/install


http://${crx.host}:${crx.port}/crx/packmgr/service.jsp








autoInstallPackage



com.day.jcr.vault
content-package-maven-plugin


install-content-package
install

install









autoInstallPackagePublish



com.day.jcr.vault
content-package-maven-plugin


install-content-package-publish
install

install


http://${publish.crx.host}:${publish.crx.port}/crx/packmgr/service.jsp
${publish.crx.username}
${publish.crx.password}










At this point you have a working AEM/CQ Maven project in Eclipse. With this Maven project you can do a lot of things like build automation, better unit testing and deployment management. One aspect that we have not covered so far is how to use this project for everyday development.

Let’s consider a very common use case. You have an AEM project whose source code you maintain in SVN (or any other SCM), and you are using Eclipse & CRXDE Lite for development. The reason I mention CRXDE Lite along with Eclipse is that certain tasks are easier to perform with CRXDE Lite, e.g. creating a new component or template with a wizard. When you create anything in CRXDE Lite, you need a way to pull the content created in the CQ repository out into the Maven project (or file system) so that it can be source controlled in SVN. So, how to do that? There are two ways, and both use the “vlt” tool provided by AEM/CQ:

  1. Eclipse Vault (vlt) plug-in OR, 
  2. Maven Vault plug-in
I’ll try to provide some information on how to use both these methods in my other blog posts as soon as possible.

Check out source code for this project: https://suryakand.googlecode.com/svn/aem/simple-aem-maven-project
Thank you for reading. If you have any questions or find any errors, please feel free to leave a comment.

Scala: Another way of handling exceptions using Scala Try

Here is a nicer way to handle a scenario where you expect an operation to return either a value or an exception, and want to perform some action based on the result.

Let’s consider a very simple example/function below:

def divide(x:Integer, y:Integer) = x/y

   
All we are doing here is accepting 2 numbers as parameters to the divide function and dividing them to return a result. When the parameter value of “y” is “0” we’ll get an error (divide by zero), and we want to handle that error.

There are various ways in which this can be done:
1)    Using try catch block Or,
2)    Using Try class
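For contrast, the first method (a classic try/catch, written here as an illustrative sketch) would look like:

```scala
try {
  println("Success: " + divide(1, 2))
} catch {
  // divide by zero surfaces as an ArithmeticException
  case e: ArithmeticException => println("Error: " + e.getMessage)
}
```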

In this example we’ll use the second method. Before we look at the example: the “Try” class in Scala lets you wrap a computation in a “Try”, and it provides methods to explicitly check whether the operation succeeded and whether the “Try” holds a valid result.

Let’s have a look at the example:
import scala.util.Try

val tryWrapper = Try(divide(1, 2))

tryWrapper.isSuccess match {
  case true => println("Success: " + tryWrapper.get)
  case _    => println("Error")
}

// getOrElse supplies a fallback value when the wrapped call failed:
println(Try(divide(1, 0)).getOrElse(0)) // prints 0


Here we wrap the divide() function call in a Try and explicitly check whether the operation was successful using the isSuccess function of Try. When the operation is successful we also want to extract the result from the Try; this can be done using the get() or getOrElse() function (similar to get or getOrElse on Option[]).

JVM: Java Heap & Stack

When does the Heap get created?
The heap gets created at JVM startup.

What is the Heap structure (young space/nursery and old space)?
When an object is created it is first placed in the young space (nursery), and if it lives there long enough it is moved to the old space. The design goal behind young and old spaces is that newly created objects are usually short lived, and placing them in the young space makes their access faster.

How is memory allocated to objects?
While assigning memory on the heap, objects are first classified as small or large. Small objects are allocated in Thread Local Areas (TLAs). A TLA is a free chunk of space reserved from the heap and given to a thread for its exclusive use. A thread can then allocate objects in its TLA without synchronizing with other threads. When the TLA becomes full, the thread simply requests a new one. TLAs are reserved from the nursery if one exists; otherwise they are reserved anywhere in the heap. Large objects that don’t fit inside a TLA are allocated directly on the heap.

What are shallow and retained sizes?
Shallow size of an object is the amount of memory allocated to store the object itself, not taking into account the referenced objects. Shallow size of a regular (non-array) object depends on the number and types of its fields. Shallow size of an array depends on the array length and the type of its elements (objects or primitive types). Shallow size of a set of objects is the sum of the shallow sizes of all objects in the set.
Retained size of an object is its shallow size plus the shallow sizes of the objects that are accessible, directly or indirectly, only from this object. In other words, the retained size represents the amount of memory that will be freed by the garbage collector when this object is collected.

What is allocated on the stack and what on the heap?
Local variables (function arguments and variables inside a function) are allocated on the stack (a primitive value or a reference). Object instances and static variables are stored on the heap.
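The last point can be illustrated with a small, self-contained Java sketch (the class and method names are made up for illustration):

```java
public class HeapStackDemo {

    // Static variable: the reference lives with the class; the array object is on the heap
    static int[] shared = new int[4];

    // 'a' and 'b' are primitives in sum()'s stack frame; they vanish when sum() returns
    static int sum(int a, int b) {
        int result = a + b; // 'result' is also stack-allocated
        return result;
    }

    public static void main(String[] args) {
        // 'sb' (the reference) is on main()'s stack; the StringBuilder object is on the heap
        StringBuilder sb = new StringBuilder();
        sb.append(sum(2, 3));
        shared[0] = sum(2, 3); // the heap-allocated array outlives any single stack frame
        System.out.println(sb); // prints 5
    }
}
```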

Running Specs2 unit test cases from Eclipse

In this article I’ll give some quick hints about how you can run your Scala unit test cases, written using Specs2, from Eclipse.

Writing unit test cases is a fun and important aspect of any application where quality is important and large groups/teams are working on different parts of the application.

You may want to run unit test cases directly from the editor where you are writing them. Almost every editor supports executing Java JUnit test cases. When you are working with a Scala code base you may want to write unit test cases using Specs2 and run them directly from Eclipse. You cannot do this without making additional changes to the test cases; here are the changes you’ll have to add so that you can run them from Eclipse:

1)    Add the below import statements to your unit test class:
import org.junit.runner.RunWith
import org.specs2.runner.JUnitRunner

2)    Annotate your test class with the following annotation:
@RunWith(classOf[JUnitRunner])

Now if you right-click on your test case class you should see an option to execute it (Right click on test class -> Run As -> Scala Unit Test)

Hope this will help you save some time while writing test cases.

How to create a page with predefined components?

Templates and components are the core assets/building blocks of any AEM project and give content authors the ability to create and update pages/content.

Many times we want newly created pages to come with some default components prefilled/configured/dropped on them, with an option for authors to remove any component that is not required.

Let’s consider a scenario/use case where this can be applied. Say we have a photo album site where authors can create new albums/pages using a photo album template. A photo album page has some space reserved on the right-hand side for displaying popular images and newly added images. Both of these components are dropped inside a parsys.


Now, if an author wants to create an album page he needs to create the page and then drop the components one by one on it. Is there a way to have the page prefilled with all the components so that the author doesn’t need to add them one by one? Yes, and this is what we are going to cover in this article.

Follow these steps to have a template preconfigured with some default components:

1) Create a new page and drop all required components on that page. In our case, we’ll create a new album page and drop the “photo album”, “popular images” and “newly added” components.
2) Go to CRXDE Lite and explore the newly created page and the nodes under it. You’ll see that there are 2 parsys nodes; one of them has the “photo album” component and the other has the “popular images” and “newly added” component nodes.
3) Go to the template in the apps folder from which the page was created (e.g. /apps/photogallery/templates/albumpage); as you can see, by default it contains a “jcr:content” node.
4) Now copy both parsys nodes from the album page that we created in step 1 to the jcr:content node of the template (albumpage). At this point the structure of the template’s jcr:content should match the node structure of the actual album page.
5) Create a new album page using the same template (i.e. /apps/photogallery/templates/albumpage); when you open the page you’ll see that it already contains all the components.

So, how does this work? A template node in CQ/AEM is of type cq:template, and the default behavior of this node does all the work for us: whatever properties/nodes we have under the template’s jcr:content node are copied over to a newly created page’s node.

                                            Custom synchronisation or rollout action for blueprint ?

                                            $
                                            0
                                            0
                                            Blueprint is an important feature for all those sites which needs to deals with multiple locale and region specific sites. Consider that we have 20 different regional sites and each regional site have 2 locales (English and regional language). Now let’s say you need to do a small update on a English locale page and that changes is required on all sites, there will be 20 pages that you’ll need to visit and make changes one by one (if you have not used blueprint and live copy).

                                            With blueprint  we can create a master copy (which is called as blueprint) and create many live copies (which are actual sites) and the benefit you get is that as soon as you do any change in blueprint that same change will be propagated to all live copies pages when changes are rolled out by authors…this is cool. With blueprint you’ll need to apply changes only to English locale page on blueprint and all English locale pages (live copies) will receive that change implicitly.

                                            So, how this whole thing works? Whenever a blueprint is created we need to define rollout configurations. Rollout configurations tell what to do when a blueprint page is updated. Rollout configuration consist A Trigger and Single/Multiple Actions:

                                            1) Trigger: this is a string value (either of “rollout”, “modification”, “publish” etc.) which tells when to trigger (rollout) a particular action on blueprint.

                                            2) Action (cq: LiveSycnAction): this is actual action performed on blue print. Action can be content update, content copy, content delete, references update, order children etc.



                                            AEM provides default triggers and actions that you can combine to suffice your need. Sometimes you may want to perform a custom action (e.g. send an email to author when a blueprint page is updated). In this article we are going to see how to develop custom LiveSyncAction. If you want to understand blueprint, live copy and MSM completely then please go through the links mentioned in reference section of this article.

                                            As you can see in above diagram, a rollout configuration is combination of trigger and multiple (or single) actions. Each node in above diagram represents an action, and “jcr:primaryType” of these action nodes is “cq:LiveSyncAction”.


                                            To create a custom LiveSyncAction we need to understand following classes:

                                            1) com.day.cq.wcm.msm.api.LiveActionFactory and com.day.cq.wcm.msm.api.LiveAction
                                            a. LiveActionFactory: An Factory used to create LiveActions. Used to register LiveAction implementations with the ServiceComponentRegistration (in Felix Console) It enables to create LiveActions with a Configuration given by a Resource.
                                            b. LiveAction: Represent an Action to be performed upon a Roll-out from a Source to a Target. Actions are created by a LiveActionFactory that provide instances of LiveActions set-up with given configuration. LiveActions are called during the process of roll-out which on acts on Resources A LiveAction must therefore act within the boundary of the given Resource

                                            2) com.day.cq.wcm.msm.impl.actions.BaseActionFactory and com.day.cq.wcm.msm.commons. BaseAction
                                            a. BaseActionFactory: Base implementation of a LiveActionFactory service. We do have other action factory class that provides fileting options. FilteredActionFactoryBase is a BaseActionFactory extension that provides basic support to set-up filtering configuration.
                                            b. BaseAction: Base implementation for the LiveAction interface. This abstract class offers some basic configuration tasks for building a LiveAction and also default implementations for the deprecated methods. It will speed up the implementation of a LiveAction by only implementing the handles and doExecute abstract methods.

To create a custom LiveAction we need to create a factory by extending the “BaseActionFactory” class, implement the actual action by extending the “BaseAction” class, and put the action’s logic in the doExecute() method. All live actions in AEM are registered with the OSGi/Felix console.




                                            Here is a sample class that implements a custom LiveAction to send emails (ignore errors shown below as I don't have dependencies in my maven repo).


Once the live action has been created and the corresponding OSGi bundle has been installed in the OSGi/Felix container, go to the Felix console, open the OSGi -> Services tab and verify that “SendMailActionFactory” has been registered, similar to figure 2 above. If the action factory has been registered then we are good to use the newly created live action.

How to use the newly created custom live action:
1) Go to Tools (https:///miscadmin#/etc/msm/rolloutconfigs)
2) From the right-hand side menu select [New… -> New Page…] and enter a title and name (e.g. “test” for both)



3) Go to CRXDE Lite and navigate to /etc/msm/rolloutconfigs
4) You should see a page/node (test) for the page newly created in step #2
5) Under the “test” node navigate to the “jcr:content” node (/etc/msm/rolloutconfigs/test/jcr:content)
6) Create a new node with the name “sendMailAction” of type “cq:LiveSyncAction” and click “Save All…”.
Note that the node name (i.e. “sendMailAction”) must match the “LIVE_ACTION_NAME” static final variable defined in the SendMailActionFactory class, because we have registered our custom action in the AEM container under that name.



7) We have created a rollout configuration and it is ready for use. Now when you go to the “Blueprint” tab in page properties, you should see the new rollout config option.




                                            References:
                                            http://docs.adobe.com/docs/en/cq/current/administering/multi_site_manager.html
                                            http://docs.adobe.com/docs/en/cq/current/developing/multi_site_manager_dev.html#par_title

                                            Deep copy VS shallow copy of OBJECTS!

Java provides a mechanism for creating copies of objects called cloning. There are two ways to make a copy of an object: shallow copy and deep copy.
                                            Shallow copy is a bit-wise copy of an object. A new object is created that has an exact copy of the values in the original object. If any of the fields of the object are references to other objects, just the references are copied. Thus, if the object you are copying contains references to yet other objects, a shallow copy refers to the same subobjects.

                                            Deep copy is a complete duplicate copy of an object. If an object has references to other objects, complete new copies of those objects are also made. A deep copy generates a copy not only of the primitive values of the original object, but copies of all subobjects as well, all the way to the bottom. If you need a true, complete copy of the original object, then you will need to implement a full deep copy for the object.
Java supports shallow and deep copy through the Cloneable interface. To make a clone of a Java object, you declare that the object implements Cloneable and then provide an override of the clone method of the standard Java Object base class. Implementing Cloneable signals that your object may be cloned (Object.clone throws CloneNotSupportedException for classes that don’t implement it); the cloning is actually done by the clone method.
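A minimal illustration of the difference (the Person/Address classes are made up for this example): a shallow copy shares the nested Address object with the original, while a deep copy duplicates it.

```java
// Hypothetical example types contrasting shallow and deep copies.
class Address implements Cloneable {
    String city;
    Address(String city) { this.city = city; }

    @Override
    protected Address clone() {
        try {
            return (Address) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // cannot happen: we implement Cloneable
        }
    }
}

class Person implements Cloneable {
    String name;
    Address address;
    Person(String name, Address address) { this.name = name; this.address = address; }

    // Shallow copy: the Address reference is shared with the original.
    Person shallowCopy() {
        try {
            return (Person) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e);
        }
    }

    // Deep copy: the Address subobject is cloned as well.
    Person deepCopy() {
        Person copy = shallowCopy();
        copy.address = this.address.clone();
        return copy;
    }
}

public class CopyDemo {
    public static void main(String[] args) {
        Person original = new Person("Alice", new Address("Pune"));
        Person shallow = original.shallowCopy();
        Person deep = original.deepCopy();

        original.address.city = "Mumbai";         // mutate the shared subobject
        System.out.println(shallow.address.city); // Mumbai - shallow copy shares it
        System.out.println(deep.address.city);    // Pune   - deep copy has its own
    }
}
```

Mutating the original’s Address is visible through the shallow copy but not through the deep copy, which is exactly the distinction described above.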

                                            Request Object and Spring DataBinders


I am a fan of the Spring framework and I love to use it all the time. In this post I am going to show you how to bind data from a ServletRequest object to any Java bean (POJO). Let’s say you have a Spring web application and you are working on a piece of code which does not interact directly with the web layer (i.e. does not have direct access to the Request and Response objects). Two questions arise here:

                                            1) How can I get access of my current request object in plain java class which is not a web component?
                                            2) How can I bind the parameters in request object to my POJO bean?

• Request object access in non-web classes
There are ways to make your Spring beans aware of the ServletRequest object, such as implementing the ServletRequestAware interface (from opensymphony-webwork). But there is an easier way of doing it with the help of Spring utility classes:

                                                    ServletRequestAttributes requestAttributes = (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
                                                    HttpServletRequest request = requestAttributes.getRequest();

• Now we have the request object; let’s try to bind it to a POJO.
Spring DataBinder is the simple answer. There are many out-of-the-box implementations available for various types of data binding operations. One of the data binders Spring has is ServletRequestDataBinder; Spring uses this binder internally to populate a POJO (command class) with values from the request object. So, let’s say you have a “User” class with two instance variables, “userName” and “password”, and a form with many text boxes (say 30), two of which are for userName and password. In the normal scenario a command class is assigned to a controller and Spring automatically populates the properties/instance variables of the command class. Suppose instead we want to populate our command/POJO manually using the request object in some other class (a non-controller class). How do we do that? Here is the code:

class User {
    private String userName;
    private String password;

    public String getUserName() {
        return userName;
    }
    public void setUserName(String userName) {
        this.userName = userName;
    }
    public String getPassword() {
        return password;
    }
    public void setPassword(String password) {
        this.password = password;
    }
}

class MyNonControllerClass {
    private User user;

    public MyNonControllerClass() {
        user = new User();
        // This binding code can live in a better place than the constructor, based on need.
        ServletRequestAttributes requestAttributes = (ServletRequestAttributes) RequestContextHolder.getRequestAttributes();
        HttpServletRequest request = requestAttributes.getRequest();
        ServletRequestDataBinder binder = new ServletRequestDataBinder(user);
        binder.bind(request);
    }

    public User getUser() {
        return user;
    }

    public void setUser(User user) {
        this.user = user;
    }
}

I hope this handy trick helps you utilize request objects and Spring binders in many places in your code. Enjoy!

                                            ServletRequest - some useful features

We play with the ServletRequest object every day, but there are a few features of it that we usually don’t use.

One of the nice features of Java servlets is that all of this form parsing is handled automatically. You simply call the getParameter method of the HttpServletRequest, supplying the parameter name as an argument. Note that parameter names are case sensitive. You do this exactly the same way whether the data is sent via GET or via POST. The return value is a String corresponding to the URL-decoded value of the first occurrence of that parameter name. An empty String is returned if the parameter exists but has no value, and null is returned if there is no such parameter. If the parameter could potentially have more than one value (e.g. a multi-select list), you should call getParameterValues instead of getParameter; this returns an array of Strings. Finally, although in real applications your servlets probably look for a specific set of parameter names, for debugging purposes it is sometimes useful to get a full list. Use getParameterNames for this; it returns an Enumeration, each entry of which can be cast to a String and used in a getParameter call.
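The lookup rules can be sketched with a tiny stand-in class (this only simulates the servlet API’s semantics with a plain Map playing the role of the parsed parameter map; it is not the real HttpServletRequest):

```java
import java.util.HashMap;
import java.util.Map;

// Simulates HttpServletRequest#getParameter / #getParameterValues semantics
// using a plain Map as the parsed parameter store.
public class ParamDemo {
    private final Map<String, String[]> params = new HashMap<>();

    public ParamDemo() {
        params.put("userName", new String[] {"alice"});
        params.put("color", new String[] {"red", "blue"}); // multi-valued parameter
        params.put("comment", new String[] {""});          // present but empty
    }

    // First value, empty String if present-but-empty, null if absent.
    public String getParameter(String name) {
        String[] values = params.get(name);
        return (values == null || values.length == 0) ? null : values[0];
    }

    // All values, or null if the parameter is absent.
    public String[] getParameterValues(String name) {
        return params.get(name);
    }

    public static void main(String[] args) {
        ParamDemo req = new ParamDemo();
        System.out.println(req.getParameter("userName"));           // alice
        System.out.println(req.getParameter("comment").isEmpty());  // true
        System.out.println(req.getParameter("missing"));            // null
        System.out.println(req.getParameterValues("color").length); // 2
    }
}
```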

                                            Hope this will help!

                                            Increasing JVM heap size for Maven

There may be times when you are working on a bigger project that needs to do a lot of processing while building/packaging the final artifact. In such projects there is a chance you’ll get a JVM heap size error while building the Maven project. You can change the JVM options for Maven2 by setting the following environment variable on your system:

                                            Environment variable name: MAVEN_OPTS
                                            JVM value: -Xms512m -Xmx1024m (adjust these values based on your project requirement).
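For example, on Linux/macOS (the heap values are the same illustrative ones as above; tune them to your project):

```shell
# Give Maven's JVM a 512 MB initial / 1 GB max heap (adjust to your project).
# On Windows (cmd) the equivalent is:  set MAVEN_OPTS=-Xms512m -Xmx1024m
export MAVEN_OPTS="-Xms512m -Xmx1024m"
```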

                                            Hope this will help you!

                                            When to use JSP include directive and jsp:include tag?

                                            The JSP include directive includes static content at compile time and jsp:include includes static or dynamic content at run time.

                                            1) Use the include directive
                                            a. if the file includes static text
                                            b. if the file is rarely changed (the JSP engine may not recompile the JSP if this type of included file is modified)
                                            c. if you have a common code snippet that you can reuse across multiple pages (e.g. headers and footers)
                                            d. Static includes are faster than dynamic includes

                                            2) Use jsp:include
                                            a. for content that changes at runtime
b. to select at runtime which content to render (because the page attribute can take a runtime expression)
                                            c. for files that change often
                                            d. dynamic include can accept a parameter
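The two forms side by side (the file names are illustrative):

```jsp
<%-- Static include: merged into this page at translation/compile time --%>
<%@ include file="header.jspf" %>

<%-- Dynamic include: invoked at request time, can pass parameters --%>
<jsp:include page="news.jsp">
    <jsp:param name="category" value="tech" />
</jsp:include>
```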

                                            I hope I have covered everything, please feel free to post additional difference…

                                            Understanding Day's CQ & Underlying frameworks - Part 1

Recently I got an opportunity to work on a great CMS tool from www.day.com (CQ). CQ is an abstraction on top of several great Java frameworks/tools (JCR, Sling, OSGi and Day’s own component-based framework) and fits well for almost all enterprise applications. Initially, when I started working on it, I thought it was a proprietary tool with very limited scope for innovation and experiments but, after taking a deep dive into the underlying technologies/frameworks, I realized that it is a great combination of various great frameworks. CQ is based on the following technologies/frameworks (completely Java centric):

1)       Sling (http://sling.apache.org/site/index.html): a REST-based web framework for accessing resources (JCR – Java Content Repository)
2)       Felix (http://felix.apache.org/site/index.html - an OSGi specification implementation): a lightweight container that handles class loading quite differently from the plain JVM and provides a class-level SOA platform.
3)       CRX/Jackrabbit (http://jackrabbit.apache.org - a JCR specification implementation): a specification which tells how we can manage our data (images, text files, strings, longs and everything else…) as structured nodes.

For those who are not well versed in CQ’s underlying frameworks, I’ll try to cover them in posts I’ll be publishing in the coming days. In this post my main focus is to explain the CQ architecture and best practices (just an overview). I’ll also cover best practices for various design and development concepts (creating templates, pages, components, the JCR repository manager, writing custom JCR nodes, JCR queries and authenticators) in individual posts later.

                                            Ok, so the CQ is not a new framework and you don’t need to learn new programming language. If you are a developer from Java/JSP background with decent experience of JavaScript, AJAX, XML, JSON and CSS you can do magic with CQ. CQ follows a template, page and component based development methodology.

·         Template (cq:Template): every page that we build for our website should extend from some template. A template itself does not have any rendering logic; a template is just a logical unit in the CQ environment which groups pages that share common features (functional or non-functional). For example, we may have a group of pages that users can access without logging in (static/public pages); these pages share a common functional feature (they are public) and common headers and footers (a non-functional/rendering feature). Since a template itself does not have any rendering logic, a natural question is “how do the pages get rendered?”; well, we need to define a resource/page (cq:Page) that will render the template.

·         Page (cq:Page): to create a page on our website we need a template, and to render a template we need a page. A page is a combination of one or more resources (Java classes, JSPs etc.), and the primary goal of a page is to create the page structure (e.g. two columns with a header, or one column with a header and footer) in which components can be placed. So a page renders a blank container and we place components in it; this is the real power of CQ. We can add and remove components on a page, change the position of components, and even extend a page and add/remove components from the extended pages.

                                            ·         Component (cq:Component): Component is a reusable entity that we can place on any number of pages. As pages can be extended to add/remove functionality similarly a component can also be extended to add/remove functionality. Components are the smallest building block of a page and usually a component is composed of various resources (Java classes, JSPs, JS).

                                            Let’s see how Sling, JCR and Felix contribute in CQ framework and what role they are playing as a building block.

1)       Sling - Request resolution to a Resource/Script/Servlet (JCR node/script): When a request comes to CQ, the first thing that happens is request wrapping and resource/page/script resolution. This is where Sling comes into the picture: Sling takes the incoming request (HttpServletRequest) and adds a wrapper on it, SlingHttpServletRequest. The SlingHttpServletRequest wrapper provides some additional information to the Sling framework for resolving a particular Resource/Servlet/Script on the server (in the JCR repository). Once the request is wrapped as a SlingHttpServletRequest, Sling parses the incoming request URL and breaks it down into the following pieces with the help of the additional information in the SlingHttpServletRequest wrapper:

NOTE: Scripts and servlets are resources in Sling and thus have a resource path: the location in the JCR repository (sling:resourceType). Scripts and servlets can be extended using the sling:resourceSuperType property (I’ll cover this in another post, “Component and Page inheritance”).

a)       Servlet/Script (sling:resourceType): the incoming request is parsed and a servlet/script/resource name is extracted from it. A script can be a JSP file, a Java class or an ActionScript (Flex/Flash) file; the type of script that will be executed depends on the extension and selectors (see below). Internally Sling calls [request.getResource().getResourceType()] to get sling:resourceType. The set of supported script types is configurable; to see which scripts are supported in your environment navigate to http://localhost:4502/system/console/scriptengines
b)       Selector: based on the URL, Sling decides which variant of a script to execute; internally Sling makes a call [request.getRequestPathInfo().getSelectorString()] to extract the selector(s). Let’s say we have a requirement where we want to send the response in three different formats (XML, JSON, TXT) for the same URL; this can be achieved with the help of selectors.
c)       Extension: the incoming request is parsed and an extension is extracted out of it for the script file; internally Sling makes a call [request.getRequestPathInfo().getExtension()]. It is possible to have multiple script files with different extensions, and based on the selector(s) provided in the incoming URL the appropriate script will be executed.
d)       Request Method: the request method is used for script selection when the request is not a GET or HEAD.

Let’s try to tie all 4 pieces together. The resourceType is used as a (relative) parent path to the Servlet/Script in the JCR repository, while the extension or request method is used as the Servlet/Script (base) name. The Servlet is retrieved from the resource tree (repository) by calling the [ResourceResolver.getResource(String)] method, which handles absolute and relative paths correctly by searching relative paths in the configured search path [ResourceResolver.getSearchPath()] and the sling:resourceType (and sling:resourceSuperType) of the requested resource. To see and configure the paths where Sling looks for resources, navigate to the JCR Resource Resolver tab on the Felix console (http://localhost:4502/system/console/jcrresolver); if required we can map additional paths with various regular expressions.

Here is an example URL and its decomposition. Let’s say the URL (http://suryakand-shinde.blogspot.com/reports/june/expense.format.pdf.html) is used to get the expense report in PDF format for the month of June (it is stored in the JCR repository under /reports/june/expense/):

                                            ·         Server: suryakand-shinde.blogspot.com
                                            ·         Script/Servlet (resourceTypeLabel): /reports/june/expense (The last path segment of the path created from the resource type)
                                            ·         Selector: format/pdf (we can have a JSON and TXT selectors if we want to get the same report in various formats)
                                            ·         Extension (requestExtension): html

                                            If we have multiple selectors and extensions in request URL then the following rule is applied to resolve a resource:

·         The script matching the most selectors in the request URL is given first preference.
·         Requests with an extension are given preference over requests without an extension.
                                            ·         A script found earlier matches better than a script found later in the processing order. This means, that script closer to the original resource type in the resource type hierarchy is considered earlier.
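The decomposition described above can be sketched with plain string handling (a deliberate simplification of what Sling’s RequestPathInfo does; real resolution also consults the repository and search paths):

```java
// Simplified sketch of Sling-style URL decomposition:
// /reports/june/expense.format.pdf.html
//   -> resource path: /reports/june/expense
//   -> selectors:     format, pdf
//   -> extension:     html
public class SlingPathDemo {

    // Returns {resourcePath, "sel1,sel2", extension} for a Sling-style path.
    static String[] decompose(String path) {
        int firstDot = path.indexOf('.');
        if (firstDot < 0) {
            return new String[] { path, "", null }; // no selectors, no extension
        }
        String resourcePath = path.substring(0, firstDot);
        String[] parts = path.substring(firstDot + 1).split("\\.");

        // Last dot-separated part is the extension; the rest are selectors.
        String extension = parts[parts.length - 1];
        StringBuilder selectors = new StringBuilder();
        for (int i = 0; i < parts.length - 1; i++) {
            if (i > 0) selectors.append(',');
            selectors.append(parts[i]);
        }
        return new String[] { resourcePath, selectors.toString(), extension };
    }

    public static void main(String[] args) {
        String[] info = decompose("/reports/june/expense.format.pdf.html");
        System.out.println(info[0]); // /reports/june/expense
        System.out.println(info[1]); // format,pdf
        System.out.println(info[2]); // html
    }
}
```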


                                            For more information on servlet/script resolution please see: http://sling.apache.org/site/servlet-resolution.html

NOTE: Sling treats request methods (GET, PUT, POST, HEAD) differently, so it’s really important to understand and choose the right request method while designing applications. Only for GET and HEAD requests are the request selectors and extension considered for script selection. For other requests the servlet or script name (without the script extension) must exactly match the request method. Here is a quick example of how Sling extracts the Servlet/Script:
                                             
2)       JCR – the data/resource storage: In any application we need a database to store data (user information, text, images etc.); in CQ’s case JCR (CRX) plays the role of a database. Data in JCR (Java Content Repository) is structured as nodes; a node can be a folder, a file or a representation of any real-world entity. Let’s try to co-relate a traditional database (like MySQL) with JCR. In a traditional database we store information/data in tables; each table has multiple columns (a few of them mandatory, a few with data constraints and a few optional) and each table has multiple rows. In JCR we store data in nodes of a particular type (so treat this as our table); each node type has multiple properties (treat these as table columns): a few node properties are mandatory, a few have constraints (e.g. the property value must be a string, long etc.) and a few are optional. We can have multiple nodes (treat these as our table rows) of a particular type in the JCR repository. To fetch data from database tables we write SQL queries; similarly, JCR supports SQL (Query.JCR_SQL2) for querying nodes in the repository. JCR also supports XPath queries (Query.XPATH) to find/query nodes based on path.

Let’s say we have multiple portals and we want to store portal configurations (e.g. a unique id, portal name, home page URL etc.) in a database table. So we’ll create a table called Portal with columns (portal_id, portal_name, portal_home_page etc.) to store portal configurations, and each portal will have a row in the database with its own configuration. How do we do this in JCR? In JCR we’ll define a node type config:Portal (registered against a namespace so that it does not conflict with other node types of the same name) with properties (portalId, portalName, portalHomePage etc.), and each portal will have a separate node in JCR with its own configuration. Here is a diagrammatic mapping between a traditional database and JCR:
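For the hypothetical config:Portal node type above, the two query flavors might look like this (node type and property names are the illustrative ones from this example):

```sql
-- JCR-SQL2: find a portal node by its portalId property
SELECT * FROM [config:Portal] AS portal
WHERE portal.[portalId] = 'intranet'

-- XPath equivalent:
-- /jcr:root//element(*, config:Portal)[@portalId = 'intranet']
```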


                                            Figure: Traditional Database V/S JCR Node comparison

                                            What extra we are getting from JCR?

·         A traditional database supports SQL, but JCR supports both SQL (the query format is a little different) and XPath.
·         The structure of database tables is predefined and we cannot add or remove columns for an individual row (all rows have the same columns); in JCR, with the help of nt:unstructured and mixin node types, we can add and remove properties of individual nodes.
·         In a traditional database, files/images and large text are represented as BLOB/CLOB with some limitations, but in JCR they are stored as node types and search and retrieval is easy.
                                            ·         JCR has its own access control mechanism (ACL) and user management framework.
                                            ·         XML Import & Export
                                            ·         Provides fast text search (using the Lucene).
                                            ·         Locking, versioning and Notifications.

3)        Felix – managing class dependencies and services: Felix is an OSGi specification implementation that is embedded in CQ for managing service components and their dependencies. The main benefit of using OSGi as an underlying technology for managing service/component dependencies is that it allows us to start/stop services (components) and host multiple versions of the same service. A service or a component can be configured via the Felix web console and Configuration Admin. Let’s take a simple example: I have an application that interacts with an underlying MySQL database, and after a few months I find that the MySQL team has fixed a major bug in a new release of the mysql-connector library. To incorporate this new library in a traditional application I would have to stop my application and re-package it (or at least replace the old jar), but with OSGi we don’t need to stop the whole application: because everything is exposed either as a component or as a service, we just need to install the new component/service in the OSGi container. As services/components are updated in the OSGi container, various event listeners propagate the update event to service/component consumers, and the consumers adapt themselves to use the new version of the service (on the consumer side we need to listen for these events so that consumers can decide whether to respond to the change or not).

No framework provides everything we need built-in; we need to understand the platform/framework we have chosen for development and think about how we can utilize it better. So, to use CQ to its full capacity it’s really important to understand the concepts and ideas behind Templates, Pages, Components and JCR data modeling, and how services/components can be designed and utilized. Each underlying technology (Sling, JCR and OSGi) is itself very vast and I am just a new learner of it; please feel free to comment and share your ideas.


Resources that you can refer to for further reading:
Sling: http://sling.apache.org/site/index.html
JCR: http://jackrabbit.apache.org
Felix: http://felix.apache.org/site/index.html
                                            -- Ideas can change everything


                                            AEM & SAML: Detailed Installation and Config. (LDAP and Identity Provider)


                                            In this article we’ll see end-to-end setup and configuration for:
                                            1)      Local LDAP Server
                                            2)      Shibboleth2 (as Identity Provider aka IdP)
                                            3)      Configure AEM as Service Provider and do SSO login with SAML using Shibboleth 2

Before even getting into the installation and too many technical details, let’s first try to understand what SAML and an IdP are.

Security Assertion Markup Language (SAML) is an XML-based, open-standard data format for exchanging authentication and authorization data between parties, in particular between an identity provider (IdP, i.e. Shibboleth in our case) and a service provider (SP, i.e. AEM in our case). The SAML specification defines three roles: the principal (typically a user), the identity provider (IdP) and the service provider (SP). In the use case addressed by SAML, the principal requests a service from the service provider. The service provider requests and obtains an identity assertion from the identity provider. On the basis of this assertion, the service provider can make an access control decision; in other words, it can decide whether to perform some service for the connected principal.

                                            Before delivering the identity assertion to the SP, the IdP may request some information from the principal – such as a user name and password – in order to authenticate the principal. SAML specifies the assertions between the three parties: in particular, the messages that assert identity that are passed from the IdP to the SP. In SAML, one identity provider may provide SAML assertions to many service providers. Similarly, one SP may rely on and trust assertions from many independent IdPs.

System Requirements (I am assuming that you'll be trying the installation below on a Windows machine):
                                            ·         Make sure that you have java installed on your machine and JAVA_HOME is set.

                                            Assumptions:
                                            1)      INST_HOME: create a directory called “installation” anywhere on your computer and in this article we’ll refer this directory as INST_HOME.
                                            2)      IdP_HOME: this is the directory where Shibboleth will be installed.
                                            3)      TOMCAT_HOME: directory where tomcat is installed.

                                            Few Terms that you should be familiar with to understand this tutorial better:
1)      IdP or Identity Provider (e.g. Shibboleth): An identity provider is a centralized system that is responsible for connecting to the user database (RDBMS, LDAP, etc.), retrieving user/principal information, and supplying it to service providers. One IdP can serve more than one service provider.
2)      Service Provider (SP, e.g. AEM): Delegates the task of user authentication and management to the IdP. In some cases, a service provider may contact multiple IdPs to do user authentication.
3)      Attributes: In the context of SAML, IdPs and service providers, attributes are mainly user or organization properties (e.g. name, empID, group, mail) that the IdP fetches from the user database.
4)      Relying Party: In nearly all cases an IdP communicates with a service provider. However, in some more advanced cases an IdP may communicate with other entities (like other IdPs). The IdP configuration uses the generic term relying party to describe any peer with which it communicates. A service provider, then, is simply the most common type of relying party.

                                            Let's begin installation!!!

1.       LDAP Installation: We need to install an LDAP server, and we'll also need an LDAP data browser so that we can see users, groups, etc. Please refer to the tutorial "Local LDAP Installation" for full step-by-step instructions on installing an LDAP server. If you are not following the LDAP installation that I have recommended, then please note that you'll need to update the LDAP configuration in the "login.config" and "attribute-resolver.xml" files (which we'll see later in this tutorial) based on your LDAP installation.

                                            Step 1: Download Required Software
                                            Apache Directory Studio RCP Application (for connecting to LDAP server and looking at records): https://directory.apache.org/studio/downloads.html

                                            Installation of software mentioned above is self-explanatory and you just have to follow the instructions on screen.

                                            Step 2: Import Sample User Data for LDAP
You can download an LDIF file from https://github.com/suryakand/tutorials/blob/master/ldap/sample-ldap.ldif and import it directly. You can import the sample user data using the "Apache Directory Studio RCP Application" that you downloaded in step #1.
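If you want to craft your own data instead, an LDIF entry for a user has roughly this shape (the DN and attribute values below are illustrative; check the sample file for the actual ones):

```
dn: uid=user1,ou=users,dc=blogsaml,dc=com
objectClass: inetOrgPerson
objectClass: organizationalPerson
objectClass: person
objectClass: top
uid: user1
cn: User One
sn: One
mail: user1@blogsaml.com
userPassword: password
```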

                                            2.       Configure your HOST file (I am assuming that you are on a windows machine)
                                            Add following configuration to host file:
                                            127.0.0.1                                      www.blogsaml.com

3.       Tomcat Installation (Tomcat 8 is preferred, but Tomcat 7 will also work)
Download Tomcat from https://tomcat.apache.org/download-80.cgi (direct link: http://www.eu.apache.org/dist/tomcat/tomcat-8/v8.0.27/bin/apache-tomcat-8.0.27.zip) and extract the Tomcat archive into the INST_HOME directory.

                                            4.       Shibboleth
                                            Step 1: Download Shibboleth & extract it in INST_HOME

                                            Step 2: Install Shibboleth
Installation of Shibboleth is a fairly easy process. Go to the "INST_HOME/shibboleth-identityprovider-2.4.4" directory (created after extracting the Shibboleth zip downloaded in step #1). Run the "install.bat" file and it'll ask you for a few inputs on screen. Enter the following values (make sure that you use these exact values; I'll explain why later in this article):
                                            Installation Directory: c:/saml_idp (we’ll refer this directory as IdP_HOME in this article)
                                            Host Name: www.blogsaml.com
                                            Password for key store: saml

Once installation is completed you'll see an idp.war file in the IdP_HOME/war folder. Copy this war file to your TOMCAT_HOME/webapps folder and start Tomcat. You'll also need to copy a few additional JAR files into the TOMCAT_HOME/webapps/idp/WEB-INF/lib folder (you can download these JAR files from https://github.com/suryakand/tutorials/tree/master/shibboleth/saml_idp/extra-libs). Restart your Tomcat server after copying the files.

                                            Step 3: Shibboleth Configuration
I have uploaded a preconfigured set of files that you can download from https://github.com/suryakand/tutorials/tree/master/shibboleth/saml_idp. I recommend keeping these files handy so that you can follow this tutorial better. When you install Shibboleth, these files are created by the installer in the IdP_HOME/conf directory based on the parameters that you supplied during installation (in step #2 above). There are 8 configuration files, all available in the IdP_HOME/conf folder. We'll look at the important sections of these files one by one.

The main goal of this tutorial is to make you familiar with the IdP (Shibboleth), the Service Provider (AEM) and the various configurations so that you understand everything end to end. If you are short of time and just want to get up and running with the IdP and play with the AEM configuration, then you can simply copy the configuration files (conf, credentials and metadata), replace them in your IdP_HOME folder, and go to Step #4.

First, we'll look at the files in the IdP_HOME/conf folder.

a)      Attribute Resolver Configuration (attribute-resolver.xml): The attribute resolver's main function is to read attributes (mainly related to a principal/user) from the source database (RDBMS, LDAP, etc.) and provide them to the Service Provider (in our case AEM) after filtering within the IdP (in our case Shibboleth). There are two main components/elements in this file that we need to focus on:
·         Attributes: There are different types of attributes that we can define, e.g. a simple attribute (just a name-value combination) or a scoped attribute (similar to a simple attribute, but with a scope); here scope refers to the Service Provider (i.e. to which Service Provider a scoped attribute is available). Attributes have a dependency on DataConnectors.
·         Data Connector: The DataConnector element tells the IdP how to connect to a database of users (RDBMS, LDAP, etc.) to fetch information/records.
So, to connect the dots, we can say that attributes are fetched by the Identity Provider (IdP) using a data connector and are filtered (how, we'll see in the next section) before being given to the service provider.
                                            You can read more about attribute resolver at https://wiki.shibboleth.net/confluence/display/SHIB2/IdPAddAttribute
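As an illustration, a simple attribute backed by an LDAP data connector might look like the fragment below (the element names follow the Shibboleth 2 resolver schema, but the IDs, URL and DNs are assumptions that must match your LDAP setup):

```xml
<resolver:AttributeDefinition xsi:type="ad:Simple" id="uid" sourceAttributeID="uid">
    <resolver:Dependency ref="myLDAP" />
    <!-- The encoder controls how the attribute appears in the SAML response -->
    <resolver:AttributeEncoder xsi:type="enc:SAML2String" name="uid" />
</resolver:AttributeDefinition>

<resolver:DataConnector xsi:type="dc:LDAPDirectory" id="myLDAP"
        ldapURL="ldap://localhost:10389"
        baseDN="ou=users,dc=blogsaml,dc=com"
        principal="uid=admin,ou=system"
        principalCredential="secret">
    <dc:FilterTemplate>
        <![CDATA[ (uid=$requestContext.principalName) ]]>
    </dc:FilterTemplate>
</resolver:DataConnector>
```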

b)      Attribute Filter Configuration (attribute-filter.xml): An attribute filter policy describes which attributes are sent to a particular service provider. The default attribute filter policy file is IdP_HOME/conf/attribute-filter.xml. You can read more about attribute filters at https://wiki.shibboleth.net/confluence/display/SHIB2/IdPAddAttributeFilter
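For example, a minimal policy that releases the "uid" attribute to any relying party could be sketched like this (syntax per the Shibboleth 2 filter schema; the policy id is made up):

```xml
<afp:AttributeFilterPolicy id="releaseUidToAnyone">
    <!-- ANY means this policy applies to every relying party -->
    <afp:PolicyRequirementRule xsi:type="basic:ANY" />
    <afp:AttributeRule attributeID="uid">
        <afp:PermitValueRule xsi:type="basic:ANY" />
    </afp:AttributeRule>
</afp:AttributeFilterPolicy>
```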

c)       Relying Party Configuration (relying-party.xml): This file contains the relying party configurations for the IdP, for example whether SAML assertions to a particular relying party should be signed. It also includes metadata provider and credential definitions used when answering requests to a relying party. This file mainly contains three components/elements:
                                            RelyingParty: The IdP recognizes three classifications of relying parties:
                                            ·         anonymous - a relying party for which the IdP has no metadata
                                            ·         default - a relying party for which the IdP does have metadata but for which there is no specific configuration
                                            ·         specified - a relying party for which the IdP has metadata and a specific configuration
MetadataProvider: This element points the IdP at metadata for relying parties (the IdP and Service Providers). The IdP uses metadata to drive a significant portion of its internal communication logic with a relying party. The metadata contains information such as which keys to use, whether certain information needs to be digitally signed, which protocols are supported, etc. A relying party is identified within metadata by an EntityDescriptor element with an entityID attribute whose value corresponds to the relying party's entity ID. Entities may be grouped within an EntitiesDescriptor element, and this group may be given a name by means of the Name attribute. Entity groups may be nested.
When creating a specified relying party configuration you may specify either a specific entity or a group of entities. In the event that there is overlap, the most specific configuration is used; no settings are "inherited" because of this overlap. As was mentioned above, a relying party for which the IdP can find no metadata is termed an anonymous relying party.
                                            Credential: In this section we provide certificate and key of IdP.
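Putting these together, a specified relying-party entry for our AEM SP might be sketched as below (in the Shibboleth 2 schema the id attribute names the relying party and provider carries the IdP's entity ID; the exact values here are assumptions that must match your metadata):

```xml
<rp:RelyingParty id="adobecq"
        provider="https://www.blogsaml.com/idp/shibboleth"
        defaultSigningCredentialRef="IdPCredential">
    <!-- Sign assertions sent to this SP; encryption is off for simplicity -->
    <rp:ProfileConfiguration xsi:type="saml:SAML2SSOProfile"
        signAssertions="always" encryptAssertions="never" encryptNameIds="never" />
</rp:RelyingParty>
```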

d)      Handler Configuration (handler.xml): In this file we define/map ProfileHandlers (for handling SAML 1 & 2 authentication and artifact resolution requests) and LoginHandlers (for form-based login, session-based login, etc.). When connecting to an LDAP server, the LoginHandler also needs to know the LDAP configuration, which is defined in the login.config file (discussed below).

e)      login.config: For the login handler to connect to an LDAP server we need to provide LDAP server information; this information is configured in the login.config file.
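A hypothetical login.config entry for the local LDAP from step 1 might look like this (the login module class is the one shipped with Shibboleth 2; the URL, base DN and filter are assumptions based on the sample data):

```
ShibUserPassAuth {
   edu.vt.middleware.ldap.jaas.LdapLoginModule required
      ldapUrl="ldap://localhost:10389"
      baseDn="ou=users,dc=blogsaml,dc=com"
      userFilter="uid={0}";
};
```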

Now, let's look at the files in the IdP_HOME/metadata folder.

a)      Identity Provider Metadata (idp-metadata.xml): In this file we provide information about the Identity Provider, i.e. which certificates will be used by the Identity Provider for SSO and attribute signing while communicating with the Service Provider.
b)      Service Provider Metadata (adobecq.xml): In this file we provide information about the Service Provider, i.e. which certificates will be used for signing SAML messages, where to POST the SAML response after authenticating the user, and which URL the service provider can use to log out.
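Trimmed down to the essentials, the SP metadata for AEM has roughly this shape (the entityID and URL are assumptions that must line up with the AEM SAML handler configuration described later):

```xml
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata" entityID="adobecq">
  <SPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <!-- Where the IdP POSTs the SAML response; AEM listens at /saml_login -->
    <AssertionConsumerService index="1"
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
        Location="http://www.blogsaml.com:4502/saml_login" />
  </SPSSODescriptor>
</EntityDescriptor>
```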

                                            Finally, let’s look at files in IdP_HOME/credentials folder
This folder contains the certificates and keys for the Service Provider.

NOTE: If you have not done this already, you'll also need to copy a few additional JAR files into the TOMCAT_HOME/webapps/idp/WEB-INF/lib folder (you can download these JAR files from https://github.com/suryakand/tutorials/tree/master/shibboleth/saml_idp/extra-libs). Restart your Tomcat server after copying the files.

                                            Installation verification:
Make sure that your Tomcat is running, then navigate to https://www.blogsaml.com:8443/idp/status and if everything has been configured correctly you should see a screen similar to the one below:

                                            Fig 1: IdP Status Page

                                            Step 4: Configure AEM
At this point you are done with the IdP installation, and now we'll configure AEM to work with the IdP so that we can log in to AEM using SSO. I am assuming that you are working with an AEM 6.x version.
                                            NOTE: If you are running AEM 6.0 then you need to first install SP1 (https://docs.adobe.com/docs/en/aem/6-0/release-notes-sp1.html) else you’ll get 403 HTTP Error while authenticating with IdP.

                                            a)      Start your AEM Server and make sure that you have AEM SP1 installed on your AEM instance if you are running AEM 6.0.
b)      Configure the SAML handler: Go to the Felix console and open the SAML Handler Configuration window as shown below, and configure the values as shown in the screenshots:

                                            Fig 2: AEM SAML Handler

                                             Fig 3: AEM SAML Handler Details


                                            Couple of things to note here:
·         Path: this is the path for which the SAML authentication handler will come into the picture and redirect you to the IdP login page.
·         IdP URL: this is the URL on the IdP web server to which the user will be redirected for login.
·         Service Provider ID: this is just an identifier for the Service Provider and can be any string value. The point to note here is that this value should match the value configured in the "entityID" attribute of the Service Provider metadata file (adobecq.xml) in the IdP_HOME/metadata folder, and it should also match the corresponding RelyingParty configuration in the IdP_HOME/conf/relying-party.xml file.
·         Use encryption: if you have a proper certificate and keys, select this option and your SAML messages will be signed before being transported over HTTP.
·         Add to Group & Group Membership: if this checkbox is selected and a value is configured against the "Group Membership" configuration, then AEM will look for the configured attribute in the SAML response (in our case it'll look for the "group" attribute) and will add the user to that group in AEM.
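In textual form, the screenshots above boil down to roughly the following settings (the property names are assumptions read off the console fields rather than exact OSGi property keys, and the IdP URL assumes the standard Shibboleth 2 SAML2 redirect SSO endpoint):

```
path                     = /content/geometrixx
idpUrl                   = https://www.blogsaml.com:8443/idp/profile/SAML2/Redirect/SSO
serviceProviderEntityId  = adobecq
useEncryption            = false
addGroupMemberships      = true
groupMembershipAttribute = group
```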

c)       Configure the Referrer Filter: The referrer filter service is an OSGi service that allows you to configure:
·         which HTTP methods should be filtered
·         whether an empty referrer header is allowed
·         and a whitelist of servers to be allowed in addition to the server host.
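As a sketch, the corresponding OSGi configuration (PID org.apache.sling.security.impl.ReferrerFilter in Sling-based AEM versions) might look like this; treat the property names as assumptions to verify in your console:

```
# Apache Sling Referrer Filter (property names assumed)
allow.empty    = true
allow.hosts    = [www.blogsaml.com]
filter.methods = [POST, PUT, DELETE, COPY, MOVE]
```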

                                            Fig 4: AEM Referrer Config.

Fig 5: AEM Referrer Config. Detail


At this point you are all set to test your configuration. If you try to access the URL http://www.blogsaml.com:4502/content/geometrixx/en.html you'll be redirected to the IdP's login page (https://www.blogsaml.com:8443/idp/Authn/UserPassword) as shown below:

Fig 6: IdP Login Screen

Also, if you want to see the SAML XML being transported between the IdP and the Service Provider, you can use Firefox's SAML tracer plugin; here are quick screenshots from it:
Fig 7: SAML Request

Fig 8: SAML Response


Now, it's time to recap and connect the dots from the beginning. Here is what happened when you tried to access the secure URL http://www.blogsaml.com:4502/content/geometrixx/en.html:
1)      Since the URL "/content/geometrixx" is configured in the AEM SAML authentication handler as a secure URL, the SAML handler intercepts this URL and redirects the user to the IdP's server (with a SAML request XML; see the "SAML Request" screenshot above).
2)      Once the IdP receives the authentication request (from AEM), it checks whether the Service Provider is registered with the IdP (the Service Provider is registered with the IdP in the relying-party.xml file). Only once the IdP recognizes that the Service Provider is valid and registered does it redirect to the login screen (https://www.blogsaml.com:8443/idp/Authn/UserPassword).
3)      Once the user enters a username and password and submits the form, the IdP looks up the LoginHandler in the "handler.xml" file and connects to the LDAP server using the LDAP configuration provided in the "login.config" file.
4)      Once the connection to the LDAP server is set up, the IdP validates the username and password that the user entered against the records in the LDAP server and prepares the SAML response that needs to go back to the Service Provider.
5)      To prepare the SAML response with the appropriate attributes, the IdP consults the attribute-resolver.xml and attribute-filter.xml files. These files tell the IdP which attributes to release (send back) to the Service Provider in the SAML response (see the "SAML Response" screenshot for a sample response).
6)      Once the response is prepared by the IdP, it is sent back to the Service Provider via a POST request to the endpoint configured in the Service Provider metadata file (i.e. adobecq.xml). AEM by default listens for the SAML POST response at the /saml_login path.
7)      Once the SAML response is received on the AEM side, AEM reads the "uid" and "group" attributes from the response and adds the user in AEM (depending on whether AEM has been configured to add users).

I hope this article has helped you understand SAML and how it works hand-in-hand with a service provider.