Tuesday, December 24, 2013

Disable WCS SOLR search features

The steps may vary based on how far you are with your configuration; I wish there were a simpler global setting to enable / disable this feature.

Step 1: Disable "store-enhancements" Feature

Replace /apps/websphere/wcs70 with your WCS home directory
Replace -DinstanceName with your WCS instance name
Replace -DdbUserPassword with the database password used by your WCS instance

Here is a sample command

/apps/websphere/wcs70/bin/config_ant.sh -buildfile /apps/websphere/wcs70/components/common/xml/disableFeature.xml -DinstanceName=guest -DfeatureName=store-enhancements -DdbUserPassword=wcs7lab01

When you run the script to disable SOLR search, it won't roll back WAS-level profile changes; the script only rolls back the WCS EAR file and database changes.

Step 2: Delete SOLR WAS Profile

To delete the WAS profile we need to follow a couple of manual steps. If we don't delete the SOLR profile, any future re-enablement of the search feature will fail with a "SOLR profile already exists" error.

-- Delete profile
/apps/websphere/ws70/bin/manageprofiles.sh -delete -profileName guest_solr

-- Update Registry
/apps/websphere/ws70/bin/manageprofiles.sh -validateAndUpdateRegistry

-- Manually delete the solr profiles folder, E.g.
rm -rf /apps/websphere/ws70/profiles/guest_solr


Step 3: Database Tweaks

Let us assume you don't want to re-run the disablement script, and instead want to make a few DB tweaks to toggle back and forth between SOLR and non-SOLR based features.

select * from EMSPOT where name like '%search%'
and storeent_id in (select storeent_id from storeent where identifier='AuroraStorefrontAssetStore')

You should notice a record with usagetype "STOREFEATURE" for your store ID; if so, delete this record.
NOTE: Usually this is enabled for the store's asset store.

select * from SEOURLKEYWORD where storeent_id in (select storeent_id from storeent where identifier='AuroraStorefrontAssetStore')
Mark all records as inactive by setting the status column value to 0.

NOTE: As of V7 FEP5 there is no way to enable / disable SOLR from wc-server.xml.
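The two tweaks above can be expressed as SQL along these lines. This is a sketch based on the select queries shown earlier; verify the matched rows against your own schema and store identifier before running it:

```sql
-- Remove the STOREFEATURE e-marketing spot record that enables SOLR search
DELETE FROM EMSPOT
 WHERE name LIKE '%search%'
   AND usagetype = 'STOREFEATURE'
   AND storeent_id IN (SELECT storeent_id FROM storeent
                        WHERE identifier = 'AuroraStorefrontAssetStore');

-- Deactivate the SEO URL keywords for the store
UPDATE SEOURLKEYWORD SET status = 0
 WHERE storeent_id IN (SELECT storeent_id FROM storeent
                        WHERE identifier = 'AuroraStorefrontAssetStore');
```

Run the select queries first and confirm you are deleting / updating only the rows you expect.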

Step 4: Re-enable SOLR

Use these steps if you ever want to re-enable SOLR alone, assuming the store-enhancements feature is already enabled in the WCS EAR and WCS DB.

This command can be run on the WCS box or any remote box to set up the SOLR WAS profile.
Here is a sample.

/apps/websphere/wcs70/bin/./config_ant.sh -debug -buildfile /apps/websphere/wcs70/components/foundation/subcomponents/search/deploy/deploySearch.xml -DinstanceName=guest -DdbUserPassword=wcs7lab01 -DsolrHome=/apps/websphere/wcs70/instances/guest/search/solr/home -DautoConfigSearchWebserver=true -DisShareWCWebserverProduct=true > /apps/websphere/wcs70/instances/guest/logs/only_search_enable.log

Further Reading

Monday, December 23, 2013

WCS Feature Pack 6 Upgrade

1. su - root
2. Upgrade the WAS, IHS, and Plugin fix pack levels.
Refer to "Recommended fixes and settings for WebSphere Commerce Version 7" for further details.

Download WAS for your OS version from here.

Upgrade IHS, the Plugin, and WAS using the silent install response files:
<WAS_UPDATE_INSTALLER>/update.sh -options "/home/wasuser/git/wcs_scripts/install_files/ihs.update.response.txt" -silent
<WAS_UPDATE_INSTALLER>/update.sh -options "/home/wasuser/git/wcs_scripts/install_files/plg.update.response.txt" -silent
<WAS_UPDATE_INSTALLER>/update.sh -options "/home/wasuser/git/wcs_scripts/install_files/was.update.response.txt" -silent

The install files used for this upgrade can be cloned from my git project located here.

Check the output from <WC_HOME>/bin/versionInfo.sh to validate that the upgrade was successful.

Name                     IBM WebSphere Application Server - ND
ID                       ND
Build Level              cf271250.01
Build Date               12/13/12
Architecture             AMD (64 bit)

3. For a GUI install, refer to this section.
Here are the steps to install on the developer toolkit.

4. A silent install can be used to upgrade a server environment.

/WC_V7.0_-_FEP6_FOR_MP_ML/server/install -options /media/sf_linux_share/WC_V7.0_-_FEP6_FOR_MP_ML/server/responsefile.txt -silent -is:javaconsole

Check the output from <WC_HOME>/bin/versionInfo.sh to validate that the upgrade was successful.

Installed Product
Name                     IBM WebSphere Commerce
ID                       wc.fep6
Build Level              130322dev
Build Date               22/03/13

5. Restart the DMGR, node agents, and the WCS instance to validate your instance.

Friday, December 20, 2013

WCS Quick Performance Tuning Cheat sheet

Performance tuning of an application is best achieved and measured through an iterative approach. Although not a comprehensive list, the performance gains from these steps should be worth it. There isn't any silver bullet here: every application is different, and tuning settings will differ based on your application, infrastructure, architecture, and performance workload profile.

1. Tune Application

Begin with the basics: ensure you do not have any long-running queries. You can easily debug WCS SQL using the RAD SQL profiler, and make use of Oracle AWR reports to refine SQL, indexes, etc.
Validate that application logging uses Apache logging levels appropriately; on production you don't want to log details unnecessarily and cause high I/O, which will impact CPU and application performance.
Refer to my previous blog on SQL tuning to read more on this topic.

2. JDBC Thin vs. Thick Driver

While using the JDBC OCI driver with RAC, the database can be specified with an Oracle Net keyword-value pair, which substitutes for the tnsnames entry.
E.g. jdbc:oracle:oci:@MyHostString
This configuration requires an Oracle client installed on the WCS host and a TNS entry for MyHostString.
Thick drivers are written in native C and JNI code; they can potentially reduce the amount of GC, but they add the overhead of JNI calls.
The recommendation is to go with the thick driver for a RAC setup.

3. JVM Heap Size

There is a general misconception that more heap means applications will perform better. Wrong: as a best practice, never cross a max JVM heap size of 1.5 GB. More heap means fewer GC cycles, but you might spend more time in each one; every GC cycle pauses the JVM for a while and queues up requests until GC completes, so a long GC cycle under high load can overwhelm the JVM and be a perfect recipe for a JVM crash.
On a 64-bit machine you could potentially set the heap size higher than 2 GB if you are using the JVM for dynacache.

4. Webcontainer Threads

Every thread consumes additional CPU and heap, so you need to size this based on your server configuration. The rule of thumb is 3x the cores available on the box, so an 8-core box could have close to 25 threads.
It is always good practice to leave it at the default of 25/25 on an 8-core box, then reduce it if it consumes too much CPU.

5. Database Connection Pool size

This should normally be equivalent to the thread pool size; it is quite rare that you will find all threads occupied and busy concurrently. Try to keep this value slightly higher than the thread pool size.
DataSource Conn Pool Size = (# Of Webcontainer Threads)
+ (# Of WC Scheduler Threads)
+ 1 (for WC Key Manager)
+ 1 (for WC Auditing)
+ (# of Parallel and Serial MQ listener threads)
+ (# of Non-Webcontainer Threads used by Custom Code).
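As a quick sanity check, the formula above can be evaluated with illustrative numbers. All counts below are assumptions for a hypothetical deployment, not recommendations:

```shell
# Hypothetical deployment: 50 webcontainer threads, 10 WC scheduler threads,
# 4 MQ listener threads, 5 non-webcontainer threads used by custom code.
WEB=50; SCHED=10; MQ=4; CUSTOM=5
# +1 for the WC Key Manager and +1 for WC Auditing, per the formula above
POOL=$((WEB + SCHED + 1 + 1 + MQ + CUSTOM))
echo "DataSource connection pool size: $POOL"
```

With these numbers the data source pool would be sized at 71, slightly above the 50 webcontainer threads, which matches the guidance above.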

You can also explore the WAS reserve pool configuration for large application deployments; this feature is available through a feature pack, and more information is available here.


6. Server ULIMIT

This is one of the parameters you will almost certainly tune in every production environment. By default it is set to 1024; you can safely set it to 65K or unlimited for a WCS installation.
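A sketch of the corresponding change in /etc/security/limits.conf, assuming the WAS processes run as wasuser (user name and values are illustrative; adjust for your environment):

```
wasuser soft nofile 65536
wasuser hard nofile 65536
```

The new limits take effect on the next login of the user; verify with ulimit -n before restarting the JVMs.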

7. WebServer Load Balancer Configuration

IHS supports many different load-balancing algorithms. Here is an excellent tech note on how to set up the plugin configuration; ensure you define "LoadBalanceWeight" as recommended by that technote.

8. Generic JVM arguments

To generate a system dump automatically on an OOM, set the following generic JVM argument:
-Xdump:system:events=systhrow,filter=java/lang/OutOfMemoryError,request=nodumps+exclusive+prepwalk
To generate a java core, system core, heap dump, and snap dump on a user signal, configure the dump agents through the following JVM option:
-Xdump:java+heap+system+snap:events=user

Further Reading

WCS WAS Tuning

Wednesday, December 18, 2013

Websphere Commerce 64 bit Install


Very recently, WCS V7 FEP2 introduced support for 64-bit WAS. I decided to take a shot at this on my Oracle Linux 6.3 VM; review the excellent article from IBM in the "Further Reading" section if you want to weigh the merits of a 64- vs 32-bit WCS installation.

Pre-Install Task

Although we are working on a 64-bit install here, the WAS and WCS installers still depend on 32-bit OS libraries; the installation will error out if you are missing them.

By default the "yum install" command will only install the 64-bit libraries if the host OS is 64-bit. To install a 32-bit version, first execute a yum search to locate the library, then install it with the .i686 suffix. An example is shown here for installing the 32-bit version of libstdc++:

yum search libstdc++

Loaded plugins: refresh-packagekit, security
============================ N/S Matched: libstdc++ ============================
compat-libstdc++-296.i686 : Compatibility 2.96-RH standard C++ libraries
compat-libstdc++-33.i686 : Compatibility standard C++ libraries
compat-libstdc++-33.x86_64 : Compatibility standard C++ libraries
libstdc++.i686 : GNU Standard C++ Library
libstdc++.x86_64 : GNU Standard C++ Library
libstdc++-devel.i686 : Header files and libraries for C++ development
libstdc++-devel.x86_64 : Header files and libraries for C++ development
libstdc++-docs.x86_64 : Documentation for the GNU standard C++ library

Now install 32 bit version of compat-libstdc++-33 as follows.

yum install compat-libstdc++-33.i686
yum install gtk2.i686

For the record, you will notice the following error if the 32-bit version of this library is missing from the host OS:
Exception in thread "AWT-EventQueue-0" java.lang.UnsatisfiedLinkError: ic_jni (libstdc++.so.5: cannot open shared object file: No such file or directory)

Installation Task

Begin with a 64-bit OS platform supported by WCS. I have tried this on Oracle Linux 6.3, so these steps should work flawlessly on CentOS and Red Hat as well.
All of the silent install files, scripts, etc. are available at https://github.com/hariinfo/wcs_scripts

Step 1: Install Oracle 64 bit client
Make a note of the Oracle home directory and set this up in the .profile or .bash_profile of the wasuser and oracle users.
I usually make use of Oracle's excellent pre-install package before installing the Oracle client on the host OS.
This package does all the magic needed to bring your host OS up to the required configuration level for an Oracle install.

yum install oracle-rdbms-server-11gR2-preinstall

Step 2: Install IHS and WAS
The IHS version packaged with the 64-bit WAS ND software is in reality a 32-bit install; only WAS, and more importantly the JVM, is 64-bit. So don't be surprised if you continue to see a 32-bit version of IHS after the installation. Begin the installation from the downloaded copy of WAS ND 64-bit.
The part number for IBM WebSphere Commerce V7.0 WebSphere Application Server Network Deployment V7.0 (64-bit) for Linux on AMD and Intel as of this writing is "CZFA3ML".

Unzip the downloaded tar file and begin your silent install using the following steps, or perform a GUI install instead.
CZFA3ML/WAS/disk1/IHS/install -options "/home/wasuser/ihs_install_response.txt" -silent
CZFA3ML/WAS/disk2/WAS/install -options "/home/wasuser/was_install_response.txt" -silent

Step 3: WAS Upgrade
Now update WAS, the Plugin, and IHS to the required fix pack level.

Ensure that the 64-bit versions of both the WAS and WCS update installers are installed before the WAS upgrade.
The following were the latest versions as of this writing.
Install the WAS update installer (download the 64-bit version).

Install the WCS update installer (download the 64-bit version).

<WAS_UPDATE_INSTALLER>/update.sh -options "/home/wasuser/ihs.update.response.txt" -silent
<WAS_UPDATE_INSTALLER>/update.sh -options "/home/wasuser/plg.update.response.txt" -silent
<WAS_UPDATE_INSTALLER>/update.sh -options "/home/wasuser/was.update.response.txt" -silent

Step 4: WCS Install
Install WCS V7. Note that WCS does not come in 32- or 64-bit flavors, so the installer you used for 32-bit works fine for 64-bit, but support for 64-bit WAS was introduced in fix pack level 2 of WCS V7.

Step 5: WCS Fixpack 7
Upgrade WCS to fix pack 7 (the latest as of this writing; you need to be at fix pack 1 for 64-bit support).

Step 6: WCS Instance Creation
Create WCS instance and configure with Oracle database

Step 7: Federate WCS instance with DMGR

Refer to my previous blog on WAS, WCS, and IHS installation, which includes detailed steps for Steps 4 through 7.

Post-Install Validation

Name                     IBM WebSphere Application Server - ND
ID                       ND
Build Level              cf251235.04
Build Date               8/30/12
Architecture             AMD (64 bit)

The version info output should display 64 bit as the WAS architecture.

Save and synchronize the DMGR, then stop and start the server. Once the server is started, grep for the PID to validate the min and max heap size:
ps -elf | grep server
You should notice "-Xms1024m -Xmx3024m" in the grep output, which indicates WAS was started with more than a 2048 MB heap.

Further Reading


WCS install silent files and utility scripts

Saturday, November 23, 2013

Data import options with Elasticsearch

Architecturally there are two approaches to data load: at the outset, you will have to decide between a "push" and a "pull" model based on your requirements and performance goals. In this article we will explore ES data-load options for both categories.

I have sourced much of this information from the ES mailing group; in fact, this is a compilation of everything I found on the ES mailing list while researching this topic, as I did not find any tutorial or article with comprehensive information on it.

Before we jump deep into the topic, there are a few basic things to remember when indexing data in ES. The best load performance is with more shards, and the best query performance is with more replicas, so you need to find a sweet spot for your setup. In ES, all indexing goes through the primary shards. It is important to follow an iterative approach to arrive at that sweet spot: don't start with tuning in the first place; instead, let tuning recommendations trickle down from what you learn from your setup. And do remember that it takes significantly more time to index into an existing index than into an empty one.

Pull Model

River plugin 

These are built as custom plugin code deployed within ES that runs inside an ES node. They are a good fit when you expect a constant flow of data to be indexed and don't want to write an external application to push data into ES for indexing. A very good use case is indexing analytics and server logs, or data coming out of a NoSQL store like Cassandra or MongoDB.

River plugins also support import using the Bulk API. This is useful in cases where the river plugin can accumulate data up to a certain threshold before performing an import / indexing run; since the client runs within the ES node, it is cluster aware.

Push Model

curl -XPUT
This is perhaps the simplest way to index a document: you just perform a PUT on a REST endpoint. This works best during the development phase, to index documents for a few quick validations from the command line (substitute your ES host, index, and type for the placeholders):
curl -XPOST 'http://<ES_HOST>:9200/<index>/<type>' -d '{"partnumber":"HLG028_281201","name":"Modern Houseware Hanging Lamp","shortdescription":"A red hanging lamp in a triangular shape.","longdescription":"A hanging lamp with red ambient shades to add a romantic mood to your room. Perfect for your bedroom or your children's room. Easy set up so you do not have to pay electricians to set it up."}'

UDP bulk
This uses a connectionless datagram protocol; it is faster but not as reliable, since you don't get any acknowledgement of success or failure.
E.g. cat bulk.txt | nc -w 0 -u localhost 9700

HTTP Bulk API
This is useful if you have an external application that consolidates the data in a timely manner and then formats it to JSON to be indexed. It is much more reliable than UDP bulk import, as you get an acknowledgement of the index operation and can take corrective steps based on the response.
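For reference, a bulk payload such as bulk.txt above follows the Bulk API's newline-delimited format: an action line followed by a source line per document (the index and type names here are illustrative):

```
{ "index" : { "_index" : "store", "_type" : "regular", "_id" : "1" } }
{ "partnumber" : "HLG028_281201", "name" : "Modern Houseware Hanging Lamp" }
```

Every line, including the last one, must end with a newline, or the final document will be silently dropped.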

Java TransportClient bulk indexing 

Can be used within a custom ETL load that runs outside of the ES nodes; you can connect to an ES node from a remote host and index with multiple threads, and it saves a bit of HTTP overhead by using the native ES protocol. Bulk is always best, as it will try to group the requests per shard and minimize network round trips. The TransportClient is thread safe and built to be reused by several threads; while writing bulk-load code, ensure you do not create a TransportClient in a loop. Instead, send all requests through one TransportClient instance per JVM, perhaps by creating the TransportClient as a singleton.

Internally the TransportClient sends each request asynchronously and is thread safe.
Another nice thing about using a TransportClient is that it will automatically round robin across the ES nodes, and each node will then spread the bulk requests to the respective "shard bulks".

Here is a sample snippet that can be used for connecting to the ES cluster.

   ImmutableSettings.Builder clientSettings = ImmutableSettings.settingsBuilder()
              .put("http.enabled", "false")
              .put("discovery.zen.minimum_master_nodes", 1)
              .put("discovery.zen.ping.multicast.ttl", 4)
              .put("discovery.zen.ping_timeout", 100)
              .put("discovery.zen.fd.ping_timeout", 300)
              .put("discovery.zen.fd.ping_interval", 5)
              .put("discovery.zen.fd.ping_retries", 5)
              .put("client.transport.ping_timeout", "10s")
              .put("multicast.enabled", false)
              .put("discovery.zen.ping.unicast.hosts", esHosts)
              .put("cluster.name", esClusterName)
              .put("index.refresh_interval", "10") //change refresh interval to a higher value
              .put("index.merge.async", true); //change index merge to async

Here is a sample code for creating ES client and using bulk load API for indexing.

 TransportClient client = new TransportClient( clientSettings.build() );
 //Add one or more ES addresses and ports
 client.addTransportAddress(new InetSocketTransportAddress("<ES_IP>", Integer.parseInt("<ES_PORT>")));

// Create the initial bulk request builder
BulkRequestBuilder bulkRequest = client.prepareBulk();
IndexRequestBuilder indexRequestBuilder = client.prepareIndex("<ES_INDEX_NAME>", "regular");
//Build the JSON content using XContentBuilder, then add it to the bulk request:
//bulkRequest.add(indexRequestBuilder.setSource(contentBuilder));
BulkResponse bulkResponse = bulkRequest.execute().actionGet();

if (bulkResponse.hasFailures()) {
    log.info("Failed to send all requests in bulk " + bulkResponse.buildFailureMessage());
    return true;
} else {
    log.info("Elasticsearch Index updated in {} ms.", bulkResponse.getTookInMillis());
    return false;
}

Performance Tuning

1. Start with tuning the index refresh rate at the time of bulk indexing. While importing a large amount of data it is recommended to disable the refresh interval by setting it to -1; you can then refresh the index programmatically towards the end of the load.

You can define the index refresh rate at the global level in config/elasticsearch.yml or at the index level; a value of -1 suppresses it, or you can set any positive duration based on your index refresh requirements.
curl -XPUT localhost:9200/test/_settings -d '{
    "index" : {
        "refresh_interval" : "-1"
    } }'
Once the bulk load completes, restore the refresh interval, for example:
curl -XPUT localhost:9200/test/_settings -d '{
    "index" : {
        "refresh_interval" : "1s"
    } }'
2. Tune the bulk thread pool size. Thread pool sizes should be tuned carefully; under most circumstances the defaults are good enough, but you can tune them based on your application requirements. For instance, if you expect data to flow into the index all the time, consider adding more threads for the bulk index operation.
Always remember the rule of thumb that every thread eats up system resources, and try to match thread counts to the number of cores.

# Search pool
threadpool.search.type: fixed
threadpool.search.size: 3
threadpool.search.queue_size: 100

# Bulk pool
threadpool.bulk.type: fixed
threadpool.bulk.size: 2
threadpool.bulk.queue_size: 300

# Index pool
threadpool.index.type: fixed
threadpool.index.size: 2
threadpool.index.queue_size: 100
3. If you want both max performance on load and max performance on search, you should use two indexes, one for the old generation and one for the new generation, connected with an index alias. Distribute the indexes over the nodes so they form two separate groups, that is, so they use different machines (for example, by shard moving or shard allocation). Set the replica level to 0 (no replicas) for the new-gen index, and forward searches only to the nodes with the old gen. After the bulk is complete, add replicas to the new gen, and switch from old to new with the help of the index alias (or by just dropping the old gen). You may see a performance hit while replicas are building up, but this is small compared to the bulk load.

4. One of the simplest and most effective strategies is to simply start with a no-replica index. Once indexing is done, increase the number of replicas to the number you want. This reduces the load while indexing.
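The replica bump in point 4 is a single index-settings update. The body below (index name and replica count are illustrative) can be sent with the same curl -XPUT .../_settings call used for the refresh interval earlier:

```
{ "index" : { "number_of_replicas" : 2 } }
```

The cluster will then build the replica copies in the background; watch cluster health return to green before sending search traffic to them.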

Saturday, November 16, 2013

Cache Invalidation

Cache invalidation can quickly become one of the major bottlenecks of any ecommerce site, and given that we end up with multiple cache layers in a complex architecture, it is important to coordinate the sequence in which caches are evicted without impacting site stability.

In this section we will explore server side cache management in WCS and some of the common pitfalls and design considerations.

WCS in general has a pretty good architecture for server-side caching: it inherits the WAS dynacache infrastructure, which is quite mature, and adds WCS components for cache management on top of it.

Local Cache Topology

Cached components are stored locally in the JVM; dynacache takes a slice of the JVM heap space to store cached data. Hence, on a 32-bit architecture you will very quickly run out of room if you need to store more than 2 GB of data; at that point you should explore a 64-bit JVM so the local cache can store more data.
Local cache is a good candidate for smaller clusters. For larger implementations, maintaining a cached copy of the data per JVM and the resulting invalidation traffic can become overwhelming, and hence a remote cache option should be explored.

Remote Cache Topology

WCS currently only supports WXS as a remote cache server. The plugin is available OOB and works seamlessly without any need to modify code. In the remote topology, dynacache content is stored remotely in the WXS server, and invalidation happens across multiple WXS dynacache containers. This configuration is preferred for sites that store large amounts of data in dynacache and need to refresh and invalidate the cache often.

Cache Invalidation Steps

For both local and remote topologies, WCS makes use of the Dynacache invalidation job for cache eviction. In a topology where dynacache is local to the JVM in a clustered environment, the invalidation job depends on the dynamic cache service to replicate invalidations of the local cache.
WAS DRS (Dynamic Replication Service) internally depends on the WAS HAManager service for replication; hence it is important that you configure and turn on both DRS and HAManager, failing which you may see inconsistent cache content across the cluster.

Step 1: Any changes to data on the stage / authoring server are captured by WCS triggers for the table, and the delta changes are populated in the stgprop table.

Step 2: Data from stgprop is populated into the corresponding tables in the production database, and an entry is created in CACHEIVL for every create, update, or delete operation performed on the table. The entries represent the cache dependency IDs that should be invalidated, so a fresh cache entry is built by reading the respective datastore. The cache dependency IDs are defined in the WCS cachespec.xml.

Step 3:
The WCS DynaCacheInvalidation job processes CACHEIVL records. The records are not deleted or marked as processed; in other words, there is no way you can query all processed records from the CACHEIVL table.
Instead, the job uses the special fields startTime and startTimeNanos to identify the timestamp of the records to be processed from CACHEIVL on the next run. When the job runs, its state is changed to 'R', meaning it is currently running.


SCHCONFIG.SCCQUERY by default has a value of 'startTime=0&startTimeNanos=0' for the very first execution of this job. startTime and startTimeNanos are timestamps in long format; towards the end of the job execution they are updated to the timestamp of the last record processed from the CACHEIVL table.

UPDATE SCHCONFIG  SET SCCHOST = ?(null), SCCSTART = ?(9/30/12 9:20 PM), STOREENT_ID = ?(0), SCCPRIORITY = ?(1), SCCSEQUENCE = ?(0), SCCRECDELAY = ?(0), SCCACTIVE = ?('A'), SCCRECATT = ?(0), SCCAPPTYPE = ?(null), SCCPATHINFO = ?('DynaCacheInvalidation'), MEMBER_ID = ?(-1,000), SCCQUERY = ?('startTime=1348772158228&startTimeNanos=228000000'), INTERFACENAME = ?(null), SCCINTERVAL = ?(100), OPTCOUNTER = ?(29,941) WHERE SCCJOBREFNUM = ?(52,002) AND OPTCOUNTER = ?(29,940)

The next run of the job then processes the records from where it left off last time, based on the updated values of startTime and startTimeNanos.


Hint: To reprocess all the records from CACHEIVL, run the following update query:

update schconfig set SCCQUERY='startTime=0&startTimeNanos=0' where
sccpathinfo='DynaCacheInvalidation' and SCCACTIVE='A';

Once the job completes, a record is inserted into the SCHSTATUS table.

INSERT INTO SCHSTATUS (SCSINSTREFNUM, SCSEND, SCSRESULT, SCSQUEUE, SCSSTATE, SCSSEQUENCE, SCSACTLSTART, SCSATTLEFT, SCSPREFSTART, SCSJOBNBR, SCSINSTRECOV, OPTCOUNTER) VALUES (?(67,078), ?(null), ?(null), ?('localhost:-2cced56d:13a3876cf3e:-8000:default'), ?('R'), ?(0), ?(10/6/12 7:26 PM), ?(0), ?(10/6/12 7:26 PM), ?(52,002), ?(0), ?(22,283))

The job state in the SCHACTIVE table is changed to 'I', which indicates it is scheduled to run again.


To clear all cache entries, create an entry in CACHEIVL with the "clearall" string, along these lines (SYSDATE is Oracle syntax; use CURRENT TIMESTAMP on DB2):
insert into cacheivl
(template, dataid, inserttime)
values (null, 'clearall', SYSDATE);
Use the following log trace components to debug issues with Dynacache

Client side logging
*=info:com.ibm.websphere.commerce.WC_SERVER=all:com.ibm.websphere.commerce.WC_CACHE=all: com.ibm.ws.cache.*=all

Logging on WXS server


Further Reading


Saturday, October 19, 2013

WAS VMM Custom Object Class mapping in Websphere Portal and Commerce

The context of this blog is to provide guidelines around customizing the WAS VMM mapping to a custom LDAP objectclass and its associated custom LDAP attributes. Before we jump into some code samples, it is a good idea to understand the basic concepts of VMM and LDAP schema extension.

This is an advanced configuration which is usually done after LDAP enablement of your Portal or Commerce server; please refer to my previous blog for LDAP enablement of WCS.

What is VMM?

VMM, or Virtual Member Manager, is a WAS component that provides an abstract interface to the underlying datastore that maintains user profiles and user roles. Right out of the gate, adapters are available for LDAP and database; VMM also provides a set of interfaces which can be implemented to develop a custom adapter for other types of data sources.

Various IBM products that run on the WAS runtime leverage WAS VMM components for repository federation, user authentication, and role management in a central repository.
For instance, WebSphere Portal can use it for user authentication and role management; similarly, WCS can use it as a central repository for user authentication.

VMM provides a basic CRUD interface to these underlying repositories; as an application developer, this means you don't have to deal with the low-level aspects of LDAP or database interaction for these operations.

How do WCS and Portal make use of VMM?

WAS VMM is configured to use a dynamic data model. By default, all of the standard attributes of the LDAP object classes top, person, organizationalPerson, and inetOrgPerson are configured OOB, but you can additionally include any custom attribute and change the mapping of LDAP standard attributes.

The LDAP inetOrgPerson object class is mapped to the PersonAccount entity within VMM.

Extending LDAP Object Class
We have decided to extend the inetOrgPerson class with a custom LDAP objectclass, MyCompanyObjectClass, and we would like to include a custom attribute, wcsMemberID.
LDAP schema extension is similar to inheritance in object-oriented programming: the custom object class inherits everything from its immediate parent and defines a few additional custom attributes.

For instance, if you are using OpenDS, the location of all existing schema is OpenDS/config/schema; the directory server loads the schema files in alphanumeric order (numerals first) at startup.
Create the 98myschema.ldif definition, copy this file under OpenDS/config/schema, and restart the directory server.
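A sketch of what 98myschema.ldif might contain. The OID prefix 1.3.6.1.4.1.99999 is a placeholder; replace it with your organization's registered OID arc:

```
dn: cn=schema
objectClass: top
objectClass: ldapSubentry
objectClass: subschema
attributeTypes: ( 1.3.6.1.4.1.99999.1.1 NAME 'wcsMemberID'
  DESC 'WCS member id of the user'
  SYNTAX 1.3.6.1.4.1.1466.115.121.1.15 SINGLE-VALUE )
objectClasses: ( 1.3.6.1.4.1.99999.2.1 NAME 'MyCompanyObjectClass'
  DESC 'Company extension of inetOrgPerson'
  SUP inetOrgPerson STRUCTURAL
  MAY ( wcsMemberID ) )
```

The objectclass extends inetOrgPerson (SUP inetOrgPerson) and adds wcsMemberID as an optional attribute, mirroring the inheritance described above.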

Configuration of WCS with LDAP Custom Object Class

If you want to override LDAP standard attributes, those should be defined in wimconfig.xml.
Edit wasprofile\config\cells\localhost\wim\config\wimconfig.xml

By default, VMM maps the inetOrgPerson LDAP object class to the PersonAccount VMM entity. In this example we have extended the inetOrgPerson LDAP object class with MyCompanyObjectClass and defined a few custom attributes within it.

We can manually edit the wimconfig.xml file to override the mapping of the PersonAccount entity to MyCompanyObjectClass instead of the default inetOrgPerson LDAP object class.
Refer to the section with the following lines: <config:ldapEntityTypes name="PersonAccount"....
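A sketch of how the relevant wimconfig.xml section might look after the change (surrounding attributes are generated by your own configuration; verify against your file before editing):

```xml
<config:ldapEntityTypes name="PersonAccount" searchFilter="">
    <config:objectClasses>MyCompanyObjectClass</config:objectClasses>
</config:ldapEntityTypes>
```

Only the objectClasses value changes from inetOrgPerson to MyCompanyObjectClass; since the custom class extends inetOrgPerson, existing PersonAccount entries remain valid.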

If you want to define custom object class LDAP attributes to VMM, those should be defined in wimxmlextension.xml.
Edit wasprofile\config\cells\localhost\wim\model\wimxmlextension.xml

We now need to let WCS know how to map the custom VMM attribute to the database field in the users table. In this example we have mapped the WCS member ID from the users object to the LDAP custom attribute wcsMemberID.

Further Reading

Refer to the following link to see a list of OOB tables/attributes that can be synchronized with LDAP.

Thursday, September 19, 2013

Session Management in WCS

Compared to other commerce engines like ATG and Hybris, WCS takes a different approach by persisting session data in database tables. The idea of managing session-related data in the database promises very large WCS clusters with minimal overhead from session chattiness across JVMs: WCS JVMs need only make a database round trip to read the session state and need not replicate HttpSession objects. But at times it may cause unnecessary overhead on the database tables and thus become a major cause of concern for site stability.

We will review the design and some of the best practices that will ensure greater site stability.


#1 WCS manages session / context state in the database.

#2 WCS is packaged OOB without any dependency on the HttpSession object; as a best practice, its use is discouraged during customization as well.

#3 Only one session can be active from any one device; in other words, you cannot browse from two different browsers, or a browser and a device, at the same time as a logged-in user. The default behavior is to invalidate the session of the device/browser that was used previously.

#4 When the user state changes from generic to guest, an entry is created in CTXMGMT and multiple entries are created in CTXDATA, one for each context.

//One entry is created to track guest user state
INSERT INTO CTXMGMT (ACTIVITY_ID, CALLER_ID, STARTTIME, ENDTIME, STATUS, STORE_ID, RUNAS_ID, LASTACCESSTIME, OPTCOUNTER) VALUES (?(69,751), ?(14,003), ?(10/6/12 10:34 PM), ?(10/6/12 10:34 PM), ?('A'), ?(10,951), ?(14,003), ?(10/6/12 10:34 PM), ?(25,580))

//Multiple entries are created for each context
INSERT INTO CTXDATA (NAME, ACTIVITY_ID, SERVALUE, OPTCOUNTER) VALUES (?('com.ibm.commerce.context.audit.AuditContext'), ?(69,751), ?(null), ?(17,127))

INSERT INTO CTXDATA (NAME, ACTIVITY_ID, SERVALUE, OPTCOUNTER) VALUES (?('com.ibm.commerce.store.facade.server.context.StoreGeoCodeContext'), ?(69,751), ?('null&null&null&null&null&null'), ?(16,630))

INSERT INTO CTXDATA (NAME, ACTIVITY_ID, SERVALUE, OPTCOUNTER) VALUES (?('com.ibm.commerce.catalog.businesscontext.CatalogContext'), ?(69,751), ?('10301&null&false&false&false'), ?(24,205))

INSERT INTO CTXDATA (NAME, ACTIVITY_ID, SERVALUE, OPTCOUNTER) VALUES (?('com.ibm.commerce.context.globalization.GlobalizationContext'), ?(69,751), ?('-1&USD&-1&USD'), ?(9,048))

INSERT INTO CTXDATA (NAME, ACTIVITY_ID, SERVALUE, OPTCOUNTER) VALUES (?('com.ibm.commerce.context.base.BaseContext'), ?(69,751), ?('10951&14003&14003&-1'), ?(21,650))

INSERT INTO CTXDATA (NAME, ACTIVITY_ID, SERVALUE, OPTCOUNTER) VALUES (?('com.ibm.commerce.giftcenter.context.GiftCenterContext'), ?(69,751), ?('null&null&null'), ?(16,598))

INSERT INTO CTXDATA (NAME, ACTIVITY_ID, SERVALUE, OPTCOUNTER) VALUES (?('com.ibm.commerce.context.entitlement.EntitlementContext'), ?(69,751), ?('null&null&null&null&null&null&null'), ?(27,836))

#5 For every activity as a logged-in or guest user, the CTXDATA table is updated.

#6 When the guest clicks Logout, the session is terminated as follows.

update ctxmgmt set status = 'T', endtime = ?(10/6/12 10:47 PM), OPTCOUNTER=(CASE WHEN (OPTCOUNTER IS NULL OR OPTCOUNTER=32767) THEN 1 ELSE OPTCOUNTER+1 END) where caller_id = ?(14,004) and status = 'A'

Performance Consideration

Over a period of time IBM has provided several enhancements and fixes for session management to reduce the overall database overhead. Following is a summary of some best practices for production-level tuning of these tables.

#1 CTXMGMT and CTXDATA can accumulate excessive entries if certain things are not taken care of in your code. For instance, any command that needs to run as the generic user should override the isGeneric() method to return true; otherwise WCS will create a guest ID and add new entries to these tables.
Browse commands, for example, can run as generic users, so it is best to override isGeneric() in them. The first step should be a code review to ensure that commands which do not need any session tracking override isGeneric().
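A minimal sketch of such an override is below. The class and interface names (MyBrowseCmdImpl, MyBrowseCmd) are illustrative placeholders, and the fragment assumes the WCS runtime jars on the classpath; only the isGeneric() override itself is the point.

```java
// Illustrative sketch of a browse-style controller command that opts out
// of guest-session creation. MyBrowseCmd is a hypothetical command
// interface; adapt the class to your actual command hierarchy.
import com.ibm.commerce.command.ControllerCommandImpl;

public class MyBrowseCmdImpl extends ControllerCommandImpl implements MyBrowseCmd {

    // Returning true tells the runtime this command can run as the generic
    // user, so no guest identity (and no CTXMGMT/CTXDATA rows) is created.
    @Override
    public boolean isGeneric() {
        return true;
    }
}
```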

Refer to this technote for more details.

#2 Try to keep the number of records in these tables as low as possible; implement this as part of site database maintenance and DB clean activity.
WCS provides the ActivityCleanup and DBClean jobs for this, but for large-scale sites clearing records through the ActivityCleanup job may turn out to be overkill; try to perform this activity when traffic to the site is low.

For example, a simpler option is to delete all sessions that have not been accessed for the last X days; the following deletes all sessions that were not accessed in the last 3 days.

delete from CTXMGMT where (SYSDATE - 3) >= LASTACCESSTIME
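Assuming CTXDATA rows reference CTXMGMT through ACTIVITY_ID (as the insert statements above suggest), a companion cleanup for the child table could look like this sketch — verify foreign-key and cascade behavior in your schema before running it:

```sql
-- Remove CTXDATA rows whose parent activity no longer exists in CTXMGMT.
-- Run after the CTXMGMT delete, during a low-traffic window.
DELETE FROM CTXDATA
 WHERE ACTIVITY_ID NOT IN (SELECT ACTIVITY_ID FROM CTXMGMT);
```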

#3 Multiple 'A' entries in CTXMGMT for the same RUNAS_ID imply an issue in the code; trace it using a server-side profiler and take corrective action as needed.
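A simple diagnostic for this condition, using only the columns seen in the statements above, might be:

```sql
-- List callers holding more than one active session; each hit is a
-- candidate for code-level investigation.
SELECT RUNAS_ID, COUNT(*) AS active_sessions
  FROM CTXMGMT
 WHERE STATUS = 'A'
 GROUP BY RUNAS_ID
HAVING COUNT(*) > 1;
```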

#4 If you see index block contention on CTXMGMT / CTXDATA in the AWR reports, try applying a reverse key index and retest.
Converting to a reverse key index is an online activity and can be applied or reverted quickly. The problem is quite evident in RAC-based database setups, where the “hot” index block needs to be accessed by all instances and is bounced around the various SGAs, causing expensive block transfers between instances.
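On Oracle, converting an existing index to a reverse key index can be sketched as follows; the index name is a placeholder for your actual CTXMGMT index:

```sql
-- Rebuild the hot index as a reverse key index to spread sequential keys
-- across leaf blocks; ONLINE keeps the table available during the rebuild.
ALTER INDEX CTXMGMT_IDX REBUILD REVERSE ONLINE;

-- To revert to a normal index later:
ALTER INDEX CTXMGMT_IDX REBUILD NOREVERSE ONLINE;
```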

#5 Fragmentation of index

Since CTXMGMT and CTXDATA have among the highest insert rates, their indexes can get fragmented fairly quickly. As part of database maintenance, analyze the indexes often and rebuild them as and when needed.
Here is an interesting Oracle blog on index rebuilds.

Rebuild the index as follows; perform these activities when site traffic is low.

Alter index <index_name> rebuild online; --CTXMGMT TABLE
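To decide whether a rebuild is warranted, Oracle's INDEX_STATS view can be consulted after validating the structure. A rough sketch follows — the index name is a placeholder, and VALIDATE STRUCTURE takes a lock on the table, so run it off-peak:

```sql
-- Populate INDEX_STATS for the index, then inspect the deleted-leaf-row
-- ratio; a high DEL_LF_ROWS/LF_ROWS ratio or excessive HEIGHT suggests
-- that a rebuild may help.
ANALYZE INDEX CTXMGMT_IDX VALIDATE STRUCTURE;

SELECT NAME, HEIGHT, LF_ROWS, DEL_LF_ROWS,
       ROUND(DEL_LF_ROWS / GREATEST(LF_ROWS, 1) * 100, 2) AS pct_deleted
  FROM INDEX_STATS;
```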


Google search with following string to see all technotes related to CTXMGMT
site:http://www-01.ibm.com/support/docview.wss CTXMGMT

Tuesday, July 30, 2013

WCS vs Hybris vs ATG - Feature Smackdown

They say "beauty is in the eye of the beholder", so if you are a purist in one of these technologies you will probably never agree with my comparison matrix. These are my personal opinions; from an experience standpoint, I have the longest exposure to WebSphere Commerce and the least to ATG Commerce.
Feature comparisons are always shallow and should not be taken at face value. They serve a purpose, and it is up to you to decide what best suits your implementation. I feel all three are excellent commerce products; they serve different market segments and will continue to have a space for themselves for years to come.
...so at the end of the day you need to do your homework to see what best fits your needs.



Products compared: IBM WebSphere Commerce, Hybris, and Oracle ATG (10.2). In each category below, the entries appear in that order (WCS, Hybris, ATG).

Development Environment +1
Complexity associated with WCS development is very high. The system resources required for running IBM RAD and the associated WAS runtime are the highest among the three; organizations are often forced to use VDI, which vastly slows developer productivity.
Hybris probably ranks no. 1 in this category, the best in terms of support for a lighter development environment; it can run in an Eclipse environment with a Tomcat container.
The ATG development environment includes Eclipse, the ATG Eclipse plugin, and JBoss.
JEE Environment +1
WCS: works only on IBM WebSphere Application Server
Hybris: Hybris Server (a flavor of Tomcat), SpringSource tc Server (a flavor of Tomcat), Oracle WebLogic
ATG: IBM WebSphere, Oracle WebLogic
Total installer size +1

WCS: 4-10 GB when we include WCS, RAD and the WAS test environment

Hybris: 300 MB

ATG: 710 MB
Development Database
WCS: Apache Derby; Hybris: HSQLDB database; ATG: MySQL
Production Database +2
Microsoft SQL Server
Microsoft SQL Server
OS Support +1
WCS: Red Hat Linux
Hybris: many flavors (refer to product documentation), Mac OS
ATG: Mac OS

Framework Versatility +3
The controller layer uses a modified version of the Struts framework, the services layer uses Apache Wink, and the rest of the framework is IBM custom; the complexity associated with learning and extending its components is probably the highest of the three.

Familiar and Flexible programming model as it is based on Spring Framework.

Most probably the purest implementation of Spring without too many custom wrappers; if you know Spring, you already know how to code in Hybris.
The ATG custom IoC container is referred to as “ATG Nucleus”; they were probably the first advocates of IoC containers, even before Spring came to the limelight.

But things have evolved over time, and Spring IoC is far ahead of the ATG custom IoC framework.
HTTP Session +3
Zero session footprint: WCS does not use the HttpSession object; instead the state is maintained in DB tables.

Since the state is saved in the DB, a node failure does not impact session state and “full session failover” is supported OOB; this works very nicely with both active-active and active-passive DB configurations.

This architecture is a significant advantage in very large-scale deployments, and session management is natively supported in the WCS framework instead of depending on WAS.
Depends on the JSESSIONID.
Hybris refers to its session failover support as “semi session failover”: if a node fails, it can restore the guest session in a semi state, e.g. guest ID, cart, etc.
Depends on the JSESSIONID and the HttpSession features of the underlying JEE server.
Out of the box, nothing is persisted to the database until sign-in, but a configuration is available to persist state to the DB for anonymous users.
Caching +3
DynaCache for caching within the JVM, and advanced support for a remote cache by using IBM WXS as a central cache repository.
Hybris Region Cache is a custom caching framework; it can be extended with a custom caching provider like EHCache.

Standard Spring annotations are supported for caching objects.

Possibility to plug in other commercial third-party caching solutions like Coherence, GigaSpaces and Memcached.

ATG supports custom Distributed hybrid caching
Clustering +2
All of the standard WAS clustering features; considering WAS is a very mature JEE server, you can leverage all of its clustering and failover features.
Since Hybris is supported on various JEE containers, it cannot rely on container clustering features; the Hybris cluster solution is independent of the underlying JEE container.
Support for TCP- or UDP-based clustering.

Another important feature of the Hybris cluster is multi-tenant mode, which gives the flexibility to use a different database prefix for multiple applications in the same JVM. WCS does not support this feature and uses a common underlying DB for multi-site mode. Multi-tenant mode can be extremely useful when you would like to run multiple sites on the same instance and scale out the database based on the DB prefix.
Depends on the clustering support from underlying JEE engine.
CSR Module +2
Referred to as “Sales Center”, a rich client application for CSRs.
Sales Center is a thick client application and needs to be installed on every CSR desktop; it uses the same WCS back end as the production environment.
Referred to as “Customer Service Cockpit” is a module to support call center operations like order management
ATG provides the Service module for service center agents. The Commerce Service Center app uses its own database, which is different from the production customer-facing site DB.
Search Engine +2
WCS Search is based on SOLR search engine.

It does not really unleash all the power of SOLR, and integration with any other search engine is not supported OOB.
SOLR search and native support for Endeca Integration
ATG Search engine or Endeca
Print Module
WCS: not supported; Hybris: Hybris Print Module
PIM +2
Limited PIM features, supported by “Commerce Management Center” and Commerce Accelerator.
Supported by the Hybris PIM module. Much more advanced PIM support compared to the other two products; this is good enough for medium-scale retailers.

ATG does not support a native PIM module

Business User Tool +2

WCS provides Commerce Management Center, a Flash-based user interface supported on a limited set of browsers.
WCS supports authoring and production environments, similar to ATG.

Hybris provides the Product Cockpit tool, which allows business users to manage product and catalog information.

Referred to as BCC (Business Control Center), a web-based business user interface.

ATG supports publishing and production server, publishing server is used by business users for content creation, aggregation and version management.

Data Access Layer
EJB, BOD / DSL (similar to the iBATIS and Hibernate data access APIs)

The EJB specification supported by WCS is quite outdated, and BOD programming and the WCS query language used in BODs are pretty complex to learn and implement.

But the good news is that we have the flexibility to easily create custom tables and write custom optimized native SQL as an alternative to the EJB and DSL components.
In Hybris you have to stop thinking in terms of tables: DAOs make use of the Hybris type system for persistence and the FlexibleSearch engine for executing queries. It takes time to get into the weeds of these concepts, and it can be confusing as there is a complete shift from normal database table and native SQL concepts.

Personally, I did not like the inability to query the database directly; I think this can be a big limitation for ongoing production support.
The ATG Data Anywhere architecture provides a “Repository API” abstraction on top of multiple data sources like RDBMS, LDAP or file systems. This is another custom OR-mapping style of data access which serves as an alternative to EJB or plain JDBC. ATG boasts RQL (Repository Query Language) for writing queries against the unified repository.

I just feel it is easier to find good DBAs in the market to write optimized SQL than to learn and write queries in another non-standard language like RQL.
OMS +3
WCS does not have a module for OMS; instead, IBM Sterling Commerce is another IBM offering that integrates well with WCS to provide end-to-end OMS capabilities.
The Hybris Order Management module can serve as a full-fledged OMS.

No native support for Order management system

Unit Testing
WCS lacks a concrete unit testing framework; most components cannot be unit tested without the complex and heavy WCS runtime support.
Hybris is leaps and bounds ahead of the others in this category: it inherits all of the standard Spring unit testing support, and there is no comparison to the extent a Spring component can be unit tested independent of a JEE container.

Similar to WCS, ATG lacks native support for unit testing.
Deployment Suitability +3
Very Large retailers
Mid-sized implementation
Mid to Large size implementation
Community Adoption +2
Although commercial, the WCS documentation is very detailed; community-driven forums are not so active.

The software is not free to download: you need a PartnerWorld account to get your hands on WCS, even for development and evaluation purposes.
Hybris is a closed community, which I think will only hurt its adoption, and the documentation is not so great.

For instance, I can Google IBM Info Center pages, but finding something about Hybris is near impossible: you need to log in to the Hybris wiki and search for the details.
After the Oracle acquisition, the software is free to download for learning and evaluation purposes.

Documentation is pretty detailed and well structured


Starter Stores +3
WCS provides nearly a dozen starter stores for B2B and B2C store models. I think this is a great asset that lets development teams get started with a fully functional store in no time; of course you need to use it as a base for your customization.

Hybris Accelerators can be used to create custom starter stores for B2B and B2C store models.


ATG provides starter stores that share a common master catalog and store assets, similar to the WCS extended sites model; it also provides starter stores for independent B2C and B2B stores.


WCS: holding fort.

Hybris: growing strong.

ATG: catching up.