Wednesday, December 17, 2008

TrExMa for Firefox 0.8 Preview Release

Over at TrExMa For Firefox, you can find a long-awaited update to the TrExMa plugin for the popular TrophyManager game. Version 0.8 is available as a Preview Release.

This was an excuse to learn a little about XUL, do some jQuery, and refactor the plugin. Unfortunately, it still needs a ton of refactoring, and the code is quite soupy. This is a use-at-your-own-risk version. The plugin can crash, sometimes providing a little popup saying something's wrong, and sometimes not even offering that. It's buggy. Since I'm not a XUL or UI expert, there are a number of things that have been done in a sloppy fashion and that cause bugs. If you're using this, I would really appreciate you using the issue tracker rather than sending an email over at TM or posting in those forums. The reason is that I get notified whenever someone posts an issue.

It allows the user to see a player's abilities for all positions and dynamically calculates the loss of skill for a player being out of a favorite position. This feature works by clicking on the TrExMa for Firefox label in the status bar. You'll see a little XUL window appear in your browser. If you browse to a squad screen or a transfer list screen, you'll get a list of players to choose from. Clicking on a player will present that player's skills for all available positions.

In addition, a drop down box is available to determine what player is the best at each position. If you want to find the best ML, select ML and the plugin will produce the top 5 players on that screen in that particular position.
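The ranking step can be sketched in plain Javascript. This is only an illustration of the idea: the `topFive` helper, the player names, and the `skills` object shape are all made up here, not taken from the plugin's source.

```javascript
// Hypothetical sketch of the "best at each position" feature: given players
// with a per-position skill rating, pick the five best for the selected position.
function topFive(players, position) {
  return players
    .slice() // don't mutate the original list
    .sort(function (a, b) { return b.skills[position] - a.skills[position]; })
    .slice(0, 5);
}

// e.g. six squad players, each with a made-up ML rating
var squad = [
  { name: "Aagard",  skills: { ML: 71 } },
  { name: "Baresi",  skills: { ML: 88 } },
  { name: "Costa",   skills: { ML: 64 } },
  { name: "Dybala",  skills: { ML: 90 } },
  { name: "Eriksen", skills: { ML: 77 } },
  { name: "Figo",    skills: { ML: 83 } }
];
var bestML = topFive(squad, "ML"); // Dybala, Baresi, Figo, Eriksen, Aagard
```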

Quite a bit of the refactoring involved bringing in the jQuery Javascript library. I'm very happy with the integration of jQuery; it's an outstanding tool that just flat out works against both HTML and XUL.

If you don't like it, uninstall it and reinstall 0.6.2. If you do like it, please offer suggestions and features that you'd like to see. Still on the idea block is the ability to determine your best 11 for a given formation, but that's a little ways off.

Tuesday, December 09, 2008

A Spry Woot-Off Tracker

Although Flex is a neat tool, good ol' Javascript has been around for quite a while, and I thought I'd try to build a Woot-Off tracker using Flex's Javascript cousin, Spry. Conveniently, there's a Woot-Off going on today.

Spry is an interesting Javascript toolkit since it focuses on data extraction, presentation, and widgets. In this example, we're not using any widgets, but we are taking advantage of the Spry DataSet tools to grok the XML stream from Woot. Just like with the AIR version, we need to use a proxy to grab the Woot XML stream.

Spry's DataSet works by allowing the developer to query the data set. Since Woot is providing an RSSish feed, we use the XMLDataSet class. By using braces, the dataset's contents can be accessed using the name of the XML tag. So to grab the price, we use {woot:price}. It's relatively simple.
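Loosely, a binding like {woot:price} resolves to the named column of the dataset's current row. Here's a toy sketch of that lookup with plain objects standing in for the XML rows; `columnValue` is a hypothetical helper of my own, not part of Spry.

```javascript
// Sketch: pull a named column out of the first row of a dataset-like array,
// roughly what a {woot:price} binding does against the current row.
function columnValue(rows, name) {
  return rows.length > 0 ? rows[0][name] : undefined;
}

var rows = [{ "title": "A Gadget", "woot:price": "$19.99" }];
// columnValue(rows, "woot:price") → "$19.99"
```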

The challenging part is handling data that's not quite perfectly formatted. In this case, the percentage needs to be multiplied by 100. We do that inside an observer, which is allowed to update the contents of the dataset: simply create an observer function and change away. The description also needs to be cleaned up, since its HTML entities don't produce the needed effect. We actually want to use the tags, so we unentify them.
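The entity cleanup can be seen in isolation. This `unentify` helper (the name is my own invention) applies the same replacements the observer performs:

```javascript
// Turn the escaped HTML entities in the feed's description back into real markup
function unentify(s) {
  return s
    .replace(/&gt;/g, ">")
    .replace(/&lt;/g, "<")
    .replace(/&quot;/g, '"');
}

// unentify("&lt;b&gt;Woot!&lt;/b&gt;") → "<b>Woot!</b>"
```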

The timer is actually easier than in Flex, since the timer comes automatically with the DataSet with the loadInterval option. The only thing we need to do is speed it up and slow it down at the appropriate time.
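The speed-up decision boils down to picking an interval from the sold-out fraction. A minimal sketch: the `pollInterval` helper is illustrative (it isn't in the tracker as written), but the thresholds and intervals match the ones the tracker uses.

```javascript
// Pick the polling interval from the sold-out fraction: near the end of an
// item's run (over 95% sold), check every second instead of every 30 seconds.
function pollInterval(soldoutFraction) {
  return soldoutFraction > 0.95 ? 1000 : 30000;
}

// pollInterval(0.97) → 1000
```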

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:spry="http://ns.adobe.com/spry">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1" />
<title>Spry Woot Tracker</title>
<link type="text/css" rel="stylesheet" href=""/>
<link href="css/wootTracker.css" rel="stylesheet" type="text/css" />
<script type="text/javascript" src="includes/xpath.js"></script>
<script type="text/javascript" src="includes/SpryData.js"></script>
<script type="text/javascript">
// we don't want to use cached data, and we need to reload every 30 seconds
var dsWootInfo = new Spry.Data.XMLDataSet("WootProxy", "rss/channel/item", {useCache: false, loadInterval: 30000});

var quickCheck = false; // Is the timer sped up?

// Observer watches for when data changes and modifies the data for presentation
function wootObserver(notificationType, dataSet) {
    if (notificationType == "onDataChanged" && dataSet) {
        var data = dataSet.getData();
        var soldout = data[0]["woot:soldoutpercentage"];
        data[0]["woot:soldoutpercentage"] = soldout * 100;
        // Description contains HTML entities, fix them
        var desc = data[0]["description"];
        desc = desc.replace(/&gt;/g, ">");
        desc = desc.replace(/&lt;/g, "<");
        desc = desc.replace(/&quot;/g, '"');
        desc = desc.replace(/[\u201C\u201D]/g, '"');
        data[0]["description"] = desc;

        // Determine if it's time to speed up or slow down the polling
        if (quickCheck && soldout < 0.95) {
            quickCheck = false;
            dataSet.stopLoadInterval();
            dataSet.startLoadInterval(30000); // back to every 30 seconds
        } else if (!quickCheck && soldout > 0.95) {
            quickCheck = true;
            dataSet.stopLoadInterval();
            dataSet.startLoadInterval(1000); // nearly sold out -- check every second
        }
    }
}
dsWootInfo.addObserver(wootObserver);
</script>
</head>
<body id="wootTracker">
<noscript><h1>This page requires JavaScript. Please enable JavaScript in your browser and reload this page.</h1></noscript>
<div id="doc" spry:region="dsWootInfo">
<h1 id="wootName">{title}</h1>
<div style="float:left; padding-right: 10px">
<img src="{woot:thumbnailimage}" />
<h3 id="wootPrice">{woot:price}</h3>
<h4><a href="{woot:purchaseurl}" target="_blank">Buy This Woot</a></h4>
<h3 id="wootPercent">{woot:soldoutpercentage}% Sold Thus Far</h3>
</div>
<div id="description">{description}</div>
</div>
</body>
</html>

And here's the CSS:

body {background: #EDEDED none repeat scroll 0 0}
h1 {font-size: 182%; margin-bottom: 0.5em}
h3 {font-size: 138.5%; margin-bottom: 0.5em}
h4 {font-size: 123.1%; margin-bottom: 0.5em}
li {list-style-type: disc; list-style-position: inside}
strong {font-weight: bold}
p {margin-top: 1em}

This application can be run inside Tomcat or wherever you have a proxy running.

Monday, November 24, 2008

More Google Analytics in SAP Portal with jQuery

One of the challenges of integrating Google Analytics with SAP Portal is the Portal's tendency to create a lot of links that pop content open in a new window. Since you don't have access to the code that creates these URLs, it causes a wee bit of a headache when you want to determine which items are being clicked on in the KM, or where your users are linking out of the portal to other applications.

We can resolve some of this by using a javascript library to scrape the HTML page and insert some onclick events that will allow the items to be tracked.

How can we accomplish this?

There are two steps:

First, add access to your favorite javascript library inside the Google Analytics code. I've chosen jQuery, although you could easily use other libraries. You can do this through the ga-split-1.js file that was outlined earlier. Don't forget to change the name of the file if need be so it is not cached in users' browsers.

var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
document.write(unescape("%3Cscript src='" + gaJsHost + "' type='text/javascript'%3E%3C/script%3E"));
document.write(unescape("%3Cscript src='' type='text/javascript'%3E%3C/script%3E"));

By pulling jQuery from Google, we're improving our load time, since we won't have to wait on connections to the Portal. The risk is low, since it's Google. In addition, if jQuery isn't found, we just won't track certain types of links. We'd still get the key page-click information.
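That graceful degradation amounts to one check. A minimal sketch: the `canTrackLinks` helper is hypothetical, but the real code would simply test for `jQuery` before wiring up the link-decorating handlers.

```javascript
// Only wire up the optional jQuery-based link tracking when jQuery actually
// loaded from the CDN; core pageview tracking works either way.
function canTrackLinks(win) {
  return typeof win.jQuery !== "undefined";
}

// e.g. if (canTrackLinks(window)) { /* attach the click handlers */ }
```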

The second part is to actually use jQuery to track stuff. You might be able to use the jQuery GA plugin, but in my case, I decided to write my own javascript based upon the plugin to do the trick. I would keep this code in a separate file and load it after you've initialized the pageTracker within your PortalComponent:

function ga_decorateLink(u){
    var trackingURL = '';
    if(u.indexOf('://') == -1 && u.indexOf('mailto:') != 0){
        // no protocol or mailto - internal link - check extension
        var ext = u.split('.')[u.split('.').length - 1];
        var exts = ['pdf','doc','xls','csv','jpg','gif','mp3','swf','txt','ppt','zip','gz','dmg','xml'];
        for(var i = 0; i < exts.length; i++){
            if(ext == exts[i]){
                // Likely grabbing an item from KM, etc.
                trackingURL = '/downloads/' + u;
                break;
            }
        }
    } else {
        if(u.indexOf('mailto:') == 0){
            // mailto link - decorate
            trackingURL = '/mailto/' + u.substring(7);
        } else {
            // complete URL - check domain
            var regex = /([^:\/]+)*(?::\/\/)*([^:\/]+)(:[0-9]+)*\/?/i;
            var linkparts = regex.exec(u);
            var urlparts = regex.exec(location.href);
            if(linkparts[2] != urlparts[2]) trackingURL = '/external/' + u; /*leaving the portal*/
        }
    }
    return trackingURL;
}
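The host-matching regex at the heart of the domain check can be exercised on its own (the example hosts below are made up):

```javascript
// The same host-capturing regex used for the external-link check:
// group 2 captures the host portion of a URL.
var regex = /([^:\/]+)*(?::\/\/)*([^:\/]+)(:[0-9]+)*\/?/i;
var linkparts = regex.exec("http://www.example.com/some/page");
var hereparts = regex.exec("http://portal.example.org/irj/portal");
var external = linkparts[2] != hereparts[2]; // different hosts, so this link leaves the portal
```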

// pageTracker is already initialized on each Portal page; just wait until the entire page loads, then decorate every link
$(function(){
    $('a').each(function(){
        var u = $(this).attr('href');
        if(typeof(u) != 'undefined'){
            var newLink = ga_decorateLink(u);
            if(newLink != ''){
                $(this).click(function(){ pageTracker._trackPageview(newLink); });
            }
        }
    });
});

If you're using the default framework, be aware that you will not be able to track each and every link. Javascript cannot dive into the iframes on the page, so any links rendered inside them are out of reach, unless you can use the embedded mode for that particular iView, which eliminates the iframes.

Some fair warning here: this is from memory. I am no longer working with SAP Portal, so there is a good chance I've forgotten something. However, it did work on my last day with the Portal, at least on the version at that gig. If you run into problems, please fix them and share the fixes. Don't hold onto them. Share them with the rest of the SAP community, post them on your blog, or submit them to SDN for inclusion in their hosted materials. At the very least, post a solution in the SDN forums so that others can use this. When you get it working, it's pretty darned cool!

Friday, November 21, 2008

An AIR based Woot-Off tracker

I've been going through some Flex training this week. It's an interesting tool, and it's pretty easy to make a quick application. Unfortunately, the training has been a bit robotic, being very prescriptive on how to perform somewhat elementary programming. So it was time to take a break and actually attempt something that would be useful....or at least somewhat useful.

Since we had a Woot-Off yesterday, I decided to use Flex to write a Woot-Off tracker. A handy little AIR application to see when a new item appears. It's a simple single windowed application that polls Woot's API every 30 seconds, groks the RSSish feed, and displays information about the item being sold.

In addition, it provides handy information regarding the status of the sale in terms of percentage of items sold and a button to purchase the product, or manually check Woot for an update. If the percentage sold is above 94%, it ramps up the polling process to check Woot every second, since you never know when the BOC will appear.

The application is hardly complete. It lacks any style or substance in terms of look and feel. It also neglects the ability to run in the system tray (ala Twhirl or Tweetdeck) and update the user that an item might be selling out soon or that a new item is available. Right now it simply runs on the screen.

Of course, the trickiest part of this application is the need to run a proxy service to hit an external URL. Due to Flash's security model, and the lack of a crossdomain.xml file at Woot, you need a local service that will act as a proxy. A quick Java servlet and the very lightweight Winstone servlet container do the trick. Ideally, you would launch this app with a little batch script that spun up your servlet-based proxy and then spun up the AIR app.

So let's walk through the source. That way, all of you out there who've actually done a lot of Flex can look at this and let me know what a BOC it is. :D First we'll look at the AIR app, and then the Java-based proxy.

<?xml version="1.0" encoding="utf-8"?>
<mx:WindowedApplication xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute"
    creationComplete="init()" width="500" height="370" xmlns:utils="flash.utils.*">
    <mx:Script>
        <![CDATA[
            import flash.events.Event;
            import flash.events.MouseEvent;
            import flash.events.TimerEvent;
            import flash.net.URLRequest;
            import flash.net.navigateToURL;
            import flash.utils.Timer;
            import mx.rpc.events.ResultEvent;

            private var START_QUICK_POLL_PERCENT:Number = 0.94;

            // a 30 second and a 1 second Timer
            private var wootPing:Timer = new Timer(30000, 1000000000);
            private var wootEndPing:Timer = new Timer(1000, 1000000000);
            private var checks:int = 0;

            [Bindable] private var wootItemText:String = "No Woot Found...Yet";
            [Bindable] private var wootItemPrice:String = "$10,000,000.00";
            [Bindable] private var wootItemPercent:String = "0% Sold";
            [Bindable] private var wootItemLink:String = "";
            [Bindable] private var checkText:String = "Checked\n0 Times";
            [Bindable] private var itemImgURL:String = "";

            // Setup the Timers, start the default timer, and load the first Woot
            private function init():void {
                wootPing.addEventListener(TimerEvent.TIMER, wootHandler);
                wootEndPing.addEventListener(TimerEvent.TIMER, wootHandler);
                wootPing.start();
                getWoot();
            }

            // Handles the purchase button to open your browser
            private function openWootWindow(event:MouseEvent):void {
                var u:URLRequest = new URLRequest(wootItemLink);
                navigateToURL(u, "_blank");
            }

            // Generally use the Event to handle updating the app
            private function wootHandler(event:Event):void {
                getWoot();
            }

            // Hits the API.
            private function getWoot():void {
                checkCount.text = "Checked\n" + ++checks + " times";
                wootService.send();
            }

            // Updates all of the items when the HTTPService completes.
            // NB: the result paths below are reconstructed from the rss/channel/item feed structure
            private function wootResultHandler(event:ResultEvent):void {
                var item:Object = event.result.rss.channel.item;
                wootItemText = item.title;
                wootItemPrice = item.price;
                wootItemLink = item.purchaseurl;
                wootDesc.htmlText = item.description;
                wootImage.source = item.thumbnailimage;
                // Determine if we need to do percentage checking.
                if (String(item.wootoff) != "true") {
                    wootItemPercent = "This is not a Woot-Off";
                } else {
                    var percentNum:Number = Number(item.soldoutpercentage);
                    wootItemPercent = (percentNum * 100) + "% Sold"; // the feed reports a fraction
                    // Do we need to start checking more often?
                    if (!wootEndPing.running && percentNum > START_QUICK_POLL_PERCENT) {
                        wootPing.stop();
                        wootEndPing.start();
                    } else if (wootEndPing.running && percentNum < START_QUICK_POLL_PERCENT) {
                        wootEndPing.stop();
                        wootPing.start();
                    }
                }
            }
        ]]>
    </mx:Script>

    <mx:HTTPService url="http://localhost:8080/WootProxy"
        id="wootService" result="wootResultHandler(event)" />

    <mx:VBox left="5" right="5" top="5" bottom="5">
        <mx:Label id="itemText" text="{wootItemText}" fontSize="12" fontWeight="bold"/>
        <mx:Canvas width="100%">
            <mx:Button toolTip="Click to purchase" label="Purchase" click="openWootWindow(event)" y="152" x="0"/>
            <mx:TextArea height="300" id="wootDesc" left="150" right="0" />
            <mx:Image id="wootImage" width="142" height="116" left="0" top="0" />
            <mx:Label id="checkCount" text="{checkText}" x="0" y="212" height="52" width="142"/>
            <mx:Button toolTip="Click to load Woot" label="Check" click="getWoot()" y="182" />
            <mx:Label id="itemPrice" text="{wootItemPrice}" y="124" fontStyle="italic" fontSize="12" x="0" width="142"/>
            <mx:Label id="itemPercent" text="{wootItemPercent}" y="272" x="0" width="142"/>
        </mx:Canvas>
    </mx:VBox>
</mx:WindowedApplication>

And finally the Proxy:

/*
 * To change this template, choose Tools | Templates
 * and open the template in the editor.
 */
package org.woot.tracker;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * @author student
 */
public class WootProxy extends HttpServlet {

    static final long serialVersionUID = 1L;

    /**
     * Processes requests for both HTTP <code>GET</code> and <code>POST</code> methods.
     * @param request servlet request
     * @param response servlet response
     * @throws ServletException if a servlet-specific error occurs
     * @throws IOException if an I/O error occurs
     */
    protected void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {

        String contentObj = ""; // set this to the Woot RSS feed URL

        URL content = null;

        if (null == contentObj || contentObj.length() == 0) {
            throw new ServletException("The destination url must be specified for ProxyHttpService");
        }
        try {
            content = new URL(contentObj);
        } catch (MalformedURLException e) {
            throw new ServletException(contentObj + " is a malformed url.");
        }
        HttpURLConnection contentCon = null;
        try {
            contentCon = (HttpURLConnection) content.openConnection();
        } catch (IOException exception) {
            throw new ServletException("Problem opening " + contentObj + ": " + exception.toString());
        }

        // Get the content type from the URLConnection and set it on the response.
        String contentType = contentCon.getContentType();
        response.setContentType(contentType);

        // Get and read the input stream.
        StringBuffer buffer = new StringBuffer();

        BufferedReader din =
                new BufferedReader(new InputStreamReader(contentCon.getInputStream()));

        String s;
        while ((s = din.readLine()) != null) {
            buffer.append(s);
        }
        din.close();

        // Now write the bytes out to the client.
        byte[] contentBytes = buffer.toString().getBytes();
        OutputStream out = response.getOutputStream();
        out.write(contentBytes, 0, contentBytes.length);
    }

    // <editor-fold defaultstate="collapsed" desc="HttpServlet methods. Click on the + sign on the left to edit the code.">
    /**
     * Handles the HTTP <code>GET</code> method.
     * @param request servlet request
     * @param response servlet response
     * @throws ServletException if a servlet-specific error occurs
     * @throws IOException if an I/O error occurs
     */
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        processRequest(request, response);
    }

    /**
     * Handles the HTTP <code>POST</code> method.
     * @param request servlet request
     * @param response servlet response
     * @throws ServletException if a servlet-specific error occurs
     * @throws IOException if an I/O error occurs
     */
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        processRequest(request, response);
    }

    /**
     * Returns a short description of the servlet.
     * @return a String containing servlet description
     */
    public String getServletInfo() {
        return "Short description";
    }// </editor-fold>
}
There it is, enjoy :)

Monday, October 13, 2008

TrExMa for Firefox 0.6.2

TrExMa 0.6.2 is finally available. It should resolve most of the issues in getting a TrExMa rating since the update of TM's web site.

Wednesday, October 08, 2008

SAP Portal Javascript/CSS Service Enhanced

The service was enhanced today.

What I discovered whilst doing more testing is that the service inserted the content very early in the lifecycle of the Portal.

So let's say you inserted your CSS, which fixes a bunch of SAPisms you can't fix using the theme editor. Using the service, you'd get output that looked like this:

<link href="/irj/portalapps/" rel="stylesheet"/>
<link href="/irj/portalapps/" rel="stylesheet"/>

<!-- EPCF: BOB Core -->
<meta content="text/javascript" http-equiv="Content-Script-Type"/>
<script src="/irj/portalapps/"/>
// Snipped 30 lines of script
<!-- EPCF: EOB Core -->

<!-- HTML Business for Java, 645_VAL_REL, 477869, Tue Feb 26 13:23:36 EST 2008 -->
<!-- HTMLB: begin VARS -->
<script language="JavaScript">
ur_system = {doc : window.document , mimepath :"/irj/portalapps/", stylepath : "/irj/portalapps/", emptyhoverurl : "/irj/portalapps/", is508 : false, dateformat : 1, domainrelaxing : "MINIMAL"};
<!-- HTMLB: end VARS -->
<link type="text/css" href="http://your/css.css" rel="stylesheet"/>
<!-- ...followed by a bunch of scripts... -->

Oops, you're not after the theme at all. You'd have to use !important all over the place. How does one resolve this?

You enhance the service. In this case, I used an example found by decompiling the LAFService, which is the actual theme service. It provided examples of how to implement and use new IResource and IResourceInformation objects. Here's the ExternalResource IResource object:

import java.io.Serializable;

import com.sapportals.portal.prt.resource.IResource;
import com.sapportals.portal.prt.resource.IResourceInformation;

/**
 * Describes a resource which resides outside of the Portal landscape.
 * Such resources could be external Javascript toolkits or CSS pages.
 */
public class ExternalResource implements IResource, Serializable {

    private IResourceInformation mm_resInfo;

    public ExternalResource(){}

    public IResourceInformation getResourceInformation(){
        return mm_resInfo;
    }

    public void init(IResourceInformation resourceInformation) {
        mm_resInfo = resourceInformation;
    }

    public boolean isAvailable() {
        return mm_resInfo != null;
    }
}

It doesn't do much, does it? All of the work happens in the init method and in the ExternalResourceInformation object, which is an IResourceInformation:

import java.io.Serializable;

import com.sapportals.portal.prt.component.IPortalComponentRequest;
import com.sapportals.portal.prt.resource.IResourceInformation;

/**
 * Describes the ResourceInformation required by a resource that lives
 * outside of the Portal landscape.
 */
public class ExternalResourceInformation implements IResourceInformation, Serializable {

    private String mm_type;
    private String mm_fileName;
    private boolean mm_useFileName;
    private String mm_URL;

    /**
     * @param resourceType - you should be using the static types defined in IResource.
     * @param URL - The URL to the resource you're trying to add
     */
    public ExternalResourceInformation(String resourceType, String URL){
        mm_type = resourceType;
        mm_URL = URL;
    }

    /* (non-Javadoc)
     * @see com.sapportals.portal.prt.resource.IResourceInformation#getComponent()
     */
    public String getComponent() {
        return "theNameOfYourService";
    }

    /* (non-Javadoc)
     * @see com.sapportals.portal.prt.resource.IResourceInformation#getType()
     */
    public String getType() {
        return mm_type;
    }

    /* (non-Javadoc)
     * @see com.sapportals.portal.prt.resource.IResourceInformation#getSource()
     */
    public String getSource() {
        return "";
    }

    /* (non-Javadoc)
     * @see com.sapportals.portal.prt.resource.IResourceInformation#getURL(com.sapportals.portal.prt.component.IPortalComponentRequest)
     */
    public String getURL(IPortalComponentRequest arg0) {
        return getURL();
    }

    public String getURL() {
        return mm_URL;
    }
}

All of the work here is done in the constructor. You just provide the URL and it will pass that to the IResource which will be included in the PortalResponse.

What's amazing about this is how easy it actually was. What's more amazing is that you need to decompile things to really understand how it works. Most IResources are BaseResource objects. Those objects are more complex since they need to ask the portal to build a URL to the resource you're attempting to include. Therefore, this method should be faster and lighter on the portal itself as well as on the browser.

One more thing to do. Enhance the service objects:

A new method signature in our interface:

public IResource getExternalCssResource(String cssURL);

New methods in our implementation:

public IResource getExternalCssResource(String cssURL) {
    return getResource(IResource.CSS, cssURL);
}

private IResource getResource(String resourceType, String URL) {
    IResource er = new ExternalResource();
    IResourceInformation ri = new ExternalResourceInformation(resourceType, URL);
    er.init(ri);
    return er;
}

That's it.

Now, if you create a simple "footer" portal component which does nothing but insert your IResource, you can have your CSS at the bottom of your page.

IPortalComponentResponse componentResponse = (IPortalComponentResponse) pageContext.getAttribute(javax.servlet.jsp.PageContext.RESPONSE);
IHtmlHeadService htmlHeadService =
    (IHtmlHeadService) PortalRuntime.getRuntimeResources().getService("com.scotts.tiller.portal.layouts.htmlheadservice.HtmlHeadService");
IResource res = htmlHeadService.getExternalCssResource("http://your/css.css");
componentResponse.include(componentRequest, res);

Now your stylesheet will appear after your portal theme.

Monday, October 06, 2008

Hacking SAP Portal with a Javascript/CSS Service

One of the more, erm, "interesting features" of SAP Portal is the inability to directly access the HTML HEAD tag and insert SCRIPT and LINK tags pointing to your own CSS and Javascripts. Well, it's not impossible, but SAP doesn't offer it out of the box, probably because they don't want you breaking things.

But let's say you want to use the Dojo Toolkit or YUI in a hip new AbstractPortalComponent, and you don't want to download and host the scripts locally. You wish to use AOL's CDN or Yahoo's CDN to load the javascript; it's faster and more reliable. How can you accomplish this?

The answer is, you need to write a new service to access the HTML HEAD.

Create a service inside NWDS and call it, HtmlHeadService. NWDS will create an interface for the service and an implementation. Go to IHtmlHeadService and insert the following method signatures:

package com.portal.htmlheadservice;

import com.sapportals.portal.prt.service.IService;
import com.sapportals.portal.prt.component.IPortalComponentRequest;

public interface IHtmlHeadService extends IService {

    public static final String KEY = "HtmlHeadService";

    public void addScript(IPortalComponentRequest request, String scriptURL, String type);
    public void addJS(IPortalComponentRequest request, String jsURL);
    public void addLink(IPortalComponentRequest request, String linkURL, String type, String rel);
    public void addCSSLink(IPortalComponentRequest request, String linkURL);
}

This is pretty simple so far. Next look at the HtmlHeadService object:

package com.portal.htmlheadservice;

import com.sapportals.portal.prt.service.IServiceContext;
import com.sapportals.portal.prt.logger.ILogger;
import com.sapportals.portal.prt.runtime.IPortalConstants;
import com.sapportals.portal.prt.component.IPortalComponentRequest;
import com.sapportals.portal.prt.pom.IPortalNode;
import com.sapportals.portal.prt.connection.PortalHtmlResponse;
import com.sapportals.portal.prt.connection.IPortalResponse;
import com.sapportals.portal.prt.util.html.HtmlDocument;
import com.sapportals.portal.prt.util.html.HtmlHead;
import com.sapportals.portal.prt.util.html.HtmlScript;
import com.sapportals.portal.prt.util.html.HtmlLink;

public class HtmlHeadService implements IHtmlHeadService {

    private IServiceContext mm_serviceContext;
    private ILogger mm_logger;

    public void init(IServiceContext serviceContext) {
        mm_serviceContext = serviceContext;
        mm_logger = serviceContext.getLogger(IPortalConstants.SERVICE_LOGGER);
        mm_logger.info(this, "Initialization of HtmlHeadAccessor");
    }

    public void afterInit() {
        mm_logger.info(this, "After Initialization of HtmlHeadAccessor");
    }

    public void configure(com.sapportals.portal.prt.service.IServiceConfiguration configuration) {}

    public void destroy() {}

    public void release() {}

    public IServiceContext getContext() {
        return mm_serviceContext;
    }

    public String getKey() {
        return KEY;
    }

    public void addLink(IPortalComponentRequest request, String linkURL, String type, String rel) {
        HtmlHead docHead = getHtmlHead(request);
        if (docHead != null) {
            HtmlLink link = new HtmlLink(linkURL);
            // set the type/rel on the link and attach it to the head
            // (NB: these method names are reconstructed; verify against your PRT version)
            link.setType(type);
            link.setRel(rel);
            docHead.addElement(link);
        } else {
            mm_logger.severe("Could not get HtmlHead from PortalResponse");
        }
    }

    public void addCSSLink(IPortalComponentRequest request, String linkURL) {
        addLink(request, linkURL, "text/css", "stylesheet");
    }

    public void addScript(IPortalComponentRequest request, String scriptURL, String type) {
        HtmlHead docHead = getHtmlHead(request);
        if (docHead != null) {
            HtmlScript script = new HtmlScript();
            // point the script at its source and attach it to the head
            // (NB: these method names are reconstructed; verify against your PRT version)
            script.setSource(scriptURL);
            script.setType(type);
            docHead.addElement(script);
        } else {
            mm_logger.severe("Could not get HtmlHead from PortalResponse");
        }
    }

    public void addJS(IPortalComponentRequest request, String jsURL) {
        addScript(request, jsURL, "text/javascript");
    }

    /* This uses the deprecated method getHtmlDocument(). If this fails, check
     * the Web Page Composer based service cssService. It uses the exact same
     * method. If this is failing, it should be failing too.
     */
    private HtmlHead getHtmlHead(IPortalComponentRequest request) {
        HtmlHead docHead = null;
        IPortalNode node = request.getNode().getPortalNode();
        IPortalResponse resp = (IPortalResponse) node.getValue(IPortalResponse.class.getName());
        try {
            PortalHtmlResponse htmlResp = (PortalHtmlResponse) resp;
            HtmlDocument doc = htmlResp.getHtmlDocument();
            docHead = doc.getHead();
        } catch (Exception cce) {
            mm_logger.severe("Exception found: " + cce.getMessage());
        }
        return docHead;
    }
}

Here's the meat of the matter. What does this code do? It uses some undocumented objects to gain access to an HtmlDocument object, which gives you full access to the entire web page. In this case we're just grabbing the head; you could do much more if you so choose.

So what about the deprecated method getHtmlDocument()? Seems bad. Well, given that SAP is using the exact same method in the recently released Web Page Composer tool, I wouldn't be worried. WPC uses it to grab its style sheets and javascripts from the KM repository. The cool thing is, the code can be repurposed to place anything you like into the page.

How do you finalize the service? It needs a ton of SharingReferences in the portalapp.xml file to make it go. This is probably more than it needs, but cssService was using this exact string:

"connection,usermanagement, knowledgemanagement, landscape, htmlb, exportalJCOclient, exportal"

With this service you can easily create a PortalComponent that accesses external stylesheets and javascripts to give your portal that custom look and feel that it's been lacking. Some folks have used this method to change the Portal Title and other features as well. Thanks to Darrell Merryweather at SAP for the inspiration.

Friday, August 29, 2008

XSLT in SAP Portal's Knowledge Management

One of the features of SAP's Portal application is a Knowledge Management library. Think of it as a JSR-170 application that's not JSR-170 compliant.

One of the challenges of working with this library is the lack of meaningful documentation. It's difficult to parse exactly how it works by just looking at the javadocs. There are some examples of what you can do, but they require strange configurations and occasionally bouncing the Portal. Considering a bounce can take 20-30 minutes rather than seconds, that's not an ideal situation.

Let's examine an idea from the UI/Usability designer on my current project. He wanted to simply drop XML into the Portal and use XSLT to produce the look and feel he was after on individual pages.

Seems like a reasonable request. After much searching, I found this document on how to do that within SAP's KM. Go ahead, give it a read. It seems straightforward except for the bouncing of the server and the fact that it is focused more on XML documents than on XML for the sake of having HTML look proper within the portal. If you've spent any time with Firebug looking at Portal output, you'll understand where I'm coming from.

Needless to say, this seemed highly difficult to actually implement. I don't want to have to bounce a server for each page we develop or each mistake we might make with the XSLT. The velocity of development would be far too slow.

Therefore I set off on a journey to figure out just how the KM APIs work. I ended up with the following code:

import com.sapportals.portal.prt.component.*;
import com.sapportals.wcm.repository.*;
import com.sapportals.wcm.util.uri.*;
import com.sapportals.wcm.util.usermanagement.*;

import org.jdom.*;
import org.jdom.input.*;
import org.jdom.output.*;
import org.jdom.transform.*;

import javax.xml.transform.*;

public class KmXmlTransformer extends AbstractPortalComponent {

public void doContent(IPortalComponentRequest request, IPortalComponentResponse response) {
IPortalComponentProfile profile = request.getComponentContext().getProfile();
String xmlDocument = profile.getProperty("XmlDocumentPath");
String xsltDocument = profile.getProperty("XsltDocumentPath");

try { user=(; epUser = WPUMFactory.getUserFactory().getEP5User(user);
ResourceContext ctx= new ResourceContext(epUser);

RID xmlRid=RID.getRID(xmlDocument);
IResource xmlResource = (ResourceFactory.getInstance().getResource(xmlRid, ctx));

RID xslRid=RID.getRID(xsltDocument);
IResource xslResource = (ResourceFactory.getInstance().getResource(xslRid, ctx));

SAXBuilder builder = new SAXBuilder();

Document docXml =;
Document resultDoc = null;

TransformerFactory transformerFactory = TransformerFactory.newInstance();
Templates stylesheet =
transformerFactory.newTemplates(new StreamSource(xslResource.getContent().getInputStream()));
Transformer xslTransformer = stylesheet.newTransformer();

JDOMResult jdRes = new JDOMResult();
JDOMSource jdSrc = new JDOMSource(docXml);
xslTransformer.transform(jdSrc, jdRes);

resultDoc = jdRes.getDocument();

XMLOutputter outputter = new XMLOutputter(Format.getPrettyFormat());
outputter.output(resultDoc, response.getWriter());

} catch (Exception e) {
response.write("KmXmlTransformer failed: " + e.toString());
}
}
}
So what does this do? It uses JDOM and an XSLT engine to take an XML file in the repository and transform it with an XSLT file in the repository. It uses properties (XmlDocumentPath, XsltDocumentPath) to define where in the KM those files are. These are configurable so that you can simply reuse this object and just modify the properties to choose different files.

There are some issues with the code. Obviously, it's limited to a single transform in its current form. It also uses a deprecated API in the first three lines of the try block: the EP5 IUser class (com.sapportals.portal.security.usermanagement.IUser) is deprecated. Unfortunately, you can't create a ResourceContext without one. Nice professionalism by SAP to not offer an alternative.

Other than those limitations, it works pretty darn well. The only question left to analyze is how well this scales.
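One obvious scaling lever is to avoid recompiling the stylesheet on every request. Here's a minimal sketch of that idea (the class and method names are mine, not SAP's): cache the compiled Templates object, which is thread-safe, keyed by the KM path, and create a fresh Transformer per request. For illustration the XSLT comes from a String; in the component it would come from the resource's InputStream.

```java
import java.io.StringReader;
import java.util.concurrent.ConcurrentHashMap;
import javax.xml.transform.Templates;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamSource;

public class StylesheetCache {

    // Templates objects are thread-safe and expensive to build, so compile
    // each stylesheet once and share it across requests
    private static final ConcurrentHashMap<String, Templates> CACHE =
        new ConcurrentHashMap<String, Templates>();

    public static Templates get(String kmPath, String xslt) throws Exception {
        Templates compiled = CACHE.get(kmPath);
        if (compiled == null) {
            compiled = TransformerFactory.newInstance()
                .newTemplates(new StreamSource(new StringReader(xslt)));
            CACHE.putIfAbsent(kmPath, compiled);
            compiled = CACHE.get(kmPath);
        }
        return compiled;
    }

    public static void main(String[] args) throws Exception {
        // a simple identity transform to exercise the cache
        String identity =
            "<xsl:stylesheet version='1.0' "
            + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
            + "<xsl:template match='@*|node()'><xsl:copy>"
            + "<xsl:apply-templates select='@*|node()'/>"
            + "</xsl:copy></xsl:template></xsl:stylesheet>";

        Templates first = get("/documents/style.xsl", identity);
        Templates second = get("/documents/style.xsl", identity);
        System.out.println(first == second); // same compiled stylesheet reused

        // each request still gets its own (non-thread-safe) Transformer
        Transformer t = first.newTransformer();
        System.out.println(t != null);
    }
}
```

A cache like this trades a little memory for skipping the stylesheet compile on every hit; you'd also want some invalidation story if the XSLT in KM changes.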

Saturday, July 26, 2008

Updating Javascript in SAP Portal

Quick reminder to those of you who've followed my posts on Google Analytics.

Be sure that if you're updating a Javascript file that you take the time to update the version of the file. SAP Portal does this using a query string like construct after the javascript file for its OOTB components.

You can do a similar construct for your files by simply embedding a version into the file name. Following the previous GA example, you can simply update the file name to be ga-split-2.0.1.js

Of course, the next step is to update the PortalComponent to pull in the correct version of the file.
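The renaming step can be centralized so you only bump one constant. A small sketch (the helper and constant names are hypothetical, not part of any SAP API): splice the version between the base name and the .js extension.

```java
public class ScriptVersion {

    // bump this constant whenever the JS file changes; far-future Expires
    // headers make the old file name effectively immortal in browser caches
    static final String GA_VERSION = "0.1";

    // splice the version in before the .js extension,
    // e.g. "ga-split-2.js" + "0.1" -> "ga-split-2.0.1.js"
    static String versioned(String fileName, String version) {
        int dot = fileName.lastIndexOf(".js");
        return fileName.substring(0, dot) + "." + version + ".js";
    }

    public static void main(String[] args) {
        System.out.println(versioned("ga-split-2.js", GA_VERSION)); // ga-split-2.0.1.js
    }
}
```

In doContent, the component would then pull the resource with something like request.getResource(IResource.SCRIPT, "scripts/" + versioned("ga-split-2.js", GA_VERSION)), so a version bump changes the emitted file name everywhere at once.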

So why do you need to do this? Depending upon how your Portal and load balancers are set up with caching and Expires headers, you won't push the correct version of the javascript file to the browser unless you update the file name! Why? It's common practice to set Expires headers in the far future and let the browser cache the javascript file. If your setup does that, then any changes you make to the original JS file will not be pulled in unless your users happen to clear their browser cache. Since the chance of your entire user community pulling that off is minuscule, the only way to force them to get the new version of the file is to update the file name!

Also, be sure you head over and look at what Spyvee did to inspire these posts over at NetweaverCentral.

Thursday, July 17, 2008

Enhanced Google Analytics in SAP Portal

If you happened to follow my post on integrating Google Analytics with SAP Portal, and attempted to implement it, you may have found some challenges with the reports. More specifically:
  • If you're using the Light Framework (or derivation), all of your URLs are unreadable. They don't describe what is going on in the page since the Portal uses GUIDs as a URL parameter to gather the appropriate page.
  • If you're using the Default Framework (or derivation), you only show hits on your entry point. Which is great for gathering browser information, but not so much for following user activity.
  • In order to resolve this problem, you decide to add the Analytics iView to other pages in your Portal. Now all of your URLs are really unreadable. In fact, you will find that you receive multiple URLs for the same page, where the only difference is the windowID in the query string. This makes the data flat out unusable.
So, what to do?

There is a single fix that resolves both issues. The fix involves asking Portal where in the Navigation Tree you are. First, add in some imports to your code:

import com.sapportals.portal.navigation.INavigationNode;
import com.sapportals.portal.navigation.NavigationEventsHelperService;
import com.sapportals.portal.prt.runtime.PortalRuntime;
import com.sapportals.portal.prt.pom.IEvent;
import java.util.Locale;

In order to use these, you'll need to grab the corresponding portal JARs and import them into your project.
One of the methods you can override in an AbstractPortalComponent is doOnNodeReady(). This method is called once the PortalNode has been constructed. At this point, the node can ask the Portal for information. The method is implemented as follows:

protected void doOnNodeReady(IPortalComponentRequest request,IEvent arg1) {
// Get the service to access the Navigation information
NavigationEventsHelperService helperService = (NavigationEventsHelperService) PortalRuntime.getRuntimeResources().getService(NavigationEventsHelperService.KEY);
// Get your current location in the navigation tree
INavigationNode navTargetNode = helperService.getCurrentLaunchNavNode(request);
StringBuffer fullPath = new StringBuffer(navTargetNode.getTitle(Locale.ENGLISH));
// After stashing the title of the node, get the node's parent and loop
// until you've reached the top node. Stash each parent's name and build
// a navigation "path" for use later.
INavigationNode aParent = helperService.getParentNode(navTargetNode, request);
while (aParent != null && !aParent.getTitle(Locale.ENGLISH).equals("")) {
fullPath.insert(0, aParent.getTitle(Locale.ENGLISH) + "/");
aParent = helperService.getParentNode(aParent, request);
}
// store the path in a member variable that can be used inside doContent()
pageTitle = fullPath.toString();
}

Once you've created this path, you can then use it to track the page properly. Inside ga-split-2.js, you should remove the final line, which calls pageTracker._trackPageview(). Instead, you'll create a set of response.write() calls that use the pageTitle member and write a new snippet of code on each specific page.

The end of doContent will look as follows:

response.include(request, googleAnalyticsDataResource2);
response.write("<script type=\"text/javascript\">\n");
response.write("pageTracker._trackPageview(\""+ pageTitle +"\");\n");
response.write("</script>\n");


How to use the enhancements:

If you're in the light framework, it will just work. You can keep the code at the framework level and it will work on every page in the portal. If you're in the default framework, you'll need to add the code to each page that you want to track. You may want to remove the code from the framework and just track pages. The resulting reports will be far more readable and much better for your business users and portal sponsors who would likely be consuming the data (and pretty graphs) that Google Analytics provides.

Wednesday, July 02, 2008

How does a pacemaker get infected?

A friend of mine from college has an ICD in his chest. He's had it there for 20 years. Basically, if his heart gets messed up, it restarts it. Knocks him on his ass and everything if it gets triggered. It looks really funny, but apparently isn't if you're the one getting knocked down. (Insert annoying Chumbawamba song here.)

Anyway, somehow, it got infected. Now he's up at the Cleveland Clinic getting a new one. Not quite certain how an internal item gets infected, but I guess it's possible, since it happened. Love to hear how that actually happens.

Been thinking about him quite a bit recently. He's going through some pretty annoying stuff due to his condition. Hopefully it goes well and without issue and he can get home and recover soon.

Wednesday, June 18, 2008

SAP Portal and Google Analytics

Some folks over at Spyvee created a document on how to integrate Google Analytics with SAP Portal. It's a very good document, but I didn't care for the fact that the code would not be entered in a standard place for javascript.

Essentially SAP takes a lot of ownership over how objects are inserted into the portal. Ideally you'd want to place the Google Analytics code right above the closing </body> tag in your page. Portal doesn't quite let you do that. At least not with any ease.

The next best place for javascript code is at the bottom of the page, at least in SAP Portal. Why? Because you can easily place it there using an AbstractPortalComponent.

Here's some modified steps to Spyvee's document that will allow you to insert Google Analytics into your Portal Framework and track a whole lot of clicks.

Netweaver Portal Integration:

When you get to this part, create a PAR with an AbstractPortalComponent. Create something like this:


import com.sapportals.portal.prt.component.*;
import com.sapportals.portal.prt.resource.IResource;

public class GoogleAnalytics extends AbstractPortalComponent {

public void doContent(IPortalComponentRequest request, IPortalComponentResponse response) {
IResource googleAnalyticsDataResource = request.getResource(IResource.SCRIPT, "scripts/ga-split-1.js");
response.include(request, googleAnalyticsDataResource);
IResource googleAnalyticsDataResource2 = request.getResource(IResource.SCRIPT, "scripts/ga-split-2.js");
response.include(request, googleAnalyticsDataResource2);
}
}

What this code will do is pull two scripts that you will create in the scripts directory of the portal application. These two scripts will be the two parts of the ga.js code you grabbed from Google. The code is split into two pieces surrounded by script tags. So creating the following script files will do the trick:


var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));

var pageTracker = _gat._getTracker("UA-XXXXXX-3");

Obviously, don't copy this straight as you'll want your personalized tracking code instead of XXXXXXX :)

Once you've uploaded the PAR and created an iView, stash the iView at the bottom of your framework. I'm assuming you needed to customize it and aren't using the out of the box SAP framework. If you stash the iView at the bottom, and it's working, you'll see two script tags referencing your two script files, with the document.write'd tag that pulls in ga.js sitting between them.

Eventually Google will pick up that it's working and you'll begin to track your clicks. Just beware that if you're behind a firewall, you will probably get some strange results as to where your clicks are being routed depending upon your network topology. I've got requests in Ohio showing up as Chicago, which is where the google analytics call is being routed.

Tuesday, May 20, 2008

Hilliard to gain Crew Facility

According to the Dispatch, and Shawn Mitchell, Hilliard is getting the Crew Training ground.

This is undoubtedly good news for the school district and the city. But I still sit and scratch my head trying to figure out how the infrastructure will support the facility. Can you imagine the travelling hordes of teams heading to Hilliard for a large soccer tourney dealing with the Roundabout of Doom planned for the triangle at Scioto-Darby, Main and Cemetery Roads? I dread dealing with roundabouts and I'd hate to put unsuspecting teams from Pennsyltucky through that.

Either way, as a current HCSD resident, overall I think this is a good thing.

Saturday, May 10, 2008

Where'd WCSN go?

Columbus Sports Network disappeared from the Clear QAM channel 85-7 on Time Warner Columbus last week. I was hoping it would reappear but I haven't found it so far.

This thread is helpful in finding the QAM listings on TWC, but it's not up to date...since WCSN is gone. I'm guessing they decided to encrypt it, which really makes no sense since you can pick the channel up OTA. I thought that by law, they have to make this signal available through the "lifeline service", which means it should be unscrambled.

QAM has tended to work better than my downstairs TV with the cable box, so I was hoping that it would reappear in time for the Crew game tonight. I'm no longer holding my breath.

Trophy Manager Firefox Plugin

This is old news to those who browse the US/Canada board on TrophyManager, but I figured I'd stick something out here just in case someone's searching the net for information about the old Greasemonkey scripts.

trexma-for-firefox is a new project hosted at Google Code. At version 0.5, it currently implements all functionality available in the GM scripts with the exception of the training graph literals. Since Trophymanager implemented the cool flash graph, that scrape is no longer needed.

If you want to use trexma for firefox, go to the aforementioned project, and download trexma_0.5.xpi. Open this file with Firefox, and it should install the extension into your browser.


Thursday, May 01, 2008

The thing about perception

Doug MacLean had an interesting quote on 590 in TO regarding the late John McConnell's perception of him courtesy of Puckrakers:

I went back and I coached in Columbus for a year. I was the president, I was the general manager, I was running the rink, I was trying to get the Rolling Stones to play in our building … I mean, I just had too many hats. Mr. McConnell, God bless him, but he thought I was running a high school basketball team, if the truth was known. He had no idea of the size of the operation, that it really was running an NHL team and a building and everything.
If true, then Doug is responsible. In business, you are responsible for marketing yourself and making sure that your boss understands what you do and how good you are at it. If your boss doesn't "get it" then it is up to you to change the message and educate until they do.

It can be a tough lesson to learn, but ultimately, you have to take some responsibility over marketing yourself and make adjustments when needed.

Friday, April 25, 2008

Week one @ new gig

Things have been a bit quiet on the blog front. I'm digging into a new job this past week after leaving my old gig of nine years.

The new job has me diving into MOSS Sharepoint and Portlets. So far I'm diving into all sorts of tools and samples looking for ways to improve upon the current situation and find the best way to design the next generation of the tool. A lot of thinking and strategy so far, I suspect the prototyping isn't too far off.

Little bit of a culture clash since the portal was built on SAP's product and I'm encountering some folks who haven't run some basic web performance tools. They just haven't learned how to use them yet. Which is fine, because I can teach them ;)

I hope to start to offer some interesting info on MOSS and Portal development over the next few months as I dive into this. Otherwise, the smaller updates will be available through Twitter.

Wednesday, April 09, 2008

Try as I might, Facebook is still just too slow

I'm still trying with Facebook. Am I the only one out there who finds it just running slow all the time?

This was just a good excuse to retry ScribeFire ;)

Sunday, March 23, 2008

Time Warner's "Free HD" not so "Free"

Time Warner added four HD channels here in Columbus last week. This is great, since I now have two HD sets since we picked up a 26" Panasonic on clearance for the bedroom a few weeks ago (thank you tax return).

There's just one small problem. "Free HD" really means that you need a digital cable box to view the channels.

Not so free when you need to pay to rent a cable box.

You see, the Panasonic has an ATSC tuner with Clear QAM. QAM is the technology that handles transmitting a digital signal across cable lines. If the signal is in the "clear", then it can be decoded by a TV that supports Clear QAM.

In Columbus, we get CBS HD, PBS HD, PBS Ohio, PBS Plus, Columbus Sports Network, NBC HD, NBC Weather Plus, CW HD, ABC HD, My TV and Fox HD in the clear. All of these are broadcast over the air, so Time Warner is required to transmit them over cable in the clear.

According to Time Warner, "Free HD" includes Discovery Theater, TNT, STO, FSN, Vs/Golf, TBS, A&E, HGTV, and NatGeo.

If it was truly "free", I'd be seeing these channels in my bedroom like the 60 odd analog channels being picked up without a cable box.

That would be something worth marketing.

Friday, March 14, 2008

Crew Complex in Hilliard?

The Crew now has added Hilliard and Columbus to the mix for a potential training complex.

The Hilliard bid is due to a recent annexation of land next to the Municipal park. They have about 155 acres to develop: I believe it has to be the farmland south of Scioto Darby, West of Alton Darby, North of Heritage Golf Club and, East of the Municipal Park.

What's interesting about the site (if this is the site) is that right there in the middle of it is a large soccer field complex. How convenient.

This would have to be the location, in Hilliard, wouldn't it? It couldn't be East of Alton Darby, right?

Monday, March 10, 2008

Inbox Zero (aka Email Nirvana)

For the longest time, my email inbox was thousands of entries marked red. Once I got on the ETT train, and began to read more about things to enhance productivity, I dove into GTD.

I tried GTD some months ago using Google Notebook, with mixed results. I struggled with having to maintain the app, and moving email info from one message to another was just flat out annoying. So seeing more folks talk about GTD as my next step was a bit scary; it still didn't help me grok our email system, which is where "everything" gets done. As an aside, reducing email overload is one of the advantages of implementing RSS technologies to let us know when things we're interested in are updated. But that's a much larger topic.

Inbox Zero is a concept, mindset, what have you on how to manage your email. It's a concept that I discovered by searching the Outlook blogs to find information on how to implement GTD in Outlook. Instead what I found was a way to implement Inbox Zero in Outlook.

Today when I receive an email I go through the following steps:

  1. Is this email related to my job, or is it just noise? If it's noise, delete it.
  2. Is this email related to a project I'm working on (or have worked on), or is it informational, like an organizational announcement? Categorize it with a category called "Fiserv" or the project I'm working on.
  3. Determine if this is an actionable item for something I need to do. If it is, drag the email to the To-Do bar and set up a date for completion. If there's something I need to wait on in order to complete the task, mark it with a category of "@Waiting". Otherwise, do nothing.
  4. Move the email to a folder, be it a project folder, a folder of all "@Waiting" items, or whatever. Right now I'm using project-based folders.

This has been immensely freeing. Now I don't have to sort through thousands of emails to determine if I need to do something; it's now in my To-Do bar. It's easier to schedule time to work on a task by dragging the task into my calendar. It's also easier to track down what I've been doing when I go into my 1-on-1s or annual reviews. (Yes, I wish I had started this months ago.)

This is the first step on the way to implementing a GTD process. I could go two ways. One way is to drop project categories, which I like since it color codes my meeting schedule (which I now also categorize). The other is to drop project folders and use categories to sort in a @Done folder which contains everything that is completed. Or I could just keep things the way they are now, which isn't quite GTD, but it feels darn close and much better than the mess I waded through before.

Sunday, March 09, 2008

Going Old School With Time Tracking

About a year ago, my team began using a tool called Enterplicity to track projects as well as integrate time tracking. It is especially effective in managing large portfolios of projects and tracking them at executive levels.

However, I've personally been challenged when working with it. I struggled to mesh my desire to have a time tracking tool be my task tracking tool. I felt that if I was working on something, I should have a task in this tool to track it. Unfortunately, life isn't that simple. Is it appropriate to have my project manager add tasks for each little piece of the app I'm working on? It would be if I were using it as a task tracker.

Eventually I realized that I was losing productivity trying to get this tool to work for me rather than for the project manager. So, over the past month or so I've been working with a new system to keep track of my time.

It's old school. Paper and pen. Sounds crazy, but it's been highly effective.

Instead of trying to add everything into the tool, I decided it would be better to actually work the task, and then after some time, determine if it's something worth tracking at a granular level.

Enter a tool from David Seah, an interactive designer. Ryan Jacobs pointed me in the direction of David's Printable CEO series of PDFs, of which I've taken to the Emergent Task Timer. The Emergent Task Timer is a very simple bubble sheet with 15 minute increments. Each day, I write down the tasks I'm working on, be it a Project Call, Admin (email sorting, HR crud), On-Call, general project coding, specific project coding, etc., and I simply fill in the bubbles as the day goes on. I don't have the 15 minute egg timer as is suggested, but I try to keep the sheet updated about every hour. It has the added bonus of keeping the day moving and making sure I am working on something worthwhile. Filling out the bubbles means something. You don't want to have too many bubbles in Admin.

Why is this easier for me than Enterplicity? Two main reasons. I don't have to log into Enterplicity to track these random items, and I can very quickly look after a few days and see if it's worth requesting more granularity in the project plan. Once or twice a week, I log in, enter my items off of a task sheet (I generally get 2 or 3 days out of a single sheet) and move on. If the random item only took a couple of hours, I enter it into the general line item or the line item that closest matches what it is I needed to do. If it has been going on for two days, I'll ask the PM for a real task to track it.

I used to feel that Enterplicity hindered me in some respects. What I really wanted was for it to be like the ETT, but that's not what it is. It's a project management tool not a task management tool. Realizing that, and looking for my own productivity methods has been extremely freeing. Now I feel like my time in Enterplicity is accurate and useful to those doing analysis, when at times I just felt like it was in the way.

Moral of the story: Don't try to get the tool to conform to the way you do work. First, be productive. Then and only then, figure out how to deliver the accurate entries into the tracking tool your team uses.

Wednesday, March 05, 2008

Maybe I'm Getting Old

But I'm not enjoying Facebook as much as the business related social networks. I'm trying to get into it, but it's just not doing it for me yet. There seems to be a lot of silly applications which are fun, but also sort of scary. I mean, have you seen some of the quizzes you can take on that thing?

Maybe I just don't want the whole world to know what sort of a drunk I am. But maybe it's that I can't take these things in private. The whole nature of the social graph forces me to spam my friends with certain "fun" apps. And that's just not for me anymore I guess.

For whatever reason, I find looking at LinkedIn and seeing what old classmates are up to more interesting. This probably sounds infinitely boring to a lot of folks. Plaxo's Pulse feature is very intriguing since it lets me know about blog posts, Twitters and pretty much anything one wishes to share with their career network. I know Facebook has these things, but these places just feel more mature.

Yes, maybe I'm getting old.

Tuesday, February 26, 2008

Breaking The Build (Parody)

To the tune of Breaking the Girl by Red Hot Chili Peppers. Enjoy with your copy of Blood Sugar Sex Magik.

Much like the original was about a lover scorned, Breaking the Build is about a developer scorned.

Any suggestions for alternate lines are encouraged.


I am a geek
Code's all I know
For Web 2.0
You were a geek
New to the team
We were the two
Working with Seam
Four weeks until Go Live
A feeling of pride inside

Coding and hacking
Your skill set is lacking
You're breaking the build
Pager went off last night
So I switched to Ruby
But who am I fooling
We're breaking the build
It won't fix your code

It's two a.m.
I am awake
Trying to fix
Your coding mistake
You were asleep
I'm alone
Working so hard
While you're at home
I guess management was tired
When they thought you'd be a good hire

Coding and hacking
Your skill set is lacking
You're breaking the build
Pager went off last night
So I switched to Ruby
But who am I fooling
You're breaking the build
It won't fix your code

Coding and hacking
Your skill set is lacking
You're breaking the build
Pager went off last night
So I switched to Ruby
But who am I fooling
You're breaking the build
It won't fix your code

Tuesday, February 19, 2008

Decorating a CD (the Decorator Pattern)

The other day I implemented the Decorator pattern on a baby shower gift.

Sometimes those of us in the software engineering fields end up describing stuff in our own lingo and make it very difficult to translate what it is we're doing to our PMs, direct managers or customers. So when an example comes along in every day life to help explain certain concepts, it's helpful to jump on that to help draw a picture.

So as I said, the other day I implemented the Decorator pattern on a baby shower gift.

First the background. A coworker and friend I have worked with for a good six or seven years was recently blessed with his first child. We'll call him Brad. About a month prior to the date, I discovered the Rockabye Baby collection of rock-goes-lullaby albums. When I saw the Metallica album I knew it would be a great fit. After giving it a listen, courtesy of a loan from the Columbus Library, and determining it was really well done and entertaining, a group of us pitched in and got a last minute pseudo-gag gift.

Now, being computer geeks, we didn't have anything handy to adorn this gift with. We could have handed the gift to him with no wrapping, but that wouldn't feel like a gift. We needed to decorate the gift. Commonly we do this by using special paper for this, but we didn't have that either. So we used a hunter green filing folder cut up and taped lovingly around the CD.

This is the first application of the Decorator pattern. Decorators by their very nature wrap an existing object and provide additional functionality. In this case our object was the CD. Our Decorator was a file folder. The additional functionality was gift wrapping. Obviously it's a bit easier to decorate any physical object, while in software you have to write a bit of code to engulf the object you wish to decorate.

What is handy, when you decorate an object in something like Java, is that you can decorate it again. This was applicable to our gift as well. While digging through some documents, one of my co-workers, we'll call her Lori, found an old specification for a project Brad had put over a year of work into. She took the cover off of the spec, and we used that paper to wrap the CD again. It was funnier than the file folder. So we decorated the CD again.

When decorators are created, they use the object they are decorating as the argument for creation. In order to decorate something, you need to have that object. So we pass the decorated CD, which is still by its very nature a CD, into a new decorator. We now have a doubly decorated CD with even more wrapping features!

We applied a third decorator to add some bows we found in Lori's desk, and a fourth to add a To and From label.

Granted, we could have written one large Decorator to do all four things, but we didn't know we needed to do them at the time. The Decorator pattern provides the flexibility to add features when you need them, rather than needing to know all about it up front.
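The gift-wrapping story maps straight onto code. Here's a minimal Java sketch (class names invented for the story): each decorator wraps a Gift, is itself still a Gift, and adds one feature, so we can stack them in the order we discovered we needed them.

```java
interface Gift {
    String describe();
}

class CompactDisc implements Gift {
    public String describe() { return "Rockabye Baby CD"; }
}

// a decorator wraps an existing Gift and is itself still a Gift,
// so decorators can be stacked in any order, as they were needed
abstract class GiftDecorator implements Gift {
    protected final Gift wrapped;
    GiftDecorator(Gift wrapped) { this.wrapped = wrapped; }
}

class FileFolderWrap extends GiftDecorator {
    FileFolderWrap(Gift g) { super(g); }
    public String describe() { return wrapped.describe() + ", wrapped in a green file folder"; }
}

class OldSpecWrap extends GiftDecorator {
    OldSpecWrap(Gift g) { super(g); }
    public String describe() { return wrapped.describe() + ", re-wrapped in Brad's old spec"; }
}

class Bows extends GiftDecorator {
    Bows(Gift g) { super(g); }
    public String describe() { return wrapped.describe() + ", with bows"; }
}

class ToFromLabel extends GiftDecorator {
    ToFromLabel(Gift g) { super(g); }
    public String describe() { return wrapped.describe() + ", labeled To/From"; }
}

public class BabyShower {
    public static void main(String[] args) {
        // each decorator takes the object it decorates as its constructor argument
        Gift gift = new ToFromLabel(new Bows(new OldSpecWrap(new FileFolderWrap(new CompactDisc()))));
        System.out.println(gift.describe());
        // -> Rockabye Baby CD, wrapped in a green file folder, re-wrapped in Brad's old spec, with bows, labeled To/From
    }
}
```

Note that the innermost object never changes; every layer only adds behavior, which is exactly why we could add the bows and the label as afterthoughts.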

This example somewhat oversimplifies things here and there. Even so, it's a good way to explain it to someone whose idea of a Decorator is the designers on Trading Spaces.

Sunday, February 17, 2008

Yes, it's a Zune Widget

So, that item on the right hand "nav" bar, for lack of a better term, is a Zune Widget. Yep, a Zune.

OK, stop laughing now.

Anyway, I picked up one of the old 30 GB ones from Woot last November for the princely sum of $89.95 + $5 shipping. Still laughing? Seriously, it's 30 GB of music and video for $94.95. Hopefully the Zune will last longer than the Creative Vision:M did (6 months).

Anyway, the Zune software is not ideal, but it does a decent job. One of the intriguing items was the Zune social site. It keeps up to date on what I've listened to, as long as it actually knows what it is I'm listening to. It seems to have no idea who Wolfgang Parker or Fenster were, or are, which really isn't very surprising.

Anyway, that's what the little widget is up there, with its really strange images of the Bosstones, Primus and Soundgarden. I think you're supposed to connect to other Zune users, but there aren't a whole lot of them. So if you've got one...connect!

Below it is the Plaxo widget...see an earlier post about that plague and feel free to connect to me there if you're so inclined.

Friday, February 08, 2008

The Plaxo Plague

Plaxo is spreading like the plague at work due to LinkedIn connection scraping. As is typical, we're so far behind on this that Plaxo is looking to be purchased (by Google). I mean, if we were on the ball, this would have happened well before they got to that point. They were founded in 2001 after all...granted Pulse didn't launch until the summer of 2007.

That said, I like the Outlook calendar syncing. One of the comments at TechCrunch prior to the Pulse launch was, will people move over to Plaxo in place of Facebook? I'll give you one really good reason why they will. Facebook is blocked at work. Plaxo isn't. I'm sure my business isn't the only one who is blocking it. It's one of the reasons LinkedIn is big at work...not blocked.

I guess this means I should be looking at Google's OpenSocial API. Plaxo supports that. I asked LinkedIn for their API, but they never sent it along. I'm interested in some of the things that Andrew McAfee and others have talked about in using social networks to replace or augment Intranet functionality.

One of the challenges in doing that is to make sure that certain posts and links go not to your business relationships, but only to your company relationships. On a social network, I won't necessarily know when someone leaves my current company. If I were to use Plaxo, or Facebook for that matter, to augment internal blogs and wikis, I would need a way to create a network of individuals controlled by the company. I don't want someone who left the company yesterday to see links to internal URLs in their Pulse because I posted on an internal blog rather than an external one. Similarly, I wouldn't want to confuse other business relationships with links to internal items.

The privacy of intellectual property is the concern. Can the OpenSocial API help in resolving that? With a little digging, we should be able to find out.

Thursday, January 31, 2008

Now THIS is Marketing

Not only do the Wheeling Nailers go after WVU fans with Shred Rodriguez night...but they offer discounts to OSU fans as well for a "mutual distaste of Michigan".

Tuesday, January 29, 2008

Crew Blog

In the web stone ages, I worked on a Crew fan site. The site is long since dead, but it was essentially a blog on the Crew.

Now, much like the Puckrakers blog on the Jackets, the Dispatch has ponied up for Crew beat writer Shawn Mitchell.

Covering the Crew has been up since the MLS Superdraft. So far, it's been outstanding. Given the amount of inches the Crew gets in the print editions, this has been a very welcome addition. Where else are you going to get the quick scoop on the pursuit of Maciej Zurawski, with a purely Columbus perspective?

Keep it up Shawn.

Monday, January 07, 2008

In House Software vs. Software Companies (BoIB)

Originally posted on December 4, 2007 on the internal blog:

Joel Spolsky has two new posts where he provides the transcript of a speech he gave at Yale. The basic premise is his career and how he got to be where he is.

It's quite interesting, and quite frightening. I've seen some of the places he's talked about when he goes off on the "80%" of programming jobs that deal with in house software.

That’s the second reason these jobs suck: as soon as your program gets good enough, you have to stop working on it. Once the core functionality is there, the main problem is solved, there is absolutely no return-on-investment, no business reason to make the software any better. So all of these in house programs look like a dog’s breakfast: because it’s just not worth a penny to make them look nice. Forget any pride in workmanship or craftsmanship you learned in CS323. You’re going to churn out embarrassing junk, and then, you’re going to rush off to patch up last year’s embarrassing junk which is starting to break down because it wasn’t done right in the first place, twenty-seven years of that and you get a gold watch. Oh, and they don’t give gold watches any more. 27 years and you get carpal tunnel syndrome. Now, at a product company, for example, if you’re a software developer working on a software product or even an online product like Google or Facebook, the better you make the product, the better it sells. The key point about in-house development is that once it’s “good enough,” you stop. When you’re working on products, you can keep refining and polishing and refactoring and improving, and if you work for Facebook, you can spend a whole month optimizing the Ajax name-choosing gizmo so that it’s really fast and really cool, and all that effort is worthwhile because it makes your product better than the competition. So, the number two reason product work is better than in-house work is that you get to make beautiful things.

One of the interesting things about this segment is the assumption that working on products actually means that you're going to implement something that makes it better for your user. Sometimes you make it better for your backend, which is far less glamorous than working on the frontend. Sometimes cool features get cut because your customers aren't ready for them. I can think of several products that took years to incorporate cool JavaScript gizmos, let alone Ajax gizmos, because our customers were not ready for them. In essence, a segment of the customer base doesn't want the product they integrate into their online presence to look better than the rest of their online persona. There are customers I've worked with who use older technology in 10% of their application simply because they aren't ready to fund the upgrade to the newer stuff.

It's unfortunately just not that simple in every instance.

Joel also comments on management styles when recalling his time at Juno:

Eventually, though, I started to discover that the management philosophy at Juno was old fashioned. The assumption there was that managers exist to tell people what to do. This is quite upside-down from the way management worked in typical west-coast high tech companies. What I was used to from the west coast was an attitude that management is just an annoying, mundane chore someone has to do so that the smart people can get their work done. Think of an academic department at a university, where being the chairperson of the department is actually something of a burden that nobody really wants to do; they’d much rather be doing research. That’s the Silicon Valley style of management. Managers exist to get furniture out of the way so the real talent can do brilliant work.

My first question here is, who decides if the investment is worth it? I suppose if you're a startup with a ton of angel funding, it doesn't matter if it's worth it to upgrade gizmo X. But if you're a software services company, it does matter. If contracts are written in ways where it really doesn't matter if you create gizmo X, because no one's going to pay to upgrade to it, then you probably aren't going to do it. Someone has to manage that. It's not a chore in this case; it's paramount to the survival of the very company that allows you to create cool gizmos. Someone has to understand the customer and what brings in the coin so that the correct gizmo X, Y, or Z can be created.

Joel's view is a bit simplistic because he assumes that working for a software company means you're always doing what's cool. You will do cool stuff quite often, but in many cases you won't. You're going to do what provides the best return on investment for your customers. It's not always glamorous, but it's what keeps you employed. You need management because you need to sell product. Yes, managers should help get furniture out of the way, but they also need to be smart enough to step in when the team is floundering and not making the decisions needed to be successful.

I do agree with Joel on one thing, however. In house software is not as fun as working on software services. Software services gets you working with internal and external clients. It's challenging and keeps a set of social business skills nimble that you might otherwise neglect.

Book Review - Managing Humans (BoIB)

Originally posted on December 3, 2007 on the internal blog:

About three weeks ago, I got an email from the Columbus Metropolitan Library that "Managing Humans" was available. It's taken me a while due to other commitments to finish it and this review, but here goes.

Managing Humans is the first book by Michael Lopp, otherwise known as Rands from Rands in Repose. It is an updated and edited collection of essays from the Rands in Repose blog that focuses on management, specifically managing software engineers in software companies. Lopp's resume is quite extensive, taking him around Silicon Valley and back through Borland, Netscape, Apple, Symantec, and a failed startup. Through the diversity of these stops, he's come up with a set of tales about management and the various personalities you encounter in the workforce.

If you've read Rands in Repose, you know that it's very snarky and pointed in its commentary. Managing Humans tones both down, which makes the book a bit more accessible to the masses. It's very funny and "too true" in many of its passages. Lopp gives many of his characters catchy names; you will probably find yourself saying things like "I know a Fez and his name is...." or "my manager is sooooo Organic".

The catchiness of the book allows it to disseminate quality information in a concise 200 pages, and it appeals to managers and their employees alike. For managers, the importance he places on the one-on-one and on communication is very compelling. For staff, understanding who your manager is and how he thinks is a great start on figuring out how to "manage" your manager, help him succeed, and make sure he knows that you are doing it.

When you do get the book, be sure to read through the glossary which contains many terms which you should probably know if you're in software engineering. My personal favorites:

  • Synergy - A word used in close proximity to Leverage
  • Leverage - A word used in close proximity to Synergy

If you don't find that funny, then, maybe this book isn't for you. If you do, go pick it up.

I highly recommend this book for ease of reading, entertainment and insight. It's worth a purchase and a place on your bookshelf for quick reference of how to manage people to help them and you succeed.

Thursday, January 03, 2008

PDF "Trouble"

I like Jeff Atwood's Coding Horror blog. I use it as fodder for many posts at my internal work blog. It covers a lot of "looking forward" topics for my group, but it doesn't hit home as much as it could. Some of those topics are so far out that my coworkers just aren't thinking about these things yet.

That changed today.

Jeff's post on PDFs is directly relevant to things that I do on a daily (or monthly) basis. In it, he argues against PDF and for HTML, which 99% of the time is fine. The user experience is better with HTML, it can do neat things by interfacing with the browser.

However, it doesn't allow the user to save their content. PDF does. Why is this important? Because you need to save a legal document.

As an aside, my work currently consists of producing electronic statements for large companies: your monthly cable bill, your wireless bill, your credit card statement, you name it. I have produced many HTML-based statements, which are great. They interact with the end user, letting them sort transactions in some cases, or download CSV files to import into a spreadsheet. So why is it that I argue against Jeff and for PDF? Because you can save a PDF and get a file that looks and feels like the paper document I get in the mail.
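The CSV download mentioned above is the simplest of those features to sketch. A hedged illustration, with made-up transaction fields, of turning a statement's transaction list into CSV text:

```javascript
// Illustrative only: the transaction fields are invented, and a real
// statement export must also handle full quoting rules, locales, and
// currency formatting.
function toCsv(transactions) {
  var lines = ['date,description,amount'];
  transactions.forEach(function (t) {
    // Quote the description in case it contains a comma.
    lines.push(t.date + ',"' + t.description + '",' + t.amount.toFixed(2));
  });
  return lines.join('\n');
}

var csv = toCsv([
  { date: '2008-01-01', description: 'Monthly service', amount: 49.99 },
  { date: '2008-01-15', description: 'Late fee',        amount: 5.0 }
]);
```

Served with a `text/csv` content type, a string like this opens straight into a spreadsheet, which is exactly the convenience the HTML statement offers that paper and PDF don't.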

Look and feel is important for large companies. They have spent loads of cash on print document authoring software. They've spent even more on print vendor contracts or large print shops. It is important that the document they produce for paper looks the same as, or very similar to, the one on the web. It reduces customer care costs if they are the same. PDF allows for that. You can do it with HTML, and some forward-looking companies and utility startups are doing so.

I believe Jeff is simply unaware of some of the uses for PDF, and probably doesn't know about the world I interface with. But read the comments and you see a lot of users who just don't understand the advantages of PDF when it comes to statements. They overestimate the client and their legal team, as well as the end user.

As companies try to go "green", they will try to reduce paper costs, which means turning paper off for customers. As this occurs, HTML (well, actually the browsers) will have to come up with a better way to save a statement. IE can package a statement in an MHT file, but Firefox, Opera, and Safari can't. And we all know the holy wars that are unleashed when you limit your audience to one browser.

West By "Gosh" Virginia?

Has our society become so scared that we take a simple phrase and replace God with Gosh?

Who are we? Goofy?

That's exactly what the play by play guy on Fox said when he repeated the phrase my father uttered so many times over the years. Of course the Fox announcer probably just heard it for the first time when interim coach Stewart bellowed it at 2:20 of this clip.

Just another example of someone being too PC. Either the announcer or his producers were scared to say "God".