The downside is, I allowed myself to get dragged into the flames. The upside is, I believe I made my point: JUnit is not the be-all and end-all of test frameworks.
JUnit is a wonderfully well-supported tool. Is it the best thing? No. Is it pretty damn good? Yes...for authoring and executing unit tests.
Part of the issue I have with JUnit is that if you're trying to do something other than unit testing, you end up with a solution that's more about hacking the JUnit framework so your tests can run in a pretty GUI than about actually authoring tests.
Let me outline an example. I will warn you that I am going to make every effort to obfuscate the actual innards of the software package I'm using so that I don't violate secret corporate type things.
In this scenario, I am testing HTML pages that are constructed from, for lack of a better term, flexible container objects. (If you know what pattern this is, please comment about it and I'll correct.) The main container holds beans containing the text to display on the page. Think of it as a bunch of beans with getText()-style methods. Some beans can be given names that act as keys to differentiate one customer's page from another's. Beans can be organized into smaller sub-containers that further describe the text within them. Each bean gets a name that describes what the text inside it is. It's a loose format and can be adapted to many different clients very quickly, because you don't have to develop custom beans. You just tie them together.
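To make that concrete, here's a rough sketch in Java of the shape I'm describing. Every name here (Container, Bean, and so on) is a made-up stand-in for the real classes:

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the flexible container structure described above.
// The real class and method names differ; this is just the shape of the thing.
class Bean {
    private final String name;  // describes the text, e.g. "addressLine1"
    private final String text;
    Bean(String name, String text) { this.name = name; this.text = text; }
    public String getName() { return name; }
    public String getText() { return text; }
}

class Container {
    private final List<Bean> beans = new ArrayList<Bean>();
    private final Map<String, Container> subContainers = new LinkedHashMap<String, Container>();
    public void addBean(Bean bean) { beans.add(bean); }
    public void addSubContainer(String key, Container sub) { subContainers.put(key, sub); }
    public List<Bean> getBeans() { return beans; }
    public Container getSubContainer(String key) { return subContainers.get(key); }
}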
Each container object is constructed by parsing a binary image file. The object is then loaded into a database for access via JSP.
So what am I testing? I'm testing that none of the parsed links are broken, that no images are missing, and that the HTML is valid.
Pretty simple tests? Could they be done with a spider? Probably, once you find a way to generate all of the encoded URLs. What about just mocking up a container with limits?
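The link and image checks themselves aren't the hard part. Something like this minimal sketch covers the "is this link dead?" question (LinkChecker is my illustration here, not the actual test code):

import java.net.HttpURLConnection;
import java.net.URL;

public class LinkChecker {
    // Returns true if the URL answers with a non-error status.
    // Sketch only: the real tests also cover images and HTML validity.
    public static boolean isAlive(String url) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestMethod("HEAD");  // don't download the body
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            return conn.getResponseCode() < 400;  // 2xx/3xx are fine, 4xx/5xx are broken
        } catch (Exception e) {
            return false;  // unreachable counts as broken
        }
    }
}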
Ahhh, there's the difficulty. Because I'm reverse engineering these objects from a binary file, we might make a mistake. Instead of grabbing 4 "line" beans in my address area, I've grabbed 3! What caused this? Is it a problem? How will I catch this?
So you say, unit test your parsing. I can't do that: how would I mock binary data? How do I know what the limits are? I have a client that one day sent in 5 lines of address, and we handled it fine, because I hadn't enforced an artificial limit that the client claimed to be true.
YOU MEAN YOU CAN'T TRUST YOUR CLIENT? RUN FOR THE HILLS!
Well, it's not that I don't trust them. It's just that their systems are legacy and the business rules are generally so old that no one can remember them all. So I expect any limit they provide to break within 6 months of production. Hence, we code loosely to try to prevent any breakage. It's an art form. Too loose, major problems. Too tight, missing data.
So in essence, I test the object AFTER I parse it. I write my own acceptance test that says: I expect to have, say, 4 lines in my address, or whatever.
What can happen, and has happened, is that I find I've gotten 10 lines! Uh oh! Now I can go back, look at the specific input stream, and figure out what needs to change on my end to make it work...the client isn't going to change.
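A post-parse acceptance check of that sort might look roughly like this, reusing the made-up Container and Bean names from the earlier sketch (the 1-to-6 tolerance is illustrative, not a real business rule):

// Sketch of the post-parse acceptance check described above.
public class AddressCheck {
    public static void checkAddressLines(Container parsed) {
        int lines = 0;
        for (Bean bean : parsed.getSubContainer("address").getBeans()) {
            if (bean.getName().startsWith("line")) {
                lines++;
            }
        }
        // Loose on purpose: flag the weird cases without hard-coding
        // the limit the client claims to be true.
        if (lines < 1 || lines > 6) {
            throw new AssertionError("Suspicious address line count: " + lines);
        }
    }
}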
In the flame war, a great comment was made:
"Testing is a trade-off because the time is limited and the tests are endless."

This is exactly why this had to be automated. And exactly why JUnit could not work. You see, I had a test suite running in an ultra-hacked JUnit. I overrode about half of the reflection methods in TestCase! Not ideal, because I shouldn't be spending all of this time working on a framework. I should be writing tests!
It's also not ideal because of how JUnit handles memory. Once I start generating dynamic tests, the memory usage is incredible. I could test maybe two thousand objects, but it's not uncommon to have fifty or one hundred thousand to test. So before I leave for the day, I set up a script and kick off testing against today's runs. It's not continuous integration, but it's the best I can do in this environment.
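For context, the hacked-JUnit approach boiled down to building one giant TestSuite up front, something like the sketch below (ParsedObjectTest and fetchTodaysObjectIDs are stand-ins; the real version also overrode TestCase's reflection plumbing). Every test instance is constructed and held on the heap before the first test executes, which is exactly where the memory pain comes from:

import java.util.ArrayList;
import java.util.List;
import junit.framework.Test;
import junit.framework.TestCase;
import junit.framework.TestSuite;

public class DynamicSuite {
    // JUnit 3 style: build one TestCase per parsed object up front.
    // With fifty or a hundred thousand objects, every instance sits on
    // the heap before anything runs. Hence the memory problem.
    public static Test suite() {
        TestSuite suite = new TestSuite("Parsed object checks");
        for (String id : fetchTodaysObjectIDs()) {
            suite.addTest(new ParsedObjectTest(id));
        }
        return suite;
    }

    // Hypothetical: the real version pulled today's run from the database.
    private static List<String> fetchTodaysObjectIDs() {
        return new ArrayList<String>();
    }

    static class ParsedObjectTest extends TestCase {
        private final String objectID;
        ParsedObjectTest(String objectID) {
            super("testObject");  // names the method this TestCase will run
            this.objectID = objectID;
        }
        public void testObject() {
            // assertions against the parsed object would go here
        }
    }
}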
Unfortunately, it's not as simple as just calling getters: the test cases have to be generated dynamically. TestNG allowed me to replace the hacked TestCase with an @Factory-annotated method:
import java.lang.reflect.Constructor;
import java.util.ArrayList;
import java.util.List;

import org.testng.annotations.Factory;

public class TestFactory2 {

    @Factory(parameters = { "testClass",
                            "runID",
                            "limit",
                            "JDBCURL",
                            "DBUser",
                            "DBPassword",
                            "URLPrefix",
                            "URLSuffix",
                            "CipherAlgorithm",
                            "CipherKey",
                            "voucherString" })
    public Object[] factory(String testClass,
                            String runID,
                            String limit,
                            String JDBCURL,
                            String DBUser,
                            String DBPassword,
                            String URLPrefix,
                            String URLSuffix,
                            String cipherAlgorithm,
                            String cipherKey,
                            String voucherString) {
        List<Object> fixtureList = new ArrayList<Object>();
        // getObjList queries the run database for the encoded page URLs
        // (implementation not shown here).
        List<String> objList = getObjList(runID,
                                          JDBCURL,
                                          DBUser,
                                          DBPassword,
                                          URLPrefix,
                                          URLSuffix,
                                          cipherAlgorithm,
                                          cipherKey,
                                          voucherString);
        // Parallel list of object IDs. The original excerpt used objIDs without
        // declaring it; getObjIDs is an assumed helper that fetches the IDs
        // for the same run.
        List<String> objIDs = getObjIDs(runID, JDBCURL, DBUser, DBPassword);

        // Cap how many tests get generated; default to 100 on a bad limit value.
        int intLimit;
        try {
            intLimit = Integer.parseInt(limit);
        } catch (NumberFormatException e) {
            intLimit = 100;
        }

        try {
            // Instantiate one test object per parsed object via the configured
            // test class's (String, String) constructor.
            Class<?> theTestClass = Class.forName(testClass);
            Constructor<?> theTestConstructor =
                theTestClass.getConstructor(String.class, String.class);
            Utils.log("TestFactory2", 2, "Adding test instances");
            for (int j = 0; j < objList.size() && j < intLimit; j++) {
                fixtureList.add(theTestConstructor.newInstance(objList.get(j),
                                                               objIDs.get(j)));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }

        Utils.log("TestFactory2",
                  1,
                  "Returning " + fixtureList.size()
                      + " test objects from factory method and beginning tests.");
        return fixtureList.toArray(new Object[fixtureList.size()]);
    }
}
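The parameters named in @Factory come from the suite definition (testng.xml parameter entries), and each object in fixtureList is an instance of whatever class the testClass parameter names. Hypothetically, that class looks something like this (PageObjectTest is a stand-in, reusing the LinkChecker sketch from earlier):

import org.testng.Assert;
import org.testng.annotations.Test;

// Hypothetical shape of the class named by the "testClass" parameter.
// The factory above calls its (String, String) constructor reflectively.
public class PageObjectTest {
    private final String pageURL;
    private final String objectID;

    public PageObjectTest(String pageURL, String objectID) {
        this.pageURL = pageURL;
        this.objectID = objectID;
    }

    @Test
    public void linksAreNotBroken() {
        // Fetch pageURL, extract links, probe each one; condensed here to a
        // single liveness check (see the link-check sketch above).
        Assert.assertTrue(LinkChecker.isAlive(pageURL), "Dead page: " + objectID);
    }
}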