Friday, May 9, 2008

Day 3 of StarEast - Ogres are like Web Services (Part 2 - The Sessions)

So anyone following my blog posts about the conference is probably asking, "How are ogres like web services?" For that I must credit Brian Bryson from IBM, who used the line in the first presentation I attended, on testing SOA applications.

Testing SOA Applications: A Guide for Functional Testers - Brian Bryson (IBM)

The "ogres are like web services" quotation plays on the original line "Ogres are like onions," which translates here into "Web services are like onions," because there are many layers to take into account when performing functional tests.

Service Oriented Architecture (SOA) is a computer systems architectural style for creating and using business processes, packaged as services, throughout their life cycle. SOA also defines and provisions the IT infrastructure to allow different applications to exchange data and participate in business processes. - Wikipedia Definition

The above is a short summary of what SOA actually is. For the traditional tester it means a different approach compared to traditional interface-level testing. Since the traditional model will most likely be an interface wired up to a web service, as testers we need to remove the interface from our testing to truly functionally test the web service. This means:

GUI-less or "headless" testing. This is the idea of testing beneath the interface layer and exercising the web service directly. The functional test must interact with the web service itself, similar to API or object-level testing. Here are some of the reasons to test beneath the GUI layer.

      • Application-level validation - The GUI may restrict the data you can input, but that does not guarantee the application/service can handle bad data. The web service itself has no GUI input validation; it will accept whatever you send it, and the question is "How will it handle it?"
      • Security - Data sent through an interface is usually malleable and easy to manipulate, which can create security weaknesses. Testing beneath the interface gives you a better view of how the service handles malicious data.
      • Performance - The web can be a bottleneck; testing the application/service beneath the GUI lets you accurately gauge how the service itself holds up under heavy load.
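To make the idea concrete, here is a minimal sketch of headless, application-level validation. The `process_order` function below is a hypothetical stand-in for a web service operation (in a real test it would be an HTTP/SOAP call); the point is that the test drives the service logic directly with inputs the GUI would normally block.

```python
# A minimal sketch of "headless" testing: process_order() is a hypothetical
# stand-in for a web service method, called directly with no GUI in front.

def process_order(quantity):
    """Stand-in for a service operation; the real one would be a remote call."""
    if not isinstance(quantity, int) or isinstance(quantity, bool) or quantity <= 0:
        # A well-behaved service rejects bad input with a clear fault
        return {"status": "fault", "detail": "quantity must be a positive integer"}
    return {"status": "ok", "total": quantity * 9.99}

# The GUI might prevent these inputs entirely, but the service must still
# handle them gracefully when a caller bypasses the interface:
for bad_input in [-1, 0, "ten", None, True]:
    response = process_order(bad_input)
    assert response["status"] == "fault", f"service mishandled {bad_input!r}"
```

The same pattern scales up: swap the stand-in function for a real request to the service endpoint and keep the assertions on the response.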
The main way to achieve this testing effort is automation. This is one of the areas of software testing where automation is a must: it is not an easy manual effort, web service components are usually changing constantly, and automated tests help with the regression effort and ensure the web service keeps meeting its requirements. Below are a few hints on how to test SOA applications.

      • Use SOAP testing tools to gain easy access to methods and data input.
      • Try functional-level tools such as FitNesse to deliver functional tests integrated with acceptance tests at low cost.
      • Try security testing methods such as injection, overflow, malicious data, and parameter tampering.
      • Boundary test: the web UI may limit character intake, but does the method?
      • For inter-service testing: does the data sent back always come in a form the requesting service understands?
      • Load test the service: how does it handle extreme requests/transmissions? Monitor CPU/memory usage on the service machine. Are there memory leaks?
      • Data variation: try anything. Combinations of data and values, to really understand how the service handles varying sorts of information.
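Several of the hints above (injection, parameter tampering, boundary testing) come down to building requests the GUI would never produce. The sketch below constructs SOAP request envelopes by hand with Python's standard `xml.etree` module; the service namespace, the `GetUser` method, and the `username` parameter are all hypothetical, stand-ins for whatever your service actually exposes.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_request(method, params, ns="http://example.com/service"):
    """Build a SOAP 1.1 envelope for a method call; ns is a hypothetical
    service namespace used here purely for illustration."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{ns}}}{method}")
    for name, value in params.items():
        ET.SubElement(op, f"{{{ns}}}{name}").text = str(value)
    return ET.tostring(env, encoding="unicode")

# Input variations a GUI would never send, but an attacker (or a buggy
# client) might: boundary sizes, injection strings, markup as data.
variations = [
    {"username": "A" * 10_000},                 # boundary: oversized input
    {"username": "'; DROP TABLE users;--"},     # SQL-injection attempt
    {"username": "<script>alert(1)</script>"},  # markup smuggled in as data
]
requests = [build_request("GetUser", p) for p in variations]
```

Each generated envelope would then be POSTed to the service endpoint, and the test inspects whether the response is a clean SOAP fault or something worse.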
Now that you have some basic ideas, there are many tools on the market to help you achieve your testing goals, whether open source or licensed software. The main thing to remember: use your head, and don't follow the traditional path but the road less taken.

Thursday, May 8, 2008

Day 3 of StarEast - Ogres are like Web Services (Part 1 - The Keynotes)

The tutorials are over, and now the keynotes and speaker sessions have begun. Here is a brief overview of some of the keynotes and presentations I attended and what I was able to take from them.

Keynote 1 - Testing Dialogues - In the Executive Suite - James Whittaker (Microsoft)

(snippet from his PowerPoint presentation)

• Consider that your …
  – Finances, credit, and tax information
  – Travel documents and records
  – Traffic violations and criminal history
  – Citizenship, travel history, and visa information
  – Medical records
  – Organization membership information
• … are all stored in and processed by software

The main objective was to express that as software becomes ever more integrated into our daily routines and life processes, we must realize that quality cannot be sacrificed, and we need to adapt, learn, and be constantly ready to change and grow to face the challenges of maintaining quality in today's software model.

It is time for innovation, and as an industry we need to move toward improving the skills, tools, and processes we currently use into something bigger, better, and more universal. As systems grow, the integration between products, companies, and people will grow too, and quality is everyone's responsibility.

Testing Lessons Learned from Extreme Programmers - by Elisabeth Hendrickson (Quality Tree Software)

Extreme Programming, commonly known as XP, is widely known as an agile practice that is sometimes combined with other agile methodologies like Lean and Scrum. XP is very developer-focused; however, the message was that it does not have to be a developer-only work cycle, but one in which the QA (tester) and the developer can live in harmony, each contributing equally to the testing and assurance of quality in the product.

Speaking from experience (I work on an agile team that implements both XP and Scrum), I can say this has produced highly testable, quality code. Building quality in as a team from the ground up, with a turnaround time of minutes and hours for bugs instead of days and weeks, really helps the feeling of productivity and self-worth, and makes for a very happy team dynamic.

More information about Extreme Programming can be found here: http://www.extremeprogramming.org/

Day 2 of StarEast - Performance Testing

Performance Testing*

On Day 2 of the StarEast conference I attended a tutorial on software performance testing. My general knowledge of performance testing is strictly from an observational standpoint; while I have some experience working alongside performance testers, I have yet to play a key role in a performance test. However, I did take away some key testing points and strategies regarding performance testing that can be used by testers across the board.

This was not technical training so much as training in the process and planning of structuring a performance test, and I was able to take away some key points. The most important: performance tests are costly (whether in time, resources, or tools) and therefore must be planned and set up with significant thought. Here is a list of some of the pre-test planning items that should be taken into consideration.
  1. Ask not "Can we afford the cost of a true performance test?" but instead "Can we afford the risk of not performing a true performance test?"
    1. Definition of "true" in the context of the above statement: an exact or very close representation of the production environment in which the product will run.
  2. As related to item #1, realize that a performance test can be highly influenced by the environment in which it runs. The test environment should therefore mimic the live environment as closely as it can.
    1. Hardware. The hardware ratio should remain the same throughout the environment as in production. (e.g., if the SQL Server has 8 GB of RAM and a RAID setup with 10,000 RPM drives, the test environment should have the same setup.)
    2. Number of servers should remain at least in ratio to the production environment. If you have 3 web servers, 3 SQL Servers, and 3 app servers, your test environment should maintain at least a 1:1:1 ratio.
    3. Data. Data should not be a thin slice of production data; it should mimic (or be) production data. The content may not have to be the same, but the variety and size should be similar or exact.
    4. Network. Is the application going to be influenced by network traffic (Internet, extranet, intranet)? If so, this traffic should be simulated in the test environment.
    5. Time. Will certain aspects of the application be used differently or more frequently during certain parts of the day or year? These should be taken into consideration when determining the expectations and planning the test boundaries.
    6. Security. Will there be firewalls, throttles, or other administrative tools in production that need to be taken into consideration?
    7. Bottlenecks. If they exist in the production environment, they must be replicated in the test environment.
  3. Exceptions to above rules
    1. Feature testing/object-level performance. Not all performance testing is concerned with end-to-end, production-level performance. The above rules may not always apply when testing specific load capacities of objects, functions, and services. (e.g., an application server may have no control over outside sources, especially traffic coming over the Internet, so a test may instead check how quickly the application server can take data in, process it, and return it.)
  4. When should performance testing begin?
    1. In the requirements phase! It will always be cheaper to build it in from the beginning than to bolt it on later. (See chart below - Cost of Performance Testing.) By starting in the requirements phase and gathering performance requirements, it becomes easier to begin both the testing process and the development process of implementing performance-suitable code.
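Even object-level load testing (exception 3.1 above) can start small. The sketch below fires concurrent requests at a service and reports average and 95th-percentile latency; `call_service` is a hypothetical stand-in for a real request, with a simulated delay in place of actual network and processing time.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_service(payload):
    """Hypothetical stand-in for a real service request; in a real load
    test this would be an HTTP call to the service under test."""
    time.sleep(0.01)  # simulated processing time
    return {"status": "ok"}

def load_test(concurrency, total_requests):
    """Issue total_requests calls across `concurrency` worker threads
    and summarize the observed latencies."""
    def timed_call(i):
        start = time.perf_counter()
        call_service(i)
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(timed_call, range(total_requests)))

    return {
        "avg": sum(latencies) / len(latencies),
        "p95": latencies[int(len(latencies) * 0.95)],  # 95th percentile
    }

stats = load_test(concurrency=10, total_requests=100)
```

Alongside the latency numbers, you would watch CPU and memory on the service machine during the run, since that is where memory leaks and saturation show up.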
(Chart: Cost of Performance Testing)
Recommended link for an introduction to open source performance tools, to start practicing performance testing at no cost (for tools).
* Training Held by Dale Perry, SQE Training

Day 1 - StarEast - Exploratory Testing

Exploratory Testing*

On Day 1 of the StarEast conference I attended a tutorial on exploratory testing. I had originally planned to attend an Agile Testing course by NetObjectives, but I realized I had attended that exact training one week prior.

It is my belief (shared by many others at the conference) that no development or testing practice is without its flaws. One of the main things lacking in the agile world, as the direction moves toward more developer-centric automation, is exploratory testing. Automation is very useful and not to be disregarded, as it drives reliability and tester efficiency during periods of regression and code change.

What the above should lead to, then, is that through increased efficiency in the test effort, testers should actually have more time (and should use that time) for exploratory testing. The value of exploratory testing is that there cannot and never will be 100% requirements coverage plus automated coverage handling every scenario of a dynamic application. Even if that were theoretically possible, the time spent would be exponential and would give very little ROI.

Time spent on exploratory testing helps drive quality and requirements to become more stable and reliable. Spending a few hours, whether per day, sprint, or test cycle, going above and beyond the predetermined test scenarios (doing things the general user would not do, trying to "break" the application or service) can uncover not necessarily new "bugs" but missing requirements that would not normally have been thought of. Through this testing method, you can push forward an enhanced set of requirements that feeds better code delivered by the developers.

As we rely on automation more and more, I believe we must never forget that as long as we are building a user experience, there will always be a need for user patterns, and as we all know, users are unpredictable, random, and unquantifiable.

* Training Held by Jonathan Kohl, Kohl Concepts Inc.