Monday, October 31, 2011

Difference between mobile application testing and mobile testing

Mobile testing means the complete testing of a mobile device at both the system level and the application level.

Generally, there are three categories:

1. Mobile protocol stack testing (using network simulators)

A few examples are below:
* The stack supports 4 bands (EGSM, PGSM, GSM-850, DCS); mobile-originated and mobile-terminated calls in all of these bands
* Cell selection/reselection
* Cell barring
* All types of handover
* Frequency hopping
* Coding schemes, etc.

2. Multimedia testing
* MIDI polyphonic tones, both as ringer and in the player
* MP3 as ringer and in the player, plus other supported formats
* Camera
* Video conferencing, etc.

3. Feature testing
* Phonebook
* SMS
* Supplementary calls
* Security
* Calculator
* Gaming
* Fast dialing, etc.

Mobile application testing deals only with the feature and multimedia categories, while mobile testing covers all three categories above.

Here are some of the tools you can use for automation:

iPhone:

* UISpec
* Sikuli
* FoneMonkey
* Squish

Android:

* Robotium (a short sketch follows below)
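
Robotium tests are plain Java built on Android's instrumentation framework. Below is a minimal sketch of what one looks like; LoginActivity, the field indexes, the button label, and the expected text are hypothetical stand-ins for your own app's UI.

// Minimal Robotium sketch; LoginActivity and all UI strings are hypothetical.
import android.test.ActivityInstrumentationTestCase2;
import com.jayway.android.robotium.solo.Solo;

public class LoginTest extends ActivityInstrumentationTestCase2<LoginActivity> {

    private Solo solo; // Robotium's driver object

    public LoginTest() {
        super(LoginActivity.class);
    }

    protected void setUp() throws Exception {
        super.setUp();
        // Solo wraps the instrumentation and the activity under test
        solo = new Solo(getInstrumentation(), getActivity());
    }

    public void testLoginShowsWelcome() {
        solo.enterText(0, "user@example.com"); // index 0 = first EditText on screen
        solo.enterText(1, "secret");           // index 1 = second EditText
        solo.clickOnButton("Log in");          // matches the button by its label
        assertTrue("Welcome text not found", solo.searchText("Welcome"));
    }

    protected void tearDown() throws Exception {
        solo.finishOpenedActivities(); // close all activities opened during the test
        super.tearDown();
    }
}

It runs like any other Android instrumentation test, for example via adb shell am instrument.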

Sunday, October 30, 2011

Mobile Application Testing: Process, Tools & Techniques



The market for mobile applications grows every day and becomes more and more demanding as technology advances. In a new study, Yankee Group predicts a $4.2 billion “Mobile App Gold Rush” by 2013, which includes:
  • Estimated number of smartphone users: 160 million
  • Estimated number of smartphone app downloads: 7 billion
  • Estimated revenue from smartphone app downloads: $4.2 billion
At Organic, our goal is to stay on the cutting edge of emerging platforms by launching new and diverse applications. We have this goal in mind when developing mobile web applications. We utilize some of the same styles of programming used for the development of web applications. We also follow the same testing methodology employed for web development when testing our mobile applications.
  • Test Strategy is a high-level document that defines the “Testing Approach” used to achieve testing objectives. The Test Strategy document is a static document, meaning that it is not frequently updated. Components of the document include Approach, Risks, Contingencies & Recommendations, Testing Responsibility Matrix, Defect Management Process, and Resource requirements (schedule, tools, roles & responsibilities).
  • Performance Test Plan specifies how performance testing will proceed from both a business and a technical perspective. At a minimum, a performance testing plan addresses dependencies and baseline assumptions, pre-performance testing actions, the performance testing approach, and performance testing activities.
  • Test Design Specification outlines, defines and details the approach taken to perform mobile application testing. The objective is to identify user flows and annotations, features to be tested, test scenarios, acceptance and release criteria.
  • Test Cases are derived from Test Scenarios and are identified in the Test Design Specification. They are a set of test actions, test data/user input data, execution conditions, and expected results developed to verify successful and acceptable implementation of the application requirements.
  • Test Case Execution Summary Report provides information uncovered by the tests, broken down by testing type. The report is used to relay the overall status of Test Execution on an iteration-by-iteration basis.
Although the mobile application testing process is basically the same, we understand that mobile devices have peculiarities that must be kept in mind when deciding which testing types to use for validation. The testing types used are predominantly unchanged, but we do utilize different testing techniques and tools. Following is a list of the testing types, techniques, and tools used to support our mobile applications:
  • ADA Compliance Testing is used to measure and evaluate compliance with the Americans with Disabilities Act requirements. With mobile device usage at an all-time high, there has been a surge of interest in developing applications that are in line with Mobile Web Best Practices (MWBP). To test accessibility we use the following tools and techniques:
    • Create a URL test harness. The URL is checked via the W3C mobileOK Checker, a free W3C service that validates the level of mobile-friendliness.
    • The other test consists of using Apple’s assistive technology to test screen magnification and VoiceOver support for blind and visually impaired users.
  • Automated Testing is achieved using an emulator and a performance testing tool. The test runs on the device itself and is controlled by the PC. Results are captured using the performance testing tool. More details are provided below in the Performance Testing section.
    • eggPlant is a QA automation and software testing product that allows you to emulate mobile devices and automate the testing. eggPlant can be downloaded for the Windows or Mac platforms.
  • Database Testing is very important for all applications. We check for data integrity and errors while editing, deleting, and modifying forms and all other DB-related functionality. This testing is done manually, without the use of any testing tools.
  • Compatibility Testing assures the application works as intended with the selected device, operating system, screen size, display, and internal hardware. Following is a list of tools that simulate different devices, operating systems, screens, etc.:
    • iPhoney is a free iPhone simulator powered by Safari (used on the Mac OS platform only).
    • iPad Peek allows you to see how your websites look when rendered on the iPad. This simulator is also free.
    • Adobe Device Central CS5 allows you to plan, preview, test, and deliver mobile applications. It is available with the Adobe Creative Suite® editions: Photoshop, Illustrator, Flash Professional, Dreamweaver, After Effects, and Fireworks.
    • DeviceAnywhere™ allows you to compose automated tests that run across multiple devices and multiple platforms/OS’s. DeviceAnywhere™ is a paid solution providing monthly and/or hourly options.
  • Functionality Testing includes the testing of controls, storage media handling options, and other operational aspects. Functionality testing for the mobile application is black-box testing and assures that the application functions per the business specifications. This testing is done manually.
  • Interoperability Testing includes testing of different functionalities within the iPad. For instance, we uncovered that iTunes and Pandora stop playing music when BroadFeed™ is launched; this round of interoperability testing uncovered a major defect.
  • Mobile Analytics Testing is one of the most important tests and validates our ROI. We used Flurry™ to collect the analytics for BroadFeed™. To test correct implementation of analytics, we verified page and link tags, redirects, page source and user attributes as well as data capture.
    • Used the Charles Web Debugging Proxy to verify the page and link tags and the redirect requirements. This was achieved by changing the proxy settings in Charles and then, on the iPad, changing the Wi-Fi settings: under “HTTP Proxy”, we selected the Manual button and entered the desktop’s IP address.
    • Used the Flurry™ Dashboard to validate that the data was captured correctly. The dashboard view provided us with a snapshot of user metrics and usage.
  • Performance Testing is used to load and stress test the mobile application and database servers. To conduct performance testing we first created a test harness. Once this was created, we used Empirix eTester to record the script used to perform load and stress testing. Empirix eLoad Expert allowed us to easily and accurately test the performance and scalability of BroadFeed™ to ensure our customers would have the best possible experience. eLoad Expert simulated concurrent users, which allowed us to analyze the performance and identify any potential database issues (a minimal sketch of the concurrent-user idea appears after this list).
  • Power Consumption Testing uncovers defects related to battery drainage caused by the application. Device settings can drain the battery life, and this makes it hard to determine whether the mobile application or the settings are the cause. Following is a list of devices and the different methods for testing power consumption:
    • iPhone, iPod & iPad settings are adjusted: screen brightness, minimizing use of location services, turning off push notifications, turning off other downloaded applications, fetching new data less frequently, and turning off push mail. Then run the mobile application to determine the rate at which battery life decreases. This testing is done manually, without any testing tools.
    • Nokia Energy Profiler is a stand-alone test and measurement application that lets you monitor battery consumption on the target device.
  • Usability Testing is used to verify the mobile interface, navigation, and intuitiveness of the application, as well as the consistency and soberness of color schemes.
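
To illustrate just the concurrent-user idea from the Performance Testing bullet above (this is not Empirix's tooling, which is a commercial product with its own recorder), here is a minimal Java sketch that fires simultaneous HTTP requests at a test URL and counts failures; the URL and the number of users are hypothetical placeholders.

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class MiniLoad {
    public static void main(String[] args) throws Exception {
        final String target = "http://test.example.com/feed"; // hypothetical test harness URL
        final int users = 25;                                 // simulated concurrent users
        final AtomicInteger failures = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(users);
        long start = System.currentTimeMillis();
        for (int i = 0; i < users; i++) {
            pool.submit(new Runnable() {
                public void run() {
                    try {
                        // each task plays the role of one user hitting the server
                        HttpURLConnection c =
                            (HttpURLConnection) new URL(target).openConnection();
                        if (c.getResponseCode() != 200) failures.incrementAndGet();
                        c.disconnect();
                    } catch (Exception e) {
                        failures.incrementAndGet();
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(60, TimeUnit.SECONDS);
        System.out.println(users + " requests in "
                + (System.currentTimeMillis() - start) + " ms, failures: " + failures.get());
    }
}

A real load tool also ramps users up gradually and records per-request response times; this sketch only shows the concurrency mechanism.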
What we know for sure is that there will always be some level of manual testing when launching new applications, whether web or mobile. The solutions we use combine manual testing, remote manual testing, and a great deal of testing with emulators and performance tools. We accomplish our testing goals by utilizing an array of testing types to support the different techniques, and we combine testing tools to help with the validation process. We remain cost effective by using freeware and in-house tools, which allows us to conduct testing quickly and efficiently.
Lindiwe Vinson, Director, Technology at Organic

Tuesday, September 13, 2011

Test execution report from the Test Lab

There are many different ways to pull data from QC for reports.
1) You can create a small program using C#/.NET.
2) You can use a third-party tool like Reliable Business Reporting, Inc. (http://www.rbreporting.com/).
3) You can write a small macro in Excel using VBA to generate your report.

Here is the question:

How do you pull a report from the Test Lab describing high-level execution, showing test folders and their status?

Ans: You may wish to create a macro to pull the metrics out. I prefer this over the Excel report generator.

Here is some VBA to pull out test set metrics. Just change the connection details to your own...

Dim tdc As TDConnection

Sub UpdateStats()

'create QC connection
QCconnect_silent "URL", "DOMAIN", "PROJECT", "USERNAME", "PASSWORD"
'process tests
getMetrics
'disconnect
QCdisconnect

End Sub

Private Sub QCconnect_silent(url As String, domain As String, project As String, user As String, pass As String)
'# Creates a connection to QC
'on any error (e.g. already connected), jump to the exit; hence "_silent"
On Error GoTo connected
'create the connection object
Set tdc = CreateObject("TDApiOle80.TDConnection")
tdc.InitConnectionEx url
'login to QC
tdc.Login user, pass
'connect to project
tdc.Connect domain, project
Application.StatusBar = "Connected to Quality Centre...."
Exit Sub
connected:
Exit Sub
End Sub


Private Sub QCdisconnect()
'# disconnect from QC and release connection
'check if connection exists
If Not tdc Is Nothing Then
'disconnect, logout and release connection
tdc.Disconnect
tdc.Logout
tdc.ReleaseConnection
Application.StatusBar = "Disconnnected from Quality Centre...."
End If
End Sub


Private Sub getMetrics()
'create variables
'create variables
Dim tsf As TestSetFactory
Dim Tset As TestSet
Dim test As TSTest
Dim tsl, testInLab, testList 'late-bound OTA list objects
Dim row As Integer
Dim execDate, diff
'note: in VBA "Dim a, b As Integer" types only b, so each counter is declared explicitly
Dim totalTestCount As Long, runCount As Long, totalRunCount As Long
Dim passed As Long, totalPassed As Long, failed As Long, totalFailed As Long
Dim runThisWeek As Long, passedThisWeek As Long, failedThisWeek As Long
Dim totalRunThisWeek As Long, totalPassedThisWeek As Long, totalFailedThisWeek As Long
'write out column headings
writeHeadings
'set initial values
row = 3
totalRunCount = 0
totalPassed = 0
totalFailed = 0
passed = 0
failed = 0
runCount = 0
runThisWeek = 0
passedThisWeek = 0
failedThisWeek = 0
'get list of test cycles
Set tsf = tdc.TestSetFactory
Set tsl = tsf.NewList("")
'loop for each test cycle
For Each Tset In tsl
Application.StatusBar = "Getting Metrics for " & Tset.Name & "...."
'get list of tests in cycle
Set testInLab = Tset.TSTestFactory
Set testList = testInLab.NewList("")
'output cycle folder\name and number of tests
Range("A" & row).Value = Tset.TestSetFolder & "\" & Tset.Name
Range("B" & row).Value = testList.Count
'keep total of tests
totalTestCount = totalTestCount + testList.Count
'loop for each test in the cycle
For Each test In testList
'get the execution date and compare with today's date (unrun tests have no date)
execDate = test.Field("TC_EXEC_DATE")
If IsNull(execDate) Then
diff = 9999 'never executed, so it will not count as run this week
Else
diff = DateDiff("d", execDate, Date)
End If
'check if test HAS been run
If test.Status <> "No Run" Then
'increment count
runCount = runCount + 1
'check if the run was this week - 0 to 7 days
If diff < 8 Then runThisWeek = runThisWeek + 1
End If
'check if test has passed
If test.Status = "Passed" Then
'increment count
passed = passed + 1
'check if the run was this week - 0 to 7 days
If diff < 8 Then passedThisWeek = passedThisWeek + 1
End If
'check if test has failed
If test.Status = "Failed" Then
'increment count
failed = failed + 1
'check if the run was this week - 0 to 7 days
If diff < 8 Then failedThisWeek = failedThisWeek + 1
End If
Next
'output totals for test cycle
Range("C" & row).Value = runCount
Range("D" & row).Value = runThisWeek
Range("E" & row).Value = passed
Range("F" & row).Value = passedThisWeek
Range("G" & row).Value = failed
Range("H" & row).Value = failedThisWeek
'clear objects containing list of tests in cycle
Set testInLab = Nothing
Set testList = Nothing
'increment totals
totalRunCount = totalRunCount + runCount
totalPassed = totalPassed + passed
totalFailed = totalFailed + failed
totalRunThisWeek = totalRunThisWeek + runThisWeek
totalPassedThisWeek = totalPassedThisWeek + passedThisWeek
totalFailedThisWeek = totalFailedThisWeek + failedThisWeek
'reset values
passed = 0
failed = 0
runCount = 0
runThisWeek = 0
passedThisWeek = 0
failedThisWeek = 0
'increment row number
row = row + 1
Next 'end of processing
'increment row to add blank line
row = row + 1
'output totals
Range("A" & row).Value = "Total"
Range("B" & row).Value = totalTestCount
Range("C" & row).Value = totalRunCount
Range("D" & row).Value = totalRunThisWeek
Range("E" & row).Value = totalPassed
Range("F" & row).Value = totalPassedThisWeek
Range("G" & row).Value = totalFailed
Range("H" & row).Value = totalFailedThisWeek
'autofit columns
Columns("A:K").EntireColumn.AutoFit
'clear objects containing list of test cycles
Set tsf = Nothing
Set tsl = Nothing
'inform user of finish
MsgBox "Finished"
End Sub

Private Sub writeHeadings()
'# write out column headings
Range("A1").Value = "Quality Center Statistics"
Range("B1").Value = "Date: " & Date
Range("A2").Value = "Test Cycle"
Range("B2").Value = "No. of Tests"
Range("C2").Value = "No. Run"
Range("D2").Value = "No. Run this Week"
Range("E2").Value = "Total No. Passed "
Range("F2").Value = "No. Passed this Week"
Range("G2").Value = "Total No. Failed"
Range("H2").Value = "Total No. Failed this Week"
Range("I2").Value = "Current cycle No."
Range("J2").Value = "No of Cycles required for complete run"
Range("K2").Value = "No of builds required for complete run"
End Sub

Note: it's VBA, so it needs to go in a macro; this is written for Excel. You can either:

Open Excel, show the Developer tab (which gives a link to the VBA editor), paste the code in there, change the connection details, save, then run.

Or... you can open Excel, click to create a new macro, stop recording, then edit it and paste the code in.

Friday, September 9, 2011

Agile Software Development - Twelve Principles

SlideShare presentation by Eastern Software Systems.

Agile Is the New Waterfall

SlideShare presentation by Naresh Jain.

Role Of QA And Testing In Agile

SlideShare presentation by Naresh Jain.

Testing in Agile Software Development

SlideShare presentation by Softwarecentral.

SOFTWARE TESTING PROCESS IN AGILE DEVELOPMENT

SlideShare document by Softwarecentral.

Best Practices of Load Runner

SlideShare: HP LoadRunner best practices, by Bharath Marrivada.

LoadRunner and Performance Center 11

SlideShare: What's new in LoadRunner and Performance Center 11, by fsyed.

Load Runner 11.0

SlideShare: Performance Testing using LoadRunner 11, by Kamran Khan.

Load Testing Best Practices

Saturday, September 3, 2011

Informatica Tutorial

http://www.learnbi.com/informatica.htm

ETL Testing

ETL stands for extract, transform, and load. It can consolidate the scattered data of an
organization that works across different departments, handling the data coming from each
of them.
For example, a health insurance organization might have information on a customer in
several departments, and each department might have that customer's information listed in
a different way. The membership department might list the customer by name, whereas
the claims department might list the customer by number. ETL can bundle all this data
and consolidate it into a uniform presentation, such as for storing in a database or data
warehouse.
ETL can transform not only data from different departments but also data from entirely
different sources. For example, an organization might run its business on different
environments such as SAP and Oracle Apps. If higher management wants to make decisions
about the business, the data must be integrated and usable for reporting purposes. ETL can
take the data from these two source systems, integrate it into a single format, and load it
into the target tables.
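
A common first check in ETL testing is source-to-target reconciliation: after a load, row counts (and often key sums) in the warehouse should match the source system. Here is a minimal Java/JDBC sketch of that idea; the connection URLs, credentials, and table names are hypothetical placeholders, and the matching JDBC drivers must be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class EtlRowCountCheck {
    public static void main(String[] args) throws Exception {
        // hypothetical source system and data warehouse connections
        long src = count("jdbc:oracle:thin:@src-host:1521:ORCL", "app", "pw",
                         "SELECT COUNT(*) FROM customers");
        long tgt = count("jdbc:oracle:thin:@dwh-host:1521:DWH", "etl", "pw",
                         "SELECT COUNT(*) FROM dim_customer");
        System.out.println(src == tgt
                ? "PASS: counts match (" + src + ")"
                : "FAIL: source=" + src + " target=" + tgt);
    }

    // run a single-value COUNT(*) query and return the result
    private static long count(String url, String user, String pass, String sql)
            throws SQLException {
        try (Connection con = DriverManager.getConnection(url, user, pass);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            rs.next();
            return rs.getLong(1);
        }
    }
}

Count reconciliation only proves completeness; column-level transformation rules still need targeted queries of their own.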


http://www.learndatamodeling.com/etl.htm