Automated testing to avoid user-interaction

INTRODUCTION

Our testing at work has largely consisted of suites of manual tests: follow a list of instructions, and pass or fail the test depending on whether the expected result appears. In other words: inefficient, prone to false negatives and positives, time-consuming, and crying out for automation.

The long-standing belief, or culture, at work has been that automation is prohibitively difficult in a CAD environment, because so much user interaction takes place that interrupts code execution: point clicking, message boxes, dialogs, and so on.

I set out with the belief that some categories of tests can be automated, and that the user-interaction part can be offloaded in an object-oriented way.

KEY COMPONENTS TO MY AUTOMATED TESTING

  • Testing does not change production code;
  • For each test, a class is derived to hold the test code;
  • All user interaction is overridden so that it never stops execution;
  • Input data is stored as binary resources in the project;
  • There is some means of comparing a result state with an expected state;
  • Tests can be run individually, in groups of common functionality, or all together.

METHODOLOGY

Introduce a UnitTest project in the solution that depends on every project you want test code for. Build it as a .dll.
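
As a sketch of how that DLL might be driven (the function names here are illustrative, not from the actual project), a single exported entry point can run everything and report the total error count:

// Hypothetical exported entry point for the UnitTest DLL
extern "C" __declspec(dllexport) int RunAllTests()
   {
   int nErrors = 0;
   nErrors += RunGeometryTests();   // illustrative test groups,
   nErrors += RunFileOpenTests();   // each returning its own error count
   return nErrors;                  // zero means every test passed
   }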

To test class A, export the class and derive from it a new class B in the UnitTest project. All test code then lives in B and does not affect production code. This is not to say nothing changes in A to make the tests work, but you won't be obfuscating the code with flags that cause different behaviour depending on whether it's called from production code or from test code.

Refactor blocks of user interaction code (dialogs, message boxes, point clicking) into single-purpose functions in A and make them virtual. This might mean teasing apart business logic from interface code, but that is good practice anyway and a worthwhile change. Then override these virtual functions in the derived test class B to do programmatically what the user would otherwise do (a sketch follows the list below). For example, in class B handle the following:

  • Dialogs – set the member variables that the dialog's OnOK() would otherwise set, and return;
  • Message boxes – just return IDYES, IDNO, etc. as required;
  • Point clicking – determine the desired coordinate some other way, e.g. hard-code the coordinate that you want back, or retrieve it by a database look-up.
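
Here is a minimal sketch of the pattern; the class and member names (COffsetDlg, GetOffsetDistance, and so on) are invented for illustration, not taken from the actual code base:

// Production class A: user interaction isolated behind virtual functions
class A
   {
public:
   virtual bool AskUserToConfirm()
      { return AfxMessageBox(_T("Proceed?"), MB_YESNO) == IDYES; }
   virtual bool GetOffsetDistance(double& d)
      {
      COffsetDlg dlg;            // hypothetical MFC dialog
      if (dlg.DoModal() != IDOK)
         return false;
      d = dlg.m_distance;        // what OnOK() captured from the user
      return true;
      }
   virtual bool PickPoint(double& x, double& y); // prompts for a click in the view
   // ... business logic calls the functions above instead of talking to the user
   };

// Test class B: the same questions answered programmatically, so execution never stops
class B : public A
   {
public:
   virtual bool AskUserToConfirm()           { return true; }           // as if the user clicked Yes
   virtual bool GetOffsetDistance(double& d) { d = 2.5; return true; }  // hard-coded test input
   virtual bool PickPoint(double& x, double& y)
      { x = 10.0; y = 20.0; return true; }                              // hard-coded coordinate
   };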

The one and only use of multiple inheritance that I've found is in fact here, where I also derive B from a CTest class. CTest contains functions common to all tests: comparisons, error counts, I/O startup and shutdown code, and so on.

[UML class diagram: test class B deriving from both A and CTest]
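
A bare-bones sketch of that arrangement, with CTest's contents guessed from the description above (comparison helpers, an error count, logging):

// Functions common to all tests (a sketch; member names are my own)
class CTest
   {
public:
   CTest() : m_nErrors(0) {}
   virtual ~CTest() {}

   // Compare a result against an expectation; count and log any failure
   void Verify(bool bPassed, LPCTSTR pszTestName)
      {
      if (!bPassed)
         {
         ++m_nErrors;
         TRACE(_T("%s FAILED\n"), pszTestName); // or write to a test results log file
         }
      }
   int GetErrorCount() const { return m_nErrors; }

protected:
   int m_nErrors;
   };

// The test class inherits A's behaviour and CTest's plumbing
class B : public A, public CTest
   {
   // overrides as above, plus the test functions themselves
   };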

Where document data is needed to conduct the test, say where an existing drawing/document needs to be File Open-ed, add this file to the UnitTest project as a binary resource, then programmatically create a file on disk from the resource and open that instead. In this way the input test data is part of the code.

// Locate the binary resource inside the test DLL
LPCTSTR lpszResourceName = MAKEINTRESOURCE(resource);
HMODULE hMod = GetModuleHandle(_T("UnitTest.dll"));
HRSRC hResFile = FindResource(hMod, lpszResourceName, _T("BINARY"));
HGLOBAL hRes = LoadResource(hMod, hResFile);
DWORD dwResSize = SizeofResource(hMod, hResFile);
LPVOID lpRes = LockResource(hRes); // raw pointer to the resource bytes
// Write the bytes out to disk so the application can open the file
CFile f(filePath, CFile::modeCreate | CFile::modeWrite);
f.Write(lpRes, dwResSize);
f.Flush();
FreeResource(hRes); // a no-op on Win32, kept for completeness
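
For this to work, the drawing has to be listed in the UnitTest project's resource script under the custom BINARY type that the FindResource call above asks for. The .rc entry is one line of the following form (the identifier and file name here are invented):

IDR_TEST_DRAWING1   BINARY   "TestData\\Drawing1.dwg"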

If the test is of the form 'compare a text output file with a solution output file', then compare the two files by launching an external file comparator program, e.g. Beyond Compare, to show up any differences. Running an external program and blocking until the user closes it can be done like this:

if (PathFileExists(exeFilePath) && !PathIsDirectory(exeFilePath))
   {
   SHELLEXECUTEINFO SEI = { 0 }; // zero-initialise so all unused members are NULL
   SEI.cbSize = sizeof(SHELLEXECUTEINFO);
   SEI.fMask = SEE_MASK_NOCLOSEPROCESS; // ask for the process handle so we can wait on it
   SEI.hwnd = NULL;
   SEI.lpVerb = _T("open");
   SEI.lpFile = exeFilePath; // path to the .exe
   SEI.lpParameters = exeParameters; // command-line parameters to the .exe
   SEI.lpDirectory = NULL;
   SEI.nShow = SW_SHOWNORMAL;
   SEI.hInstApp = NULL;
   if (!::ShellExecuteEx(&SEI))
      return false;
   ::WaitForSingleObject(SEI.hProcess, INFINITE); // block until the user closes it
   ::CloseHandle(SEI.hProcess);
   }
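
When no one is at the machine to close the comparator, the same pass/fail answer can be had with no user interface at all. Here is a minimal byte-for-byte comparison sketch in MFC (assuming both files exist; the CFile constructor throws on a bad path):

// Returns true if the two files have identical contents
bool FilesAreIdentical(LPCTSTR pszFile1, LPCTSTR pszFile2)
   {
   CFile f1(pszFile1, CFile::modeRead);
   CFile f2(pszFile2, CFile::modeRead);
   if (f1.GetLength() != f2.GetLength())
      return false;                            // different sizes can never match
   BYTE buf1[4096], buf2[4096];
   UINT n1 = 0;
   do
      {
      n1 = f1.Read(buf1, sizeof(buf1));
      UINT n2 = f2.Read(buf2, sizeof(buf2));
      if (n1 != n2 || memcmp(buf1, buf2, n1) != 0)
         return false;                         // mismatch found
      }
   while (n1 == sizeof(buf1));                 // a short read means end of file
   return true;                                // every byte matched
   }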

If the test has some other kind of result, then find a way to compare the current state or values with known expected values. Always increment an error count and record the failed test's name in a test results log.

A PLAN OF ACTION TO BEGIN

When manually testing:

  • Identify those tests where there is no good reason a computer can't do the work instead;
  • Keep a list of the tests that take a lot of time to perform;
  • Estimate, on a scale of 1–5, how difficult each would be to run programmatically;
  • Occasionally make time to write the test code; the difficulty rating helps when deciding how much time to set aside.

There's little as satisfying as watching a computer work extremely hard at what would otherwise take you hours to perform.
