<chapter id="testing">
  <title>Writing Conformance tests</title>

  <sect1 id="testing-intro">
    <title>Introduction</title>
    <para>
      The Windows API follows no standard: it is itself a de facto
      standard, and deviations from that standard, even small ones, often
      cause applications to crash or misbehave in some way. Furthermore,
      a conformance test suite is the most accurate (if not necessarily
      the most complete) form of API documentation and can be used to
      supplement the Windows API documentation.
    </para>
    <para>
      Writing a conformance test suite for more than 10000 APIs is no small
      undertaking. Fortunately it can prove very useful to the development
      of Wine well before it is complete.
      <itemizedlist>
        <listitem>
          <para>
            The conformance test suite must run on Windows. This is
            necessary to provide a reasonable way to verify its accuracy.
            Furthermore the tests must pass successfully on all Windows
            platforms (tests not relevant to a given platform should be
            skipped).
          </para>
          <para>
            A consequence of this is that the test suite will provide a
            great way to detect variations in the API between different
            Windows versions. For instance, this can provide insights
            into the differences between the, often undocumented, Win9x and
            NT Windows families.
          </para>
          <para>
            However, one must remember that the goal of Wine is to run
            Windows applications on Linux, not to be a clone of any specific
            Windows version. So such variations must only be tested for when
            relevant to that goal.
          </para>
        </listitem>
        <listitem>
          <para>
            Writing conformance tests is also an easy way to discover
            bugs in Wine. Of course, before fixing the bugs discovered in
            this way, one must first make sure that the new tests do pass
            successfully on at least one Windows 9x and one Windows NT
            version.
          </para>
          <para>
            Bugs discovered this way should also be easier to fix. Unlike
            some mysterious application crashes, when a conformance test
            fails, the expected behavior and APIs tested for are known thus
            greatly simplifying the diagnosis.
          </para>
        </listitem>
        <listitem>
          <para>
            Conformance tests also detect regressions. Simply running the
            test suite regularly in Wine turns it into a great tool for
            catching them: when a test fails, one immediately knows what
            the expected behavior was and which APIs are involved. Thus
            regressions caught this way should be detected earlier, because
            it is easy to run all tests on a regular basis, and easier to
            fix because of the reduced diagnosis work.
          </para>
        </listitem>
        <listitem>
          <para>
            Tests written in advance of the Wine development (possibly even
            by non Wine developers) can also simplify the work of the
            future implementer by making it easier for him to check the
            correctness of his code.
          </para>
        </listitem>
        <listitem>
          <para>
            Conformance tests will also come in handy when testing Wine on
            new (or not as widely used) architectures such as FreeBSD,
            Solaris x86 or even non-x86 systems. Even when the port does
            not involve any significant change in the thread management,
            exception handling or other low-level aspects of Wine, new
            architectures can expose subtle bugs that can be hard to
            diagnose when debugging regular (complex) applications.
          </para>
        </listitem>
      </itemizedlist>
    </para>
  </sect1>


  <sect1 id="testing-what">
    <title>What to test for?</title>
    <para>
      The first thing to test for is the documented behavior of APIs
      such as CreateFile. For instance one can create a file using a
      long pathname, check that the behavior is correct when the file
      already exists, try to open the file using the corresponding short
      pathname, convert the filename to Unicode and try to open it using
      CreateFileW, and all other things which are documented and that
      applications rely on.
    </para>
    <para>
      While the testing framework is not specifically geared towards this
      type of tests, it is also possible to test the behavior of Windows
      messages. To do so, create a window, preferably a hidden one so that
      it does not steal the focus when running the tests, and send messages
      to that window or to controls in that window. Then, in the message
      procedure, check that you receive the expected messages and with the
      correct parameters.
    </para>
    <para>
      For instance you could create an edit control and use WM_SETTEXT to
      set its contents, possibly check length restrictions, and verify the
      results using WM_GETTEXT. Similarly one could create a listbox and
      check the effect of LB_DELETESTRING on the list's number of items,
      selected items list, highlighted item, etc.
    </para>
    <para>
      However, undocumented behavior should not be tested for unless an
      application relies on it, in which case the test should mention
      that application, or unless applications can strongly be expected
      to rely on it, as is typically the case for APIs that return the
      required buffer size when the buffer pointer is NULL.
    </para>
  </sect1>


  <sect1 id="testing-wine">
    <title>Running the tests in Wine</title>
    <para>
      The simplest way to run the tests in Wine is to type 'make test' in
      the Wine sources top level directory. This will run all the Wine
      conformance tests.
    </para>
    <para>
      The tests for a specific Wine library are located in a 'tests'
      directory in that library's directory. Each test is contained in a
      file (e.g. <filename>dlls/kernel/tests/thread.c</>). Each
      file itself contains many checks concerning one or more related APIs.
    </para>
    <para>
      So to run all the tests related to a given Wine library, go to the
      corresponding 'tests' directory and type 'make test'. This will
      compile the tests, run them, and create an '<replaceable>xxx</>.ok'
      file for each test that passes successfully. And if you only want to
      run the tests contained in the <filename>thread.c</> file of the
      kernel library, you would do:
      <screen>
<prompt>$ </>cd dlls/kernel/tests
<prompt>$ </>make thread.ok
      </screen>
    </para>
    <para>
      Note that if the test has already been run and is up to date (i.e. if
      neither the kernel library nor the <filename>thread.c</> file has
      changed since the <filename>thread.ok</> file was created), then make
      will say so. To force the test to be re-run, delete the
      <filename>thread.ok</> file, and run the make command again.
    </para>
    <para>
      You can also run tests manually using a command similar to the
      following:
      <screen>
<prompt>$ </>../../../tools/runtest -q -P wine -M kernel32.dll -p kernel32_test.exe.so thread.c
<prompt>$ </>../../../tools/runtest -P wine -p kernel32_test.exe.so thread.c
thread.c: 86 tests executed, 5 marked as todo, 0 failures.
      </screen>
      The '-P wine' option defines the platform that is currently being
      tested. Remove the '-q' option if you want the testing framework
      to report statistics about the number of successful and failed tests.
      Run <command>runtest -h</> for more details.
    </para>
  </sect1>


  <sect1 id="testing-windows">
    <title>Building and running the tests on Windows</title>
    <sect2>
      <title>Using pre-compiled binaries</title>
      <para>
        Unfortunately there are no pre-compiled binaries yet. However, if
        you send an email to the Wine development list you can probably get
        someone to send them to you, and maybe motivate some kind soul to
        put in place a mechanism for publishing such binaries on a regular
        basis.
      </para>
    </sect2>
    <sect2>
      <title>With Visual C++</title>
      <itemizedlist>
        <listitem><para>
          Get the Wine sources.
        </para></listitem>
        <listitem><para>
          Run msvcmaker to generate Visual C++ project files for the tests.
          'msvcmaker' is a perl script so you may be able to run it on
          Windows.
          <screen>
<prompt>$ </>./tools/winapi/msvcmaker --no-wine
          </screen>
        </para></listitem>
        <listitem><para>
          If the previous steps were done on your Linux development
          machine, make the Wine sources accessible to the Windows machine
          on which you are going to compile them. Typically you would do
          this using Samba but copying them altogether would work too.
        </para></listitem>
        <listitem><para>
          On the Windows machine, open the <filename>winetest.dsw</>
          workspace. This will load each test's project. For each test there
          are two configurations: one compiles the test with the Wine
          headers, and the other uses the Visual C++ headers. Some tests
          will compile fine with the former, but most will require the
          latter.
        </para></listitem>
        <listitem><para>
          Open the <menuchoice><guimenu>Build</> <guimenu>Batch
          build...</></> menu and select the tests and build configurations
          you want to build. Then click on <guibutton>Build</>.
        </para></listitem>
        <listitem><para>
          To run a specific test from Visual C++, go to
          <menuchoice><guimenu>Project</> <guimenu>Settings...</></>. There
          select that test's project and build configuration and go to the
          <guilabel>Debug</> tab. There type the name of the specific test
          to run (e.g. 'thread') in the <guilabel>Program arguments</>
          field. Validate your change by clicking on <guibutton>Ok</> and
          start the test by clicking the red exclamation mark (or hitting
          'F5' or any other usual method).
        </para></listitem>
        <listitem><para>
          You can also run the tests from the command line. You will find
          them in either <filename>Output\Win32_Wine_Headers</> or
          <filename>Output\Win32_MSVC_Headers</> depending on the build
          method. So to run the kernel 'path' tests you would do:
          <screen>
<prompt>C:\></>cd dlls\kernel\tests\Output\Win32_MSVC_Headers
<prompt>C:\dlls\kernel\tests\Output\Win32_MSVC_Headers></>kernel32_test path
          </screen>
        </para></listitem>
      </itemizedlist>
    </sect2>
    <sect2>
      <title>With MinGW</title>
      <para>
        This needs to be documented. The best approach may be to ask on the
        Wine development mailing list and update this documentation with
        the result of your inquiry.
      </para>
    </sect2>
    <sect2>
      <title>Cross compiling with MinGW on Linux</title>
      <para>
        Here is how to generate Windows executables for the tests straight
        from the comfort of Linux.
      </para>
      <itemizedlist>
        <listitem><para>
          First you need to get the MinGW cross-compiler. On Debian all
          you need to do is type <command>apt-get install mingw32</>.
        </para></listitem>
        <listitem><para>
          If you had already run <command>configure</>, then delete
          <filename>config.cache</> and re-run <command>configure</>.
          You can then run <command>make crosstest</>. To sum up:
          <screen>
<prompt>$ </><userinput>rm config.cache</>
<prompt>$ </><userinput>./configure</>
<prompt>$ </><userinput>make crosstest</>
          </screen>
        </para></listitem>
        <listitem><para>
          If you get an error when compiling <filename>winsock.h</> then
          you probably need to apply the following patch:
          <ulink url="http://www.winehq.com/hypermail/wine-patches/2002/12/0157.html">http://www.winehq.com/hypermail/wine-patches/2002/12/0157.html</>
        </para></listitem>
      </itemizedlist>
    </sect2>
  </sect1>


  <sect1 id="testing-test">
    <title>Inside a test</title>

    <para>
      When writing new checks you can either modify an existing test file or
      add a new one. If your tests are related to the tests performed by an
      existing file, then add them to that file. Otherwise create a new .c
      file in the tests directory and add that file to the
      <varname>CTESTS</> variable in <filename>Makefile.in</>.
    </para>
    <para>
      A new test file will look something like the following:
      <screen>
#include &lt;wine/test.h&gt;
#include &lt;winbase.h&gt;

/* Maybe auxiliary functions and definitions here */

START_TEST(paths)
{
    /* Write your checks there or put them in functions you will call from
     * there
     */
}
      </screen>
    </para>
    <para>
      The test's entry point is the START_TEST section. This is where
      execution will start. You can put all your tests in that section but
      it may be better to split related checks in functions you will call
      from the START_TEST section. The parameter to START_TEST must match
      the name of the C file. So in the above example the C file would be
      called <filename>paths.c</>.
    </para>
    <para>
      Tests should start by including the <filename>wine/test.h</> header.
      This header will provide you access to all the testing framework
      functions. You can then include the windows header you need, but make
      sure not to include any Unix or Wine specific header: tests must
      compile on Windows.
    </para>
    <para>
      You can use <function>trace</> to print informational messages. Note
      that these messages will only be printed if 'runtest -v' is being used.
      <screen>
trace("testing GlobalAddAtomA");
trace("foo=%d",foo);
      </screen>
    </para>
    <para>
      Then just call functions and use <function>ok</> to make sure that
      they behaved as expected:
      <screen>
ATOM atom = GlobalAddAtomA( "foobar" );
ok( GlobalFindAtomA( "foobar" ) == atom, "could not find atom foobar" );
ok( GlobalFindAtomA( "FOOBAR" ) == atom, "could not find atom FOOBAR" );
      </screen>
      The first parameter of <function>ok</> is an expression which must
      evaluate to true if the test was successful. The next parameter is a
      printf-compatible format string which is displayed in case the test
      failed, and the following optional parameters depend on the format
      string.
    </para>
  </sect1>

  <sect1 id="testing-error-messages">
    <title>Writing good error messages</title>
    <para>
      The message that is printed when a test fails is
      <emphasis>extremely</> important.
    </para>
    <para>
      Someone will take your test, run it on a Windows platform that
      you don't have access to, and discover that it fails. They will then
      post an email with the output of the test, and in particular your
      error message. Someone, maybe you, will then have to figure out from
      this error message why the test failed.
    </para>
    <para>
      If the error message contains all the relevant information, that will
      be easy. If not, it will require modifying the test, finding
      someone to compile it on Windows, sending the modified version to the
      original tester and waiting for his reply. In other words, it will
      be long and painful.
    </para>
    <para>
      So how do you write a good error message? Let's start with an example
      of a bad error message:
      <screen>
ok(GetThreadPriorityBoost(curthread,&amp;disabled)!=0,
   "GetThreadPriorityBoost Failed");
      </screen>
      This will yield:
      <screen>
thread.c:123: Test failed: GetThreadPriorityBoost Failed
      </screen>
    </para>
    <para>
      Did you notice how the error message provides no information about
      why the test failed? We already know from the line number exactly
      which test failed. In fact the error message gives strictly no
      information that cannot already be obtained by reading the code. In
      other words it provides no more information than an empty string!
    </para>
    <para>
      Let's look at how to rewrite it:
      <screen>
BOOL rc;
...
rc=GetThreadPriorityBoost(curthread,&amp;disabled);
ok(rc!=0 &amp;&amp; disabled==0,"rc=%d error=%ld disabled=%d",
   rc,GetLastError(),disabled);
      </screen>
      This will yield:
      <screen>
thread.c:123: Test failed: rc=0 error=120 disabled=0
      </screen>
    </para>
    <para>
      When receiving such a message, one would check the source, see that
      it's a call to GetThreadPriorityBoost, and see that the test failed
      not because the API returned the wrong value, but because it returned
      an error code. Furthermore we see that GetLastError() returned 120,
      which winerror.h defines as ERROR_CALL_NOT_IMPLEMENTED. So the source
      of the problem is obvious: this Windows platform (here Windows 98)
      does not support this API, and thus the test must be modified to
      detect such a condition and skip the check.
    </para>
    <para>
      So a good error message should provide all the information which
      cannot be obtained by reading the source, typically the function
      return value, error codes, and any function output parameter. Even if
      more information is needed to fully understand a problem,
      systematically providing the above is easy and will help cut down the
      number of iterations required to get to a resolution.
    </para>
    <para>
      It may also be a good idea to dump items that may be hard to retrieve
      from the source, like the expected value in a test if it is the
      result of an earlier computation, or comes from a large array of test
      values (e.g. index 112 of _pTestStrA in vartest.c). In that respect,
      for some tests you may want to define a macro such as the following:
      <screen>
#define eq(received, expected, label, type) \
        ok((received) == (expected), "%s: got " type " instead of " type, (label),(received),(expected))

...

eq( b, curr_val, "SPI_{GET,SET}BEEP", "%d" );
      </screen>
    </para>
  </sect1>


  <sect1 id="testing-platforms">
    <title>Handling platform issues</title>
    <para>
      Some checks may be written before they pass successfully in Wine.
      Without some mechanism, such checks would potentially generate
      hundreds of known failures for months each time the tests are run.
      This would make it hard to detect new failures caused by a
      regression, or to detect that a patch fixed a long standing issue.
    </para>
    <para>
      Thus the Wine testing framework has the concept of platforms, and
      groups of checks can be declared as expected to fail on some of them.
      In the most common case, one would declare a group of tests as
      expected to fail in Wine. To do so, use the following construct:
      <screen>
todo_wine {
    SetLastError( 0xdeadbeef );
    ok( GlobalAddAtomA(0) == 0 &amp;&amp; GetLastError() == 0xdeadbeef, "failed to add atom 0" );
}
      </screen>
      On Windows the above check would be performed normally, but on Wine it
      would be expected to fail, and not cause the failure of the whole
      test. However, if that check were to succeed in Wine, it would
      cause the test to fail, thus making it easy to detect when something
      has changed that fixes a bug. Also note that todo checks are accounted
      separately from regular checks so that the testing statistics remain
      meaningful. Finally, note that todo sections can be nested so that if
      a test only fails on the cygwin and reactos platforms, one would
      write:
      <screen>
todo("cygwin") {
    todo("reactos") {
        ...
    }
}
      </screen>
      <!-- FIXME: Would we really have platforms such as reactos, cygwin, freebsd & co? -->
      But specific platforms should not be nested inside a todo_wine section
      since that would be redundant.
    </para>
    <para>
      When writing tests you will also encounter differences between Windows
      9x and Windows NT platforms. Such differences should be treated
      differently from the platform issues mentioned above. In particular
      you should remember that the goal of Wine is not to be a clone of any
      specific Windows version but to run Windows applications on Unix.
    </para>
    <para>
      So, if an API returns a different error code on Windows 9x and
      Windows NT, your check should just verify that Wine returns one or
      the other:
      <screen>
ok ( GetLastError() == WIN9X_ERROR || GetLastError() == NT_ERROR, ...);
      </screen>
    </para>
    <para>
      If an API is only present on some Windows platforms, then use
      LoadLibrary and GetProcAddress to check if it is implemented and
      invoke it. Remember, tests must run on all Windows platforms.
      Similarly, conformance tests should not try to correlate the Windows
      version returned by GetVersion with whether given APIs are
      implemented or not. Again, the goal of Wine is to run Windows
      applications (which do not do such checks), and not be a clone of a
      specific Windows version.
    </para>
    <!--para>
      FIXME: What about checks that cause the process to crash due to a bug?
    </para-->
  </sect1>


  <!-- FIXME: Strategies for testing threads, testing network stuff,
       file handling, eq macro... -->

</chapter>

<!-- Keep this comment at the end of the file
Local variables:
mode: sgml
sgml-parent-document:("wine-devel.sgml" "set" "book" "part" "chapter" "")
End:
-->