150c6fb74b

Tidied up the existing float tests in the process (in particular,
s/SUCCESS/PASS/, since that matches real test output). These verify that
QCOMPARE() handles floats and doubles as intended. Extended the existing
qFuzzyCompare tests to probe the boundaries of the ranges of values of both
types.

Revised the toString<double> that qCompare() uses to give enough precision
to actually show some of the differences being tested (12 digits, to match
what qFuzzyCompare tests, so as to show different values rather than, e.g.,
1e12 for both expected and actual) and to give consistent results for
infinities and NaN (MinGW had eccentric versions of these, leading to
different output from tests, which thus failed). Did the latter also for
toString<float>, and fixed stray zeros in MinGW's exponents (which made a
kludge in tst_selftest.cpp redundant, so I removed that, too).

That has further complicated the handling of floating-point types, so let's
keep an eye on how expensive that is getting by adding a benchmark test for
QTest::toString(). Unfortunately, default settings only get runs that take
modest numbers of milliseconds (some as low as 40), while increasing this
with -minimumvalue 100 or more gets the process killed - and I'm unable to
find out who is doing the killing (it's not QProcess::kill, ::kill or the
QtTest WatchDog, as far as I can tell). So the results are rather noisy:
the integral tests, which this change does not affect, exhibit speed-ups by
factors of up to 5 and slow-downs by factors of up to 100 between runs with
and without this change. The relatively modest slow-downs and speed-ups in
the floating-point tests thus seem likely to be happenstance rather than
signal.

Change-Id: I4a6bbbab6a43bf14a4089e96238a7c8da2c3127e
Reviewed-by: Ulf Hermann <ulf.hermann@qt.io>
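To illustrate the formatting described above, here is a minimal sketch,
assuming a standalone helper (the function name is hypothetical; this is
not Qt's actual implementation), of printing a double with 12 significant
digits and consistent spellings for infinities and NaN:

```cpp
// Hypothetical sketch; Qt's real toString<double> differs in detail.
#include <cmath>
#include <cstdio>
#include <string>

std::string formatDouble(double d) // illustrative name, not a Qt API
{
    if (std::isnan(d))
        return "nan";              // one fixed spelling, regardless of platform quirks
    if (std::isinf(d))
        return d < 0 ? "-inf" : "inf";
    char buf[64];
    // %.12g prints up to 12 significant digits, enough to distinguish
    // nearby values that would both round to, e.g., 1e+12 at lower precision.
    std::snprintf(buf, sizeof buf, "%.12g", d);
    return buf;
}
```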
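And a minimal sketch of what a benchmark of QTest::toString() could look
like (the class and slot names are made up for illustration; this is not
the test the commit adds):

```cpp
#include <QtTest/QtTest>

class tst_ToStringBench : public QObject // hypothetical name
{
    Q_OBJECT
private slots:
    void benchDouble()
    {
        QBENCHMARK {
            // QTest::toString() returns a heap-allocated string that the
            // caller owns, so free it to keep the benchmark leak-free.
            delete[] QTest::toString(3.14159265358979);
        }
    }
};

QTEST_APPLESS_MAIN(tst_ToStringBench)
#include "tst_tostringbench.moc"
```

QBENCHMARK repeats its body until the measurement backend considers the
timing stable; the -minimumvalue option mentioned above raises the minimum
acceptable measurement value.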
auto
baselineserver
benchmarks
global
manual
shared
testserver
README
tests.pro
This directory contains autotests and benchmarks based on Qt Test. In order
to run the autotests reliably, you need to configure a desktop to match the
test environment that these tests are written for.

Linux X11:

* The user must be logged in to an active desktop; you can't run the
  autotests without a valid DISPLAY that allows X11 connections.
* The tests are run against a KDE3 or KDE4 desktop.
* The window manager uses "click to focus", and not "focus follows mouse".
  Many tests move the mouse cursor around and expect this to not affect
  focus and activation.
* Disable "click to activate", i.e., when a window is opened, the window
  manager should automatically activate it (give it input focus) and not
  wait for the user to click the window.
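As an aside, one common way for a test to degrade gracefully when no X11
display is available is to skip itself up front. This is a minimal sketch
(the class and slot names are hypothetical), not something this README
prescribes:

```cpp
#include <QtTest/QtTest>

class tst_NeedsX11 : public QObject // hypothetical test class
{
    Q_OBJECT
private slots:
    void initTestCase()
    {
        // Bail out early when there is no X11 display to connect to.
        if (qEnvironmentVariableIsEmpty("DISPLAY"))
            QSKIP("This test needs a valid DISPLAY that allows X11 connections.");
    }
    void focusFollowsClick()
    {
        // X11-dependent checks would go here.
        QVERIFY(true);
    }
};

QTEST_MAIN(tst_NeedsX11)
#include "tst_needsx11.moc"
```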