ODBC SQL driver: fix conversion of QByteArray to QVLA<SQLTCHAR>

The QByteArray is assumed to contain an SQLTCHAR string (so, either
UTF-8-, UTF-16- or UTF-32-encoded). Only in the UTF-8 case is the
size of the byte array the same as the length of the SQLTCHAR string
in code units, yet the size in bytes is what the code passed to the
QVLA<SQLTCHAR> append() call, causing it to read past the end of the
QByteArray buffer in the UTF-16 and UTF-32 cases.
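
For illustration, a minimal stand-alone sketch of that arithmetic, assuming a
2-byte SQLTCHAR (i.e. a UTF-16 build); SqlChar and the values are made up for
the example and are not driver code:

    #include <QByteArray>
    #include <QDebug>

    using SqlChar = char16_t;   // stand-in for a 2-byte SQLTCHAR (assumption)

    int main()
    {
        // "abc" as UTF-16: three SqlChars occupying six bytes.
        const char16_t text[] = u"abc";
        const QByteArray bytes(reinterpret_cast<const char *>(text),
                               3 * sizeof(SqlChar));

        qDebug() << bytes.size();                   // 6 -- the byte count the old code passed to append()
        qDebug() << bytes.size() / sizeof(SqlChar); // 3 -- the actual number of SqlChars
        // Passing 6 as append()'s *element* count would make it copy
        // 6 * sizeof(SqlChar) == 12 bytes out of a 6-byte buffer: an overread.
    }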

Fix by properly calculating the string size from the size-in-bytes and
then memcpy()ing into the QVLA. We use memcpy() and not
QVLA::append(T*, n) because the QByteArray buffer need not be aligned
on an alignof(SQLTCHAR) boundary (certainly not since it gained the
prepend "optimization").
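
As a hedged illustration of that reasoning (SqlChar again stands in for
SQLTCHAR and toSqlCharArray() is a hypothetical helper name, not part of the
driver):

    #include <QByteArray>
    #include <QVarLengthArray>
    #include <cstring>

    using SqlChar = char16_t;   // stand-in for SQLTCHAR (assumption)

    // Copy a byte buffer holding SqlChars into a QVarLengthArray<SqlChar>.
    QVarLengthArray<SqlChar> toSqlCharArray(const QByteArray &bytes)
    {
        const qsizetype strSize = bytes.size() / qsizetype(sizeof(SqlChar));
        QVarLengthArray<SqlChar> string(strSize);
        // memcpy() rather than append(reinterpret_cast<const SqlChar *>(...), strSize):
        // bytes.data() is only char-aligned, and since QByteArray's prepend
        // "optimization" the payload may start at any byte offset, so reading
        // it through an SqlChar pointer could be a misaligned access.
        // memcpy() from the char buffer has no such alignment requirement.
        std::memcpy(string.data(), bytes.data(), size_t(strSize) * sizeof(SqlChar));
        return string;
    }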

Pick-to: 6.5 6.4 6.2 5.15
Change-Id: If3838c3dee89e6aca65541242642315b8e1fa6b4
Reviewed-by: Thiago Macieira <thiago.macieira@intel.com>
Author: Marc Mutz
Date:   2023-01-31 11:06:56 +01:00
Commit: 4c445ef0ba (parent: a1d43b8334)

@@ -1740,10 +1740,11 @@ bool QODBCResult::exec()
             case QMetaType::QString:
                 if (d->unicode) {
                     if (bindValueType(i) & QSql::Out) {
-                        const QByteArray &first = tmpStorage.at(i);
-                        QVarLengthArray<SQLTCHAR> array;
-                        array.append((const SQLTCHAR *)first.constData(), first.size());
-                        values[i] = fromSQLTCHAR(array, first.size()/sizeof(SQLTCHAR));
+                        const QByteArray &bytes = tmpStorage.at(i);
+                        const auto strSize = bytes.size() / sizeof(SQLTCHAR);
+                        QVarLengthArray<SQLTCHAR> string(strSize);
+                        memcpy(string.data(), bytes.data(), strSize * sizeof(SQLTCHAR));
+                        values[i] = fromSQLTCHAR(string);
                     }
                     break;
                 }