The patch is straightforward:
- The sparc32 v8 implementations are moved to be the generic ones.
- A configure test is added to check for either __sparc_v8__ or
__sparc_v9__.
- The triple names are simplified and sparc implies sparcv8.
The idea is to keep support for sparcv8 architectures that do support
CAS instructions, such as LEON3/LEON4.
Checked on sparcv9-linux-gnu and sparc64-linux-gnu.
Tested-by: Andreas Larsson <andreas@gaisler.com>
This patch makes build-many-glibcs.py default to binutils 2.33 branch.
Tested with build-many-glibcs.py (compilers and glibcs builds).
* scripts/build-many-glibcs.py (Context.checkout): Default
binutils version to 2.33 branch.
Co-authored-by: Gabriel F. T. Gomes <gabriel@inconstante.net.br>
Reviewed-by: Gabriel F. T. Gomes <gabriel@inconstante.net.br>
Reviewed-by: Joseph Myers <joseph@codesourcery.com>
The utility of a ChangeLog file has been discussed in various mailing
list threads and GNU Tools Cauldrons over the past years, and the general
consensus is that while the file may have been very useful in the past,
when revision control did not exist or was not as powerful as it is
today, its current utility is fast diminishing. Further, the
ChangeLog format gets in the way of modernisation of processes since
it almost always results in rewriting of a commit, thus preventing use
of any code review tools to automatically manage patches in the glibc
project.
There is consensus in the glibc community that documentation of why a
change was done (i.e. a detailed description in a git commit) is more
useful than what changed (i.e. a ChangeLog entry) since the latter can
be deduced from the patch. The GNU community would, however, like to
keep the option of ascertaining what changed through a ChangeLog-like
output, and as a compromise it was proposed that a script be developed
that generates this output.
The script below is the result of these discussions. This script
takes two git revision references as input and generates the git log
between those revisions in a form that resembles a ChangeLog. Its
capabilities and limitations are listed in a comment in the script.
At a high level it is capable of parsing C code and telling what
changed at the top level, but not within constructs such as functions.
Design
------
At a high level, the script analyses the raw output of a VCS, parses
the source files that have changed and attempts to determine what
changed. The script driver needs three distinct components to be
fully functional for a repository:
- A vcstocl_quirks.py file that helps it parse weird patterns in
sources that may result from preprocessor defines.
- A VCS plugin backend; the git backend is implemented for glibc.
- A programming language parser plugin. C is currently implemented.
Additional programming language parsers can be added to give more
detailed output for changes in those types of files.
For files in languages other than those that have a parser, the script
only identifies whether a file has been added, removed, modified, had its
permissions changed, etc., but it cannot understand the change in content.
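To illustrate the plugin split described above, here is a minimal,
hypothetical sketch; the names below are illustrative only and do not
match the actual scripts/vcs_to_changelog/ code:

  import os

  # Hypothetical registry mapping file extensions to language frontends.
  FRONTENDS = {'.c': 'frontend_c'}

  def describe_change(path, kind):
      # 'kind' is what the VCS backend reported for the file: 'added',
      # 'deleted', 'modified', 'mode-changed', and so on.
      ext = os.path.splitext(path)[1]
      if kind == 'modified' and ext in FRONTENDS:
          # A real frontend parses both versions of the file (after applying
          # the vcstocl_quirks.py transformations) and reports top-level
          # additions, removals and changes.
          return '%s: top-level changes analysed by %s.' % (path, FRONTENDS[ext])
      # No parser for this kind of file: only the kind of change is reported.
      return '%s: %s.' % (path, kind)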
The C Parser
------------
The C parser is capable of parsing C programs with preprocessor macros
in place, as if they were part of the language. This presents some
challenges when parsing code that expands macros on the fly; to help
work around that, a vcstocl_quirks.py file provides transformations that
ease things.
The C parser can currently identify macro definitions and scopes and
all global and static declarations and definitions. It cannot parse
(and compare) changes inside functions yet; that could be a future
enhancement if the need for it arises.
Testing
-------
The script has been tested with the glibc repository up to glibc-2.29
and also in the past with emacs. While it would be ideal to have
something like this in a repository like gnulib, that should not be a
bottleneck for glibc to start using this, so this patch proposes to
add these scripts into glibc.
And here is (hopefully!) one of the last ChangeLog entries we'd have
to write for glibc:
* scripts/gitlog_to_changelog.py: New script to auto-generate
ChangeLog.
* scripts/vcs_to_changelog/frontend_c.py: New file.
* scripts/vcs_to_changelog/misc_util.py: New file.
* scripts/vcs_to_changelog/vcs_git.py: New file.
* scripts/vcs_to_changelog/vcstocl_quirks.py: Likewise.
This patch makes build-many-glibcs.py use Linux 5.3.
Tested with build-many-glibcs.py (host-libraries, compilers and glibcs
builds).
* scripts/build-many-glibcs.py (Context.checkout): Default Linux
version to 5.3.
Now that there are no internal users of __sysctl left, it is possible
to add an unconditional deprecation warning to <sys/sysctl.h>.
To avoid a test failure due to this warning in check-installed-headers,
skip the test for sys/sysctl.h.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
GCC 9 dropped support for the SPE extensions to PowerPC, which means
powerpc*-*-*gnuspe* configurations are no longer buildable with that
compiler. This ISA extension was peculiar to the “e500” line of
embedded PowerPC chips, which, as far as I can tell, are no longer
being manufactured, so I think we should follow suit.
This patch was developed by grepping for “e500”, “__SPE__”, and
“__NO_FPRS__”, and may not eliminate every vestige of SPE support.
Most uses of __NO_FPRS__ are left alone, as they are relevant to
normal embedded PowerPC with soft-float.
* sysdeps/powerpc/preconfigure: Error out on powerpc-*-*gnuspe*
host type.
* scripts/build-many-glibcs.py: Remove powerpc-*-linux-gnuspe
and powerpc-*-linux-gnuspe-e500v1 from list of build configurations.
* sysdeps/powerpc/powerpc32/e500: Recursively delete.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/e500: Recursively delete.
* sysdeps/unix/sysv/linux/powerpc/powerpc32/nofpu/context-e500.h:
Delete.
* sysdeps/powerpc/fpu_control.h: Remove SPE variant.
Issue an #error if used with a compiler in SPE-float mode.
* sysdeps/powerpc/powerpc32/__longjmp_common.S
* sysdeps/powerpc/powerpc32/setjmp_common.S
* sysdeps/unix/sysv/linux/powerpc/powerpc32/getcontext-common.S
* sysdeps/unix/sysv/linux/powerpc/powerpc32/nofpu/getcontext.S
* sysdeps/unix/sysv/linux/powerpc/powerpc32/nofpu/setcontext.S
* sysdeps/unix/sysv/linux/powerpc/powerpc32/nofpu/swapcontext.S
* sysdeps/unix/sysv/linux/powerpc/powerpc32/setcontext-common.S
* sysdeps/unix/sysv/linux/powerpc/powerpc32/swapcontext-common.S:
Remove code to preserve SPE register state.
* sysdeps/unix/sysv/linux/powerpc/elision-lock.c
* sysdeps/unix/sysv/linux/powerpc/elision-trylock.c
* sysdeps/unix/sysv/linux/powerpc/elision-unlock.c
Remove __SPE__ ifndefs.
A few of our installed headers contain UTF-8 in comments.
check-obsolete-constructs opened files without explicitly specifying
their encoding, so it would barf on these headers if “make check” was
run in a non-UTF-8 locale.
* scripts/check-obsolete-constructs.py (HeaderChecker.check):
Specify encoding="utf-8" when opening headers to check.
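For reference, the fix amounts to the usual pattern of passing an
explicit encoding to open() so the result does not depend on the locale
"make check" happens to run under (a sketch, not the exact code in
check-obsolete-constructs.py):

  def read_header(path):
      # Without encoding="utf-8", open() falls back to the locale's encoding
      # and raises UnicodeDecodeError on UTF-8 comments in a C/POSIX locale.
      with open(path, "r", encoding="utf-8") as fp:
          return fp.read()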
This patch makes build-many-glibcs.py use Linux 5.0 in place of 4.20
(now that the test change required to avoid false positives with ulong
in kernel headers has been committed). This includes adjusting the
logic to compute a tarball URL to handle different major version
numbers (rather than changing the path to hardcode v5.x in place of
v4.x, as someone might still wish to check out a v4.x version).
Tested that build-many-glibcs.py successfully checks out Linux 5.0
sources after this patch.
* scripts/build-many-glibcs.py (Context.checkout): Default Linux
version to 5.0.
(Context.checkout_tar): Handle variable major version for Linux
kernel.
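A sketch of the kind of logic this needs (illustrative only; the
function name and URL layout below are assumptions, not a copy of
Context.checkout_tar):

  def linux_tarball_url(version):
      # Derive the kernel.org directory from the major version instead of
      # hardcoding 'v4.x', so both 4.x and 5.x releases can be fetched.
      major = version.split('.')[0]
      return ('https://www.kernel.org/pub/linux/kernel/v%s.x/linux-%s.tar.xz'
              % (major, version))

  # linux_tarball_url('4.20') -> .../v4.x/linux-4.20.tar.xz
  # linux_tarball_url('5.0')  -> .../v5.x/linux-5.0.tar.xz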
The test for obsolete typedefs in installed headers was implemented
using grep, and could therefore get false positives on e.g. “ulong”
in a comment. It was also scanning all of the headers included by
our headers, and therefore testing headers we don’t control, e.g.
Linux kernel headers.
This patch splits the obsolete-typedef test from
scripts/check-installed-headers.sh to a separate program,
scripts/check-obsolete-constructs.py. Being implemented in Python,
it is feasible to make it tokenize C accurately enough to avoid false
positives on the contents of comments and strings. It also only
examines $(headers) in each subdirectory--all the headers we install,
but not any external dependencies of those headers. Headers whose
installed name starts with finclude/ are ignored, on the assumption
that they contain Fortran.
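As a rough illustration of the approach (the real script uses a proper
tokenizer rather than the simple regular expression below, and this is
not its actual code):

  import re

  OBSOLETE = {'ulong', 'ushort', 'uint'}
  # Strip comments and string/character literals first, so e.g. "ulong"
  # inside a comment cannot trigger a false positive.
  STRIP_RE = re.compile(r'/\*.*?\*/'            # block comments
                        r'|//[^\n]*'            # line comments
                        r'|"(?:\\.|[^"\\])*"'   # string literals
                        r"|'(?:\\.|[^'\\])*'",  # character constants
                        re.S)

  def obsolete_typedefs_used(path):
      text = STRIP_RE.sub(' ', open(path).read())
      return sorted(OBSOLETE & set(re.findall(r'\b\w+\b', text)))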
It is also feasible to make the new test understand the difference
between _defining_ the obsolete typedefs and _using_ the obsolete
typedefs, which means posix/{bits,sys}/types.h no longer need to be
exempted. This uncovered an actual bug in bits/types.h: __quad_t and
__u_quad_t were being used to define __S64_TYPE, __U64_TYPE,
__SQUAD_TYPE and __UQUAD_TYPE. These are changed to __int64_t and
__uint64_t respectively. This is a safe change, despite the comments
in bits/types.h claiming a difference between __quad_t and __int64_t,
because those comments are incorrect. In all current ABIs, both
__quad_t and __int64_t are ‘long’ when ‘long’ is a 64-bit type, and
‘long long’ when ‘long’ is a 32-bit type, and similarly for __u_quad_t
and __uint64_t. (Changing the types to be what the comments say they
are would be an ABI break, as it affects C++ name mangling.) This
patch includes a minimal change to make the comments not completely
wrong.
sys/types.h was defining the legacy BSD u_intN_t typedefs using a
construct that was not necessarily consistent with how the C99 uintN_t
typedefs are defined, and is also too complicated for the new script to
understand (it lexes C relatively accurately, but it does not attempt
to expand preprocessor macros, nor does it do any actual parsing).
This patch cuts all of that out and uses bits/types.h's __uintN_t typedefs
to define u_intN_t instead. This is verified to not change the ABI on
any supported architecture, via the c++-types test, which means u_intN_t
and uintN_t were, in fact, consistent on all supported architectures.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
* scripts/check-obsolete-constructs.py: New test script.
* scripts/check-installed-headers.sh: Remove tests for
obsolete typedefs, superseded by check-obsolete-constructs.py.
* Rules: Run scripts/check-obsolete-constructs.py over $(headers)
as a special test. Update commentary.
* posix/bits/types.h (__SQUAD_TYPE, __S64_TYPE): Define as __int64_t.
(__UQUAD_TYPE, __U64_TYPE): Define as __uint64_t.
Update commentary.
* posix/sys/types.h (__u_intN_t): Remove.
(u_int8_t): Typedef using __uint8_t.
(u_int16_t): Typedef using __uint16_t.
(u_int32_t): Typedef using __uint32_t.
(u_int64_t): Typedef using __uint64_t.
The check for "/finclude/" fails with the actual location of
Fortran headers because they are now stored in the "finclude"
subdirectory of the top-level include directory, so a relative path
does not contain a slash '/' before the "finclude" string.
If building on a subset of architectures only, it is easy to miss
wrapper headers which are required by other architectures because
they lack the corresponding sysdeps header. The check ensures
that every installed header which is not itself a sysdeps header
has a header under include/ (which presumably wraps the header,
and perhaps also adds declarations and definitions for !_ISOMAC).
Also check for the absence of the sysdeps/generic/bits directory
removed in commit c72565e5f1, to make
accidental re-introduction more difficult.
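Roughly, the check amounts to the following (a hedged sketch with
illustrative names, not the actual test code):

  import os

  def missing_wrappers(installed_headers, include_dir='include'):
      # installed_headers: pairs of (installed name, source directory).
      missing = []
      for name, srcdir in installed_headers:
          if '/sysdeps/' in srcdir:
              continue          # sysdeps headers are exempt by definition
          if name.startswith('finclude/'):
              continue          # Fortran headers are not wrapped
          if not os.path.exists(os.path.join(include_dir, name)):
              missing.append(name)
      return missing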
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
In some cases, sensitive to the readline version and the user's
environment, gdb might emit escape codes while run under Python's
pexpect (i.e. when testing the pretty printers). This patch, suggested
by Jan, helps isolate the test from the user's environment.
Tested on RHEL 7 x86_64 with DTS 7 and EPEL, which is one
magic combination of components that triggers this bug.
This patch updates some miscellaneous files from their upstream
sources (thereby bringing in copyright date updates for some of those
files).
Tested for x86_64, including "make pdf".
* manual/texinfo.tex: Update to version 2018-12-28.17 with
trailing whitespace removed.
* scripts/config.guess: Update to version 2019-01-01.
* scripts/config.sub: Update to version 2019-01-01.
* scripts/move-if-change: Update from gnulib.
Continuing the process of building up and using Python infrastructure
for extracting and using values in headers, this patch adds a test
that MAP_* constants from sys/mman.h agree with those in the Linux
kernel headers. (Other sys/mman.h constants could be added to the
test separately.)
This set of constants has grown over time, so the generic code is
enhanced to allow saying extra constants are OK on either side of the
comparison (where the caller sets those parameters based on the Linux
kernel headers version, compared with the version the headers were
last updated from). Although the test is a custom Python file, my
intention is to move in future to a single Python script for such
tests and text files it takes as inputs, once there are enough
examples to provide a guide to the common cases in such tests (I'd
like to end up with most or all such sets of constants copied from
kernel headers having such tests, and likewise for structure layouts
from the kernel).
The Makefile code is essentially the same as for tst-signal-numbers,
but I didn't try to find an object file to depend on to represent the
dependency on the headers used by the test (the conform/ tests don't
try to represent such header dependencies at all, for example).
Tested with build-many-glibcs.py, and also for x86_64 with older
kernel headers.
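In outline, such a test boils down to a single compare_macro_consts
call plus argument parsing; the sketch below is illustrative (the
parameter names and version handling are assumptions, not a verbatim
copy of tst-mman-consts.py):

  import argparse
  import sys
  import glibcextract

  def main():
      parser = argparse.ArgumentParser(description='Check MAP_* constants.')
      parser.add_argument('--cc', metavar='CC', help='C compiler (with options) to use')
      args = parser.parse_args()
      # Assumed versions: the kernel headers being built against, and the
      # version glibc's sys/mman.h constants were last updated from.
      linux_version_headers = (5, 0)
      linux_version_glibc = (4, 19)
      sys.exit(glibcextract.compare_macro_consts(
          '#define _GNU_SOURCE 1\n#include <sys/mman.h>\n',
          '#define _GNU_SOURCE 1\n#include <linux/mman.h>\n',
          args.cc, 'MAP_.*',
          # Extra constants are acceptable on whichever side has the newer
          # set of definitions.
          allow_extra_1=linux_version_glibc > linux_version_headers,
          allow_extra_2=linux_version_headers > linux_version_glibc))

  if __name__ == '__main__':
      main()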
* scripts/glibcextract.py (compare_macro_consts): Take parameters
to allow extra macros from first or second sources.
* sysdeps/unix/sysv/linux/tst-mman-consts.py: New file.
* sysdeps/unix/sysv/linux/Makefile [$(subdir) = misc]
(tests-special): Add $(objpfx)tst-mman-consts.out.
($(objpfx)tst-mman-consts.out): New makefile target.
This patch eliminates the gen-py-const.awk variant of gen-as-const,
switching to use of gen-as-const.py (with a new --python option) to
process .pysym files (i.e., to generate nptl_lock_constants.py), as
the syntax of those files is identical to that of .sym files.
Note that the generated nptl_lock_constants.py is *not* identical to
the version generated by the awk script. Apart from the trivial
changes (comment referencing the new script, and output being sorted),
the constants FUTEX_WAITERS, PTHREAD_MUTEXATTR_FLAG_BITS,
PTHREAD_MUTEXATTR_FLAG_PSHARED and PTHREAD_MUTEX_PRIO_CEILING_MASK are
now output as positive rather than negative constants (on x86_64
anyway; maybe not necessarily on 32-bit systems):
< FUTEX_WAITERS = -2147483648
---
> FUTEX_WAITERS = 2147483648
< PTHREAD_MUTEXATTR_FLAG_BITS = -251662336
< PTHREAD_MUTEXATTR_FLAG_PSHARED = -2147483648
---
> PTHREAD_MUTEXATTR_FLAG_BITS = 4043304960
> PTHREAD_MUTEXATTR_FLAG_PSHARED = 2147483648
< PTHREAD_MUTEX_PRIO_CEILING_MASK = -524288
---
> PTHREAD_MUTEX_PRIO_CEILING_MASK = 4294443008
This is because gen-as-const has a cast of the constant value to long
int, which gen-py-const lacks.
I think the positive values are more logically correct, since the
constants in question are in fact unsigned in C. But reliably
producing gen-as-const.py output that always (in C and Python) reflects
the signedness of values with the high bit of "long int" set would
require more complicated logic when computing the values.
The more correct positive values by themselves produce a failure of
nptl/test-mutexattr-printers, because masking with
~PTHREAD_MUTEXATTR_FLAG_BITS & ~PTHREAD_MUTEX_NO_ELISION_NP now leaves
the bits of -1 << 32 set in the Python value, resulting in a KeyError exception.
To avoid that, places masking with ~ of one of the constants in
question are changed to mask with 0xffffffff as well (this reflects
how ~ in Python applies to an infinite-precision integer whereas ~ in
C does not do any promotions beyond the width of int).
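A quick illustration of the point about ~ in Python versus C, using the
x86_64 value quoted above:

  PTHREAD_MUTEXATTR_FLAG_BITS = 4043304960     # 0xf0fff000

  # In C, ~ of a 32-bit unsigned value is another 32-bit value.  In
  # Python, ~ operates on an unbounded integer, so the result is negative
  # and has every bit above bit 31 set:
  print(~PTHREAD_MUTEXATTR_FLAG_BITS)               # -4043304961
  # Masking with 0xffffffff recovers the 32-bit complement the pretty
  # printers expect:
  print(~PTHREAD_MUTEXATTR_FLAG_BITS & 0xffffffff)  # 251662335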
Tested for x86_64.
* scripts/gen-as-const.py (main): Handle --python option.
* scripts/gen-py-const.awk: Remove.
* Makerules (py-const-script): Use gen-as-const.py.
($(py-const)): Likewise.
* nptl/nptl-printers.py (MutexPrinter.read_status_no_robust): Mask
with 0xffffffff together with ~(PTHREAD_MUTEX_PRIO_CEILING_MASK).
(MutexAttributesPrinter.read_values): Mask with 0xffffffff
together with ~PTHREAD_MUTEXATTR_FLAG_BITS and
~PTHREAD_MUTEX_NO_ELISION_NP.
* manual/README.pretty-printers: Update reference to
gen-py-const.awk.
This patch converts the tst-signal-numbers test from shell + awk to
Python.
As with gen-as-const, the point is not so much that shell and awk are
problematic for this code, as that it's useful to build up general
infrastructure in Python for use of a range of code involving
extracting values from C headers. This patch moves some code from
gen-as-const.py to a new glibcextract.py, which also gains functions
relating to listing macros, and comparing the values of a set of
macros from compiling two different pieces of code.
It's not just signal numbers that should have such tests; pretty much
any case where glibc copies constants from Linux kernel headers should
have such tests that the values and sets of constants agree except
where differences are known to be OK. Much the same also applies to
structure layouts (although testing those without hardcoding lists of
fields to test will be more complicated).
Given this patch, another test for a set of macros would essentially
be just a call to glibcextract.compare_macro_consts (plus boilerplate
code - and we could move to having separate text files defining such
tests, like the .sym inputs to gen-as-const, so that only a single
Python script is needed for most such tests). Some such tests would
of course need new features, e.g. where the set of macros changes in
new kernel versions (so you need to allow new macro names on the
kernel side if the kernel headers are newer than the version known to
glibc, and extra macros on the glibc side if the kernel headers are
older). tst-syscall-list.sh could become a Python script that uses
common code to generate lists of macros but does other things with its
own custom logic.
There are a few differences from the existing shell + awk test.
Because the new test evaluates constants using the compiler, no
special handling is needed any more for one signal name being defined
to another. Because asm/signal.h now needs to pass through the
compiler, not just the preprocessor, stddef.h is included as well
(given the asm/signal.h issue that it requires an externally provided
definition of size_t). The previous code defined __ASSEMBLER__ with
asm/signal.h; this is removed (__ASSEMBLY__, a different macro,
eliminates the requirement for stddef.h on some but not all
architectures).
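For illustration, the two pieces of code being compared look roughly
like this (a sketch; the actual tst-signal-numbers.py regular expression
and excluded names may differ):

  glibc_source = ('#define _GNU_SOURCE 1\n'
                  '#include <signal.h>\n')
  kernel_source = ('#define _GNU_SOURCE 1\n'
                   '#include <stddef.h>\n'     # asm/signal.h needs an external size_t
                   '#include <asm/signal.h>\n')
  # Both fragments are compiled and their SIG* macro values compared with
  # something like glibcextract.compare_macro_consts(glibc_source,
  # kernel_source, cc, 'SIG[A-Z]+', ...), excluding the handful of names
  # that legitimately differ between the two.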
Tested for x86_64, and with build-many-glibcs.py.
* scripts/glibcextract.py: New file.
* scripts/gen-as-const.py: Do not import os.path, re, subprocess
or tempfile. Import glibcextract.
(compute_c_consts): Remove. Moved to glibcextract.py.
(gen_test): Update reference to compute_c_consts.
(main): Likewise.
* sysdeps/unix/sysv/linux/tst-signal-numbers.py: New file.
* sysdeps/unix/sysv/linux/tst-signal-numbers.sh: Remove.
* sysdeps/unix/sysv/linux/Makefile
($(objpfx)tst-signal-numbers.out): Use tst-signal-numbers.py.
Redirect stderr as well as stdout.
This patch updates various miscellaneous files from their upstream
sources.
Tested for x86_64, including "make pdf".
* manual/texinfo.tex: Update to version 2018-09-21.20 with
trailing whitespace removed.
* scripts/config.guess: Update to version 2018-11-28.
* scripts/config.sub: Update to version 2018-11-28.
* scripts/install-sh: Update to version 2018-03-11.20.
* scripts/mkinstalldirs: Update to version 2018-03-07.03.
* scripts/move-if-change: Update to version 2018-03-07 03:47.
It was reported in
<https://sourceware.org/ml/libc-alpha/2018-12/msg00045.html> that
gen-as-const.py fails to generate test code in the case where a .sym
file has no symbols in it, resulting in a test failing to link for
Hurd.
The relevant difference from the old awk script is that the old script
treated '--' lines as indicating that the text needed at the start of
the test (or of the file used to compute constants) should be output at
that point if not already output, just as it treated lines with actual
entries for constants. This patch changes gen-as-const.py
accordingly, making it the sole responsibility of the code parsing
.sym files to determine when such text should be output and ensuring
it's always output at some point even if there are no symbols and no
'--' lines, since not outputting it means the test fails to link.
Handling '--' like that also avoids any problems that would arise if
the first entry for a symbol were inside #ifdef (since the text in
question must not be output inside #ifdef).
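In outline, the new rule looks like this (a hedged sketch; the names are
illustrative and this is not the exact gen-as-const.py code):

  def process_sym_lines(lines, emit_start, emit_text, emit_constant):
      started = False
      for line in lines:
          line = line.strip()
          if not line:
              continue
          if line.startswith('#'):
              # C text such as #include or #ifdef: passed through, and it
              # must not trigger the start text (it may be inside #ifdef).
              emit_text(line)
              continue
          if not started:
              emit_start()          # text needed before any constants
              started = True
          if line != '--':
              name, _, expr = line.partition(' ')
              emit_constant(name, expr or name)
      if not started:
          emit_start()              # even an empty .sym file must link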
Tested for x86_64, and with build-many-glibcs.py for i686-gnu. Note
that there are still compilation test failures for i686-gnu
(linknamespace tests, possibly arising from recent posix_spawn-related
changes).
* scripts/gen-as-const.py (compute_c_consts): Take an argument
'START' to indicate that start text should be output.
(gen_test): Likewise.
(main): Generate 'START' for first symbol or '--' line, or at end
of input if not previously generated.
hurd's jmp_buf-ssp.sym does not define any symbol.
scripts/gen-as-const.py was emitting an empty line in that case, and
the gawk invocation was prepending "asconst_" to it, ending up with:
.../build/glibc/setjmp/test-as-const-jmp_buf-ssp.c:1:2: error: expected « = », « , », « ; », « asm » or
« __attribute__ » at end of input
1 | asconst_
| ^~~~~~~~
* scripts/gen-as-const.py (main): Avoid emitting empty line when
there is no element in `consts'.
This patch replaces gen-as-const.awk, and some fragments of the
Makefile code that used it, by a Python script. The point is not so
much that awk is problematic for this particular script, as that I'd
like to build up a general Python infrastructure for extracting
information from C headers, for use in writing tests of such headers.
Thus, although this patch does not set up such infrastructure, the
compute_c_consts function in gen-as-const.py might be moved to a
separate Python module in a subsequent patch as a starting point for
such infrastructure.
The general idea of the code is the same as in the awk version, but no
attempt is made to make the output files textually identical. When
generating a header, a dict of constant names and values is generated
internally then defines are printed in sorted order (rather than the
order in the .sym file, which would have been used before). When
generating a test that the values computed match those from a normal
header inclusion, the test code is made into a compilation test using
_Static_assert, where previously the comparisons were done only when
the test was executed. One fragment of test generation (converting
the previously generated header to use asconst_* prefixes on its macro
names) is still in awk code in the makefiles; only the .sym processing
and subsequent execution of the compiler to extract constants have
moved to the Python script.
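As a rough illustration of that last step (hedged; not the actual
gen_test code), the generated test is just a series of _Static_asserts,
so a mismatch is a compile failure rather than a runtime one:

  def gen_static_assert_test(consts):
      # consts: mapping from symbol name to the C expression from the .sym
      # file; asconst_<name> macros come from the previously generated
      # header after the awk prefixing step mentioned above.
      lines = []
      for name, expr in sorted(consts.items()):
          lines.append('_Static_assert (asconst_%s == (%s), "value of %s");'
                       % (name, expr, name))
      return '\n'.join(lines) + '\n'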
Tested for x86_64, and with build-many-glibcs.py.
* scripts/gen-as-const.py: New file.
* scripts/gen-as-const.awk: Remove.
* Makerules ($(common-objpfx)%.h $(common-objpfx)%.h.d): Use
gen-as-const.py.
($(objpfx)test-as-const-%.c): Likewise.
These files were both auto-generated and shipped in the source tree.
We can assume that sed is available and always generate the files
during the build.
Now that build-many-glibcs.py touches at checkout time all files that
might get rebuilt in the glibc source directory in a normal glibc
build and test run, this patch stops the script from copying the glibc
source directory, so that all builds use the original directory
directly (and less disk space is used, less I/O is involved and cached
copies of the sources in memory can be shared between all the builds -
as well as avoiding spurious failures from copying while "git gc" is
running). This is similar to how all other components were already
handled. Any bugs involving writing into the source directory can be
dealt with in future as normal bugs, just as such bugs already are
handled.
Tested with build-many-glibcs.py runs with a read-only glibc source
directory, with all files not touched by the script having timestamps
in forwards alphabetical order and separately with all files not
touched by the script having timestamps in backwards alphabetical
order.
* scripts/build-many-glibcs.py (Glibc.build_glibc): Use original
source directory instead of a copy.
(CommandList.create_copy_dir): Remove.
Mathieu Desnoyers ran into an issue with his rseq patch where he
was the first person to add weak thread-local data and this
resulted in an ABI list update with entries like this:
"GLIBC_2.29 w ? D .tdata 0000000000000020".
The weakness of the symbol has nothing to do with the DSO's ABI,
and so we should not write anything about weak symbols here. The
.tdata entries should be treated exactly like .tbss entries and
the output should have been: "GLIBC_2.29 __rseq_abi T 0x20"
This change makes abilist.awk handle .tdata just like .tbss,
while at the same time adding an error for the default case and for
unknown lines. We never want anyone to be able to add
such entries to any ABI list files; they should instead see an
immediate error and consult with experts.
Tested by Mathieu Desnoyers <mathieu.desnoyers@efficios.com> with
the rseq patch set and 'make update-all-abi'.
Tested myself with 'make update-all-abi' on x86_64 with no
changes.
Signed-off-by: Carlos O'Donell <carlos@redhat.com>
build-many-glibcs.py currently copies the source tree to avoid issues
with parallel builds trying to write into it. This copying can result
in occasional spurious build failures from bots, when a "git gc" is in
progress that changes .git contents while copying is taking place, and
it would also be desirable to avoid the need to copy to save on disk
space, I/O and memory used in build-many-glibcs.py builds.
In preparation for removing the copying, this patch arranges for
build-many-glibcs.py to touch more files on checkout so their
timestamps do not result in make attempting to rebuild them. Before
actually removing the copying, I intend to do further tests to ensure
I haven't missed any other such makefile dependencies.
This is of course without prejudice to possibly moving more of these
files to being generated in the build directory rather than being
checked in at all, where that can be done using build tools already
required for the build. For sysdeps files (installed and otherwise)
it would be necessary to make sure this does not affect the search
ordering, for headers used in the build it would be necessary to
ensure they are generated early enough, and for errlist.c there may be
dual licensing reasons for keeping it checked in.
Tested that a checkout with build-many-glibcs.py does touch the
expected files and that a glibcs build for aarch64-linux-gnu succeeds.
* scripts/build-many-glibcs.py (Context.fix_glibc_timestamps):
Touch additional files.
Since we have consensus on requiring Python 3.4 or later to build
glibc, it follows that compatibility with older Python versions is
also no longer relevant to auxiliary Python scripts for use in glibc
development. This patch removes such compatibility code from
build-many-glibcs.py (compatibility code needed for 3.4, which lacks
the newer subprocess interface, is kept). Because
build-many-glibcs.py is not itself called from the glibc build system,
this patch is independent of the configure checks for having a
new-enough Python version, which are only relevant to uses of Python
from the main build and test process.
Tested with build-many-glibcs.py building glibc for aarch64-linux-gnu
(with Python 3.4 to make sure that still works).
* scripts/build-many-glibcs.py: Remove compatibility for missing
os.cpu_count and re.fullmatch.
For architectures and ABIs that are added in version 2.29 or later the
option --enable-obsolete-nsl is no longer available, and no libnsl
compatibility library is built.
Every so often we get libsanitizer or libgo builds breaking with new
glibc because of some change in the glibc headers.
glibc's build-many-glibcs.py deliberately disables libsanitizer and
GCC languages other than C and C++ because the point is to test glibc
and find glibc problems (including problems shown up by new compiler
warnings in new GCC), not to test libsanitizer or libgo; if the
compiler build fails because of libsanitizer or libgo failing to
build, that could hide the existence of new problems in glibc.
However, it seems reasonable to have a non-default mode where
build-many-glibcs.py does build those additional pieces, which this
patch adds.
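Roughly, the effect on the GCC configure arguments is as follows (a
sketch with illustrative names, not verbatim build-many-glibcs.py code):

  def gcc_configure_opts(full_gcc, first_build):
      opts = []
      if first_build or not full_gcc:
          # The default mode keeps libsanitizer out of both builds.
          opts.append('--disable-libsanitizer')
      if full_gcc and not first_build:
          opts.append('--enable-languages=all')
      else:
          opts.append('--enable-languages=c,c++')
      return opts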
Note that I do not intend to run a build-many-glibcs.py bot with this
new option. If people concerned with libsanitizer, libgo or other
potentially affected GCC libraries wish to find out about such
problems more quickly, they may wish to run such a bot or bots (and to
monitor the results and fix issues found - obviously there will be
some overlap with issues found by my bots not using that option).
Note also that building a non-native Ada compiler requires a
sufficiently recent native (or build-x-host, in general) Ada compiler
to be used, possibly more or less the same version as being built.
That needs to be in the PATH when build-many-glibcs.py --full-gcc is
run; the script does not deal with setting up such a compiler (or any
of the other host tools needed for building GCC and glibc, beyond the
GMP / MPFR / MPC libraries), but perhaps it should, to avoid the need
to keep updating such a compiler manually when running a bot.
Tested by running build-many-glibcs.py with the new option, with
mainline GCC. There are build failures for various configurations,
which may be of interest to Go / Ada people even if you're not
interested in running such a bot:
* mips64 / mips64el (all configurations): ICE building libstdc++, as
seen without using the new option
<https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87156>.
* aarch64_be: error building libgo (little-endian aarch64 works fine):
version.go:67:13: error: expected ';' or ')' or newline
67 | BigEndian =
| ^
version.go:67:3: error: reference to undefined name 'BigEndian'
67 | BigEndian =
| ^
* arm (all configurations): error building libgo:
/scratch/jmyers/glibc/many9/src/gcc/libgo/go/internal/syscall/unix/getrandom_linux.go:29:5: error: reference to undefined name 'randomTrap'
29 | if randomTrap == 0 {
| ^
/scratch/jmyers/glibc/many9/src/gcc/libgo/go/internal/syscall/unix/getrandom_linux.go:38:34: error: reference to undefined name 'randomTrap'
38 | r1, _, errno := syscall.Syscall(randomTrap,
| ^
What's happening there is, I think, that the arm*b*-*-* case in
libgo/configure.ac is wrongly matching arm-glibc-linux-gnueabi with
the 'b' in the vendor part, and then something else is failing to
handle GOARCH=armbe. Given that you can have configurations with
multilibs of both endiannesses, endianness should always be detected
by configure.ac, for all architectures, using a compile test of
whether __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__, not based on textual
matches to the host (= target at top-level) triplet.
* armeb (all configurations): error building libada (for some reason
the Arm libada configuration seems to do different things for EH for
big-endian, which makes no sense to me and doesn't actually work):
a-exexpr.adb:87:06: "System.Exceptions.Machine" is not a predefined library unit
a-exexpr.adb:87:06: "Ada.Exceptions (body)" depends on "Ada.Exceptions.Exception_Propagation (body)"
a-exexpr.adb:87:06: "Ada.Exceptions.Exception_Propagation (body)" depends on "System.Exceptions.Machine (spec)"
* hppa: error building libgo (same error as for aarch64_be).
* ia64: ICE building libgo. I've filed
<https://gcc.gnu.org/bugzilla/show_bug.cgi?id=87281> for this.
* m68k: ICE in the Go front end building libgo
<https://gcc.gnu.org/bugzilla/show_bug.cgi?id=84948>.
* microblaze, microblazeel, nios2, sh3, sh3eb: build failure in libada
for lack of a libada port to those systems (I'm not sure sh3 would
actually need anything different from sh4):
a-cbdlli.ads:38:14: violation of restriction "No_Finalization" at system.ads:47
* i686-gnu: build failure in libada, might be fixed by the patch
attached to <https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81103>
(not tested):
terminals.c:1115:13: fatal error: termio.h: No such file or directory
* scripts/build-many-glibcs.py (Context.__init__): Add full_gcc
argument.
(Config.build_gcc): Use --disable-libsanitizer for first GCC
build, but not for second build if --full-gcc. Use
--enable-languages=all for second build if --full-gcc.
(get_parser): Add --full-gcc option.
(main): Update call to Context.
We've had issues before with build failures (with new GCC) in code
only built with --enable-obsolete-rpc or --enable-obsolete-nsl not
being reported for a while because build-many-glibcs.py does not test
those configure options. This patch adds configurations (32-bit and
64-bit) using those options so that in future we can notice quickly if
they start failing to build.
Tested the new configurations do build with GCC 8.
* scripts/build-many-glibcs.py (Context.add_all_configs): Add
x86_64 and i686 configs using --enable-obsolete-rpc
--enable-obsolete-nsl.