* crypt/Makefile: Move test targets after toplevel Rules
inclusion. Grab any necessary sysdep routines when linking.
* crypt/md5.c (md5_process_block): Remove define; we will always
name it __md5_process_block.
(md5_finish_ctx): Update md5_process_block call.
(md5_stream): Likewise.
(md5_process_bytes): Likewise.
(md5_process_block): Rename to __md5_process_block and move to ...
* crypt/md5-block.c: ... here.
* crypt/sha256.c (sha256_process_block): Move to ...
* crypt/sha256-block.c: ... here.
* crypt/sha512.c (sha512_process_block): Move to ...
* crypt/sha512-block.c: ... here.
* locale/Makefile (CFLAGS-md5.c): Define to add crypt/ to include
path.
* sysdeps/sparc/sparc-ifunc.h (sparc_libc_ifunc): Define.
* sysdeps/sparc/sparc64/multiarch/Makefile
(libcrypt-sysdep_routines): Add crypto assembler sysdeps when in
crypt subdir.
(localedef-aux): Add md5 crypto assembler when in locale subdir.
* sysdeps/sparc/sparc32/sparcv9/multiarch/Makefile: Mirror sparc64
multiarch changes.
* sysdeps/sparc/sparc64/multiarch/md5-block.c: New file.
* sysdeps/sparc/sparc64/multiarch/md5-crop.S: New file.
* sysdeps/sparc/sparc64/multiarch/sha256-block.c: New file.
* sysdeps/sparc/sparc64/multiarch/sha256-crop.S: New file.
* sysdeps/sparc/sparc64/multiarch/sha512-block.c: New file.
* sysdeps/sparc/sparc64/multiarch/sha512-crop.S: New file.
* sysdeps/sparc/sparc32/sparcv9/multiarch/md5-block.c: New file.
* sysdeps/sparc/sparc32/sparcv9/multiarch/md5-crop.S: New file.
* sysdeps/sparc/sparc32/sparcv9/multiarch/sha256-block.c: New
file.
* sysdeps/sparc/sparc32/sparcv9/multiarch/sha256-crop.S: New file.
* sysdeps/sparc/sparc32/sparcv9/multiarch/sha512-block.c: New
file.
* sysdeps/sparc/sparc32/sparcv9/multiarch/sha512-crop.S: New file.
* sysdeps/unix/sysv/linux/sparc/sparc64/get_clockfreq.c: Include
inttypes.h.
(__get_clockfreq_via_proc_openprom): Use __open, __read, and
__close rather than their public counterparts.
* sysdeps/unix/sysv/linux/powerpc/bits/fcntl.h: Remove all
definitions and declarations that are provided by
<bits/fcntl-linux.h> and include <bits/fcntl-linux.h>.
* sysdeps/unix/sysv/linux/sparc/sparc64/__start_context.S: New file.
* sysdeps/unix/sysv/linux/sparc/sparc64/makecontext.c
(__start_context): Declare.
(__makecontext_ret): Delete.
(__makecontext): Hook up __start_context instead of
__makecontext_ret.
* sysdeps/unix/sysv/linux/sparc/sparc64/Makefile
(sysdep_routines): Add __start_context when in stdlib.
[BZ #14809]
* sysdeps/unix/sysv/linux/sys/sysctl.h (_UAPI_LINUX_KERNEL_H)
(_UAPI_LINUX_TYPES_H): Starting with Linux 3.7, the include header
guards have changed.  Define them only if not yet defined, and #undef
them again after including linux/sysctl.h if they were defined here.
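
The sysctl.h entry above boils down to a small preprocessor dance.  A
minimal sketch, assuming the 3.7-style guard names; the __undef_* helper
macros are illustrative names, not necessarily the ones glibc uses:

  #ifndef _UAPI_LINUX_KERNEL_H
  # define _UAPI_LINUX_KERNEL_H 1
  # define __undef_UAPI_LINUX_KERNEL_H
  #endif
  #ifndef _UAPI_LINUX_TYPES_H
  # define _UAPI_LINUX_TYPES_H 1
  # define __undef_UAPI_LINUX_TYPES_H
  #endif

  #include <linux/sysctl.h>

  /* Undo our definitions so the real headers can still be included
     later if something else wants them.  */
  #ifdef __undef_UAPI_LINUX_KERNEL_H
  # undef _UAPI_LINUX_KERNEL_H
  # undef __undef_UAPI_LINUX_KERNEL_H
  #endif
  #ifdef __undef_UAPI_LINUX_TYPES_H
  # undef _UAPI_LINUX_TYPES_H
  # undef __undef_UAPI_LINUX_TYPES_H
  #endif
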
Atomic ops are issued directly from the core rather than potentially
sitting in the write buffer, so they can improve the performance of
other waiters.  In addition, if we never pulled a copy of the lock's
cache line into our own cache, using an atomic op means we do not have
to acquire the cache line before we can unlock.
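
As a rough illustration of the difference (not the actual glibc tile
lock code; the type and function names below are made up), compare
releasing a simple spinlock with a plain store versus an atomic op:

  #include <stdatomic.h>

  typedef struct { atomic_int held; } sketch_lock_t;

  /* Plain release store: the store may linger in this core's write
     buffer before other waiters observe the lock becoming free.  */
  static inline void
  unlock_with_store (sketch_lock_t *l)
  {
    atomic_store_explicit (&l->held, 0, memory_order_release);
  }

  /* Atomic op: issued directly from the core, and on machines whose
     atomics are performed away from the core it avoids having to
     acquire the lock's cache line just to unlock.  */
  static inline void
  unlock_with_atomic (sketch_lock_t *l)
  {
    atomic_exchange_explicit (&l->held, 0, memory_order_release);
  }
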
With gcc 4.8 tilegx has support for -mcmodel=large, to tolerate very
large shared objects. This option changes the compiler output to
not include direct jump instructions, which have a range of only
2^30, i.e. +/- 512MB.  Instead the compiler marshals the target PCs
into registers and then uses jump- or call-to-register instructions.
For glibc, the upshot is that we need to arrange for a few functions
to tolerate the possibility of a large range between the PC and
the target. In particular, the crti.S and start.S code needs
to be able to reach from .init to the PLT, as does gmon-start.c.
The elf-init.c code has the reverse problem, needing to call from
libc_nonshared.a (linked at the end of shared objects) back to the
_init section at the beginning.
No other functions in *_nonshared.a need to be built this way, as
they only call the PLT (or potentially each other), but all of that
code is linked at the very end of the shared object.
We don't build the standard -static archives with this option as the
performance cost is high enough and the use case is rare enough that
it doesn't seem worthwhile. Instead, we would encourage developers
who need the -static model with huge executables to build a private
copy of glibc and configure it with -mcmodel=large.
Note that libc.so et al don't need any changes; the only changes
are for code that is statically linked into user code built with
-mcmodel=large.
For the assembly code, I just rewrote it so that it unconditionally
uses the large model. To be able to pass -mcmodel=large to
csu/elf-init.c and csu/gmon-start.c, I need to check whether the
compiler supports that flag, since gcc 4.7 doesn't; I added that
support by creating a small Makefile fragment that just runs the
compiler to check.
Normally, the simulator is notified of absolute pathnames via
_dl_load_hook.  However, when a relative pathname is used, the
simulator may not realize that it matches a file it could locate in
the file system it has access to.
Instead we provide a simplified version of the realpath function
so we can pass a plausible absolute pathname to the simulator.
Since we're now doing more work at object load time, we also add
a guard so we do no work at all if we're not running on the simulator.
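
A stripped-down sketch of the idea, assuming all we need is a plausible
absolute name to hand to the simulator (the function name is invented,
and unlike the real simplified realpath this sketch does not collapse
"." and ".." components):

  #include <stddef.h>
  #include <string.h>
  #include <unistd.h>

  /* Turn a possibly-relative object name into an absolute pathname by
     prefixing the current working directory; fall back to the raw name
     if anything does not fit.  */
  static const char *
  sim_absolute_name (const char *name, char *buf, size_t buflen)
  {
    if (name[0] == '/')
      return name;                        /* Already absolute.  */
    if (getcwd (buf, buflen) == NULL)
      return name;
    size_t len = strlen (buf);
    if (len + 1 + strlen (name) + 1 > buflen)
      return name;                        /* Would not fit.  */
    buf[len] = '/';
    strcpy (buf + len + 1, name);
    return buf;
  }
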
- Override <memcopy.h> so we use full 8-byte word copies on tilegx32
for memmove, then use op_t in memcpy instead of the previous
locally-defined word_t just to avoid proliferating identical types.
- Fix bug in memcpy prefetch that caused us to never prefetch past
the first cache line.
- Optimize misaligned memcpy by inlining _wordcopy_fwd_dest_aligned
instead of just doing a dumb word-at-a-time copy.
- Make memcpy safe for forward copies by doing all the loads from
a given cache line prior to doing a wh64 (cache line zero-fill)
on the destination. Remove now-redundant src == dst check.
- Copy and optimize the generic wordcopy.c routines to use the tile
"double align" instruction instead of the MERGE macro; to avoid
offset addressing mode (which tile doesn't have) by rewriting the
pointer math to load and store with a zero index; and to use
post-increment addresses in the inner loops to improve scheduling
(a short C sketch of this inner-loop rewrite follows the list).
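
Here is that sketch, illustrative only: op_t stands in for the word
type from <memcopy.h> (uintptr_t is just a convenient substitute), and
the tile-specific "double align" handling of the misaligned case is
not shown.

  #include <stddef.h>
  #include <stdint.h>

  typedef uintptr_t op_t;   /* One machine word, as in <memcopy.h>.  */

  /* Generic-style loop with indexed addressing: on tile, which has no
     register+offset addressing mode, each access costs extra address
     arithmetic.  */
  static void
  copy_indexed (op_t *dstp, const op_t *srcp, size_t len)
  {
    for (size_t i = 0; i < len; i++)
      dstp[i] = srcp[i];
  }

  /* Rewritten loop: zero-index loads and stores through pointers that
     are post-incremented, which schedules better on tile.  */
  static void
  copy_postinc (op_t *dstp, const op_t *srcp, size_t len)
  {
    while (len-- > 0)
      *dstp++ = *srcp++;
  }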