Some MMX fixes from Patrick Baggett.
Original email...

Date: Sat, 10 Sep 2011 13:01:20 -0500
From: Patrick Baggett
To: SDL Development List <[email protected]>
Subject: Re: [SDL] SDL_memcpyMMX uses SSE instructions

In SDL_blit_copy.c, the function SDL_memcpyMMX() actually uses SSE
instructions. It is called in this context:

    #ifdef __MMX__
        if (SDL_HasMMX() &&
            !((uintptr_t) src & 7) && !(srcskip & 7) &&
            !((uintptr_t) dst & 7) && !(dstskip & 7)) {
            while (h--) {
                SDL_memcpyMMX(dst, src, w);
                src += srcskip;
                dst += dstskip;
            }
            _mm_empty();
            return;
        }
    #endif

This implies that the minimum CPU feature set is just MMX; there is a
separate SDL_memcpySSE() function for SSE. Yet SDL_memcpyMMX() does:

    #ifdef __SSE__
        _mm_prefetch(src, _MM_HINT_NTA);
    #endif

...which tests at compile time whether SSE intrinsics are available, not
at run time, and generates the PREFETCHNTA instruction. It also uses the
_mm_stream_pi() intrinsic, which generates the MOVNTQ instruction. Both
are SSE-era instructions, so an MMX-only CPU will fault on them.

If you replace the "MMX" copy loop with:

    __m64* d64 = (__m64*)dst;
    __m64* s64 = (__m64*)src;

    for (i = len / 64; i--;) {
        d64[0] = s64[0];
        d64[1] = s64[1];
        d64[2] = s64[2];
        d64[3] = s64[3];
        d64[4] = s64[4];
        d64[5] = s64[5];
        d64[6] = s64[6];
        d64[7] = s64[7];
        d64 += 8;
        s64 += 8;
    }

then MSVC generates the expected MOVQ instructions. GCC (4.5.0) seems to
think that using 2x MOVL is still better, but then again, GCC isn't
actually that good at optimizing intrinsics, as I've found. At least the
code won't crash on my P2, though. :)

Also, there is no requirement for MMX loads and stores to be 8-byte
aligned. I think the author assumed that SSE's 16-byte alignment
requirement must retroactively mean that MMX requires 8-byte alignment.

Attached is the full patch.

Patrick
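A note on the compile-time vs. run-time distinction above: the #ifdef
only proves the *compiler* can emit the instruction; whether the *CPU*
can execute it must be checked at run time. A minimal sketch of a safe
prefetch (SDL_HasSSE() is SDL's real run-time query; the prefetch_nta()
wrapper is a hypothetical helper, not part of the attached patch):

    #include <xmmintrin.h>   /* _mm_prefetch(), _MM_HINT_NTA (SSE) */
    #include "SDL_cpuinfo.h" /* SDL_HasSSE() */

    /* Issue PREFETCHNTA only when the CPU actually supports SSE;
     * on MMX-only CPUs this degrades to a harmless no-op instead
     * of an invalid-opcode fault. */
    static void prefetch_nta(const void *p)
    {
    #ifdef __SSE__
        if (SDL_HasSSE()) {
            _mm_prefetch((const char *) p, _MM_HINT_NTA);
            return;
        }
    #endif
        (void) p;
    }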
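For context, a complete pure-MMX copy along the lines Patrick describes
might look like the sketch below. It is not the attached patch: the
memcpy() tail for the final len % 64 bytes and the exact signature are
assumptions, and the caller is still expected to check SDL_HasMMX() and
call _mm_empty() afterwards, as the blitter loop quoted above already
does.

    #include <mmintrin.h> /* __m64 (MMX) */
    #include <string.h>   /* memcpy() for the tail */

    /* Copy len bytes using plain 64-bit MMX moves, 64 bytes per
     * iteration, then fall back to memcpy() for the remainder. */
    static void SDL_memcpyMMX(char *dst, const char *src, int len)
    {
        int i;
        __m64 *d64 = (__m64 *) dst;
        __m64 *s64 = (__m64 *) src;

        for (i = len / 64; i--;) {
            d64[0] = s64[0];
            d64[1] = s64[1];
            d64[2] = s64[2];
            d64[3] = s64[3];
            d64[4] = s64[4];
            d64[5] = s64[5];
            d64[6] = s64[6];
            d64[7] = s64[7];
            d64 += 8;
            s64 += 8;
        }

        if (len & 63) {
            memcpy(d64, s64, len & 63);
        }
    }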