Performance Evaluation of Implicit and Explicit SIMDization

Abstract – Processor vendors have been expanding Single Instruction Multiple Data (SIMD) extensions to exploit data-level parallelism in their General Purpose Processors (GPPs). Each SIMD technology, such as Streaming SIMD Extensions (SSE) and Advanced Vector eXtensions (AVX), has its own Instruction Set Architecture (ISA), which is equipped with Special Purpose Instructions (SPIs). In order to exploit these features, many programming approaches have been developed. The Intrinsic Programming Model (IPM) is a low-level concept for explicit SIMDization. In addition, Compiler Automatic Vectorization (CAV) has been embedded in modern compilers such as the Intel C++ Compiler (ICC), the GNU Compiler Collection (GCC), and LLVM for implicit vectorization. Each SIMDization approach yields different improvements because of differences in SIMD ISAs, vector register widths, and programming models. Our goal in this paper is to evaluate the performance of explicit and implicit vectorization. Our experimental results show that explicit vectorization behaves almost identically across compilers, unlike implicit vectorization, and that IPM improves performance more than CAVs. In general, the ICC and GCC compilers vectorize kernels and use SPIs more efficiently than LLVM. In addition, AVX2 technology is more useful for small matrices and compute-intensive kernels than for large matrices and data-intensive kernels because of memory bottlenecks. Furthermore, CAVs fail to vectorize kernels that have overlapping and non-consecutive memory access patterns. The way a kernel is implemented also affects its vectorization. In order to understand which scalar implementations of an algorithm are suitable for vectorization, an approach based on code modification techniques is proposed. Our experimental results show that scalar implementations that use loop collapsing, loop unrolling, software pipelining, or loop exchange can be vectorized more efficiently than straightforward implementations.
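As a minimal sketch of the two programming approaches compared here (the kernel, function names, and flags are illustrative assumptions, not taken from the paper), the same element-wise kernel can either be left to a CAV such as gcc -O3 -mavx2 or icc -O3, which recognizes its consecutive, non-overlapping accesses, or be written explicitly with AVX2 intrinsics through the IPM:

#include <immintrin.h>  /* AVX2 intrinsics */
#include <stddef.h>

/* Implicit SIMDization: a plain scalar loop with consecutive,
   non-overlapping accesses that a CAV can auto-vectorize. */
void add_scalar(const float *a, const float *b, float *c, size_t n)
{
    for (size_t i = 0; i < n; i++)
        c[i] = a[i] + b[i];
}

/* Explicit SIMDization (IPM): the same kernel written against the
   256-bit AVX2 registers, 8 single-precision floats per iteration.
   For brevity, n is assumed to be a multiple of 8. */
void add_avx2(const float *a, const float *b, float *c, size_t n)
{
    for (size_t i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(c + i, _mm256_add_ps(va, vb));
    }
}

Because the instruction selection of the explicit version is fixed in the source, it is largely independent of the compiler, which is one plausible reading of why the IPM results vary less across ICC, GCC, and LLVM than the CAV results do.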
Full text available on ScienceDirect.

High performance implementation of 2-D convolution using AVX2

Abstract – Convolution is one of the most important and fundamental concepts in multimedia processing. The 2-D convolution is used for different filtering operations such as sharpening, smoothing, and edge detection. It performs many mathematical operations on all image pixels and is therefore essentially a compute-intensive kernel. In this paper, we use the Intrinsic Programming Model (IPM) and AVX2 technology to vectorize this kernel explicitly. We compare our implementations against Compiler Automatic Vectorization (CAV), the OpenCV library, and the OpenMP API using the ICC, GCC, and LLVM compilers on a single core. For multi-threading, OpenMP is used to run the IPM and CAV implementations on multiple cores. Our experimental results show that the performance of our implementations is much higher than that of the other approaches. In addition, OpenMP significantly improves the performance of our explicit vectorizations with the ICC and GCC compilers.
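A hypothetical sketch of such an explicitly vectorized kernel (the 3x3 filter size, function name, and data layout are assumptions for illustration; the paper's actual implementation may differ): each AVX2 iteration produces 8 output pixels, and an OpenMP pragma distributes the image rows across cores.

#include <immintrin.h>

/* 3x3 convolution on a single-channel float image; border handling
   and the width % 8 remainder loop are omitted for brevity. */
void conv2d_3x3_avx2(const float *in, float *out,
                     int width, int height, const float k[9])
{
    #pragma omp parallel for  /* multi-threading layer */
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x + 8 <= width - 1; x += 8) {
            __m256 acc = _mm256_setzero_ps();
            for (int ky = -1; ky <= 1; ky++)
                for (int kx = -1; kx <= 1; kx++) {
                    /* broadcast one coefficient to all 8 lanes */
                    __m256 c = _mm256_set1_ps(k[(ky + 1) * 3 + (kx + 1)]);
                    __m256 p = _mm256_loadu_ps(in + (y + ky) * width + (x + kx));
                    acc = _mm256_fmadd_ps(c, p, acc);  /* needs FMA (Haswell and later) */
                }
            _mm256_storeu_ps(out + y * width + x, acc);
        }
    }
}

Compile with -fopenmp (GCC/LLVM) or -qopenmp (ICC) to enable the parallel loop; without the flag, the pragma is ignored and the kernel runs single-threaded.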
Available on IEEE Xplore.

High Performance Implementation of 2D Convolution using Intel’s Advanced Vector Extensions

Abstract – Convolution is one of the most important and fundamental concepts in multimedia processing. For example, in digital image processing, 2D convolution is used for different filtering operations. It performs many mathematical operations on all image pixels and is therefore essentially a compute-intensive kernel. In order to improve its performance, in this paper we apply two approaches to vectorize it, broadcasting of coefficients and repetition of coefficients, using the Intrinsic Programming Model (IPM) and AVX technology. Our experimental results on an Intel Skylake microarchitecture show that the performance of broadcasting of coefficients is much higher than that of repetition of coefficients for different filter sizes and image sizes. In addition, in order to evaluate the performance of Compiler Automatic Vectorization (CAV) and the OpenCV library for this kernel, we use the GCC and LLVM compilers. Our experimental results show that both IPM implementations are faster than GCC's and LLVM's auto-vectorization.
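To make the contrast concrete, here is a hedged sketch of the two coefficient layouts for a 1-D 4-tap filter (the filter length, function names, and AVX-only instruction choice are illustrative assumptions; the paper applies the idea to 2-D filters):

#include <immintrin.h>

/* Broadcasting of coefficients: each coefficient is replicated across
   all 8 lanes and multiplied with 8 consecutive pixels, so one pass
   over the 4 taps yields 8 outputs with no horizontal reduction. */
void filter4_broadcast(const float *in, float *out, int n, const float k[4])
{
    for (int i = 0; i + 11 <= n; i += 8) {
        __m256 acc = _mm256_setzero_ps();
        for (int t = 0; t < 4; t++)
            acc = _mm256_add_ps(acc,
                  _mm256_mul_ps(_mm256_set1_ps(k[t]),
                                _mm256_loadu_ps(in + i + t)));
        _mm256_storeu_ps(out + i, acc);
    }
}

/* Repetition of coefficients: the 4 taps are repeated twice in one
   register, so after the multiply each 4-lane group must be summed
   horizontally, and each iteration yields only 2 outputs. */
void filter4_repeat(const float *in, float *out, int n, const float k[4])
{
    __m256 coef = _mm256_setr_ps(k[0], k[1], k[2], k[3],
                                 k[0], k[1], k[2], k[3]);
    for (int i = 0; i + 5 <= n; i += 2) {
        __m256 pix = _mm256_insertf128_ps(
            _mm256_castps128_ps256(_mm_loadu_ps(in + i)),  /* pixels for out[i]   */
            _mm_loadu_ps(in + i + 1), 1);                  /* pixels for out[i+1] */
        __m256 prod = _mm256_mul_ps(coef, pix);
        __m128 lo = _mm256_castps256_ps128(prod);
        __m128 hi = _mm256_extractf128_ps(prod, 1);
        __m128 s  = _mm_hadd_ps(lo, hi);  /* [a0+a1, a2+a3, b0+b1, b2+b3] */
        s = _mm_hadd_ps(s, s);            /* lane 0 = out[i], lane 1 = out[i+1] */
        out[i]     = _mm_cvtss_f32(s);
        out[i + 1] = _mm_cvtss_f32(_mm_shuffle_ps(s, s, 1));
    }
}

The horizontal additions and the lower output rate per iteration are the cost of the repetition layout, which is consistent with the abstract's finding that broadcasting of coefficients performs much better.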
Available on IEEE Xplore.