Faster Parallel Reductions on Kepler

Parallel reduction is a common building block for many parallel algorithms. An earlier document, Optimizing Parallel Reduction in CUDA, provided a detailed strategy for implementing parallel reductions on GPUs, but this 6-year-old document bears updating. In this post I will show you some features of the Kepler GPU architecture which make reductions even faster: the shuffle (SHFL) instruction and fast device memory atomic operations.


Shuffle On Down

Efficient parallel reductions exchange data between threads within the same thread block. On earlier hardware this meant using shared memory, which involves writing data to shared memory, synchronizing, and then reading the data back from shared memory. Kepler’s shuffle instruction (SHFL) enables a thread to directly read a register from another thread in the same warp. This allows threads in a warp to collectively exchange or broadcast data. As described in an earlier post, there are four shuffle intrinsics: __shfl(), __shfl_down(), __shfl_up(), and __shfl_xor(), but in this post we only use __shfl_down(), defined as follows. (You can find a complete description of the other shuffle functions in the CUDA C Programming Guide.)

int __shfl_down(int var, unsigned int delta, int width=warpSize);

__shfl_down() calculates a source lane ID by adding delta to the caller’s lane ID (the lane ID is a thread’s index within its warp, from 0 to 31). The value of var held by the resulting lane ID is returned: this has the effect of shifting var down the warp by delta lanes. If the source lane ID is out of range or the source thread has exited, the calling thread’s own var is returned. The ID number of the source lane will not wrap around the value of width and so the upper delta lanes will remain unchanged. Note that width must be one of (2, 4, 8, 16, 32). For brevity, the diagrams that follow show only 8 threads in a warp even though the warp size of all current CUDA GPUs is 32.

As an example, Figure 1 shows the effect of the following two lines of code, where we can see that values are shifted down by 2 threads.

int i = threadIdx.x % 32;
int j = __shfl_down(i, 2, 8);
Figure 1: The shuffle down instruction.

There are three main advantages to using shuffle instead of shared memory:

  1. Shuffle replaces a multi-instruction shared memory sequence with a single instruction, increasing effective bandwidth and decreasing latency.
  2. Shuffle does not use any shared memory.
  3. Synchronization is within a warp and is implicit in the instruction, so there is no need to synchronize the whole thread block with __syncthreads().

Note also that the Kepler implementation of the shuffle instruction only supports 32-bit data types, but you can easily use multiple calls to shuffle larger data types. Here is a double-precision implementation of __shfl_down().

__device__ inline
double __shfl_down(double var, unsigned int srcLane, int width=32) {
  int2 a = *reinterpret_cast<int2*>(&var);
  a.x = __shfl_down(a.x, srcLane, width);
  a.y = __shfl_down(a.y, srcLane, width);
  return *reinterpret_cast<double*>(&a);
}
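As a quick illustration of the overload in use, here is a minimal sketch; the kernel name and the assumption of a single warp-sized block are mine, not code from the original post.

__global__ void shiftDoublesDown(const double *in, double *out) {
  // Assumes a single block of 32 threads (one warp).
  double v = in[threadIdx.x];
  v = __shfl_down(v, 1);   // resolves to the double overload defined above
  out[threadIdx.x] = v;    // lane k now holds the value loaded by lane k+1
}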

Shuffle Warp Reduce

Now that we understand what shuffle is, let’s look at how we can use it to reduce within a warp. Figure 2 shows how we can use shuffle down to build a reduction tree. Please note that I have only included the arrows for the threads that contribute to the final reduction. In reality all threads shift values, even where those values are not needed in the reduction.

Figure 2: The reduction algorithm using shuffle down.

After executing the three reduction steps, thread 0 has the total reduced value in its variable v. We’re now ready to write the following complete shfl-based warp reduction function.

__inline__ __device__
int warpReduceSum(int val) {
  for (int offset = warpSize/2; offset > 0; offset /= 2)
    val += __shfl_down(val, offset);
  return val;
}

Note that if all threads in the warp need the result, you can implement an “all reduce” within a warp using __shfl_xor() instead of __shfl_down(), as follows. Either version can be used in the block reduce of the next section.

__inline__ __device__
int warpAllReduceSum(int val) {
  for (int mask = warpSize/2; mask > 0; mask /= 2)
    val += __shfl_xor(val, mask);
  return val;
}

Block Reduce

Using the warpReduceSum function we can now easily build a reduction across the entire block. To do this we first reduce within warps. Then the first thread of each warp writes its partial sum to shared memory. Finally, after synchronizing, the first warp reads from shared memory and reduces again.

__inline__ __device__
int blockReduceSum(int val) {
  static __shared__ int shared[32];  // Shared mem for 32 partial sums
  int lane = threadIdx.x % warpSize;
  int wid  = threadIdx.x / warpSize;

  val = warpReduceSum(val);          // Each warp performs partial reduction

  if (lane == 0) shared[wid] = val;  // Write reduced value to shared memory

  __syncthreads();                   // Wait for all partial reductions

  // Read from shared memory only if that warp existed
  val = (threadIdx.x < blockDim.x / warpSize) ? shared[lane] : 0;

  if (wid == 0) val = warpReduceSum(val); // Final reduce within first warp

  return val;
}

As with warpReduceSum(), this function returns the reduced value on the first thread only. If all threads need the reduced value, it can be quickly broadcast by writing the reduced value to shared memory, calling __syncthreads(), and then reading the value in all threads.
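For example, a block-wide “all reduce” could be sketched as follows; this wrapper is my own illustration built on blockReduceSum(), not code from the original post.

__inline__ __device__
int blockAllReduceSum(int val) {
  static __shared__ int result;        // holds the block-wide sum
  val = blockReduceSum(val);           // valid on thread 0 only
  if (threadIdx.x == 0) result = val;  // thread 0 publishes the sum
  __syncthreads();                     // make it visible to the whole block
  return result;
}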

Reducing Large Arrays

Now that we can reduce within a thread block, let’s look at how to reduce across a complete grid (many thread blocks). Reducing across blocks means global communication, which means synchronizing the grid, and that requires breaking our computation into two separate kernel launches. The first kernel generates and stores partial reduction results, and the second kernel reduces the partial results into a single total. We can actually do both steps with the following simple kernel code.

__global__ void deviceReduceKernel(int *in, int *out, int N) {
  int sum = 0;
  // Reduce multiple elements per thread (grid-stride loop)
  for (int i = blockIdx.x * blockDim.x + threadIdx.x;
       i < N;
       i += blockDim.x * gridDim.x) {
    sum += in[i];
  }
  sum = blockReduceSum(sum);
  if (threadIdx.x == 0) out[blockIdx.x] = sum;
}

This kernel uses grid-stride loops (see the earlier post on grid-stride loops), which gives it two advantages: it allows us to reuse the kernel for both passes, and it computes per-thread sums in registers before calling blockReduceSum(). Doing the sum in registers significantly reduces the communication between threads for large array sizes. Note that the output array must be at least as large as the number of thread blocks in the grid, because each block writes to a unique location within the array.

The code to launch the kernels on the host is straightforward.

void deviceReduce(int *in, int *out, int N) {
  int threads = 512;
  int blocks = min((N + threads - 1) / threads, 1024);

  deviceReduceKernel<<<blocks, threads>>>(in, out, N);
  deviceReduceKernel<<<1, 1024>>>(out, out, blocks);
}

Here you can see that the first pass reduces using up to 1024 blocks of 512 threads each, which is more than enough threads to saturate Kepler. We have chosen to restrict this phase to at most 1024 blocks so that we have at most 1024 partial results. In CUDA the maximum block size is 1024 threads, so having restricted the number of partial results to 1024, we can perform the second reduction phase with a single block of 1024 threads.
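For completeness, here is a hedged sketch of how deviceReduce() might be driven from the host; the buffer names and sizes are my own assumptions, and error checking is omitted.

int N = 1 << 24;
int *d_in, *d_out;
cudaMalloc(&d_in,  N * sizeof(int));
cudaMalloc(&d_out, 1024 * sizeof(int));  // at least one slot per block in the first pass
// ... fill d_in with the data to be reduced ...
deviceReduce(d_in, d_out, N);            // final sum ends up in d_out[0]
int result;
cudaMemcpy(&result, d_out, sizeof(int), cudaMemcpyDeviceToHost);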

In a previous post we discussed how to increase performance by using vector loads. We can apply the same technique to reduction; while I won’t discuss that optimization in this post, you can find the vectorized solutions in the accompanying code.

Atomic Power

Another Kepler architecture feature is fast device memory atomic operations, so let’s look at how we can use atomics to optimize reductions. The first way is to reduce across the warp using shuffle and then have the first thread of each warp atomically update the reduced value, as in the following code.

__global__ void deviceReduceWarpAtomicKernel(int *in, int *out, int N) {
  int sum = 0;
  for (int i = blockIdx.x * blockDim.x + threadIdx.x;
       i < N;
       i += blockDim.x * gridDim.x) {
    sum += in[i];
  }
  sum = warpReduceSum(sum);
  if ((threadIdx.x & (warpSize - 1)) == 0)  // first lane of each warp
    atomicAdd(out, sum);
}

Here we can see that the kernel looks very similar to deviceReduceKernel() above, but instead of writing the partial reduction to the output array it atomically adds the partial reduction directly to the final reduction value in memory. This code is much simpler because it launches only a single kernel and it doesn’t use shared memory, a temporary global array, or any __syncthreads(). The disadvantage is that there could be a lot of atomic collisions when adding the partials. However, in applications where there are not many collisions this is a great approach. Another possibly significant disadvantage is that floating-point reductions will not be bit-wise identical from run to run, because the order of additions changes and floating-point addition is not associative.
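One practical detail worth noting (my addition, not spelled out above): because atomicAdd() accumulates into the existing value at *out, the output location must be zeroed before the kernel runs. A possible launch sequence, using assumed buffer names, is:

cudaMemset(d_out, 0, sizeof(int));       // atomicAdd accumulates, so start from zero
int threads = 512;
int blocks  = min((N + threads - 1) / threads, 1024);
deviceReduceWarpAtomicKernel<<<blocks, threads>>>(d_in, d_out, N);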

Another way to use atomics would be to reduce the block and then immediately use atomicAdd().

__global__ void deviceReduceBlockAtomicKernel(int *in, int *out, int N) {
  int sum = 0;
  for (int i = blockIdx.x * blockDim.x + threadIdx.x;
       i < N;
       i += blockDim.x * gridDim.x) {
    sum += in[i];
  }
  sum = blockReduceSum(sum);
  if (threadIdx.x == 0)
    atomicAdd(out, sum);
}

This approach requires fewer atomics than the warp atomic approach, executes only a single kernel, and does not require temporary memory, but it does use __syncthreads() and shared memory. This kernel could be useful when you want to avoid two kernel calls or extra allocation but have too much atomic pressure to use warpReduceSum() with atomics.

Figure 3 compares the performance of the three device reduction algorithms on a Tesla K20X GPU.

Figure 3: Reduction bandwidth on K20X.

Here we can see that warpReduceSum followed by atomics is the fastest. Thus if you are working on integral data or are able to tolerate changes in the order of operations, you may want to consider using atomics. In addition, we can see that vectorizing provides a significant boost in performance.

The CUB Library

Now that we have covered the low-level details of how to write efficient reductions, let’s look at an easier way that you should consider using first: a library. CUB is a library of common building blocks for parallel algorithms, including reductions, that is tuned for multiple CUDA GPU architectures and automatically picks the best algorithm for the GPU you are running on and the data size and type you are processing. Figure 4 shows that the performance of CUB matches or exceeds our custom reduction code.

Figure 4: Reduction bandwidth with CUB.
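As a rough illustration (not code from this post), a device-wide sum with CUB follows its usual two-phase pattern: the first call only reports the required temporary storage size, and the second call performs the reduction.

#include <cub/cub.cuh>

void cubReduceSum(int *d_in, int *d_out, int num_items) {
  void  *d_temp_storage = NULL;
  size_t temp_storage_bytes = 0;
  // First call: reports how much temporary storage the reduction needs.
  cub::DeviceReduce::Sum(d_temp_storage, temp_storage_bytes, d_in, d_out, num_items);
  cudaMalloc(&d_temp_storage, temp_storage_bytes);
  // Second call: performs the actual reduction; the result lands in d_out[0].
  cub::DeviceReduce::Sum(d_temp_storage, temp_storage_bytes, d_in, d_out, num_items);
  cudaFree(d_temp_storage);
}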

Interleaving Multiple Reductions

Before I end this post there is one last trick that I would like to leave you with. It is common to need to reduce multiple fields at the same time. A simple solution is to reduce one field at a time (for example by calling reduce multiple times). But this is not the most efficient method, so let’s look at the assembly for the warp reduction shown in Figure 5.

Figure 5: SASS for a single shuffle reduction.

Here the arrows show dependencies; you can see that every instruction is dependent on the one before it, so there is no instruction-level parallelism (ILP) exposed. Performing multiple reductions at the same time by interleaving their instructions is more efficient because it increases ILP. The following code shows an example for an x, y, z triple.

__inline__ __device__
int3 warpReduceSumTriple(int3 val) {
  for (int offset = warpSize / 2; offset > 0; offset /= 2) {
    val.x += __shfl_down(val.x, offset);
    val.y += __shfl_down(val.y, offset);
    val.z += __shfl_down(val.z, offset);
  }
  return val;
}
Figure 6: SASS for interleaved shuffle reductions

This leads to the SASS shown in Figure 6 (note that I have only shown the first few lines for brevity).

Here we can see we no longer have dependent instructions immediately following each other. Therefore more instructions are able to issue back-to-back without stalling, reducing the total exposed latency and thereby reducing the overall execution time.

