Using GPUs to handle Big Data with Java
J On The Beach
Adam Roberts
IBM Runtimes, Hursley, UK
GPU programming basics
Benefits; why should I care?
Options available in our JDKs
Working code
Links to try it out for yourself
High level overview
GPUs are great for doing MANY OF THE SAME OPERATIONS AT ONCE: big
performance benefits (SIMD programming)
Traditionally we program GPUs using CUDA or OpenCL – languages like C and
C++ – and write JNI code so our Java world can hand data to the GPU
Most modern computers ship with GPUs we can program against (CUDA
drivers for x86-64 Windows, Linux, and IBM's Power LE)
[Diagram: GPU vs CPU core layout]
[Image: IBM z13]
AlphaGo: 1,202 CPUs, 176 GPUs
Titan: 18,688 GPUs, 18,688 CPUs
CERN and Geant: GPUs in use
Oak Ridge and IBM – "the world's fastest supercomputers by 2017": two
for $325m
Our key terms
CUDA core: part of the GPU; CUDA cores execute groups of threads
Kernel: a function we'll run on the GPU
Grid: think of it as a CUBE of BLOCKS which lay out THREADS; our GPU functions
(KERNELS) run on one of these, and we need to know the grid dimensions for each
kernel
Threads: these do our computation; far more are available than you're used to with
CPUs
Blocks: groups of threads
One GPU can have MANY CUDA cores, and each CUDA core executes many threads at
once
nvidia-smi tells you about your GPU's limits
Recommended reading:
http://docs.nvidia.com/cuda/cuda-c-programming-guide/#thread-hierarchy
Why is this important?
To achieve parallelism: a layout of threads we can use to solve our
big data problems
Block dimensions? How many threads can run in a block
Grid dimensions? How many blocks we can have
threadIdx.x? (BLOCKS contain THREADS)
Built-in variable giving the x coordinate of the current THREAD (y and z
coordinates are available too)
blockIdx.x? (GRIDS contain BLOCKS)
Built-in variable giving the x coordinate of the current BLOCK (y and z
coordinates are available too)
[Grid image: CUDA Grid; image fully credited to http://www.karimson.com/posts/introduction-to-cuda/]
To figure out the dimensions we can use the following Java code; we want
512 threads per block and as many blocks as the problem size requires:
int log2BlockDim = 9;
int numBlocks = (numElements + 511) >> log2BlockDim;
int numThreads = 1 << log2BlockDim;
Size       Blocks  Threads
500        1       512
1,024      2       512
32,000     63      512
64,000     125     512
100,000    196     512
512,000    1,000   512
1,024,000  2,000   512
Traditional CUDA programming model
Assume we have some data in memory (host side): call it myData
Allocate space on the GPU (device side): cudaMalloc, returns a pointer,
where has it gone? Let's say mySpaceOnGPU
Copy myData from the host to your allocated space (mySpaceOnGPU): look
for cudaMemcpyHostToDevice
Process your data on the GPU in a kernel (look for <<< and >>>)
Copy the result back (what's at mySpaceOnGPU replaces myData on the
host): look for cudaMemcpyDeviceToHost
All done, head for the jamón Ibérico de bellota and Manchego!
Hello Malaga
Windows, CUDA 7.5 SDK
New text file: Hola.cu – create a sequence of characters, send it to the GPU, run our program, copy the
result back and get a result
nvcc Hola.cu (build it), then run the resulting a.exe
Unsafe, we're in the CUDA world now! Can get errors on Windows and seg faults on Linux
char myHostChars[16] = "Hello"; // the remaining bytes are zero-filled
int myHostOffsets[16] = {0,0,0,0,0,32,77,97,108,97,103,97};
char* myDeviceChars;
int* myDeviceInts;
cudaMalloc(&myDeviceChars, numBytesForChars); // Omitted amount calculating
cudaMalloc(&myDeviceInts, numBytesForOffsets);
cudaMemcpy(myDeviceChars, myHostChars, numBytesForChars,
cudaMemcpyHostToDevice);
cudaMemcpy(myDeviceInts, myHostOffsets, numBytesForOffsets,
cudaMemcpyHostToDevice);
myKernel<<<dimGrid, dimBlock>>>(myDeviceChars, myDeviceInts);
cudaMemcpy(myHostChars, myDeviceChars, numBytesForChars,
cudaMemcpyDeviceToHost);
Thanks to Ingemar Ragnemalm for this idea: http://computer-graphics.se/hello-world-for-cuda.html
__global__ void a(char* inputChars, int* offsetAmounts){
inputChars[threadIdx.x] += offsetAmounts[threadIdx.x];
}
__global__: marks a kernel – a function that runs on the GPU and is
launched from the host
What's a grid again?
A kernel runs on a grid and it's how we can run many threads that work
on different parts of the data
char*? A pointer to a bunch of characters we'll send to the GPU
int*? A pointer to a bunch of ints we'll send to the GPU
threadIdx.x?
We use this as an index into our arrays; remember, lots of threads run on
the GPU, so each thread accesses one item in this example
Traditional CUDA programming model with Java...
● Assume we have some data in RAM (host side, in the JVM heap): call it myData
● Create a native method and call this, pass in your object containing the data as a parameter
● Enter the JNI world: write .cpp or .c code with a matching signature for your native method
● Now use JNI to get a pointer to those elements
● With this pointer, we can figure out how much memory we need
● Allocate space on the GPU (device side): cudaMalloc, returns mySpaceOnGPU
● Copy myData from your JNI pointer to your allocated space (mySpaceOnGPU)
● Process your data on the GPU
● Copy the result back (what's at mySpaceOnGPU replaces myData on the host)
● Release the elements (updating your JNI pointer so the data in our JVM heap is now the result)
● All done – a minimal sketch of the Java side follows below
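To make those steps concrete, here is a minimal sketch of the Java side only; the class, method, and library names (GpuDoubler, doubleOnGpu, gpudoubler) are hypothetical, and the C/C++ JNI body that does the cudaMalloc/cudaMemcpy/kernel work is not shown:

public class GpuDoubler {
    static {
        // Hypothetical native library containing our JNI + CUDA code
        System.loadLibrary("gpudoubler");
    }

    // The matching C/C++ JNI function does the steps above: get a pointer
    // to the elements, cudaMalloc, copy over, launch the kernel, copy back,
    // release the elements
    public static native void doubleOnGpu(int[] data);

    public static void main(String[] args) {
        int[] myData = new int[1_000_000];
        for (int i = 0; i < myData.length; i++) myData[i] = i;
        doubleOnGpu(myData);
        System.out.println(myData[21]); // 42 once the GPU has doubled it
    }
}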
Class library: Our first experiment
-Dcom.ibm.gpu.enable/enforce/disable
[Chart: sorting throughput for ints – ints sorted per second (40m and 400m gridlines) vs. array length (30,000 to 300,000,000)]
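A minimal sketch of trying this yourself, assuming an IBM Java 8 SDK with the GPU class-library support: the sort itself is just the standard java.util.Arrays call, and the -Dcom.ibm.gpu.enable property shown above decides whether it may be offloaded.

import java.util.Arrays;

public class GpuSortDemo {
    public static void main(String[] args) {
        int[] data = new int[40_000_000];
        for (int i = 0; i < data.length; i++) data[i] = data.length - i;
        // Run with -Dcom.ibm.gpu.enable on the IBM JDK and this standard
        // sort may be sent to the GPU; -Dcom.ibm.gpu.disable keeps it on the CPU
        Arrays.sort(data);
        System.out.println(data[0] + " .. " + data[data.length - 1]);
    }
}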
JIT options: Let's make it even easier
What's a JIT anyway?
Just in Time Compiler: compiles our Java bytecode into typically MUCH faster
CPU specific instructions
export IBM_JAVA_OPTIONS="-Xint" for no JIT, see the difference for yourself
-Xjit:enableGPU
Use an IntStream and specify our JIT option when running
Primitive types can be sent to the GPU (byte, char, short, int, float, double, long)
Requires a correct PATH! All mentioned in the backup slides
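A minimal sketch of the pattern (the full Lambda.java is in the backup slides): a parallel forEach over an IntStream.range is the shape the JIT looks for when -Xjit:enableGPU is set.

import java.util.stream.IntStream;

public class JitGpuSketch {
    public static void main(String[] args) {
        int[] data = new int[100_000_000];
        // With -Xjit:enableGPU on the IBM JDK, this parallel loop over
        // primitives is eligible to be compiled for the GPU
        IntStream.range(0, data.length).parallel().forEach(i -> {
            data[i] = i * 2;
        });
        System.out.println(data[50]); // 100
    }
}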
CUDA4J: Our Java API for GPUs
Instead of writing lots of native methods and boilerplate code, wouldn't it be great if we can program
what we can in Java and only write the GPU specific logic in CUDA?
● Similar to JCuda but provides a higher level abstraction and is production quality
• No arbitrary and unrestricted use of Pointer(long)
• Fully supported by IBM
• Still feels like Java instead of C
Write a kernel and compile it into a “fatbin”
nvcc --fatbin AdamKernel.cu
Write your Java code
import com.ibm.cuda.*;
import com.ibm.cuda.CudaKernel.*;
Write Java code to load your fatbin
module = new Loader().loadModule("AdamDoubler.fatbin", device);
Build and run as normal
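Putting those steps together – a condensed sketch using the same classes and kernel as the full Sample.java in the backup slides (the Loader and Util helpers are also listed there):

import com.ibm.cuda.*;
import com.ibm.cuda.CudaKernel.*;

public class MinimalCuda4j {
    public static void main(String[] args) throws Exception {
        CudaDevice device = new CudaDevice(0);
        CudaModule module = new Loader().loadModule("AdamDoubler.fatbin", device);
        CudaKernel kernel = new CudaKernel(module, "Cuda_cuda4j_AdamDoubler_Strider");
        CudaStream stream = new CudaStream(device);

        int[] myData = new int[4096];
        for (int i = 0; i < myData.length; i++) myData[i] = i;

        CudaBuffer buffer = new CudaBuffer(device, myData.length * Integer.BYTES);
        buffer.copyFrom(myData);                              // host to device
        CudaGrid grid = Util.makeGrid(myData.length, stream); // 512 threads per block
        kernel.launch(grid, new Parameters(2).set(0, buffer).set(1, myData.length));
        buffer.copyTo(myData);                                // device to host: all doubled
    }
}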
Only doubling ints; could be any use case where we're doing
the same operation to lots of data
Starting small – what fits on our grid (doSmallProblem())
Bigger but still within size limits for the grid
(doMediumProblem())
Too big (gives us an exception) so we need to break down the
problem and use the slice* API (doChunkingProblem())
All of my example code is in the backup slides
Javadocs: search IBM Java 8 API com.ibm.cuda
* Tip: the offsets are byte offsets, so multiply your Java index by the size
of the element!
Show me the code
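Related to that tip – a small helper in the style of doChunkingProblem() from the backup slides, showing the byte-offset arithmetic:

import com.ibm.cuda.CudaBuffer;

public class SliceHelper {
    // slice takes BYTE offsets, so multiply the element indices by the element size
    static CudaBuffer sliceInts(CudaBuffer whole, long fromElement, long toElement) {
        return whole.slice(fromElement * Integer.BYTES, toElement * Integer.BYTES);
    }
}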
Spark APIs
● Recommendation algorithms such as
– Alternating least squares
• Movie recommendations on Netflix
• Recommended purchases on Amazon
• Similar songs with Spotify
● Clustering algorithms such as
– K-means (unsupervised learning) – blazingly fast compared
to other clustering methods
• Produce clusters from data to determine which cluster a
new item can be categorised as
• Identify anomalies: transaction fraud or erroneous data
● Classification algorithms such as
– Logistic regression
• Create a model that we can use to predict where to
plot the next item in a sequence
• Healthcare: predict adverse drug reactions based on
known interactions between similar drugs
Machine learning and Spark
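As a taste of what this looks like from Java – a minimal K-means sketch against Spark's MLlib (the tiny inline dataset, cluster count, and iteration count are arbitrary choices for illustration):

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.clustering.KMeans;
import org.apache.spark.mllib.clustering.KMeansModel;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;

public class KMeansSketch {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("kmeans-sketch").setMaster("local[*]"));
        JavaRDD<Vector> points = sc.parallelize(Arrays.asList(
                Vectors.dense(0.0, 0.0), Vectors.dense(0.1, 0.1),
                Vectors.dense(9.0, 9.0), Vectors.dense(9.1, 9.1)));
        // Train 2 clusters for 20 iterations, then ask which cluster a point falls in
        KMeansModel model = KMeans.train(points.rdd(), 2, 20);
        System.out.println("Cluster for (9, 9): " + model.predict(Vectors.dense(9.0, 9.0)));
        sc.stop();
    }
}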
Our GPU work with Spark
● Behind the scenes improvements to Spark APIs
● Currently run with the property: spark.mllib.ALS.useGPU (a configuration sketch follows below)
● Full paper: http://arxiv.org/abs/1603.03820
● Full implementation at: https://github.com/IBMSparkGPU

Netflix dataset (1.5 GB):
                   12 t, CPU  64 t, CPU  GPU
Intel, IBM Java 8  676s       N/A        140s

Our ALS routine checks for the property; currently it always sends work to a GPU when the property is set
Intel set up we used: 2 Intel(R) Xeon(R) CPU E5-2667 v2 @ 3.30GHz, 16 cores in the machine (SMT-2),
256 GB RAM, Red Hat Enterprise Linux Server release 6.6 (Santiago)
GPUs present: two Tesla K80Ms
Also available for Power LE; still a work in progress
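A sketch of switching it on – the property name is from the slide; treating "true" as its value is an assumption, and everything else is standard Spark configuration:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class AlsGpuConfig {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("ALS-on-GPU")
                .set("spark.mllib.ALS.useGPU", "true"); // checked by the modified ALS routine
        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... build the ratings RDD and call ALS.train(...) as usual ...
        sc.stop();
    }
}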
Conclusion
● Free stuff to try if you have a GPU
● Use hardware accelerators that you have available: API or behind the scenes optimisations with our free Java implementation
● Developing story, your feedback is important

Platform               CUDA4J  Lambdas  Spark
64-bit Windows         N       Y        N
64-bit Linux (x86)     Y       Y        Y
64-bit Power LE Linux  Y       Y        Y

https://www.ibm.com/developerworks/java/jdk
https://www.ibm.com/developerworks/java/jdk/spark
Thank You
For feedback, suggestions,
or any questions, email
aroberts@uk.ibm.com
Notices and Disclaimers
Copyright © 2016 by International Business Machines Corporation (IBM). No part of this document may be reproduced or
transmitted in any form without written permission from IBM.
U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with
IBM.
Information in these presentations (including information relating to products that have not yet been announced by IBM) has been
reviewed for accuracy as of the date of initial publication and could include unintentional technical or typographical errors. IBM
shall have no responsibility to update this information. THIS document is distributed "AS IS" without any warranty, either express
or implied. In no event shall IBM be liable for any damage arising from the use of this information, including but not limited to,
loss of data, business interruption, loss of profit or loss of opportunity. IBM products and services are warranted according to the
terms and conditions of the agreements under which they are provided.
Any statements regarding IBM's future direction, intent or product plans are subject to change or withdrawal without
notice.
Performance data contained herein was generally obtained in controlled, isolated environments. Customer examples are
presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual
performance, cost, savings or other results in other operating environments may vary.
References in this document to IBM products, programs, or services does not imply that IBM intends to make such products,
programs or services available in all countries in which IBM operates or does business.
Workshops, sessions and associated materials may have been prepared by independent session speakers, and do not
necessarily reflect the views of IBM. All materials and discussions are provided for informational purposes only, and are neither
intended to, nor shall constitute legal or other guidance or advice to any individual participant or their specific situation.
It is the customer’s responsibility to insure its own compliance with legal requirements and to obtain advice of competent legal
counsel as to the identification and interpretation of any relevant laws and regulatory requirements that may affect the customer’s
business and any actions the customer may need to take to comply with such laws. IBM does not provide legal advice or
represent or warrant that its services or products will ensure that the customer is in compliance with any law.
Notices and Disclaimers (con’t)
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products in connection with this publication
and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products.
Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. IBM does not
warrant the quality of any third-party products, or the ability of any such third-party products to interoperate with IBM’s
products. IBM expressly disclaims all warranties, expressed or implied, including but not limited to, the implied warranties
of merchantability and fitness for a particular purpose.
The provision of the information contained herein is not intended to, and does not, grant any right or license under any
IBM patents, copyrights, trademarks or other intellectual property right.
IBM, the IBM logo, ibm.com, Bluemix, Blueworks Live, CICS, Clearcase, DOORS®, Enterprise Document
Management System™, Global Business Services ®, Global Technology Services ®, Information on Demand, ILOG,
LinuxONE™, Maximo®, MQIntegrator®, MQSeries®, Netcool®, OMEGAMON, OpenPower, PureAnalytics™,
PureApplication®, pureCluster™, PureCoverage®, PureData®, PureExperience®, PureFlex®, pureQuery®,
pureScale®, PureSystems®, QRadar®, Rational®, Rhapsody®, SoDA, SPSS, StoredIQ, Tivoli®, Trusteer®,
urban{code}®, Watson, WebSphere®, Worklight®, X-Force® and System z® Z/OS, are trademarks of International
Business Machines Corporation, registered in many jurisdictions worldwide. Other product and service names might
be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and
trademark information" at: www.ibm.com/legal/copytrade.shtml.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their
respective owners.
Databricks is a registered trademark of Databricks, Inc.
Apache Spark, Apache Cassandra, Apache Hadoop, Apache Maven, Spark, Apache, and the Apache product logos
including the Spark logo are trademarks of The Apache Software Foundation.
Code listing and benchmark information
Benchmark info
Name      Summary                                      Data size          Type
MM        A dense matrix multiplication: C = A.B       1,024 × 1,024      double
SpMM      A sparse matrix multiplication: C = A.B      500,000 × 500,000  double
Jacobi2D  Solve an equation using the Jacobi method    8,192 × 8,192      double
LifeGame  Conway's game of life. Iterate 10,000 times  512 × 512          byte
Benchmark info
Measured the performance improvement from the GPU on the four programs above, comparing against:
1-CPU-thread sequential execution
160-CPU-thread parallel execution
Experimental environment used:
IBM Java 8 Service Release 2 for PowerPC Little Endian
Download for free at http://www.ibm.com/java/jdk/
Two 10-core 8-SMT IBM POWER8 CPUs at 3.69 GHz with 256GB memory (160
hardware threads in total) with one NVIDIA Kepler K40m GPU (2880 CUDA
cores in total) at 876 MHz with 12GB global memory (ECC off)
Ubuntu 14.10, CUDA 5.5
Set the PATH to include the CUDA library. For example, set
PATH=<CUDA_LIBRARY_PATH>;%PATH%, where the <CUDA_LIBRARY_PATH>
variable is the full path to the CUDA library. The <CUDA_LIBRARY_PATH> variable is
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5\bin, which assumes
CUDA is installed to the default directory.
Note: If you are using Just-In-Time Compiler (JIT) based GPU support, you must also
include paths to the NVIDIA Virtual Machine (NVVM) library, and to the NVIDIA
Management Library (NVML). For example, the <CUDA_LIBRARY_PATH> variable is
C:\Program Files\NVIDIA GPU Computing
Toolkit\CUDA\v7.5\bin;<NVVM_LIBRARY_PATH>;<NVML_LIBRARY_PATH>.
If the NVVM library is installed to the default directory, the
<NVVM_LIBRARY_PATH> variable is C:\Program Files\NVIDIA GPU Computing
Toolkit\CUDA\v7.5\nvvm\bin.
You can find the NVML library in your NVIDIA drivers directory. The default location
of this directory is C:\Program Files\NVIDIA Corporation\NVSMI.
From
https://www.ibm.com/support/knowledgecenter/SSYKE2_8.0.0/com.ibm.java.win.80.doc/user/gpu_enabling.html?lang=en
Code in the demo – Sample.java part 1 of 3
import com.ibm.cuda.*;
import com.ibm.cuda.CudaKernel.*;
public class Sample {
private static final boolean PRINT_DATA = false;
private static int numElements;
private static int[] myData;
private static CudaBuffer buffer1;
private static CudaDevice device = new CudaDevice(0);
private static CudaModule module;
private static CudaKernel kernel;
private static CudaStream stream;
public static void main(String[] args) {
try {
module = new Loader().loadModule("AdamDoubler.fatbin", device);
kernel = new CudaKernel(module, "Cuda_cuda4j_AdamDoubler_Strider");
stream = new CudaStream(device);
doSmallProblem();
doMediumProblem();
doChunkingProblem();
} catch (CudaException e) {
e.printStackTrace();
} catch (Exception e) {
e.printStackTrace();
}
}
private static void doSmallProblem() throws Exception {
System.out.println("Doing the small sized problem");
numElements = 100;
myData = new int[numElements];
Util.fillWithInts(myData);
CudaGrid grid = Util.makeGrid(numElements, stream);
System.out.println("Kernel grid: <<<" + grid.gridDimX + ", " + grid.blockDimX + ">>>");
buffer1 = new CudaBuffer(device, numElements * Integer.BYTES);
buffer1.copyFrom(myData);
Parameters kernelParams = new Parameters(2).set(0, buffer1).set(1, numElements);
kernel.launch(grid, kernelParams);
int[] originalArrayCopy = new int[myData.length];
System.arraycopy(myData, 0, originalArrayCopy, 0, myData.length);
buffer1.copyTo(myData);
Util.checkArrayResultsDoubler(myData, originalArrayCopy);
}
Code in the demo – Sample.java part 2 of 3
private static void doMediumProblem() throws Exception {
System.out.println("Doing the medium sized problem");
numElements = 5_000_000;
myData = new int[numElements];
Util.fillWithInts(myData);
// This is only when handling more than max blocks * max threads per kernel
// Grid dim is the number of blocks in the grid
// Block dim is the number of threads in a block
// buffer1 is how we'll use our data on the GPU
buffer1 = new CudaBuffer(device, numElements * Integer.BYTES);
// myData is on CPU, transfer it
buffer1.copyFrom(myData);
// Our stream executes the kernel, can launch many streams at once
CudaGrid grid = Util.makeGrid(numElements, stream);
System.out.println("Kernel grid: <<<" + grid.gridDimX + ", " + grid.blockDimX + ">>>");
Parameters kernelParams = new Parameters(2).set(0, buffer1).set(1, numElements);
kernel.launch(grid, kernelParams);
int[] originalArrayCopy = new int[myData.length];
System.arraycopy(myData, 0, originalArrayCopy, 0, myData.length);
buffer1.copyTo(myData);
Util.checkArrayResultsDoubler(myData, originalArrayCopy);
}
Code in the demo – Sample.java part 3 of 3
private static void doChunkingProblem() throws Exception {
// I know 5m doesn't require chunking on the GPU but this does
System.out.println("Doing the too big to handle in one kernel problem");
numElements = 70_000_000;
myData = new int[numElements];
Util.fillWithInts(myData);
buffer1 = new CudaBuffer(device, numElements * Integer.BYTES);
buffer1.copyFrom(myData);
CudaGrid grid = Util.makeGrid(numElements, stream);
System.out.println("Kernel grid: <<<" + grid.gridDimX + ", " + grid.blockDimX + ">>>");
// Check we can actually launch a kernel with this grid size
try {
Parameters kernelParams = new Parameters(2).set(0, buffer1).set(1, numElements);
kernel.launch(grid, kernelParams);
int[] originalArrayCopy = new int[numElements];
System.arraycopy(myData, 0, originalArrayCopy, 0, numElements);
buffer1.copyTo(myData);
Util.checkArrayResultsDoubler(myData, originalArrayCopy);
} catch (CudaException ce) {
if (ce.getMessage().equals("invalid argument")) {
System.out.println("it was invalid argument, too big!");
int maxThreadsPerBlockX = device.getAttribute(CudaDevice.ATTRIBUTE_MAX_BLOCK_DIM_X);
// Note this queries the Y-dimension grid limit (65,535 here), hence the 67,107,840 figure below
int maxBlocksPerGridX = device.getAttribute(CudaDevice.ATTRIBUTE_MAX_GRID_DIM_Y);
long maxThreadsPerGrid = maxThreadsPerBlockX * maxBlocksPerGridX;
// 67,107,840 on my Windows box
System.out.println("Max threads per grid: " + maxThreadsPerGrid);
long numElementsAtOnce = maxThreadsPerGrid;
long elementsDone = 0;
grid = new CudaGrid(maxBlocksPerGridX, maxThreadsPerBlockX, stream);
System.out.println("Kernel grid: <<<" + grid.gridDimX + ", " + grid.blockDimX + ">>>");
while (elementsDone < numElements) {
if ( (elementsDone + numElementsAtOnce) > numElements) {
numElementsAtOnce = numElements - elementsDone; // Just do the remainder
}
long toOffset = numElementsAtOnce + elementsDone;
// It's the byte offset not the element index offset
CudaBuffer slicedSection = buffer1.slice(elementsDone * Integer.BYTES, toOffset * Integer.BYTES);
Parameters kernelParams = new Parameters(2).set(0, slicedSection).set(1, numElementsAtOnce);
kernel.launch(grid, kernelParams);
elementsDone += numElementsAtOnce;
}
int[] originalArrayCopy = new int[myData.length];
System.arraycopy(myData, 0, originalArrayCopy, 0, myData.length);
buffer1.copyTo(myData);
Util.checkArrayResultsDoubler(myData, originalArrayCopy);
} else {
System.out.println(ce.getMessage());
}
}
}
Code in the demo – Lambda.java part 1 of 2
import java.util.stream.IntStream;
public class Lambda {
private static long startTime = 0;
// -Xjit:enableGPU is our JVM option
public static void main(String[] args) {
boolean timeIt = true;
int numElements = 500_000_000;
int[] toDouble = new int[numElements];
Util.fillWithInts(toDouble);
myDoublerWithALambda(toDouble, timeIt);
double[] toHalf = new double[numElements];
Util.fillWithDoubles(toHalf);
myHalverWithALambda(toHalf, timeIt);
double[] toRandomFunc = new double[numElements];
Util.fillWithDoubles(toRandomFunc);
myRandomFuncWithALambda(toRandomFunc, timeIt);
}
private static void myDoublerWithALambda(int[] myArray, boolean timeIt) {
if (timeIt) startTime = System.currentTimeMillis();
IntStream.range(0, myArray.length).parallel().forEach(i -> {
myArray[i] = myArray[i] * 2; // Done on GPU for us
});
if (timeIt) {
System.out.println("Done doubling with a lambda, time taken: " +
(System.currentTimeMillis() - startTime) + " milliseconds");
}
}
Code in the demo – Lambda.java part 2 of 2
private static void myHalverWithALambda(double[] myArray, boolean timeIt) {
if (timeIt) startTime = System.currentTimeMillis();
IntStream.range(0, myArray.length).parallel().forEach(i -> {
myArray[i] = myArray[i] / 2; // Again on GPU
});
if (timeIt) {
System.out.println("Done halving with a lambda, time taken: " +
(System.currentTimeMillis() - startTime) + " milliseconds");
}
}
private static void myRandomFuncWithALambda(double[] myArray, boolean timeIt) {
if (timeIt) startTime = System.currentTimeMillis();
IntStream.range(0, myArray.length).parallel().forEach(i -> {
myArray[i] = myArray[i] * 3.142; // Double so we don't lose precision
});
if (timeIt) {
System.out.println("Done with the random func with a lambda, time taken: " +
(System.currentTimeMillis() - startTime) + " milliseconds");
}
}
}
Code in the demo – Util.java part 1 of 2
import com.ibm.cuda.*;
public class Util {
protected static void fillWithInts(int[] toFill) {
for (int i = 0; i < toFill.length; i++) {
toFill[i] = i;
}
}
protected static void fillWithDoubles(double[] toFill) {
for (int i = 0; i < toFill.length; i++) {
toFill[i] = i;
}
}
protected static void printArray(int[] toPrint) {
System.out.println();
for (int i = 0; i < toPrint.length; i++) {
if (i == toPrint.length - 1) {
System.out.print(toPrint[i] + ".");
} else {
System.out.print(toPrint[i] + ", ");
}
}
System.out.println();
}
protected static CudaGrid makeGrid(int numElements, CudaStream stream) {
int numThreads = 512;
int numBlocks = (numElements + (numThreads - 1)) / numThreads;
return new CudaGrid(numBlocks, numThreads, stream);
}
Code in the demo – Util.java part 2 of 2
/*
* Array will have been doubled at this point
*/
protected static void checkArrayResultsDoubler(int[] toCheck, int[] originalArray) {
long errorCount = 0;
// Check result, data has been copied back here
if (toCheck.length != originalArray.length) {
System.err.println("Something's gone horribly wrong, different array length");
}
for (int i = 0; i < originalArray.length; i++) {
if (toCheck[i] != (originalArray[i] * 2) ) {
errorCount++;
/*
System.err.println("Got an error, " + originalArray[i] +
" is incorrect: wasn't doubled correctly!" +
" Got " + toCheck[i] + " but should be " + originalArray[i] * 2);
*/
} else {
//System.out.println("Correct, doubled " + originalArray[i] + " and it became " + toCheck[i]);
}
}
System.err.println("Incorrect results: " + errorCount);
}
}
Code in the demo – Loader.java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
import com.ibm.cuda.CudaDevice;
import com.ibm.cuda.CudaException;
import com.ibm.cuda.CudaModule;
public class Loader {
private final CudaModule.Cache moduleCache = new CudaModule.Cache();
CudaModule loadModule(String moduleName, CudaDevice device) throws CudaException, IOException {
CudaModule module = moduleCache.get(device, moduleName);
if (module == null) {
try (InputStream stream = getClass().getResourceAsStream(moduleName)) {
if (stream == null) {
throw new FileNotFoundException(moduleName);
}
module = new CudaModule(device, stream);
moduleCache.put(device, moduleName, module);
}
}
return module;
}
}
Code in the demo – BuildIt.bat
nvcc -fatbin AdamDoubler.cu
"C:\ibm8sr3ga\sdk\bin\java" -version
"C:\ibm8sr3ga\sdk\bin\javac" *.java
"C:\ibm8sr3ga\sdk\bin\java" -Xmx2g Sample
"C:\ibm8sr3ga\sdk\bin\java" -Xmx4g Lambda
"C:\ibm8sr3ga\sdk\bin\java" -Xjit:enableGPU={verbose} -Xmx4g Lambda