
Re: Is GPU computing dead already?

Posted: Fri Apr 05, 2013 1:33 am
by Geri
I think the GPGPU concept is useless. I would rather program an SMP monolithic von Neumann architecture (a CPU with lots of true cores) than any asymmetric thing. So, for me, N*random cores is the winner.

The industry seems to think so too: there are only a few hundred apps that can properly utilise GPGPU,

while there are some 10 million pieces of software that run on REAL CPUs.

This competition has already been won by the CPU.

- You can't write true software for a GPU.
- You can't write a TRUE algorithm.
- You have no access to the GPU hardware at all; you don't know what happens inside OpenCL/CUDA or in your shader.
- You can't access the real memory.
- They claim it's 10x faster than a CPU with real algorithms, but sometimes even a 16 MHz 386 is around ~2x faster than a $2000 graphics accelerator.
- You can't just interact with the OS.
- There is no true C++ compiler for GPGPU.
- You need a driver interface to access it, losing thousands of clock cycles, so you can't use it as a coprocessor.
- You can't even write secure code, because GPGPU needs to compile the code on the fly. No serious software will ever be born on this concept.
- The GPGPU concept has been on the market for 5 years and no serious software has been made with it; even Flash Player's only reaction to turning on GPU acceleration is to throw a BSOD on some configurations.
- A strong GPU is even rarer than a strong multicore CPU in today's computers.
- GPGPU eats a lot of extra power and makes a lot of extra heat.
- You can't call fopen, you can't boot Windows on the GPU... because the day you can, it will not be a GPU any more; it will have become a CPU.
- Nobody wants to program on this crap. AMD and Intel, please give us more CPU cores, thx.
- Too low popularity, too many bugs.
- Therefore, the whole GPGPU concept is wrong in my opinion; GPGPU makes no sense.

Re: Is GPU computing dead already?

Posted: Fri Apr 05, 2013 9:36 am
by tomasdavid
Eh, Geri, four words on the CPU thing:
Single Instruction, Multiple Data
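
To spell that out: a CPU only reaches the throughput people quote for it if you feed its vector units too. A minimal sketch in plain C with SSE intrinsics (the function names are made up for illustration, and n is assumed to be a multiple of 4):

Code:

#include <immintrin.h> /* SSE intrinsics */

/* Scalar version: one float addition per iteration. */
void add_scalar(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}

/* SIMD version: four float additions per instruction. */
void add_sse(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
    }
}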

Re: Is GPU computing dead already?

Posted: Tue Apr 16, 2013 6:57 pm
by dbz
Thanks for the replies; I agree overall with most of what has been said. Nvidia dropping OpenCL support and OpenCL not advancing beyond basic C have really killed off my interest in GPU computing. Remarkably, compute shaders suffer the same fate as OpenCL at the moment: only Nvidia supports OpenGL 4.3. Neither AMD nor Intel support it, and even Haswell only supports OpenGL 3.2, so no compute shaders. So it is back to the plain old CPU for me. I am curious to see what AVX2 will bring.
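
For what it's worth, the headline items in AVX2 are 256-bit-wide integer operations and gather loads (with FMA arriving alongside it in Haswell). A hedged sketch of what that looks like with intrinsics; the function name is made up, n is assumed to be a multiple of 8, and it needs a compiler invoked with -mavx2:

Code:

#include <immintrin.h> /* AVX/AVX2 intrinsics */

/* Multiply-accumulate on eight 32-bit integers per iteration;
 * the 256-bit integer forms are what AVX2 adds over AVX. */
void madd_avx2(const int *a, const int *b, int *acc, int n)
{
    for (int i = 0; i < n; i += 8) {
        __m256i va = _mm256_loadu_si256((const __m256i *)(a + i));
        __m256i vb = _mm256_loadu_si256((const __m256i *)(b + i));
        __m256i vo = _mm256_loadu_si256((const __m256i *)(acc + i));
        __m256i prod = _mm256_mullo_epi32(va, vb);
        _mm256_storeu_si256((__m256i *)(acc + i), _mm256_add_epi32(vo, prod));
    }
}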

Re: Is GPU computing dead already?

Posted: Wed Apr 17, 2013 7:43 am
by Dade
dbz wrote:OpenCL not advancing beyond basic C


It is a huge pain for me: maintaining 2 different code bases for CPU and GPU rendering is a real problem.

However, there is an interesting trend: OpenCL C is being used as a portable "assembler" across GPUs/CPUs, and there are front-ends to translate code from C++ (http://lists.cs.uiuc.edu/pipermail/llvm ... 48012.html, https://bitbucket.org/gnarf/axtor/overview), C++ AMP (from Intel), etc. to OpenCL C.
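
On the two-code-bases pain specifically: the one thing OpenCL does give you is that the same kernel source can be built for a CPU device and a GPU device, so at least the kernel side can be shared even if the tuning cannot. A minimal host-side sketch in C (error handling trimmed, and it only looks at the first platform, which in practice may expose only one of the two device types):

Code:

#include <stdio.h>
#include <CL/cl.h>

/* The same OpenCL C source builds for a CPU or a GPU device;
 * only the device type passed to clGetDeviceIDs changes. */
static const char *kernel_src =
    "__kernel void scale(__global float *buf, float k) {\n"
    "    size_t i = get_global_id(0);\n"
    "    buf[i] *= k;\n"
    "}\n";

int build_for(cl_device_type type)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    err = clGetPlatformIDs(1, &platform, NULL);
    if (err != CL_SUCCESS) return -1;

    err = clGetDeviceIDs(platform, type, 1, &device, NULL);
    if (err != CL_SUCCESS) return -1; /* no device of this type here */

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, &err);
    err = clBuildProgram(prog, 1, &device, "", NULL, NULL);

    printf("build for %s: %s\n",
           type == CL_DEVICE_TYPE_GPU ? "GPU" : "CPU",
           err == CL_SUCCESS ? "ok" : "failed");

    clReleaseProgram(prog);
    clReleaseContext(ctx);
    return err == CL_SUCCESS ? 0 : -1;
}

int main(void)
{
    build_for(CL_DEVICE_TYPE_CPU);
    build_for(CL_DEVICE_TYPE_GPU);
    return 0;
}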

Re: Is GPU computing dead already?

Posted: Thu Apr 18, 2013 7:42 am
by spectral
Dade wrote:
dbz wrote:OpenCL not advancing beyond basic C


It is a huge pain for me: maintaining 2 different code bases for CPU and GPU rendering is a real problem.

However, there is an interesting trend: OpenCL C is being used as a portable "assembler" across GPUs/CPUs, and there are front-ends to translate code from C++ (http://lists.cs.uiuc.edu/pipermail/llvm ... 48012.html, https://bitbucket.org/gnarf/axtor/overview), C++ AMP (from Intel), etc. to OpenCL C.


What is it? An OpenCL decompiler? What is it for?

BTW, I have looked for the "front-ends" to convert C++ to OpenCL, but found nothing?

Re: Is GPU computing dead already?

Posted: Thu Apr 18, 2013 12:31 pm
by Dade
spectral wrote:What is it? An OpenCL decompiler? What is it for?


Source-to-source compilers are gaining a lot of "traction" in LLVM. There are many possible applications.

spectral wrote:BTW, I have looked for the "front-ends" to convert C++ to OpenCL, but found nothing?


https://bitbucket.org/gnarf/axtor/overview is a back-end that produces OpenCL C output. I think, in theory, you can use any language supported by LLVM and get OpenCL C output. It is a very interesting idea.
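
To make the idea concrete: the back-end's job is to re-express LLVM IR as legal OpenCL C (address-space qualifiers, structured control flow, no recursion, and so on); deciding how the work is spread over work-items is still up to the front-end or the programmer. A hand-written illustration of the general shape, not actual axtor output:

Code:

/* Hypothetical input: an ordinary C/C++-level function, compiled to
 * LLVM IR by clang in the usual way. */
void saxpy(float a, const float *x, float *y, int n)
{
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

/* The kind of OpenCL C an LLVM-to-OpenCL-C back-end might emit for it
 * (shown as a comment so this file still compiles as plain C):
 *
 *   void saxpy(float a, __global const float *x, __global float *y, int n)
 *   {
 *       for (int i = 0; i < n; ++i)
 *           y[i] = a * x[i] + y[i];
 *   }
 */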

Re: Is GPU computing dead already?

Posted: Fri Apr 19, 2013 3:00 am
by keldor314
For what it's worth, both AMD and Nvidia currently use LLVM in their OpenCL toolchains (Nvidia also uses it in the Cuda toolchain). Nvidia's backend is already in the main LLVM distribution (NVPTX target), and AMD's (R600 I believe, or maybe CAL) is supposed to be incorporated in LLVM 3.3.

This means that in theory, you can take any language that supports LLVM and compile it directly to a GPU binary.

Re: Is GPU computing dead already?

Posted: Fri Apr 19, 2013 12:53 pm
by hobold
keldor314 wrote:This means that in theory, you can take any language that supports LLVM and compile it directly to a GPU binary.

The problem is that the GPU target instruction sets have limitations. The LLVM intermediate representation may contain constructs that cannot directly be mapped to a GPU target ... the respective backend would simply output an error and give up.

The point of the quoted whitepaper is in the code transformations that can be applied to LLVM intermediate representation, such that the transformed code becomes digestible even with the limitations of a typical GPU. These code transformations may have a performance cost (as visible in at least one of the test cases of the paper). And the set of transformations presented may or may not be complete; the paper does not really attempt to find a solid theoretical foundation (this is a bachelor's thesis after all, but an exceptionally bold and insightful one).
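
A trivial example of the kind of construct that has to be transformed away: OpenCL C forbids recursion, so a recursive function arriving in the IR either makes the back-end give up or has to be rewritten into a loop, roughly like this (a hand-written illustration of the general idea, not output from the tool discussed in the paper):

Code:

/* Recursive form: perfectly legal C and LLVM IR, but not expressible
 * in OpenCL C, which does not support recursion. */
int sum_rec(const int *a, int n)
{
    if (n == 0)
        return 0;
    return a[n - 1] + sum_rec(a, n - 1);
}

/* Transformed form: the recursion unwound into a loop, which a
 * GPU back-end can digest. */
int sum_loop(const int *a, int n)
{
    int acc = 0;
    for (int i = 0; i < n; ++i)
        acc += a[i];
    return acc;
}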

Re: Is GPU computing dead already?

Posted: Fri Apr 19, 2013 4:50 pm
by graphicsMan
Sounds interesting. I'll have to look at it.

The fact remains that CPUs cannot scale the way they did in the past. There are also limitations to how well they will scale out in their current trend (adding cores). You'll run up against die size limits rather quickly if you simply try to double or quadruple the number of cores on a single processor.

That's why, for Xeon Phi, they had to use such scaled-back cores: no fancy branch prediction, no huge instruction pipeline; they traded those for simpler mechanisms that enable higher compute throughput. Instead of super-smart processors, they opted for dumber ones that have 4 threads per core to help hide latencies (is this enough?), and instead of adding more cores, they opted for wide SIMD on each core.

Of course, unless you have a very simple program to optimize, you'll need to go low-level to take advantage of the parallel goodness, which is a pain in the ass. Compilers are constantly getting better, but it's simply asking too much to take a regular ol' C++ program and have it run near-optimally on a Phi or a GPU. The best you can hope for is easy parallelism over some kinds of loops (the pragma approach, sketched below), which for complex programs is simply not going to cut it.
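
For reference, the pragma approach in its simplest form, sketched here with OpenMP (Intel's offload and vectorization pragmas for the Phi are a different spelling of the same idea):

Code:

#include <omp.h>

/* Each iteration is independent, so a single pragma is enough to spread
 * the loop across cores and let the compiler vectorize the body.
 * Anything with cross-iteration dependencies or messy control flow
 * won't be saved by a pragma alone. */
void scale_add(float *y, const float *x, float a, int n)
{
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}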

OpenCL and CUDA force the programmer to think differently in order to get the parallelism. CUDA gives you the latest and greatest features of the NVIDIA GPUs and allows C++ kernels, but it still has severe limitations for the programmer, and it only runs on NV (for all practical intents and purposes). OpenCL will give you good performance if your program does not require thread cross-talk, but otherwise it will likely suck. Phi has a couple of SPMD programming options that seem pretty nice, but they do not allow C++ code, and you still have to be careful about when and how you invoke that code... If your program is running on a normal Xeon and you fire calls through to the Phi, you still incur high latency, so you had better batch a lot of work.
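
The "thread cross-talk" case is exactly where the code stops looking like ordinary C: work-items in a work-group have to exchange data through __local memory with explicit barriers. A minimal OpenCL C sketch (the work-group size is assumed to be a power of two, and the host is expected to pass a __local buffer of that size):

Code:

/* Partial sum per work-group: work-items cooperate through local memory.
 * This is the kind of explicit cross-talk that plain pragmas don't express. */
__kernel void partial_sum(__global const float *in,
                          __global float *out,
                          __local float *scratch)
{
    size_t lid = get_local_id(0);
    size_t gid = get_global_id(0);
    size_t lsize = get_local_size(0);

    scratch[lid] = in[gid];
    barrier(CLK_LOCAL_MEM_FENCE);

    /* Tree reduction inside the work-group. */
    for (size_t s = lsize / 2; s > 0; s /= 2) {
        if (lid < s)
            scratch[lid] += scratch[lid + s];
        barrier(CLK_LOCAL_MEM_FENCE);
    }

    if (lid == 0)
        out[get_group_id(0)] = scratch[0];
}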

All this is to say that (a) new SIMD/SIMT architectures are needed in order to keep scaling, because old-style CPUs will scale at a much slower pace going forward, and (b) because (a) is inevitable, we had better continue to improve our programming paradigms. Programming these architectures may suck enough right now that avoiding GPUs/Phi/similar is defensible, but eventually, if you care about performance and parallelism, you're going to have to move in this direction.

Re: Is GPU computing dead already?

Posted: Thu Aug 16, 2018 5:56 am
by bachi
Just want to bring back this thread. It's funny how much has changed in the past 5 years.