Is GPU computing dead already?


Re: Is GPU computing dead already?

Postby ziu » Wed Nov 14, 2018 2:24 am

Not a lot, I think. OpenCL was killed on Android, and most browsers disable WebCL because they consider it a security hazard, for example.

On the positive side:
- NVIDIA now supports a more recent version of OpenCL.
- Adobe supports and uses OpenCL in its products.
- A lot of tooling code has been open sourced, so the barrier to making an OpenCL ICD is now much lower. All the GPU hardware backends have basically been merged into LLVM, Intel made the code for an OpenCL 2.x frontend for LLVM available, and Khronos put out SPIR-V code for LLVM. AFAIK all of this still needs to be integrated properly, though. Does anyone still care at this point?
- There are now multi-platform implementations like 'pocl'. But I still think it is a tad on the unstable side and not suitable for production use yet. At least that was how it behaved on my test applications the last time I tried it.

- CUDA has found use in AI machine learning applications.
- CUDA keeps evolving but, at least to me, it seems most advances are either tiny increments or long-promised features which still do not work the way you would expect them to. Again and again.
- Google made their own CUDA implementation.

I personally have no issue with only having C support; I think the C++ features are a luxury, really. SIMT, on the other hand, is a really great programming model. It is much simpler to program for, in my opinion, than mixing and matching multi-threaded code with vector code across umpteen hardware-dependent implementations.
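To make that concrete, here is a minimal sketch of the SIMT style in OpenCL C (the kernel name and arguments are just illustrative): one work-item per element, and the same source runs whether the device maps work-items onto hardware threads or SIMD lanes. The CPU equivalent would need a thread pool plus per-ISA intrinsics to get the same coverage.

[code]
/* Minimal SIMT sketch in OpenCL C; kernel name and arguments are
 * illustrative, not from any particular code base. */
__kernel void saxpy(const float a,
                    __global const float *x,
                    __global float *y,
                    const unsigned int n)
{
    size_t i = get_global_id(0);  /* which element this work-item owns */
    if (i < n)                    /* guard against rounded-up global sizes */
        y[i] = a * x[i] + y[i];
}
[/code]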

I think OpenCL C does for multi-cores and SIMD what C did for single-cores: you finally have a portable code base which can compile and run on multiple platforms. You might need hardware-specific tweaks for optimum performance, but those are minimal. The main issues are that you still cannot access the ROP units of GPUs from OpenCL, and that OpenCL/OpenGL integration is still quite crappy. OpenCL was also, I think, thrown off track by Vulkan. Some people seem to think Vulkan is a replacement for it, when it isn't; it is basically a replacement for OpenGL and its shaders. Meanwhile OpenCL-Next is still nowhere to be seen. Because of all that, a lot of people have dragged their feet on OpenCL adoption even more.
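To illustrate the portability point above, here is a hedged host-side sketch in plain C (error handling trimmed, the first platform and default device taken blindly): the same file builds and runs against whichever vendor ICD happens to be installed, and only the OpenCL C string is compiled for the device at run time.

[code]
/* Hedged portability sketch, assuming an OpenCL 1.2 ICD is installed.
 * Error handling is trimmed to keep it short. */
#include <CL/cl.h>
#include <stdio.h>

static const char *src =
    "__kernel void saxpy(float a, __global const float *x,\n"
    "                    __global float *y, uint n) {\n"
    "    size_t i = get_global_id(0);\n"
    "    if (i < n) y[i] = a * x[i] + y[i];\n"
    "}\n";

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_int err;

    /* Take the first platform and its default device; this is what lets the
     * same binary run against NVIDIA, AMD, Intel or pocl ICDs. */
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

    /* The kernel source is compiled at run time by whatever driver is present. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    err = clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "saxpy", &err);

    printf("kernel build status: %d\n", err);

    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}
[/code]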

Debugging on either language platform is still a crapshoot because of the tools and the lack of hardware support for proper debugging, and memory protection is another crapshoot. I want a debugger which shows me the state of all the threads/fibers in an easy-to-understand fashion and lets me single-step through code. I want a profiler which easily lets me check memory-use patterns. Heck, even if it were only software emulated it would be better than nothing. Oh, and the debugger shouldn't crash willy-nilly. I know these things are hard, though.
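The usual stopgaps in the meantime, assuming OpenCL 1.2 or later, are printf from inside the kernel and falling back to a CPU device (CL_DEVICE_TYPE_CPU on the host side) so a normal host debugger can at least see the driver's worker threads; something along these lines, with illustrative names:

[code]
/* Stopgap debugging sketch: printf inside an OpenCL C kernel (core since
 * OpenCL C 1.2). Printing from a single work-item keeps the output readable.
 * Kernel and variable names are made up for illustration. */
__kernel void debug_me(__global const float *x)
{
    if (get_global_id(0) == 0)
        printf("x[0] = %f\n", x[0]);
}
[/code]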

With the slowing down of Moore's law, one would expect people to give more attention to solutions like SIMT. I expect hardware to get more parallel and to drift further and further away from the single-threaded programming model we know and love. As gains from chip fab process technology dry up, they have to be compensated for with architectural improvements. This might only be a problem for a decade or so, until someone comes up with something better than today's silicon, but I think we are going to see at least a decade-long gap filled with novel hardware platform designs. In fact, we are already seeing it happen now, with both GPUs and AI co-processors becoming de facto staples.

