There was an article on here a couple of months back that was an intro to Blender from a geek/Vim perspective. I felt a bit inspired and downloaded it to have a play. It's an absolutely brilliant application - I highly recommend giving it a try.
Having learned Blender first is likely why Vim was so appealing to me.
Blender's UI is so much better this way. The idea that the functionality of any professional software tool should be immediately visible is just silly. Photoshop, GIMP, Autocad, 3DS Max, Maya, etc. all follow that philosophy, and just end up with way too many buttons and menus for anyone to want to sift through. Blender shows functionality only where it is needed in the most beautiful modular system I have ever used.
Agreed. It felt really streamlined and the mental model felt instantly comfortable. If you want to rotate something you select, rotate, constrain to an axis, and type the angle (e.g. R, X, 90, Enter), all in a very programmatic way.
One thing I ran into: some shortcuts that seem essential are basically wired to the number pad, so I had to buy an external keyboard (and a non-trackpad mouse).
I haven't had time to get back to it again but I'm looking forward to sitting down and having another good play.
Edit: I'm not super familiar with AutoCAD, but I make software for construction companies so I've seen it used a bit. You can do everything in a Vim-like command manner. I think it had a Lisp prompt.
If you toggle `File -> User Preferences -> Input -> Emulate Numpad` you can have the usual number keys work like the numpad. I actually prefer it that way too, so I can change the camera without having to move my hand all the way to the right of the keyboard.
> Edit: I'm not super familiar with AutoCAD, but I make software for construction companies so I've seen it used a bit. You can do everything in a Vim-like command manner.
I'll admit, I'm even less familiar with AutoCAD. I really just assumed from my experience with every other piece of Autodesk software I have seen.
> I think it had a Lisp prompt.
AFAIK, AutoLISP is still one of the most prevalent lisps.
Having learned Vim first, I think these two comments have made me realize why it was so easy for me to pick up Blender in the first place. I never even thought about it, but Blender is indeed very "mode" like. I'm just so disgusted with the massive interference that the interfaces in Maya and other programs cause. This comment thread makes me strangely happy.
I really wish that modal interfaces were more popular. Modal interfaces are a fantastic way to make complex (or even simple) systems accessible and usable.
There is an irrational fear of modal interfaces that is pervasive in how software design is taught. It seems like everyone has agreed that Caps Lock sucks, and therefore modal interfaces are inherently awful.
Blender currently has the best UI I have ever experienced in a desktop app. Their design philosophy is highly modular. But have you tried an older version from the Blender UI dark ages? Things weren't always so intuitive and everything was a big unstructured mess.
I also highly recommend trying out the Pie Menu add-on in the settings. It's more or less becoming an official part of the Blender UI flow.
Good to hear. I can remember some years ago Blender's UI was incredibly annoying to use. It's great to hear there have been efforts to make it easier to use.
Blender is an alien in the world of software, but I agree -- it's brilliant. Never stopped using it since I tried it 15 years ago. Took a long time to get productive, but it was definitely a good time investment.
It's also very dev friendly. And exploring its file format makes for very good nerdy times.
Interesting post and analogy. Maybe it's time for me to look at Blender again too! I think the last time was somewhere around 2007-2009. In 2010 I found http://www.vim3d.com/ and have kind of been hoping it would take off since that seems like an interface I might enjoy more, but I never devoted any time to learning it.
"OpenCL works fine on NVIDIA cards, but performance is reasonably slower (up to 2x slowdown) compared to CUDA, so it doesn't really worth using OpenCL on NVIDIA cards at this moment."
I wonder if that's intentional on NVIDIA's part.
Does it mention which version of OpenCL they're using? I'm looking forward to hearing news about v2.x and SPIR-V.
"I wonder if that's intentional on NVIDIA's part."
I think that's a reasonable guess. NVIDIA only supports OpenCL 1.2 (and it took them about 6 years to get there from 1.0, while other vendors were at 2.x).
(Most) people don't make decisions based on overarching theories of what's good for everyone. They need to solve their problem as easily as possible, and most people feel that CUDA is easier to use than OpenCL. Plus, Nvidia has been a leader in this space.
And given that CUDA is essentially free and you simply pay for the hardware, it is actually not such a bad deal. Economies of scale are such that if you can run your workload on gaming hardware you're going to get incredible performance for very little money.
NVIDIA will prevent you from buying the gaming hardware and push you into buying the 10x more expensive cards.
There are two NVIDIA drivers. It turns out pinned memory transfers are 2x faster with the Tesla driver than with the GeForce driver. This can matter a lot for some workloads. For some reason you can't use the faster driver with the cheap consumer cards.
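For anyone unfamiliar, "pinned" here means page-locked host memory, which is what enables the faster DMA transfer path. On the CUDA side you ask for it explicitly; a minimal sketch (error handling omitted, N is just a placeholder size):

```c
#include <cuda_runtime.h>

size_t bytes = N * sizeof(float);          /* N assumed defined elsewhere */
float *h_pinned, *d_buf;

cudaHostAlloc((void **)&h_pinned, bytes, cudaHostAllocDefault); /* page-locked host memory */
cudaMalloc((void **)&d_buf, bytes);

/* ... fill h_pinned ... */

/* transfers from pinned memory can take the faster DMA path */
cudaMemcpy(d_buf, h_pinned, bytes, cudaMemcpyHostToDevice);

cudaFree(d_buf);
cudaFreeHost(h_pinned);
```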
So, in my view, NVIDIA is holding back performance for OpenCL and CUDA alike, and also slowing down the inevitable adoption of OpenCL.
Making decisions and thinking about what's good for the ecosystem are two different things. I can still buy Nvidia for my company, and at the same time think and say that it's bad.
Of course, while it's the cheap consumer garbage ^W^W pro-gaming hardware, the prices are okay, since those are kept in check by AMD. Once we get to the relabelled consumer garbage ^W^W^W professional compute hardware, you pay for it. Dearly. Why? Because no one keeps them in check through competition.
It's pretty easy to slap a new label on a card and call it enterprise. Consumer cards can be kept from eating into enterprise sales by creating artificial barriers around what you can do with each line of card.
He's referring to the fact that the GeForce line of consumer-grade GPUs and the professional line of Tesla HPC compute cards are probably using the same silicon.
I thought Tesla GPUs were tested to higher standards (e.g. stable at higher clock speeds) and don't have disabled/faulty cores. Consumer GPUs have cores disabled, which permits higher silicon production yields.
Core disabling and underclocking are what's known as "product binning", and it's why you have 4-, 6-, 8-, and 12-core CPUs. Tesla GPUs are probably the cream of the crop, but they're still cut from the same cloth.
Not OP, but my guess would be he meant GTX 1080, Titan & co. I would bet the difference in price in comparison to GTX 1050 doesn't come from increased manufacturing costs.
I think he meant GeForce vs Tesla. The GTX 1080Ti is at the higher end of the GTX line yet it's one of the best in terms of "dollar to performance ratio".
EDIT: even the Titan Xp, while much more expensive than the 1080 Ti, is still vastly cheaper than Teslas.
Even if users acknowledge that it is bad to keep using NVIDIA (and to let them continue dominating), what are they supposed to do?
From my view, it's not just that CUDA > OpenCL. Rather, it's CUDA ecosystem >>> OpenCL ecosystem. The tools, the libraries, the documentation, the community - all of these are superior on the CUDA side. If you give all those up to use OpenCL, how do you make up for that cost?
I once tried to use OpenCL when I was still a student. The only Linux support for my Intel GPU I could find aborted various calls with "not implemented".
I am sure their Windows driver actually worked. However that still meant that for my hardware OpenCL was about as portable as CUDA. Just with a Windows lock-in instead of a NVIDIA lock-in.
Edit: Just to note this was a few years ago. I have no idea what the current state is.
Note I used Intel as example, not only because that is what I had at the time, but also because it is a notable third party OpenCL vendor. If you associate OpenCL with AMD then you might as well use CUDA on NVIDIA.
I wasn't trying to imply that! Just wanted to state that things have changed and support is pretty good nowadays (for Intel GPUs, too, BTW, I just have never personally tried those).
Sure, but nothing's stopping AMD and Intel from enabling CUDA on their processors. I personally think NVIDIA has vastly overinvested in Deep Learning at the expense of starving existing and potential other markets, but that doesn't make me want to code in OpenCL, it makes me want to run CUDA elsewhere.
For now at least, NVIDIA is too busy harassing customers and vendors who build servers based on GeForce instead of Tesla GPUs. Because while they've been hopping up and down with faux outrage, AMD embarked on just such an effort:
Is it true that monopolies necessarily hurt? Look at YKK, pretty much a monopoly. They have been for years. But when you really had to go, you could count on the zipper (unless it froze, in which case buttons are better).
It might hurt the consumer, but it doesn't hurt the one holding the monopoly (Nvidia in this case). It would be like Microsoft actively working to port their prized game Halo (Nvidia GPUs) to the PS4 (OpenCL) when they already have a perfectly good platform on the Xbox (CUDA), which they happened to create as well.
Yes I believe so. I flipped hardware and software in my analogy to make it fit. You can restrict hardware to only run specific software just like you can restrict software to only run on specific hardware. The point is all about incentive to open your platform to competitors when you are already the leader in both.
I agree 100%, so I waited and waited and waited and waited.
Then I got tired of waiting 4 years for any of the GPU computing software I was interested in to support it, bought an Nvidia GPU and a PC to run it instead, and became part of the problem.
Not to mention that SPIR-V integrates Vulkan, OpenGL, and OpenCL, and is moving towards a broader range of language support beyond mere wrappers, including but not limited to JavaScript.
Since SPIR-V allows you to effectively define your own shader APIs, and OpenCL has more and more bindings for other platforms, including web-friendly ones like Go and JavaScript and a new support layer for WebGL, it is absolutely a game (no pun intended) for Nvidia to lose in the long run.
When has a monopoly not eventually been caught frantically trying to buy out the knockoff companies that spawned from teams developing on open-source alternatives in innovative places, after it became comfortable and confident with being a closed-source monopoly?
On top of this, graphics programmers, the games industry, and growing web support, especially now that AWS and, I think, Google Cloud offer GPU server hosting (with companies like BlazingDB making integration easier), will only accelerate the advantage open source has, since this is a race with some of the best programmers in the world and a ton of money behind it.
I am curious what NVIDIA's end game is here. They were already trailing with their releases after AMD's Ryzen launch, and I haven't heard anything at all exciting from NVIDIA to counter since the Vulkan release and SPIR-V integration announcements, which are some of the most promising I've seen yet, and which open things up for programmers who have otherwise avoided this platform due to the steep learning curve and the requirement to be a somewhat experienced C/C++ programmer.
It took me 8 months to be able to write a good kernel and some good host code with OpenCL, because I did the entire integration in C and C++ (I've been developing with it since 1.2), but with more languages supported, a broader range of applications, perspectives, and contributions will flow to the open-source side.
So far, outside of currently dominating the industry, securing some long-term runway with large contracts, and having a head start on the red tape involved with the larger corporations they have hardware and support contracts with, I don't see what advantages NVIDIA is getting the industry excited about in the long run.
When it comes to savvy programmers in Silicon Valley: a friend of mine who works as a PhD programmer in deep learning alongside Google scientists ran an unofficial ad hoc poll on her Twitter, asking people why there is so much OpenCL hate and so much CUDA in the valley. There were lots of responses from lots of people, but I only saw one reason repeated over and over again: the initial learning curve is lower with CUDA and it's more convenient. There was no mention of performance advantages in the discussion.
In the functionalities I apply OpenCL to, CUDA doesn't even offer the same functionality, and in many ways processes algorithms by simply parsing out algorithms CPUs utilized, but OpenCL never assumed this as an optimal path, and reconsiders the entire structure of the algorithm and data it's working with, oftentimes reorganizing formats for processing entirely different from CPUs that NVIDIA incorporates as a default.
It's more work to learn than CUDA, but the advantages are worth it in my experience so far.
As a last consideration, when it comes to NVIDIA trying to make their products compatible with OpenCL and OpenGL, I predict that promise will be delayed at best, as it took NVIDIA 5 years to make their hardware compatible with standard OpenCL 1.2. Vulkan is extremely large in terms of source code, SPIR-V is nontrivial, and OpenCL is at standard 2.0 now, so if OpenCL 1.2 was hard for them to integrate in under 5 years, we can expect a lot longer given the new releases.
The learning curve is now going away with the recent releases, so what's the next step for NVIDIA? Good question. The only advantage they seem to have is that they have an advantage right now, but that becomes circular, and perhaps works backwards, going forward.
This strangely reminds me of Microsoft getting Ubuntu to run inside Windows, checksum for checksum, offering cross-compatibility with other operating systems as an advantage to stay competitive with the demographic that produces the most advancement within their own industry.
OpenCL is being hampered by past perceptions. I think it's absurdly underrated.
The newer OpenCL is just as easy to get into (with e.g. the Intel SDK), conceptually easier (the compiler is in memory, and there's one way to do things instead of two APIs), and it opens up a broad range of hardware.
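To make "the compiler is in memory" concrete: an OpenCL program is normally built from a source string at run time, roughly like this (a minimal sketch; `ctx` and `dev` are assumed to be an existing context and device, and error handling is omitted):

```c
const char *src =
    "__kernel void scale(__global float *x, float a) {"
    "    x[get_global_id(0)] *= a;                    "
    "}";

cl_int err;
cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
err = clBuildProgram(prog, 1, &dev, "", NULL, NULL);   /* compiled on the fly by the driver */
cl_kernel kern = clCreateKernel(prog, "scale", &err);
```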
> In the functionalities I apply OpenCL to, CUDA doesn't even offer the same functionality, and in many ways processes algorithms by simply parsing out algorithms CPUs utilized, but OpenCL never assumed this as an optimal path, and reconsiders the entire structure of the algorithm and data it's working with, oftentimes reorganizing formats for processing entirely different from CPUs that NVIDIA incorporates as a default.
Can you explain what you mean? Reads like gibberish.
Sure, sorry for the delay. With sparse matrices, CUDA uses the same compressed CSR text format, and the same compression form on the chip, that CPUs do. They share the compression format, but NVIDIA takes the parsed blocks and simply parallelizes the computation on the GPU, keeping the zero values in memory to throw in at the end of the computation. That's still a fine way to do it, and it does exploit GPU performance over CPU performance, but it never redesigned the CSR intake format.
The OpenCL approach does something similar with the null values, but it redesigns the entire compression format used to read in the matrix data, and the whole computation accesses the tiles of data differently, based on the format it uses for parsing and accessing the data.
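For reference, the baseline "scalar" CSR kernel being described (one work-item per row, plain CSR arrays) looks roughly like this in OpenCL C; this is a generic sketch, not the book's reorganised tiled variant:

```c
/* y = A * x for a sparse matrix A stored in plain CSR form. */
__kernel void spmv_csr_scalar(const int             num_rows,
                              __global const int   *row_ptr,  /* num_rows + 1 entries     */
                              __global const int   *col_idx,  /* column index per nonzero */
                              __global const float *values,   /* nonzero values           */
                              __global const float *x,        /* dense input vector       */
                              __global float       *y)        /* dense output vector      */
{
    int row = get_global_id(0);
    if (row < num_rows) {
        float sum = 0.0f;
        for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
            sum += values[j] * x[col_idx[j]];
        y[row] = sum;
    }
}
```

Reorganised formats typically change how col_idx and values are laid out so that neighbouring work-items read neighbouring memory; that is the general idea behind variants like the one described above.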
I am trying to find a good explanation online, but all I can find is what I have in the OpenCL 1.2 book, Chapter 22 on SpMV, where they visualize and explain CUDA's format and then OpenCL's format, and provide performance comparisons between the two methods using the standard 22 matrix sets provided by the University of Florida.
OpenCL's performance was better on every metric; the smallest improvement cut computation time by half, and the rest were closer to a third. They ran their own CUDA testing, but they also provide the whitepaper results from CUDA's testing and use NVIDIA's reported results as the official comparison.
For an influx of repetitive real-time data reading in millions of data points every few seconds, this kind of advantage is far from negligible.
It took a long time to get working, but of course now I reap the benefits every few seconds on large datasets, so I found it worth diving into the performance details for this case.
While I can't find an online visual of the CUDA vs OpenCL designs explained in Chapter 22, the source code for the standard SpMV I'm describing is on one of the authors' GitHub accounts: "bgaster" is the username. The code builds cross-platform, so you should be able to download it, read in matrices of your choice, and compare performance yourself if you have datasets you want to look at.
I highly recommend the OpenCL book. It explains how to use OpenCL and provides comprehensive examples, walking through concepts with code for 22 chapters. The source code for the book is there with the rest of bgaster's repositories. It's definitely not a trivial thing to take on; the learning curve is steep. However, I learned GPUs through OpenCL, so I don't know if it would be easier coming from CUDA, biases about what to expect (because the learner is already reaping the benefits of CUDA) aside.
I initially took a couple of weeks to read through the entire OpenCL 1.2 API spec. It's one of the most comprehensive and detailed APIs I've ever read, and I find myself disappointed with other groups' documentation in comparison. Once you're familiar with the scope of the functionality available to you, you can keep the PDF open and search it for concepts to find what's available as you code through things. The most recent spec is just as good and can be found here: https://www.khronos.org/registry/OpenCL/specs/opencl-2.2.pdf
Also, one of the authors of OpenCL works full-time at Apple now, and OpenCL comes installed and simply works on any OS X system. So if you happen to have a Mac, it should be relatively easy to download some working source code and try it out.
I have OS X, and I was also able to download an SDK for the NVIDIA graphics card in an Asus ZenBook running Fedora 24 and get it up and running just fine in an hour or so of total install and testing.
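If anyone wants a quick smoke test after installing, something like this is enough to confirm the runtime sees your platforms and devices (on OS X include <OpenCL/opencl.h> and link with -framework OpenCL; on Linux use <CL/cl.h> and -lOpenCL):

```c
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(8, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char name[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        printf("Platform %u: %s\n", p, name);

        cl_device_id devices[8];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &num_devices);
        for (cl_uint d = 0; d < num_devices; ++d) {
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            printf("  Device %u: %s\n", d, name);
        }
    }
    return 0;
}
```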
Look at Intel's pricing and the pace of its performance increases since AMD went away (but AMD seems to be coming back now, so that's good - and indeed, Intel has already cut prices in some segments).
But other than it being easier, are there any performance benefits?
I think the whole point of Vulkan, and particularly SPIR-V, was to offer support for a broader set of languages, including but not limited to JavaScript.
OpenCL even has Go bindings now.
SPIR-V allows you to define your own shader APIs.
Right now it's a lot to learn, but going forward it's hard to ignore the advantages in functionality, and the broadened language support lowers the barrier to entry by increasing the number of languages you can use OpenCL from, particularly ones that predominantly support web dev, as we move to cloud development and GPU hosting on AWS and Google Cloud.
The cloud is a big factor to consider for a potential shift in market share, because if one cloud provider finds OpenCL particularly beneficial, all they have to do is offer support on top of a stack like BlazingDB (and I imagine a GPU server support stack will be folded into cloud platforms' back ends within 5 years anyway), and suddenly thousands and thousands of companies rely on it and it becomes an industry standard.
In a market where cloud hosting runs on GPUs, the shift depends more on what the large companies with the money and talent do to go deep and scale out support for OpenCL-based functionality than on whether every single developer or hip web dev hosting on the cloud wants to invest in low-level kernel writing and hardware optimization.
Throw in the games industry with Vulkan, with open-source games on PCs always competing with console games, and I see a lot more support for OpenCL in the long run.
> I wonder if that's intentional on NVIDIA's part.
I highly doubt it; a normal resource allocation conflict would suffice to explain it.
NVIDIA has everything to gain by being ahead on each and every metric; purposefully hobbling performance would eventually come out, and in the meantime it would push people to buy a non-NVIDIA product.
I think the reason is more that NVIDIA has been able to tailor CUDA much more closely to the architecture of their cards, and by extension anybody who writes for CUDA automatically benefits from that. OpenCL is more general, but that also immediately implies it will be less efficient. And it doesn't take much in the way of missing optimizations (a single extra memory fetch penalty would do) to end up with a 2x penalty. GPU programming is much less forgiving of subtle mistakes than regular programming, because a mistake immediately gets multiplied by a very large factor rather than costing a single missed cycle.
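A concrete illustration of how little it takes (a sketch, not from any real code base): these two kernels do the same copy, but the strided one wastes most of each memory transaction, and that alone can cost a large constant factor.

```c
__kernel void copy_coalesced(__global const float *in, __global float *out)
{
    size_t i = get_global_id(0);
    out[i] = in[i];                    /* adjacent work-items touch adjacent addresses */
}

__kernel void copy_strided(__global const float *in, __global float *out, int stride)
{
    size_t i = get_global_id(0);       /* buffers assumed large enough for the stride  */
    out[i * stride] = in[i * stride];  /* scattered accesses -> poor bandwidth usage   */
}
```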
A nice example of the reverse: ATI cards dominated the Bitcoin GPU mining scene because of their ability to do a particular operation in one clock tick instead of the two it took on NVIDIA cards.
The real issue is that NVIDIA does not care about OpenCL as long as CUDA is usable. Since their drivers are proprietary, no one else (who does care) can make better OpenCL support happen.
We used to have both an OpenCL and CUDA implementation in earlier versions. The CUDA version was slightly faster (10-15%?) on Nvidia cards, but not worth the effort of maintaining both implementations.
I don't think it has anything to do with the driver itself. At least in my experience, the custom code you write performs more or less the same across both CUDA and OpenCL. The issues arise when you are using the toolkit libraries (like BLAS and FFT). cuBLAS and cuFFT are much more heavily tuned for NVIDIA hardware than any OpenCL versions available at the moment.
Not really, but they are only supported in CUDA. For OpenCL you have to use clBLAS and clFFT (written and maintained by AMD), which means they are not exactly tuned for other architectures.
I prefer to use OpenCL, but I think the second key factor in why Nvidia dominates (the first is their integrated tooling) is that cuBLAS, cuFFT, cuDNN et al. work out of the box when you install the CUDA Toolkit. clBLAS and clFFT? Good luck using them even on AMD hardware, let alone on someone else's. And, of course, there is no cuDNN equivalent (that is beyond experimental)...
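For contrast, this is roughly what "out of the box" looks like on the CUDA side: cuBLAS ships with the toolkit, so a single-precision GEMM is a handful of calls (a sketch with error checks omitted; m, n, k and the device buffers d_A, d_B, d_C are placeholders assumed to be set up already):

```c
#include <cublas_v2.h>

/* C = alpha * A * B + beta * C, column-major; A is m x k, B is k x n, C is m x n */
cublasHandle_t handle;
cublasCreate(&handle);

const float alpha = 1.0f, beta = 0.0f;
cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
            m, n, k,
            &alpha,
            d_A, m,    /* lda */
            d_B, k,    /* ldb */
            &beta,
            d_C, m);   /* ldc */

cublasDestroy(handle);
```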
I have rarely observed that much difference between CUDA and OpenCL. I have, however, noticed that CUDA (at least by default) is more aggressive about picking faster but less accurate instructions for transcendental functions like inverse square root. You can ask for the same kind of stuff by passing extra OpenCL compiler flags, but it seems that CUDA optimises more aggressively by default (maybe analogous to the kind of stuff -ffast-math does).
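For anyone who wants to experiment: the OpenCL side of this is controlled by standard build options passed to clBuildProgram, and nvcc's --use_fast_math flag is the rough CUDA-side analogue (it swaps in the fast intrinsic versions of transcendentals for the whole compilation unit). A minimal sketch, with `prog` and `dev` assumed from an earlier setup:

```c
/* OpenCL: opt in to faster, less accurate math when building the program */
clBuildProgram(prog, 1, &dev,
               "-cl-fast-relaxed-math -cl-mad-enable",
               NULL, NULL);

/* CUDA: roughly equivalent behaviour comes from a compiler flag, e.g.
 *   nvcc --use_fast_math kernel.cu
 */
```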
NVIDIA supports CL 1.2 and it's intentionally slow so people will use CUDA.
Plain CL 2.0 shouldn't bring performance improvements unless the kernels Blender uses rely on things like work_group functions. Also, CL 2.0 requires you to specify whether a work-group is uniform; otherwise the compiler must assume it's not uniform, and that's taxing in some cases.
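For context, "work_group functions" are the OpenCL 2.0 built-ins such as work_group_reduce_add; kernels using them have to be built with -cl-std=CL2.0, and the uniformity issue mentioned above is what the -cl-uniform-work-group-size build option controls. A minimal sketch:

```c
/* Build with: clBuildProgram(..., "-cl-std=CL2.0", ...)
 * Without -cl-uniform-work-group-size the compiler must assume the last
 * work-group may be smaller (get_local_size vs get_enqueued_local_size). */
__kernel void partial_sums(__global const float *in, __global float *out)
{
    /* OpenCL 2.0 work-group function: every work-item receives the group's sum */
    float s = work_group_reduce_add(in[get_global_id(0)]);

    if (get_local_id(0) == 0)
        out[get_group_id(0)] = s;
}
```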
SPIR-V on the other hand should be interesting to see.
Note that this is a wiki article, and that section in particular hasn't been updated in quite a while. Other parts of the article are talking about recent significant speedups in the OpenCL renderer so it makes me wonder how much they've closed the gap with CUDA on NVIDIA hardware.
I would test myself but don't have an NVIDIA card handy.
This only really holds true if you are running OpenCL 1.2 under CUDA.
If you run native OpenCL 2.0 on NVIDIA GPUs you aren't getting a major penalty. There will still be things that CUDA is going to be faster at, but that has nothing to do with intentional gimping.
Blender's progress has been astounding. I remember a while back they announced the OpenCL implementation was being put on hold for an undetermined amount of time due to limitations with AMD cards. This really is an exciting announcement. It's great to see it on HN too.
Blender is probably one of the most successful OSS projects since Linux. Within the span of five years it has flipped: you wouldn't have been caught dead using it over Maya, and now it's vice versa.
One of the reasons for their success is the Open Movie and Open Game projects[0]. They force them to actually accommodate the needs of working artists and designers in their tool. They also showcase the power of Blender and get the attention of potential users. The production values for these projects are actually pretty good, which gets people's attention.
I can't talk about the specifics of the software design.
But if you take into account how the Blender project is run, with real consideration of how to sustain itself via the Blender Foundation and all the related projects, you can see what an outlier it is compared to similarly ambitious projects that are not doing so well.
This is both a bad and a good sign: it makes evident how little FOSS projects take their sustainability over time into account; on the other hand, it presents an opportunity for huge improvements in this area.
My hunch is that having professionals as users might also be a benefit, as they will tend to put up with bigger learning curves if they see benefits in the end. Also, professionals don't tend to care as much about the look of the UI (harder to do without dedicated design resources) as about the efficiency of using it.
- FOSS doesn't have a great track record for this kind of challenge (GIMP has never replaced Photoshop, nor LibreOffice MS Office)
That's like saying Linux doesn't have a great track record because it hasn't replaced MS Windows. To be successful, they don't have to replace anything, just serve a different niche / enable new or additional uses (e.g. I don't have $$$, I need better support & customization).
Linux is an exception in the non-programming professional world. Accounting, project management, book publishing, magazine layout, architecture, 3D object design... they are all eaten by proprietary software.
A few successful examples of X (like GIMP) being used in industry don't put it remotely in the same league as a proprietary star Y (like Photoshop). Don't try to create a reality out of the thing we all wish were true. FOSS is not winning those battles yet.
I don't animate these days, but I remember the day I first tried Blender, back when it was version 1.4 (I think - it fit on a floppy then). What caught my eye was that the entire UI was rendered in OpenGL; one could pan and zoom everything, even the 2D tool panels. That little detail hinted at good design. And I loved the tiling UI so much. I used Blender for several animation commissions at a time when studios had never heard of it. The Blender community was always one of the friendliest, most helpful I've ever had the pleasure of being part of. The Blender Institute, under the direction of Blender's original author, is a superb example of how to grow a successful project and community.
I remember downloading it on the internet connection at school, copying it to a floppy, and bringing it home. My school had some classes in 3D Studio Max, but there was no way I could get a copy of that. So Blender and another program called Nendo were my toys. I didn't have any money when they were paying to open source it, but I was definitely cheering from the sidelines!
The VFX in Amazon's Man in the High Castle were done in Blender. There's a pretty big cultural shift that would need to happen for Blender to become commonplace in the film industry, but I'm starting to feel optimistic.
I must have missed that news, that showreel is phenomenal! Linked in the comments was a Reddit AMA with the VFX creators, including this really interesting reply on their choice of Blender:
Disney and Weta are both big on Maya for animation. Pixar uses their own custom software (Presto?) for animation, but I think character modeling is mainly in Maya. Also a bit of Z-Brush and Modo thrown in across the board, plus all the random software from The Foundry.
For effects most shops use Houdini.
I'm not aware of any major studios or effects houses using Blender, but that would be cool!
Everywhere I've ever worked has been strictly (and almost religiously) a Maya shop. It's more than likely inertia: your content creators are almost always completely slammed, and making them change up their workflows just isn't viable. But I've definitely never seen professional animators use Blender.
> Within the span of five years it has flipped: you wouldn't have been caught dead using it over Maya, and now it's vice versa.
Agreed on Maya, but Houdini's node-based workflow has ruined me on essentially any other piece of 3D/CAD software. As a programmer it is so obviously the right way to do 95% of the things...
I think the Maya thing is hyperbole; I just couldn't justify the price tag after I got out of school, but it's still definitely a standard.
But really, as much as I like Blender and open source, you hit the nail on the head: Houdini is on another level of awesomeness. Basically functional modelling done right.
I just wish it had tools for NURBS as good as its tools for polygons, but at that point it basically becomes Rhino/Grasshopper, and that's not really what Houdini's for. I also kind of wish that VEX wasn't a thing; I don't really want to have to learn an entire mini-language to get into the guts. I try to do all of that in VOPs unless it gets complicated.
Houdini has always intrigued me. Is there a good video/article I can look at to get a good overview of the powerful aspects of the node based stuff? (without having to spend dozens of hours learning the program!)
I don't know of any good videos, off the top of my head, but at a high level, all operations that one performs (creating geometry, editing objects, etc) are compositions of functions (nodes). So, after a long editing session, you can pop up the node graph and see exactly what operations/functions were performed and in what order. This allows one to programmatically modify the chain of operations later, or to package up the session as a script that can be shared with other users. You can also easily write your own bits of functionality and inject them into the sequence of function calls. It is super powerful, makes workflows reproducible, and way more.
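A rough analogy in code (nothing to do with Houdini's actual API, purely to show the shape of the idea): each node is a function from geometry to geometry, and a session is just the ordered list of nodes, which you can re-run, edit, or serialize.

```c
#include <stddef.h>

/* Purely illustrative: a "node graph" as an ordered chain of geometry -> geometry functions. */
typedef struct { float *points; size_t count; } Geo;
typedef Geo (*Node)(Geo);

/* Re-running the chain reproduces the whole editing session; editing the
 * array of nodes rewrites history without touching the original input.   */
static Geo apply_chain(Geo g, const Node *nodes, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        g = nodes[i](g);
    return g;
}
```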
I just wish they'd put some more effort into letting the renderer be used standalone. I'm not particularly interested in the modeller, but I would really like a renderer with a C++ API that would allow me to play with procedural volumetric effects.
Technically you can use it standalone, but it's not a great experience. There are no binaries, it's painful to build, and the only way to interact with it is via an undocumented XML format; and I haven't found any way to do truly procedural volumes or geometry (i.e. via a density function callback specified by the user).
Right now I have an extremely patched version of Povray, but I'd love to switch to something faster.
I did look at it, but the API appears to be completely undocumented, there aren't any precompiled libraries for it (the ordinary binaries don't contain the API), and it's painfully complicated to build.
The nicest renderer I've found, API-wise, was OSPRay, which was simple and elegant to use... but doesn't support volumetric rendering in any way. I have in the past got this working, more or less, with Mitsuba with some custom code but I'd rather like to avoid customisations. They're too much work to maintain.
I personally think Mitsuba is the dark horse of free renderers. Blazing fast, good highly modular design, good docs, super easy Python bindings, GPU preview. The only problem is it's a one-man show, and the one man seems to have been doing other things lately. Nevertheless it's the renderer I think deserves to become the "standard".
The performance of OpenCL has generally been fine for me, particularly on AMD GPUs, but I have to say I think CUDA is a lot simpler to work with.
OpenCL is one of those things that I never felt fully comfortable working with, but I felt productive in CUDA after a week or two. Granted, I learned them in that order, so it's possible that CUDA got an unfair head-start, but I stand by my initial thesis.
I tried picking up OpenCL then CUDA as well and had a similar experience.
CUDA simply feels less hacked together to me. It's like working with a well-thought-out and documented code base (CUDA) vs working with a library that has little documentation and contradictory syntax and formats (OpenCL).
IMO that's why AMD is losing the "deep learning" battle, it's just not easy to develop using OpenCL. At least, not as easy as it should be.
Losing? When it comes to TensorFlow et al they haven't really shown up for the fight.
It's a vicious cycle though: since everything uses CUDA/cuDNN, everyone buys Nvidia cards, which means the developer interest in making AMD cards fast isn't there.
If AMD GPU cores were in phones etc, they would probably be getting TF support for client work at least. Heck, considering they sold off that dept to Qualcomm, which is getting supported...
There's really no reason AMD themselves couldn't hire 5 developers to integrate with TensorFlow. I agree it's a vicious cycle, but I feel if they release support for tensorflow it wouldn't take long to shift people from Nvidia to AMD. AMD has a significantly cheaper offering, and no company wants to be tied by only one supplier.
Considering how small a dent that would make in their bottom line (financially), it's really amazing they haven't done it yet... The potential upside is easily in the tens (if not hundreds) of millions of dollars.
So definitely, I agree they have intentionally decided not to support deep learning. For the life of me, I don't know why.
> There's really no reason AMD themselves couldn't hire 5 developers to integrate with TensorFlow.
The problem is that it isn't just TensorFlow. It is also Caffe, Theano, Torch, etc., etc., etc.. For deep learning, Nvidia GPU acceleration is almost a given.
If AMD can write an abstraction layer similar to cuDNN, and write the initial integration for TensorFlow and Caffe, I think they'd gain enough community support to finish out the rest.
80% of deep learning users use the TensorFlow, Caffe, or Torch frameworks.
I can't get into the weeds of it because of some NDA chicanery, but there are a couple vector transformations that I have to do on a not-quite-big-data-but-also-too-big-for-regular-CPU set of data. The initial version of the project was doing some basic stuff in Python, but that proved to be slow so I rewrote most of it in Haskell and Accelerate.
The question should be is NVIDIA supporting FreeBSD yet?
And the answer is: so far, no. And it likely won't happen; they support Linux because it is a large enough target for their scientific customers, and FreeBSD just does not have the market share there.
Agreeing on a kernel ABI? ha ha not gonna happen :D
There are open source drivers. It's only on Nvidia that the open driver (nouveau) is seriously worse than the proprietary one. For Intel GPUs, the open source driver is the only driver. For AMD, there are some proprietary drivers, but they are becoming obsolete; the fully open amdgpu stack is getting better and better all the time.
I don't quite get this page; isn't it more accurate to say 'AMD on par with Nvidia'? It seems that for AMD they use OpenCL and for Nvidia CUDA; but you can run OpenCL on Nvidia too (1.2 only, apart from experimental, partial 2.0 support in the very latest drivers, but still).
I mean, there are numerical libraries that run 2x as fast on Nvidia compared to their most optimized OpenCL implementations, because they use 'GPU assembly' specific to Nvidia cards; how does that fit the 'OpenCL on par with CUDA' claim? It depends on how much effort is spent optimizing for a given platform, not on which API is used...
I'm working in OpenCL myself, but it's frustrating that I'll never get as much performance as I would with CUDA, even though I'm using GTX GPUs myself.
Blender is a really fun program to use. Tons of tutorials and information on YouTube, etc.
I think education could make more use of 3D programs like this to help with algorithm visualization.
I made this video about a worker in tech with Blender.
The worker is slaving away at his terminal, writing code that creates the 'feed' of apps/entertainment/media/etc. for the insatiable appetite of society (represented by the somewhat-similar-to-a-hungry-hippo character in the depths).
Who are these other two glowing beings? What do they represent? My friends have tried to guess some explanations, but I'll let each audience member decide for themselves.
Would it be a completely stupid idea to write a CUDA-based OpenCL back end? I.e., an OpenCL-to-CUDA translator, so you can program your kernels in one single language but still get the benefit of the Nvidia CUDA compiler?
Or are their machine models so different that that is an unreasonable thing to even try?
It's likely better to just write or use libraries that abstract the CUDA/OpenCL level away for generic GPU-type tasks. Unless you have really strict performance requirements or are writing a very specific piece of GPU code, in which case I can't imagine an OpenCL-to-CUDA translator would handle that case well either.
They both have very similar ideas at the core of the APIs; CUDA then adds a bunch of stuff on top for ease of use and performance, which in some cases is tightly coupled to the hardware. It would be a lowest-common-denominator situation for a lot of those CUDA features.
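A toy illustration of that "similar core" (a sketch of the common subset, not a real translator): for simple kernels, a handful of macros is nearly enough to compile the same source as either OpenCL C or CUDA.

```c
#ifdef __CUDACC__                 /* compiled by nvcc as CUDA */
  #define KERNEL  __global__
  #define GLOBAL                  /* no address-space qualifier needed */
  __device__ static size_t get_global_id(int dim) {
      (void)dim;                  /* sketch: only dimension 0 handled */
      return blockIdx.x * (size_t)blockDim.x + threadIdx.x;
  }
#else                             /* compiled by an OpenCL C compiler  */
  #define KERNEL  __kernel
  #define GLOBAL  __global
  /* get_global_id() is an OpenCL C built-in */
#endif

/* One kernel source, two back ends; the hard part is everything CUDA
 * layers on top (textures, dynamic parallelism, warp intrinsics, ...). */
KERNEL void scale(GLOBAL float *x, float a)
{
    size_t i = get_global_id(0);
    x[i] = x[i] * a;
}
```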
As the other poster said, you need Clover, and then you can use clinfo [1] to check you have everything installed and working.
I can't say for sure what the latest state of Blender is; somebody on Freenode #radeon mentioned a few months ago that Blender was failing to compile its OpenCL kernel, while before that somebody mentioned it as working, but quite a bit slower than with the proprietary driver. I suggest trying it yourself and reporting any bugs you encounter as blocking [2].
The 3D artists using GPU production rendering need at least four top-of-the-line cards for their workstation. My workstation crams five 980 Tis into the case (two off the board with PCIe risers).
Joining the community using this approach requires not only the hardware and the ability to build it, but also new rendering software and a lot of time to learn a new approach/mindset/workflow. The best software available is crucial; it is worth every penny to invest in the best rendering software when entering this environment. Right now there are three renderers that matter, and none of the GPU-specific rendering packages support OpenCL. The one exception is V-Ray, the last-gen maverick of rendering engines; V-Ray's future in GPU rendering could be bright if the new companies don't entirely outpace them in GPU development. Either way, everyone actually using this approach in the real world is investing all of their time, money, and energy into Nvidia right now.
The devs at Redshift, my chosen renderer, insist OpenCL is not even close to having what they need.
Pseudo-realtime feedback could actually advance the craft to a new era and Nvidia is carrying the entire ecosystem.
Just out of curiosity (it's not my field), what are those 3 renderers that matter right now?
Somehow I read your comment as saying V-Ray is not among them, being last-gen. I remember that years ago there was a lot of buzz about Arnold (and it was justified to some extent AFAIR, at least judging by the opinions of the pleased 3D crowd), but maybe it's last-gen too now? Many years ago there was Brazil, but quick googling shows it's only for Rhino now? I hadn't heard about Redshift till now, though.
The three GPU renderers that matter are V-Ray RT, Octane, and Redshift.
CPU renderers are a different story but Arnold is right up there with V-Ray. CPU renderers are tried and true, reliable and robust.
V-Ray is the only one whose company makes both CPU and GPU renderers. The GPU version can render most of the same shaders, but the backward-compatibility requirements slow its progress. Its OpenCL support is always behind CUDA as well.
Edit: here's the post https://news.ycombinator.com/item?id=13379597