The paradigm change in power computing
12 years ago
So, I bought a new computer, which is not very exciting in itself, but what's interesting is that it represents the paradigm change that will affect us as 3D enthusiasts.
In previous years, I was able to replace the old machine after three years with a new one at the same price and triple the render power. That's more or less what we get from Moore's Law. And it held true for about a decade after I switched to PC systems (having been on Amiga before, I can't make a direct comparison).
Now, I did get my render speedup of 3.5. Which looks good at first. But.
But.
-- I waited five years before I bought a new machine, not three. That should theoretically yield me a much higher factor.
-- I bought a more expensive system than I used to.
-- I had to overclock it to get to this performance.
All these points show clearly that I had to put more effort into getting the new machine to the "accustomed" speedup. And let's not forget that a major part of the speedup in the last years stems from adding processor cores. My last machines went up from one to two to four to six cores (plus hyperthreading).
The paradigm is shifting. The market no longer supports boosting desktop speed and power on a yearly basis. The need for an ever-faster computer is over for most buyers. Only a niche part of the market, like us renderers, power gamers, or simulation users, still needs more computing capacity. The overall trend is going in a different direction:
-- Energy saving - the new Ivy Bridge generation is not even faster than Sandy Bridge; it just has lower power consumption
-- Tablets - less power, longer battery life, different needs
-- Specialist chips (as in graphics cards)
Well, that's not really a novel observation. You can read it in every article that discusses the shrinking PC market. But it makes me wonder how it will affect the rendering efforts of hobbyists and enthusiasts.
-- The move away from higher clock speeds toward more cores already hurts every task that utilizes only a single core. Better parallelization is needed for all the simulation, hair, expression, real-time preview, and particle modules.
-- Games are already deeply into graphics card utilization. Yet, CUDA or OpenCL are not as widespread in the 3D rendering process as they could be.
-- Intelligent algorithms are needed for a dynamic load/unload of proxies and referenced objects, perhaps even a blend between various detail levels on the fly.
-- Scenes need to be constructed more efficiently, with tools allowing us to concentrate on the detail while not losing sight of the overall scene.
-- Rendering on several computers at the same time needs to be easier and hassle-free (this mostly concerns certain companies whose licensing conditions are crappy, since just about every 3D package already offers distributed rendering at least at the scene level); a minimal frame-splitting sketch follows this list.
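Since just about every package can at least split an animation by frames, here is a minimal sketch of how that kind of distribution can be scripted, assuming Blender's command-line renderer (-b, -s, -e, -a are its real flags), a shared file system, and SSH access to the nodes; the host names and the .blend path are placeholders.

    # Minimal sketch: split an animation's frame range across several machines
    # and launch Blender's command-line renderer on each node via SSH.
    import subprocess

    HOSTS = ["node01", "node02", "node03"]          # hypothetical render nodes
    BLEND_FILE = "/shared/projects/scene.blend"     # assumed to be on shared storage
    FRAME_START, FRAME_END = 1, 300

    def split_range(start, end, parts):
        """Divide [start, end] into roughly equal consecutive chunks."""
        total = end - start + 1
        size = total // parts
        chunks = []
        for i in range(parts):
            s = start + i * size
            e = end if i == parts - 1 else s + size - 1
            chunks.append((s, e))
        return chunks

    procs = []
    for host, (s, e) in zip(HOSTS, split_range(FRAME_START, FRAME_END, len(HOSTS))):
        # blender -b: run without UI, -s/-e: frame range, -a: render the animation
        cmd = ["ssh", host, "blender", "-b", BLEND_FILE,
               "-s", str(s), "-e", str(e), "-a"]
        procs.append(subprocess.Popen(cmd))

    for p in procs:
        p.wait()

Each node renders its own slice of the animation, so the only "merging" afterwards is collecting the image files.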
All of this is in the works already (I am one impatient guy, yes) - announced, in progress, sometimes even available. What's really exciting (or annoying, depending on your disposition) is that the paradigm change in power computing also forces a paradigm shift in rendering. We will need to put more thought into modularity, proxy creation, and referencing; even our baking and caching habits need to change. I am really curious when the first inventive company will come along with an application that helps with structure and efficiency -- and no, I have no idea what that will look like.
Bottom line: with devices becoming cheaper and less power hungry - if not faster - it might be time for a fan-out strategy in the hobby range too. The sad thing is that I don't know ANY render software that can distribute a single frame to multiple machines... so the per-frame time might increase, but the overall render time for animations might decrease. Also: hobbyists might share their rendering power as a "render farm" of sorts. It would only need a platform to automatically distribute project and model files and collect the results.
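To put rough numbers on the fan-out idea (all figures below are assumptions, purely for illustration):

    # Illustrative numbers only: one fast workstation vs. a fan-out of cheaper,
    # slower devices for a 300-frame animation. All figures are assumptions.
    frames = 300
    workstation_min_per_frame = 4.0   # minutes per frame on the single fast machine
    cheap_min_per_frame = 12.0        # minutes per frame on one cheap device (3x slower)
    cheap_devices = 8

    single_total = frames * workstation_min_per_frame              # 1200 minutes
    fanout_total = (frames / cheap_devices) * cheap_min_per_frame  # 450 minutes

    print("single workstation: {:.0f} min total at {} min/frame".format(
        single_total, workstation_min_per_frame))
    print("{} cheap devices:    {:.0f} min total at {} min/frame".format(
        cheap_devices, fanout_total, cheap_min_per_frame))

Per frame, each cheap device is slower than the workstation, but eight of them working in parallel still finish the animation in a fraction of the time.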
Then there are render farms that make offering rendering capacity their business model; it's safe to assume that they will keep your data safe as well. -- Advantage: they take care of the render technicalities and already have their machines set up. Problem: it costs an arm and a leg compared with local machines. And there may be issues with proprietary plugins, or commercial add-ons that they don't have and don't want to buy just for your job.
If you distribute rendering capacity among several volunteers on a "render for me and I will render for you" basis, it depends on your relationship with the other people. Ideally these are all your friends and will not steal your models. However, a network beyond just your friends is more effective. -- Advantage: rendering is ideally almost free (you need to factor in the power consumption). Problem: there may be issues with installations, plugins, or add-ons on the remote machine. There is a risk of getting infected with viruses if the render job uses active content (e.g. self-made scripts). The network may not be available at all times. You may need a fixed IP to communicate, or a central server with a fixed IP that all the data passes through. Also, the "social component" of such a network needs maintenance -- communication, group hugs, and back pats.
BURP is certainly an interesting idea in the latter direction. In theory this would be an enormous advantage for Blender; in practice, however, the success of such a network depends on the ratio of CPU cycles offered vs. used. The composition of the user base is essential, too. In other volunteer projects (like World Community Grid) a few projects benefit from a huge volunteer base. In BURP, it looks as if the volunteers are 3D practitioners themselves and will therefore want to render their own projects sooner or later. This skews the aforementioned ratio...
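The offered-vs-used ratio can be put into a back-of-the-envelope model (all numbers invented, only to show how quickly a practitioner-heavy user base tips the balance):

    # Back-of-the-envelope model of a volunteer render network. All figures invented.
    volunteers = 1000
    donated_core_hours = 40      # idle core-hours each volunteer offers per week
    submitters = 300             # volunteers who also submit their own render jobs
    job_core_hours = 200         # core-hours an average submitted job needs per week

    offered = volunteers * donated_core_hours   # 40000
    used = submitters * job_core_hours          # 60000

    print("offered: {} core-hours/week, used: {} core-hours/week".format(offered, used))
    print("ratio offered/used: {:.2f}".format(offered / used))  # below 1.0 the queue only grows

With a user base made up mostly of 3D practitioners, the demand side grows along with the supply side, which is exactly the skew described above.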
I certainly hope that the competition from Blender will sooner or later convince all major 3D software producers to offer free render nodes; otherwise it will be quite difficult to make good use of shared render networks... except, of course, for Blender itself.
Since I've always used Blender, I got Cycles for free. Check this: https://www.youtube.com/watch?v=12njO5FAYKk
Now it has hair support in the development version! https://www.youtube.com/watch?v=I_Q200SBjGo
> Rendering on several computers at the same time needs to be easier and hassle-free
There's a built-in add-on in Blender for that. http://wiki.blender.org/index.php/D.....ance/Netrender
Officially it doesn't support single-frame distributed rendering yet, but I have a simple solution for that (PM me if you're interested).
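For what it's worth, one generic way to split a single frame (not necessarily the solution hinted at above) is border rendering: each machine renders a horizontal strip of the same frame, and the strips are stitched together afterwards. A minimal sketch against Blender's Python API (property names as used in the 2.6x series); the script name and the strip arguments are my own invention:

    # Run on each machine as:
    #   blender -b scene.blend --python render_strip.py -- <strip_index> <strip_count>
    # Afterwards the strips still have to be stitched together in a compositor or image tool.
    import sys
    import bpy

    # everything after "--" on the command line is passed through to the script
    argv = sys.argv[sys.argv.index("--") + 1:]
    strip_index, strip_count = int(argv[0]), int(argv[1])

    render = bpy.context.scene.render
    render.use_border = True
    render.use_crop_to_border = True                 # save only the strip, not a full-size image
    render.border_min_x, render.border_max_x = 0.0, 1.0
    render.border_min_y = strip_index / strip_count
    render.border_max_y = (strip_index + 1) / strip_count
    render.filepath = "//strip_{:02d}".format(strip_index)   # path relative to the .blend file

    bpy.ops.render.render(write_still=True)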
I am actually using Cinema4D, but the development there points in the same direction. (Recently, Maxon was looking to hire developers with CUDA experience... so I'll just wait and see...) While the net renderer for C4D can do frame-wise distributed rendering, it is still not capable of distributing a single frame and having another machine render a few buckets... *sigh*
Initially, I thought that if you are rendering animations, it doesn't make much of a difference whether you distribute a single frame or multiple frames - in fact, the former adds organisational overhead. But on closer look, there are quite a few cases where, even in animation development, you want to render single images as fast as possible, e.g. one-frame light tests.
VRay, also available for C4D, is developing distributed rendering of both types. In fact, the integrations with other major applications already have this feature; the C4D version just lags behind. *double sigh* CUDA support is being developed for VRay as well.
Unfortunately, all the commercial applications cost money, and multi-machine rendering may cost even more, depending on the license model. Even worse, if you are using plugins, these might add to the cost by having stricter render-farm license restrictions than the host application. Blender definitely has a huge advantage there!
Moore's law was always about value for money: computing power doubled every 18 months for any given amount of money you spent. Having hit a usefulness plateau, Moore's law is now halving the cost of computers every 18 months instead. Or it would have, if not for the recession and the HDD and chip fabrication factories getting flooded. Even so, I could replace my current laptop for half the money I spent on it three years ago.
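In the ideal case (no recession, no flooded factories), the arithmetic looks like this:

    # Idealized Moore's-law arithmetic: value doubles every 18 months, so after
    # 36 months the same performance should cost a quarter of the original price.
    months = 36
    doublings = months / 18            # 2.0
    price_factor = 0.5 ** doublings    # 0.25
    print("after {} months: same performance at {:.0%} of the original price".format(
        months, price_factor))

Getting "only" half price after three years instead of a quarter is roughly the shortfall described above.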
Unfortunately, current chip trends are as much about physics as they are about customer needs. Silicon simply doesn't do too well past the 3GHz stage. Hopefully graphene may change that.
Memory is also speeding up and getting cheaper, and SSDs are twice as fast as they were a few years ago, when they were already twice as fast as the best HDD. Both have a dramatic effect on render speeds and on the thresholds for complex scenes. Cloud computing can also give me access to as much power as I want for short periods of time, again for much lower cost than if I bought the hardware myself. And Blender at £0 is already infinitely cheaper than trueSpace was at £100.
My phone is also (theoretically) as powerful now as my first laptop, which I did simple CGI on. In a few years' time, I'll be able to buy a hundred phones as powerful as my current one for a few pounds each. (The batteries will be shot, but the CPUs won't be.) Plug them into the mains, run them off wifi, and I'll have a pretty powerful and compact render farm. Hopefully by then some decent inroads will have been made into parallelism.
I had thought about building some kind of render farm from older hardware that I can get for cheap, or even have sitting around. However, this turned out to be a vexing thing. Even my previous computer, which is absolutely fine and which I used comfortably for 3D until a few months ago, has less than a third of the power of my current machine. So for every three frames that my current one does, the older system does one - a quarter of the total workload. That's not an impressive factor. If I add the computer before that, I gain a tenth of the current computing power... that already makes no sense whatsoever.
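Spelled out as relative throughput (using the rough power ratios above):

    # Relative throughput when adding the older machines to the current one,
    # using the rough power ratios from the paragraph above.
    current = 1.0          # current machine as the baseline
    previous = 1.0 / 3.0   # previous machine: about a third of the power
    older = 1.0 / 10.0     # the machine before that: about a tenth

    with_previous = current + previous
    with_both = with_previous + older
    print("adding the previous machine: {:.2f}x total, it carries {:.0%} of the work".format(
        with_previous, previous / with_previous))
    print("adding both older machines:  {:.2f}x total".format(with_both))

A third more throughput for the noise, power draw, and admin of a second box is indeed not an impressive factor.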
I could upgrade the older machine(s). Since a render node needs no graphics card (no GPU rendering...) the power supplies should work fine, as well as the case, drives, HDs... changing the CPU, mainboard, and memory to competitive versions would perhaps cost 500-600 Euros. Would be a cute project, but considering that my current machine still doesn't render 24/7, it's neither urgent nor even necessary.
Maybe in a year or two... and after that, it's time to review the whole process and setup again...
But as you say, few people render 24/7.
Yes, here it is: http://helmer.sfe.se/
Other people used the Mac Mini as render node, also not taking up much space. Bit more expensive though, I think.