How to get faster rendering

24 Replies
5 Users
0 Likes
1,771 Views
0
Topic starter

Hello

I have a small question about rendering. The project I'm working on is becoming more and more complex, and even a small 30-second render can take up to 10 minutes. What render settings can I use to make it faster and get a preview without needing final quality?

In the render dialog I've tried changing the video quality parameter (mp4), but it doesn't change much apart from the file size of the video...

Many thanks in advance for your help

17 Replies
0

The only way I know is to render from proxies.

0

If you have Intel or AMD graphics, you can try the rendering presets x264-vaapi.mp4 or hevc-vaapi.mp4. The defaults are not very high quality, so you have to raise the quality by adjusting parameters (mostly bitrate). If you have Nvidia graphics you can use the ...-nvenc.mp4 presets, which are also more efficient than VAAPI.
If you have a CPU with many cores/threads you can set up a render farm; see: https://cinelerra-gg.org/download/CinelerraGG_Manual/Render_Farm_Usage.html.

0
Topic starter

Thank you very much for your invaluable help.

I haven't tried it yet, but I understand the principle of proxies. It's really great ^^

I have an nvidia card, so I'll use h264-nvenc.mp4 and also try h265-nvenc.mp4

As for "set up a render farm": I have a Ryzen 7, and if I've understood correctly, by opening several instances of cin on my computer I could take advantage of it, which could be interesting. I'll look into this possibility. I also have a laptop, so I'll see whether a network over the Internet is possible ...

0

@chapolin

And of course, you can also try Background Rendering (and even in conjunction with the Render Farm)

    https://cinelerra-gg.org/download/CinelerraGG_Manual/Background_Rendering.html

0
Topic starter

I've done quite a bit of testing, so I have a few questions.

When I use h264.mp4, the 'Quality' setting (which affects the file size of the rendered video) works inversely: I decrease the number to increase the quality and increase it to decrease the quality. For example, 0 gives high quality with a large file, and 24 gives lower quality with a smaller file. But when I use h264-nvenc.mp4 or h265-nvenc.mp4 I no longer understand the quality setting, because the 'Quality' parameter doesn't seem to work the same way, so I guess I should use 'Bitrate' instead?
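For what it's worth, one way to reason about the 'Bitrate' field: ignoring audio tracks and container overhead, a constant video bitrate translates directly into file size. A minimal sketch of that arithmetic (using decimal megabytes):

```python
# Rough file size implied by a constant video bitrate
# (ignores audio tracks and container overhead).
def estimated_size_mb(bitrate_mbps, duration_s):
    # Mbit/s * seconds = Mbit; divide by 8 to get megabytes
    return bitrate_mbps * duration_s / 8

# A 30-second clip at 8 Mbit/s is about 30 MB of video data:
print(estimated_size_mb(8, 30))  # 30.0
```

So with bitrate-driven encoders you pick the size/quality trade-off directly, instead of through a 0-to-51-style quality scale.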

I've finally figured out more or less how to use the render farm (it's quite complicated because I struggle with English), so I'm working without an NFS network. To be sure: since I'm on a single computer, should I put 'localhost' in the 'Hostname' parameter? Is that right?

I did a test (with localhost as hostname) and 3 instances of cin (commands: cin -d 10000 && cin -d 10001 && cin -d 10002), and I think it worked well; I got several render files. But since I wanted to test on a small selection (my project is really big), I made a one-minute selection in the timeline and set the 'Render range' parameter in the render dialog to 'Selection'. As a result, I got render files that don't really correspond to my selection, which is a bit odd. I guess it's better to use 'Project' for 'Render range'?
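For reference, the three node commands above can also be generated programmatically. This is only a sketch: the `render_node_commands` helper is hypothetical, and it assumes the `cin` binary is on the PATH.

```python
import subprocess

# Hypothetical helper (not part of Cin): build the commands for N
# render-farm client nodes listening on consecutive ports, matching
# the "cin -d <port>" usage above. Assumes `cin` is on the PATH.
def render_node_commands(n_nodes, base_port=10000, binary="cin"):
    return [[binary, "-d", str(base_port + i)] for i in range(n_nodes)]

def start_nodes(cmds):
    # Popen returns immediately, so all nodes run concurrently.
    # (In a shell, backgrounding each command with `&` achieves the
    # same; chaining with `&&` would wait for each process to exit
    # first, unless `cin -d` forks itself into the background.)
    return [subprocess.Popen(c) for c in cmds]

print(render_node_commands(3))
```

Each node then just needs its port listed in the render farm's node configuration.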

phylsmith2004 17/06/2024 8:52 pm

@chapolin 

But as I wanted to test on a small selection because my project is really big, in the timeline I made a small selection of one minute and in the render box, in the “render range” parameter I indicated “selection”. As a result, I got render files that don't really correspond to my selection, which is a bit weird.

So glad you reported this bug which has been in the code since 2019.  Fortunately, Andrew-R has found a fix for it and it will be in next month's release.  So now it will start/stop according to your In/Out points or a selection.  The whole project and single frame work as they have been.

0
Topic starter

So tonight I rendered with 15 instances of cin plus the main project, and the render farm worked (using h264). It worked perfectly for the picture, and I went from 1 hour 32 minutes down to 44 minutes with the render farm. Concatenating the different output files also works perfectly for the picture.

So that was it: you have to render the whole project for it to work.

On the other hand, concatenation doesn't work well for the sound: there are audio artifacts at the splice points.
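For anyone wanting to script the concatenation step outside Cin, one common approach is ffmpeg's concat demuxer, which takes a list file naming each segment. A minimal sketch (the segment names are hypothetical; note that stream-copy concatenation cannot repair audio glitches that already exist at the segment boundaries):

```python
# Sketch: build the list file for ffmpeg's concat demuxer, which can
# then join the segments losslessly with:
#   ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mp4
def concat_list(filenames):
    # The demuxer expects one "file '<name>'" line per segment.
    return "".join(f"file '{name}'\n" for name in filenames)

print(concat_list(["out001.mp4", "out002.mp4"]), end="")
```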

0

@chapolin

As you already discovered, "localhost" is correct, but it made me realize that I should put an actual example of using this method in the manual.

"but as I wanted to test on a small selection because my project is really big, in the timeline I made a small selection of one minute and in the render box, in the “render range” parameter I indicated “selection”. As a result, I got render files that don't really correspond to my selection, which is a bit weird. I guess you'd better use  “project” for "render range" ?"

About the above, I'm guessing everyone has just been rendering the whole project and never tried "Selection" or "In/Out points". I will have to test whether it always worked this way. However, you can still cut some time off a render-farm job by making a selection or using in/out points, because it does stop at the ending point, and the first file is always empty.

I have not heard the audio issue yet.

 
0
Topic starter

Speaking of the manual and "Render Farm Usage", if I may make a suggestion: the chapter on multicore usage, which is probably the case of a single computer and therefore of most modern users, comes in 5th place ( https://cinelerra-gg.org/download/CinelerraGG_Manual/Render_Farm_Usage.html ).

Personally, since I studied the chapters in reading order, I started by setting up an NFS server on my computer and spent a lot of time on it before realizing, after reading the multicore chapter, that it was unnecessary. For me it's not wasted effort, as I've learned NFS and could network with my laptop, but I think it could be discouraging for many other users, since setting up NFS is complex. It might be useful to put the multicore chapter first.

Another quick question: I've successfully rendered using h265-nvenc.mp4 with a single cin. But in the case of a multicore render farm, I guess it's better to use all 16 cores of my Ryzen 7 with h264.mp4?

This post was modified 4 months ago by chapolin
0

Good proposal for the manual; I'll see about changing it.

Using bitrate to arrive at the desired quality is the fundamental and most important method. Each codec has its own custom parameters that can make it more convenient to specify quality. Unfortunately, everyone implements these parameters in their own way, so we have to read the documentation for each codec. There are the official "white papers", but they are too long and complex, at least for me. A couple of suggestions:

For hevc_nvenc settings:
https://gist.github.com/nico-lab/c2d192cbb793dfd241c1eafeb52a21c3
For hevc via software (cpu) see x265 parameters:
https://x265.readthedocs.io/en/master/introduction.html#

If you do "GPU vs CPU" rendering tests, let us know the results; maybe we can put them in the manual.
If you use all the CPU cores in the render farm, be careful about temperature, especially for long renders.
User fary54 created a script for the render farm that cycles the cores involved so that there is some cooling. See:
https://www.cinelerra-gg.org/bugtracker/view.php?id=575

0
Topic starter

Thank you very much for the suggestions for h265. It is indeed very vast. I've already found some indications and carried out some small conclusive tests.

For the time being I have to stop working on the video for a few days to take care of its music, but I plan to test the differences between h264, h265 and h265-nvenc. Let's see if there's a big difference between my NVIDIA GeForce GTX 1050 Ti, which is a small card, and the Ryzen 7 processor with 32 GB of RAM. I didn't really note the times, but if I remember correctly the rendering times for h264 and h265-nvenc were roughly similar (around 1 hour 30 minutes for my project). To be confirmed.

I've downloaded fary54's script and will try to get it working 🙂
Is there any way to save all CIN settings in a reusable profile? (I ask because if I use the script, I think it will change my CIN settings.)

This post was modified 5 times 4 months ago by chapolin
0

@chapolin

About "Is there any way to record all CIN settings in a reusable profile at some point?" 

You can make a backup copy of your $HOME/.bcast5 to save all of your settings, index files, etc. before you change things.  Most of the actual settings are in $HOME/.bcast5/Cinelerra_rc so that is the file that is most important for settings.  The .idx, .xml, and .mkr files in the $HOME/.bcast5 directory are there to keep track of the video files you have loaded in the past -- this means that if you backup ALL of $HOME/.bcast5, when you restore it those files will look like what they did before any changes.  If you restore JUST Cinelerra_rc, your settings should be like they were originally AND plugin settings, etc. will have any new values you may have changed while working on the new stuff.

0
Topic starter

Thank you very much, I understand about .bcast5. I think I'll make myself a little bash script to do backups from the command line; that will be handy.
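A backup script along those lines could be as simple as the sketch below (the `backup_bcast5` function name and the timestamp suffix are arbitrary choices, not anything Cin provides):

```shell
#!/bin/sh
# Minimal sketch: copy the Cinelerra-GG settings directory
# (defaults to $HOME/.bcast5) to a timestamped backup copy.
backup_bcast5() {
    src="${1:-$HOME/.bcast5}"
    dest="$src.bak.$(date +%Y%m%d-%H%M%S)"
    cp -a "$src" "$dest" && echo "backed up $src -> $dest"
}
```

Restoring is then just copying the backup directory (or only Cinelerra_rc, as described above) back into place.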

So today I wanted to spend a little more time with CIN to study the following page: https://cinelerra-gg.org/download/CinelerraGG_Manual/GPU_hardware_decoding.html
And so, in relation to my usual uses and what I've written above, I had a big surprise when I discovered the following lines in my terminal:
“HW device init failed, using SW decode.”

I tried a few different things, and in my project, where there are a lot of animations based on PNG images, I realized that it's the PNGs that trigger the SW fallback, even when I've selected HW (cuda or vdpau).
At first I thought that the GPU simply wasn't working on my system (but then why could I still use h265-nvenc?) and at the same time the CPU and GPU monitoring was really unclear.

Luckily, I realized that launching an empty project didn't present the message: “HW device init failed, using SW decode.”
So I launched a new project with only a pure mp4 video coming directly from my camera and finally everything really works.

1) I tested a render with h264 (preset=veryslow) + HW=none; in this case CPU usage is at 100% almost all the time (all 16 cores of the Ryzen 7), GPU = 0% and Video Engine Utilization (nvidia) = 0%. Rendering took 2 minutes 40 seconds. (On my system, beware of the CPU monitor, as it itself uses the GPU.)

2) Then I tested the same project with h265-nvenc (preset=lossless) + HW=cuda; in this case the GPU was at 11% and Video Engine Utilization (nvidia) = 100%. The CPU cores were mostly between 0 and 20%, but 2 to 4 cores regularly reached 100% for long stretches. Rendering took 8 seconds.

In both cases, I rendered a 31-second selection and the renderings are approximately 820 MB in size.

Note that outside of rendering, all my cores stay below 5%.
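As a sanity check on those figures: an 820 MB file for a 31-second selection implies a very high average bitrate, which is consistent with the veryslow/lossless presets used. The arithmetic:

```python
# Average bitrate implied by an 820 MB render of a 31-second clip
# (decimal megabytes; container and audio overhead ignored).
def avg_bitrate_mbps(size_mb, duration_s):
    return size_mb * 8 / duration_s  # MB -> Mbit, per second

print(round(avg_bitrate_mbps(820, 31), 1))  # 211.6 Mbit/s
```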

This post was modified 4 months ago by chapolin
0
Topic starter

I tested it too:

- Rendering with h264-nvenc (preset=lossless) + HW=vdpau. GPU = 13% and Video Engine Utilization (nvidia) = 89%. 4 or 5 CPU cores are active up to around 40-60%, with one or two rising to 100%.
Rendering took 7 seconds.

- Rendering with h264-nvenc (preset=lossless) + HW=cuda. GPU = 13% and Video Engine Utilization (nvidia) = 69%. 4 or 5 CPU cores are active up to around 40%, with one or two rising to around 83%.
Rendering took 7 seconds.

In both cases the render is 859 MB, for the same 31-second selection as above.

 

With vdpau I get this alert: [swscaler @ 0x7fb7b0d27a40] deprecated pixel format used, make sure you did set range correctly

 

This post was modified 3 times 4 months ago by chapolin
andreapaz 03/06/2024 9:32 pm

@chapolin  Your results are very interesting, thank you. The "range" I think refers to the "YUV color range" found in

Settings --> preferences --> appearance

If it is set to "jpeg", try setting it to "mpeg", or vice versa. Then see if you still get the swscaler error.

0
Topic starter

set to “jpeg” try putting it to “mpeg”

I've tried several combinations, but the warnings are still there. So I use cuda, which doesn't produce any warnings.

 

 

HW device init failed, using SW decode.

About this message, what happens next? Does rendering use h264 (I imagine with settings similar to those initially selected for h265-nvenc?) for problematic files like the PNGs, and h265-nvenc at the same time for the files that work?

 

phylsmith2004 05/06/2024 1:59 pm

@chapolin 

"HW device init failed, using SW decode" just means that instead of using your graphics board to handle the video, it will just use the software the same as it would have if not setting the hardware capability.
