CUDA

CUDA Video Decoder Basics

Nvidia's CUDA Video Decoder has been around for a while now, roughly a decade or more at this point. It's a video decoder built on CUDA, Nvidia's parallel computing platform, which takes advantage of the GPU's ability to churn through massive amounts of work in a very short period of time. What this means for your video is that you can add extra DirectShow filters during playback that improve quality, for example by doubling or even quadrupling the video resolution and then applying downscaling algorithms that render the final frame to your screen. Pretty cool, eh? Now you can use your GPU for more than just gaming!

CUDA helps with the video decoding process by reading a compressed video stream from a file and decompressing it into frames that can be played back for you. Video frames are not usually stored in their raw form, since the amount of space needed to store every single rendered frame would be huge and not very practical. To solve part of that problem, videos are compressed into a specific format that reduces the total size of the frames by using an algorithm to remove duplicated data or identify patterns, which cuts down the amount of space needed to store the video.

For example, if someone emailed me "oooooooooo", one way to compress that would be to write it as "o×10", which is 4 characters instead of 10. That's a 60% size reduction, which adds up if you send billions of emails every second of the day, all day. This example might not be the most accurate picture of how video codecs work, but I hope the point is clear: it takes time to do the math and decode the message once it has been compressed. Having the GPU handle this while your CPU does other stuff is a good way to compute better.
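
Here's a tiny run-length encoding sketch in Python to show the idea. Real video codecs are far more sophisticated than this, and the function names are just made up for illustration:

  # Toy run-length encoder/decoder: compression shrinks the data,
  # but decoding it back costs compute time, which is the work a
  # hardware decoder can take off the CPU's plate.
  def rle_encode(text):
      """Collapse runs of repeated characters into (char, count) pairs."""
      if not text:
          return []
      runs = []
      current, count = text[0], 1
      for ch in text[1:]:
          if ch == current:
              count += 1
          else:
              runs.append((current, count))
              current, count = ch, 1
      runs.append((current, count))
      return runs

  def rle_decode(runs):
      """Expand (char, count) pairs back into the original string."""
      return "".join(ch * count for ch, count in runs)

  message = "oooooooooo"                 # 10 characters
  encoded = rle_encode(message)          # [('o', 10)] -- smaller to store
  assert rle_decode(encoded) == message  # decoding recovers the original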

CUDA is also a developer toolkit containing libraries that help offload certain kinds of computation from the CPU to the GPU. These CUDA libraries can be used from many programming languages, like Python or Java, through bindings and wrappers. The idea is that you can potentially speed up existing or new applications by orders of magnitude, which means faster and more efficient computing. Nvidia has basically shaped the way a lot of computing will work over the next few years.
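
As a rough sketch of what calling the GPU from Python can look like, here's a small example using the third-party Numba package (one of several Python bindings around CUDA, not part of the toolkit itself). It assumes you have a CUDA-capable GPU, the Nvidia driver, and Numba plus NumPy installed:

  # Adds two large arrays on the GPU. Each GPU thread handles one element.
  import numpy as np
  from numba import cuda

  @cuda.jit
  def add_arrays(a, b, out):
      i = cuda.grid(1)          # global index of this GPU thread
      if i < out.shape[0]:
          out[i] = a[i] + b[i]

  n = 1_000_000
  a = np.arange(n, dtype=np.float32)
  b = np.arange(n, dtype=np.float32)
  out = np.zeros(n, dtype=np.float32)

  threads_per_block = 256
  blocks = (n + threads_per_block - 1) // threads_per_block
  add_arrays[blocks, threads_per_block](a, b, out)   # kernel runs on the GPU

  print(out[:5])   # [0. 2. 4. 6. 8.]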

CUDA and DXVA2 can be used together to handle different types of operations on the video stream. You don't have to pick one or the other; if your player can use both DXVA2 and CUDA for decoding or encoding, you should enable both and let the player decide which one handles a given job, which is how most media players configure themselves anyway. Media players like MPC-HC and PotPlayer will automatically try to use whatever hardware decoding they can, so if a certain operation isn't supported by DXVA2 but is supported by CUDA, the player will usually take advantage of that and use the CUDA decoder instead.
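
If you're curious which hardware acceleration methods your own system exposes, one quick and generic way to check (assuming ffmpeg is installed and on your PATH; this isn't specific to MPC-HC or PotPlayer) is to ask ffmpeg from a short Python script:

  # Prints the hardware acceleration methods (e.g. cuda, dxva2, d3d11va)
  # that the local ffmpeg build supports. Assumes ffmpeg is on PATH.
  import subprocess

  result = subprocess.run(
      ["ffmpeg", "-hide_banner", "-hwaccels"],
      capture_output=True,
      text=True,
      check=True,
  )
  print(result.stdout)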

CUDA links and more information

  • Official Nvidia CUDA about page
  • Official Nvidia CUDA Video Decoder Docs
  • I learned some of this information from this nice guide