Dec 26, 2024
10:53:05am
AggieWeekendCougar All-American
The answer is maybe. I'm not sure how big the YOLO models are. If you take your total RAM and subtract the size of the model's weights/parameters, that gives you a baseline for how much memory remains (if you're running it all on the GPU). Then, take your batch size (meaning the number of images you want to process at a time) and multiply it by both the original image size and the storage space needed for intermediate results in the network; that product has to fit in the remaining memory. You can scale your batch size up or down until you hit the sweet spot that maximizes throughput. A rough sketch of that arithmetic is below.
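To make the arithmetic concrete, here's a back-of-the-envelope sketch. All the GB numbers are made-up assumptions for illustration; plug in your own card size, model size, and per-image footprint.

```python
# Back-of-envelope VRAM budget, following the reasoning above.
# All sizes here are assumptions -- substitute your own numbers.

def max_batch_size(vram_gb, weights_gb, per_image_gb):
    """Largest batch that fits after the model weights are loaded."""
    remaining_gb = vram_gb - weights_gb          # baseline memory left over
    if remaining_gb <= 0:
        return 0                                 # the model alone doesn't fit
    return int(remaining_gb // per_image_gb)     # whole images per batch

# Hypothetical numbers: an 8 GB card, ~0.2 GB of weights, and roughly
# 0.5 GB per image for the input plus intermediate results.
print(max_batch_size(vram_gb=8, weights_gb=0.2, per_image_gb=0.5))  # -> 15
```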

Sometimes with a simple model, you spend more time pushing data to and from the graphics card than the computation warrants, so you can be using all of your VRAM while not maximizing the GPU's processing power. On the other hand, if your batch size isn't big enough, you aren't using all the memory or all the GPU processing. So you find the sweet spot where you are using as much GPU VRAM as possible and running as close to 100% of the GPU processing power as possible; a quick way to probe for it is sketched below.
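One way to find that sweet spot empirically is to time a few batch sizes and watch where throughput levels off or you run out of VRAM. A minimal sketch, assuming PyTorch and a CUDA GPU, with a small torchvision model standing in for whatever YOLO variant you're actually running (the 640x640 input size is also just an assumption):

```python
# Rough batch-size sweep to locate the throughput sweet spot.
# The resnet18 here is a placeholder network -- swap in your own model.

import time
import torch
import torchvision

model = torchvision.models.resnet18().cuda().eval()  # stand-in for YOLO

for batch in (1, 2, 4, 8, 16, 32, 64):
    try:
        images = torch.randn(batch, 3, 640, 640, device="cuda")
        torch.cuda.synchronize()
        start = time.time()
        with torch.no_grad():
            model(images)                         # one forward pass
        torch.cuda.synchronize()
        elapsed = time.time() - start
        print(f"batch {batch:3d}: {batch / elapsed:7.1f} images/sec")
    except RuntimeError:                          # CUDA out-of-memory lands here
        print(f"batch {batch:3d}: out of VRAM")
        break
```

While it runs, watching nvidia-smi shows how much VRAM and how much GPU utilization each batch size actually gets you.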

tl;dr: More graphics card RAM allows bigger models and bigger batches. The newer versions of YOLO (like v9) probably require more than 8GB, especially if you are trying to do semantic segmentation instead of just bounding boxes.