
WoolyAI - Hypervise & Maximize GPU Infra
Run unified, GPU-portable (Nvidia & AMD) PyTorch ML containers
62 followers
WoolyAI is now available as software that can be installed on-premise or on cloud GPU instances. With WoolyAI, you can run your PyTorch ML workloads in unified, portable (Nvidia and AMD) GPU containers, increasing GPU throughput from 40-50% to 80-90%.
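Because the WoolyAI container presents a unified GPU device to the framework, workload code can stay device-agnostic. A minimal sketch using only the standard PyTorch API (nothing here is WoolyAI-specific; the same script runs unchanged whether the container is backed by an Nvidia or AMD GPU, or falls back to CPU):

```python
import torch

# Select whatever accelerator the container exposes; a unified CUDA-style
# device means standard PyTorch device selection works unchanged.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(128, 10).to(device)
x = torch.randn(32, 128, device=device)
y = model(x)
print(y.shape)  # torch.Size([32, 10])
```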
WoolyAI - Hypervise & Maximize GPU Infra
@masump We capture the specific optimization and transfer it to the vendor-specific equivalent if one exists. For example, if the PTX contains a specific optimization, we do transfer it.
Streak Hunter
Congrats on the launch! Hopefully, the three years of development pay off
BITHUB
Love the vision behind this! Wishing you all the success on Product Hunt and beyond. 🌟
ByteNite
WoolyAI - Hypervise & Maximize GPU Infra
@fabcairo Thanks for the feedback. Performance is close to native, with some overhead from the utilization metrics that the technology layer collects. The ability to parallelize execution of concurrent workloads is much greater than with native CUDA.
WoolyAI is a CUDA abstraction layer. On top of this layer, we have built a GPU cloud service (WoolyAI Acceleration Service) with "Actual GPU Resources Used" billing, not "GPU Time Used" billing, so data scientists can run PyTorch applications from a CPU-only environment.
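The difference between the two billing models can be sketched as a toy cost comparison. The rate names and the formula below are hypothetical illustrations, not WoolyAI's actual pricing:

```python
# Illustrative sketch: "resources used" vs. "time used" billing.
# All rates and the cost formula are hypothetical, not WoolyAI's pricing.

def time_based_cost(wall_clock_hours: float, rate_per_gpu_hour: float) -> float:
    # Conventional billing: pay for the full reservation, even when idle.
    return wall_clock_hours * rate_per_gpu_hour

def usage_based_cost(core_seconds: float, vram_gb_seconds: float,
                     core_rate: float, vram_rate: float) -> float:
    # Usage billing: pay only for compute and VRAM consumed while
    # kernels actually execute.
    return core_seconds * core_rate + vram_gb_seconds * vram_rate

# A mostly idle job: a 1-hour reservation where kernels ran only 6 minutes
# (360 s) while holding 8 GB of VRAM.
reserved = time_based_cost(1.0, 2.00)                   # $2.00
used = usage_based_cost(360, 360 * 8, 0.002, 0.00005)   # $0.864
print(reserved, used)
```

The intuition is that interactive or bursty data-science workloads leave the GPU idle most of the time, so metering actual kernel execution rather than wall-clock reservation changes the cost substantially.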
WoolyAI is turbocharging my AI projects! 🚀 Loving the speed and ease. Like if you're into AI acceleration! Wishing you all the best with your AI ventures!
Ghost Jobs
Interesting. What do you think about including VRAM as a pricing metric as well, or do you mean VRAM when you talk about memory?
Anyway, good luck to you!
WoolyAI - Hypervise & Maximize GPU Infra
@ghost_jobs Our current utilization model measures VRAM during kernel execution time, not idle time.