Instructor: CampusX
Language: Hindi
Validity Period: 1095 days
This course shows you how to use Ollama and similar tools to run, customize, and connect open-source large language models on your own system. You’ll learn what Ollama is, how its models work, and how you can use it to build real Gen AI projects—without relying on cloud APIs.
We start with the basics: how models are structured, what commands to use, and how to run them locally. Then we move to practical work—using the Ollama library in Python, creating your own model files, and calling models through the REST API.
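As a taste of the REST API work covered here, the sketch below builds a request to Ollama's `/api/generate` endpoint and sends it using only the standard library. The model name `llama3` and the prompt are placeholders; the call itself assumes `ollama serve` is running locally and the model has been pulled.

```python
import json
import urllib.request

# Ollama's local server listens on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str, stream: bool = False) -> dict:
    """Build the JSON body expected by Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": stream}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running model and return its reply.

    Requires `ollama serve` running and the model pulled,
    e.g. `ollama pull llama3` (model name is an example).
    """
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (with a server running):
#   generate("llama3", "Explain what a Modelfile is in one sentence.")
```

The course also covers the same call through the `ollama` Python library, which wraps this endpoint for you.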
Once you’re comfortable, you’ll learn how to make Ollama do more: generate embeddings, use tools, and build RAG (Retrieval-Augmented Generation) systems that pull real data into your model’s answers.
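The core of a RAG system is retrieving the documents whose embeddings sit closest to the query embedding. The sketch below shows that retrieval step with tiny hand-made vectors standing in for real embeddings; in practice you would get the vectors from Ollama's embeddings endpoint instead.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, docs, k=1):
    """Return the texts of the k documents most similar to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

# Toy vectors stand in for real embeddings here.
docs = [
    {"text": "Ollama runs models locally.", "vec": [1.0, 0.1, 0.0]},
    {"text": "Paris is the capital of France.", "vec": [0.0, 1.0, 0.9]},
]
query_vec = [0.9, 0.2, 0.1]  # pretend embedding of "How do I run a model locally?"
context = retrieve(query_vec, docs, k=1)
```

The retrieved text is then prepended to the prompt, so the model answers from your data rather than from memory alone.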
Finally, we’ll cover advanced setups—running multiple models together, handling concurrency, and using GPUs for faster performance.
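Concurrency on the client side can be as simple as fanning prompts out over a thread pool, since each request blocks while the server works. The stub `ask_model` below stands in for a real call such as one made through the `ollama` library; the thread-pool pattern around it is what the concurrency material builds on.

```python
from concurrent.futures import ThreadPoolExecutor

def ask_model(prompt: str) -> str:
    """Stub standing in for a real model call (e.g. via the ollama library).

    A real call blocks until the server replies, so issuing several from
    threads lets the server handle them in parallel (Ollama exposes
    server-side knobs such as OLLAMA_NUM_PARALLEL for this).
    """
    return f"answer to: {prompt}"

def ask_many(prompts, max_workers=4):
    """Send several prompts concurrently; replies come back in order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(ask_model, prompts))

replies = ask_many(["What is a Modelfile?", "What is RAG?"])
```

Swapping the stub for a real client call keeps the same structure; GPU use and multi-model serving are configured on the server side and covered in the advanced lessons.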
By the end of this course, you’ll know how to run, customize, and connect open-source LLMs entirely on your own machine—and build real Gen AI projects on top of them. If you want to understand how Gen AI works under the hood, this course is for you.
Course Duration: 19+ hours