Updated on May 24, 2024.

On May 23, 2024, we hosted our IntelliJ IDEA Livestream episode with Kenneth Kousen, where we discussed working with open-source LLMs on your local hardware and using the newest Java features to access these models.

Session abstract

General AI models, like ChatGPT, Claude AI, or Gemini, have a broader scope, and their answers to questions can be correspondingly imaginative. But many vendors don’t want to be held responsible for awkward answers, so they add “guard rails” to limit the responses. Those limitations often restrict the models so much that they become unable to answer reasonable questions.

In this talk, we’ll discuss the Ollama system, which allows you to download and run open-source models on your local hardware. That means you can try out so-called “uncensored” models, with limited guard rails. What’s more, because everything is running locally, no private or proprietary information is shared over the Internet. Ollama also exposes the models through a tiny web server, so you can access the service programmatically.
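As a taste of what programmatic access looks like, here is a minimal sketch in Java that targets Ollama’s default local endpoint (`http://localhost:11434/api/generate`). The model name `llama3` is just an example; substitute any model you have pulled locally. The actual network call is commented out so the snippet runs even without a local Ollama instance.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OllamaDemo {
    // Ollama listens on localhost:11434 by default; "stream": false asks
    // for a single JSON response instead of a stream of chunks.
    static HttpRequest buildGenerateRequest(String model, String prompt) {
        String body = """
                {"model": "%s", "prompt": "%s", "stream": false}""".formatted(model, prompt);
        return HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
    }

    public static void main(String[] args) throws Exception {
        HttpRequest request = buildGenerateRequest("llama3", "Why is the sky blue?");
        System.out.println(request.uri());
        System.out.println(request.method());

        // With Ollama running locally, uncomment to send the request:
        // HttpResponse<String> response = HttpClient.newHttpClient()
        //         .send(request, HttpResponse.BodyHandlers.ofString());
        // System.out.println(response.body());
    }
}
```

Because everything stays on localhost, the prompt and the model’s answer never leave your machine.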

We’ll look at how to do all of that, and how to use the newest Java features, like sealed interfaces, records, and pattern matching, to access AI models on your own hardware.
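To illustrate how those language features fit together, here is a small self-contained sketch (the `ChatResult` hierarchy and `llama3` model name are invented for the example, not from the talk): a sealed interface restricts the possible result types, records hold the data, and a pattern-matching switch deconstructs them exhaustively.

```java
public class ModelResponseDemo {
    // A sealed interface lets the compiler verify the switch below covers every case.
    sealed interface ChatResult permits Answer, Failure {}
    record Answer(String model, String text) implements ChatResult {}
    record Failure(String reason) implements ChatResult {}

    static String describe(ChatResult result) {
        // Record patterns (Java 21) deconstruct the components right in the switch;
        // no default branch is needed because the hierarchy is sealed.
        return switch (result) {
            case Answer(String model, String text) -> model + " says: " + text;
            case Failure(String reason) -> "request failed: " + reason;
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new Answer("llama3", "The sky scatters blue light.")));
        System.out.println(describe(new Failure("server not running")));
    }
}
```

If a new record were added to the sealed hierarchy, the switch would stop compiling until it handled the new case, which is exactly the safety net you want when modeling API responses.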

Asking questions

Kenneth will try to answer all of your questions during the session. If we run out of time, we’ll publish the answers to any remaining questions in a follow-up blog post.

Happy developing!
