Windows version is here

Great news - you can now use Suverenum on Windows.

Run AI locally and keep your document chats private. Everything happens on your device.

:right_arrow: Get Windows version

Try it out and share your thoughts. We’re here for any feedback or questions!

Could you tell me, please: does this version have an auto-update feature?

Guys, why can’t I manually select the neural network I want? What’s the point of such restrictions? I thought I would get a more convenient version of Ollama, but so far it resembles a beautiful iPhone that you can’t control at all.

Hi @Songraf,

Thanks for giving it a shot and sharing honest thoughts!

Good news on both points:

Auto-updates: Already there. We ship updates roughly every week, so you’ll get improvements automatically.

Model selection: You can change models anytime in Settings. We auto-pick during onboarding to help most users get started quickly, but technical users like yourself can switch to any model you want.

We’re early in our journey, so your feedback really matters. It helps us understand what to improve and prioritize. What are you planning to use Suverenum for? Your use case would help us build features that actually matter to you.


But I can’t install different versions like I do for Ollama. The system won’t let me manually install anything heavier. Yes, I only have 6GB of video memory, but I have 40GB of RAM and I don’t mind a little lag for the sake of quality. Will this be possible in the future?

Thanks for the feedback and for clarifying your setup.

You’re right - we currently limit model selection based on GPU memory to ensure smooth performance for most users. But we hear you on wanting more control, especially when you’re willing to trade speed for quality and have plenty of RAM to work with.

We’re planning to add an advanced/dev mode that will let technical users like you override these restrictions and install any model you want. This is coming soon.

Your feedback helps us prioritize this feature, so thank you for being specific about your needs.

Stay tuned - we’ll announce it here when it’s ready.

@Songraf We just released developer mode. It also includes layer offloading to the CPU and granular manual model selection. Please give it a try - we're looking for feedback!
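For anyone curious what layer offloading means in practice, here's a rough sketch of the idea (this is an illustration, not Suverenum's actual implementation, and the layer sizes are made-up numbers): as many transformer layers as fit go to the GPU, and the rest run on the CPU, trading speed for the ability to load larger models.

```python
# Illustrative sketch of GPU/CPU layer offloading (NOT Suverenum's real code).
# Greedy split: put as many layers as fit in free VRAM on the GPU,
# run the remainder on the CPU.

def split_layers(n_layers: int, layer_bytes: int, vram_free: int) -> tuple[int, int]:
    """Return (gpu_layers, cpu_layers) for a simple greedy split."""
    gpu_layers = min(n_layers, vram_free // layer_bytes)
    return gpu_layers, n_layers - gpu_layers

# Hypothetical example: a model with 46 layers of ~300 MB each
# against 6 GB of free VRAM.
gpu, cpu = split_layers(46, 300 * 1024**2, 6 * 1024**3)
print(gpu, cpu)  # -> 20 26: most layers fall back to the CPU
```

With only 6 GB of VRAM, most of a large model's layers end up on the CPU, which is exactly the "a little lag for the sake of quality" trade-off described above.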

Thank you so much! By the way, I hadn't opened the app in a very long time. Now that I have, it isn't offering me an auto-update. Perhaps my version is too old, or maybe it's a bug. Unfortunately, I couldn't find my version number anywhere, so I can't tell you which version I had. To update the program, I had to click the red button in the settings that deletes everything, after which the program reset and updated. It's a good thing I didn't have any important chats. Otherwise, it would have been sad :sweat_smile:

Thank you one more time for your answer :folded_hands:t2:

Unfortunately, even with these settings, I cannot use models larger than 4B parameters. In Ollama, I run Gemma 27B, and with my 40GB of RAM that is quite acceptable for me.
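For what it's worth, back-of-envelope math supports the claim that a 27B model fits in 40GB of RAM. A rough sketch (the 4-bit quantization and the 1.2x runtime-overhead factor are assumptions for illustration, not Suverenum's or Ollama's actual numbers):

```python
# Rough estimate of RAM needed to host quantized model weights.
# The 1.2x overhead factor (KV cache, activations, runtime buffers)
# is an assumed fudge factor, not a measured value.

def est_ram_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    weight_gb = params_billion * bits_per_weight / 8  # 1B params at 8 bits ~ 1 GB
    return weight_gb * overhead

# Gemma 27B at 4-bit quantization:
print(round(est_ram_gb(27, 4), 1))  # roughly 16 GB, well under 40 GB of RAM
```

So a 4-bit 27B model should indeed fit comfortably in system RAM; the bottleneck is memory bandwidth and CPU throughput, not capacity.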