Publish Date: 28 Jan 2026 Last update: 28 Jan 2026
Introduction
Demonstration Devices
Installing & Setting Up LM Studio
Download the LM Studio AppImage
Setting up LM Studio
Using an LLM for translation in Luna Translator
Setup Translator settings in Luna Translator
Advanced Setup: Host and Client (Optional)
Conclusion
Note: Please use Ctrl + F to jump to the topic you are interested in
Last time, I covered how to use Luna Translator. If you have not read that yet, please check it out “here (Linux)”, as I won’t cover it again. This time, I am going to cover how to use an LLM for translation in Luna Translator, in case you don’t have a reliable internet connection. The best part is that you don’t need to be a programmer to do it. However, to run an LLM (Large Language Model) locally, you will probably need a GPU with at least 8 GB of VRAM to have a good experience. Otherwise, it will be slow.
PC: GMKtec M7
RAM: 32 GB DDR5 4800 MHz
OS: Fedora 43 KDE
Although I am doing the demo on Fedora, the guide applies to SteamOS and Bazzite as well, as I have used those OSes too.
LM Studio is probably the easiest way for anyone to run an LLM locally without too much hassle, and it is how I personally do it as well. Moreover, you don’t have to open the scary terminal during installation, as I will be using the AppImage version. Below is the setup guide.
1. Downloading the LM Studio AppImage
Step 1: Download LM Studio via this “Link”
2. Setting up LM Studio
Step 1: Click and launch the AppImage
Step 2: Choose Power User as your level
Step 3: Download an LLM model from “Model Search” in the settings
Note 1: Red circle to search for an LLM model; green circle to download
Note 2: My personal recommendation for an LLM model is “aya expanse 8b”
Note 3: Most of you out there probably have an 8 GB VRAM GPU, so any model larger than 6 GB in size (roughly the size of most 8B models) will run very slowly.
Step 4: Set up the local server and copy the address (http://xxx.x.x.x:1234)
Note 1: Green circle for adjusting how long the LLM model stays loaded before being ejected (ejecting frees up your VRAM)
Note 2: The blue circle is very important; copy the address, as it will be used when setting up Luna Translator
Note 3: If possible, reserve the IP address for the computer in your router settings (“DHCP reservation”) to prevent a different IP address from being assigned to the computer later.
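If you want to confirm the server address works before touching Luna Translator, LM Studio’s local server exposes an OpenAI-compatible API. Below is a minimal Python sketch that builds a chat completion request against it; the address and the model identifier (`aya-expanse-8b`) are placeholders, so substitute the address you copied and the model name LM Studio shows for your download.

```python
import json
from urllib.request import Request, urlopen

# Assumption: LM Studio's default local server address; replace with
# the address you copied from the server panel.
BASE_URL = "http://127.0.0.1:1234"

def build_chat_request(model: str, text: str) -> Request:
    """Build an OpenAI-style chat completion request for LM Studio's server."""
    payload = {
        "model": model,  # placeholder model id; use the one LM Studio lists
        "messages": [{"role": "user", "content": text}],
    }
    return Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("aya-expanse-8b", "こんにちは")
print(req.full_url)  # http://127.0.0.1:1234/v1/chat/completions

# To actually send it, LM Studio's server must be running:
# with urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```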
3. Setup Translator settings in Luna Translator
Step 1: Go to the Translator settings and click on the LLM setting (red circle)
Note: Turn on the setting marked with the green circle, then turn off all the settings marked with the blue circles
Step 2: Enter http://xxx.x.x.x:1234 in the red circle field, then click refresh at the green circle field to pick the LLM used for translation
Note: You can customize the prompt for the LLM, such as how it should present the content, in the “system prompt.”
Step 3: Enjoy
Note: Please make sure to eject the LLM model after reading, so that you won’t run into a laggy gaming experience from a lack of VRAM when playing other games
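To illustrate what the “system prompt” field does under the hood: in OpenAI-style APIs, the system prompt is sent as a separate message before each line of game text. The prompt wording below is just an example of the kind of instruction you might paste in, not Luna Translator’s default.

```python
import json

# Example system prompt (an assumption, not Luna Translator's default):
# instructs the model to translate and output nothing else.
SYSTEM_PROMPT = (
    "You are a translator. Translate the following Japanese text into "
    "natural English. Output only the translation, with no explanations."
)

def make_messages(source_text: str) -> list[dict]:
    """Pair the system prompt with one line of game text."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": source_text},
    ]

print(json.dumps(make_messages("こんにちは、世界"), ensure_ascii=False, indent=2))
```

A more constrained system prompt (e.g. “keep character names in romaji”) goes a long way toward consistent output, since the model sees it before every line.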
I guess someone out there will say, “I have this powerful PC (host) that I want to run my LLM on, and then use a weaker laptop or tablet (client) to read VNs with translation.” I’ve got you covered. However, I only recommend doing this on your local network.
Step 1: In LM Studio’s developer settings, turn on “Serve on Local Network” and note down the new “http://xxx.x.x.x:1234”
Step 2: Follow the same idea as when setting up the LLM in Luna Translator: put the new API endpoint address into the LLM translator setting on the weaker PC running Luna Translator
Step 3: Enjoy
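The only thing that changes between the local and the host/client setup is the host part of the endpoint; the port and path stay the same. A small sketch of that rewrite, where the LAN IP below is a placeholder (use the one LM Studio shows after enabling “Serve on Local Network”):

```python
from urllib.parse import urlsplit, urlunsplit

def to_lan_endpoint(local_url: str, host_ip: str) -> str:
    """Replace the host in a copied endpoint URL with the host PC's LAN IP.

    Assumes the URL includes an explicit port, as LM Studio's does.
    """
    parts = urlsplit(local_url)
    return urlunsplit(
        (parts.scheme, f"{host_ip}:{parts.port}", parts.path, parts.query, parts.fragment)
    )

# Placeholder LAN IP for illustration; yours will differ.
print(to_lan_endpoint("http://127.0.0.1:1234", "192.168.1.50"))
# → http://192.168.1.50:1234
```

This is also why the DHCP reservation mentioned earlier matters: if the host PC’s LAN IP changes, every client’s endpoint address breaks.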
Hope you find the guide useful. If you want to learn more about how to set up Luna Translator, check “Here (Windows)” and “Here (Linux)”. Happy reading.