Publish Date: 30 December 2025
Last update: 30 December 2025
Introduction
Installing & Setup of LM Studio
Installing LM Studio
Setting up LM Studio
Using LLM for translation in Luna Translator
Setup Translator Settings in Luna Translator
Advanced Setup: Host and Client (Optional)
Conclusion
Note: Please use Ctrl + F to jump to the topic you are interested in
Last time, I covered how to use Luna Translator. If you have not read that yet, please check it out “here (Windows)”, as I won’t cover it again here. This time, I am going to cover how to use an LLM for translation in Luna Translator, in case you don’t have a reliable internet connection. The best part is that you don’t need to be a programmer to do it. However, to run an LLM (Large Language Model) locally, you will probably need a GPU with at least 8 GB of VRAM to have a good experience; otherwise, it will be slow.
LM Studio is probably the easiest way for anyone to run an LLM locally without too much hassle, and it is what I personally use as well. Below is the setup guide.
1. Installing LM Studio
Step 1: Download LM Studio via this “Link”
Note: LM Studio is also available for Mac and Linux too
Step 2: Click on the exe and install LM Studio
Step 3: Done
2. Setting up LM Studio
Just a heads up: the tutorial below probably looks a bit different from what you will see on a first-time setup, as I had already set LM Studio up before and uninstalled it just to go through the process once more for this tutorial.
Step 1: Check the technical level setting and choose Power User (you will be asked this the first time you launch the app)
Note: If you have used it before and want to switch, you can change it from the bottom left corner
Step 2: Enable “Local LLM Service” in the app settings under the Developer tab
Step 3: Download an LLM model from the “model search” in the settings
Note 1: Red circle to search for an LLM model, green circle to download it
Note 2: My personal recommendation for an LLM model is “aya expanse 8b”
Note 3: Most of you out there probably have a GPU with 8 GB of VRAM, so any model larger than 6 GB in size (roughly the size of most 8B models) will be very slow to run
Step 4: Set up the local server and copy the address (http://xxx.x.x.x:1234)
Note 1: Green circle for adjusting how long the LLM model stays loaded before ejecting (= freeing up your VRAM)
Note 2: The blue circle is very important; copy the address, as it will be used in the Luna Translator setup
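(Optional, for the curious) If you want to double-check that the server is actually up before touching Luna Translator, you can ask it from any HTTP client. LM Studio’s local server speaks the OpenAI-compatible API, so a small sketch like the one below should list the models you have downloaded. It uses Python with the requests library; the 127.0.0.1 address is just the usual default, so swap in whatever address your LM Studio actually shows.

```python
# Minimal sketch: list the models LM Studio's local server exposes.
# Assumes the usual default address; replace it with the address
# shown in your own LM Studio developer tab.
import requests

BASE_URL = "http://127.0.0.1:1234/v1"  # example default, adjust to yours

resp = requests.get(f"{BASE_URL}/models", timeout=10)
resp.raise_for_status()

# The OpenAI-compatible /v1/models response wraps models in a "data" list.
for model in resp.json()["data"]:
    print(model["id"])  # e.g. the model you downloaded in Step 3
```

If this prints the name of the model you downloaded, the refresh button in Luna Translator (next section) should find it too.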
1. Setup Translator Settings in Luna Translator
Step 1: Go to the Translator settings and click on the LLM setting (red circle)
Note: Turn on the setting marked with the green circle, then turn off all the settings marked with the blue circle
Step 2: Enter the http://xxx.x.x.x:1234 address in the red circle field, then click refresh in the green circle field to pick the LLM used for translation
Note: You can customize the prompt for the LLM, such as how to present the content, in the “system prompt” field (see the sketch after Step 3 for what such a request looks like under the hood)
Step 3: Enjoy
Note: Please don’t forget to eject the LLM model after reading, so that you won’t run into a laggy gaming experience due to a lack of VRAM when playing other games
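(Optional, for the curious) Luna Translator talks to LM Studio through the same OpenAI-compatible endpoint you pasted above, so a translation request looks roughly like the sketch below. This is only an illustration: the address, the model id, and the system prompt wording are placeholders I made up, not what Luna Translator literally sends.

```python
# Rough sketch of a translation request against LM Studio's
# OpenAI-compatible /v1/chat/completions endpoint.
# The address, model id, and system prompt are illustrative only.
import requests

BASE_URL = "http://127.0.0.1:1234/v1"  # adjust to your LM Studio address

payload = {
    "model": "aya-expanse-8b",  # hypothetical id; pick one from /v1/models
    "messages": [
        {
            # The "system prompt" field in Luna Translator fills a role
            # like this one: instructions on how to present the content.
            "role": "system",
            "content": "You are a translator. Translate the user's Japanese "
                       "text into natural English. Output only the translation.",
        },
        # The line hooked from the game is sent as the user message.
        {"role": "user", "content": "こんにちは、世界"},
    ],
    "temperature": 0.3,
}

resp = requests.post(f"{BASE_URL}/chat/completions", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])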
I guess someone out there will say, “I have this powerful PC (Host) that I want to run my LLM on, and I want to use a weaker laptop or tablet (Client) to read VNs with translation.” I’ve got you covered. However, I only recommend doing this on your local network.
Step 1: In the LM Studio developer settings, turn on “Serve on Local Network” and note down the new address (http://xxx.x.x.x:1234)
Step 2: Following the same idea as in the Luna Translator LLM setting above, put the new API endpoint address into the LLM translator setting on the weaker PC running Luna Translator (see the connection-check sketch after Step 3)
Step 3: Enjoy
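(Optional, for the curious) If Luna Translator on the client cannot find the host, you can test the connection with the same kind of request as before, just pointed at the host’s address. The 192.168.1.50 below is a made-up example IP; use the address LM Studio actually shows after you enable “Serve on Local Network”.

```python
# Quick reachability check from the client (weaker PC) to the host.
# 192.168.1.50 is a made-up LAN IP; use the one LM Studio displays.
import requests

HOST_URL = "http://192.168.1.50:1234/v1"

try:
    resp = requests.get(f"{HOST_URL}/models", timeout=5)
    resp.raise_for_status()
    print("Host reachable, models:", [m["id"] for m in resp.json()["data"]])
except requests.RequestException as err:
    # Common culprits: wrong address, the host's firewall blocking
    # port 1234, or "Serve on Local Network" still turned off.
    print("Cannot reach the host:", err)
```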
Hope you found the guide useful. If you want to learn more about how to set up Luna Translator, check “Here (Windows)” and “Here (Linux) [coming soon]”. Happy reading.