Fresh Proxmox install, having a dreadful time. Trying not to be dramatic, but this is much worse than I imagined. I’m trying to migrate services from my NAS (currently docker) to this machine.
How should Jellyfin be set up, LXC or VM? I don’t have a preference, but I do plan on using several Docker containers (assuming I can get this working within 28 days) in case that makes a difference. I tried WunderTech’s setup guide, which used one LXC for Docker containers and a separate LXC for Jellyfin. However, that guide isn’t working for me: curl doesn’t work on my machine, most install scripts fail, nano edits crash, and mounts are inconsistent.
My Synology NAS is mounted to the host, but making mount points to the lxc doesn’t actually connect data. For example, if my NAS’s media is in /data/media/movies or /data/media/shows and the host’s SMB mount is /data/, choosing the lxc mount point /data/media should work, right?
Is there a way to enable iGPU to pass to an lxc or VM without editing a .conf in nano? When I tried to make suggested edits, the lxc freezes for over 30 minutes and seemingly nothing happens as the edits don’t persist.
Any suggestions for resource allocation? I’ve been looking for guides or a formula to follow for what to provide an lxc or VM to no avail.
If you suggest command lines, please keep them simple as I have to manually type them in.
Here’s the hardware: Intel i5-13500 64GB Crucial DR5-4800 ASRock B760M Pro RS 1TB WD SN850X NVMe
I run Jellyfin in an LXC, so first get Jellyfin installed. Personally I would separate Jellyfin from your other Docker containers; I have a separate VM for my Podman containers. I need Jellyfin up 100% of the time, so that’s why it’s separate.
Work on the first problem: getting Jellyfin installed. I wouldn’t use Docker for it, just follow the steps for installing it on Ubuntu directly.
Second, to get the unprivileged lxc to work with your nas share follow this forum post: https://forum.proxmox.com/threads/tutorial-unprivileged-lxcs-mount-cifs-shares.101795/
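The gist of that tutorial boils down to: mount the SMB share on the Proxmox host, then hand it to the unprivileged container with a mount point. A rough sketch, with example IPs, share names, and container ID — swap in your own (the uid/gid of 100000 is the default offset unprivileged containers use, so files map to root inside the LXC):

```shell
# On the Proxmox HOST, not inside the LXC. Example values throughout.
apt install cifs-utils
mkdir -p /mnt/nas-media

# Keep the NAS password out of fstab
printf 'username=nasuser\npassword=secret\n' > /root/.smbcred
chmod 600 /root/.smbcred

# Add one line like this to /etc/fstab, then mount it:
#   //192.168.1.10/media  /mnt/nas-media  cifs  credentials=/root/.smbcred,uid=100000,gid=100000,_netdev  0  0
mount /mnt/nas-media

# Bind it into unprivileged container 101; it shows up inside as /media
pct set 101 -mp0 /mnt/nas-media,mp=/media
```

The forum post covers the permission/idmap details more thoroughly; this is just the shape of it.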
Thirdly, read through the Jellyfin docs for hardware acceleration. It’s always best practice to not just run scripts blindly on your machine.
Lastly, take a break if you can’t figure it out. When I’m stuck I always need to take a day and just think stuff over, and I usually figure out why it’s not working by doing just that.
If you need any help let me know!
So I got Jellyfin running last night as an unprivileged LXC using a community script. It’s accessible via web browser, and I could connect my NAS. Now I’m having NAS-server connection issues and “fatal player” issues on certain items. I appreciate the support, I’m going to need a lot of it haha
curl doesn’t work on my machine, most install scripts don’t work, nano edits crash, and mounts are inconsistent.
If your system is that fucked, I would wipe it and start over. And don’t run any scripts or extra setup guides, they’re not necessary.
Personally I run all my containers in a Debian VM because I haven’t bothered migrating them to anything proxmox native. But gpu accel should work fine if you follow the directions from jellyfin: https://jellyfin.org/docs/general/post-install/transcoding/hardware-acceleration/
Just make sure you follow the part about doing it in docker.
That’s where I’m at, dude. I bought into the idea of Proxmox because I was led to believe that it makes docker deployment easier…but I’m thinking it would actually work if I just used a VM
I don’t know if containers on proxmox is easy, but containers in a Debian VM is trivial.
Like docker directly on proxmox? Docker on proxmox isn’t going to be any better than docker on anything else.
VMs and LXC are where proxmox has its best integration.
Docker in a VM on proxmox, while maybe not the recommended way of doing things, works quite well though.
It may be better now but I’ve always had problems with Docker in LXC containers; I think this has to do with my storage backend (Ceph) and the fact that LXC is a pain to use with network mounts (NFS or SMB); I’ve had to use bind mounts and run privileged LXCs for anything I needed external storage for.
Proxmox is about managing VMs and LXCs. I’d just create a VM and do all your docker in there. Perhaps make a second VM so you can shuffle containers around while doing upgrades.
If you plan to have your whole setup be exclusively Docker and you have no need for VMs or LXCs, then Proxmox might be a bunch of overhead you don’t need.
I use the LXCs for simple stuff that does a bare-metal type install within them, and I use the VMs for critical services like OPNSense firewall/routers. I also have a Proxmox cluster across three machines so I can live-migrate VMs during upgrades and prevent almost any downtime. For that use case it’s rock solid. It’s a great product and it offers a lot.
If you just need a single machine and only Docker, it’s probably overkill.
Well, the plan was to use a couple VMs for niche things that I’d love to have and many services. But if I can’t get Proxmox working as advertised, I’ll throw most of that out of the window
The easiest solution if you want to have managed VMs IMHO is to just make a large VM for all your docker stuff on Proxmox and then you get the best of both worlds.
Abstracting docker into its own VM isn’t going to add THAT much overhead, and the convenience of Proxmox for management of the other VMs will make that situation much easier.
LXC for docker can be made to work, but it’s fiddly and it probably won’t gain you much in the long run.
Now, all these other issues you seem to be having with the Proxmox host itself; are you sure you have networking set up correctly, etc? curl should be working no problem; I’m not sure what’s going on there.
That’s good to know at least. I was getting anxious last night thinking that I signed up for something I’d never get running. So curl is working now…not sure why it wasn’t earlier, but I’ve used it since and it is confirmed working. And networking (as in internet connectivity) is working, but now I’m struggling with the NAS mount: it was working perfectly at first, but now it’s randomly shifting between “available” and “unknown”.
How should Jellyfin be set up, lxc or vm
Either way. I prefer lxc, personally, but to each their own. lxc I think is drastically easier, in part because you don’t need to pass through the whole GPU…
Is there a way to enable iGPU to pass to an lxc or VM without editing a .conf in nano?
You don’t need to pass the igpu, you just need to give the LXC access to render and video groups, but yes, editing the conf is easiest. I originally wrote out a bunch here, then remembered there is a great video.
https://www.youtube.com/watch?v=0ZDr5h52OOE
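For reference, the conf-file route the video covers usually comes down to a few lines like these (example container ID; the group IDs for render/video vary by distro, so check `getent group render video` inside the container):

```
# /etc/pve/lxc/101.conf — example ID
# 226 is the DRM device major number (/dev/dri/card0 and renderD128)
lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```

And on the “without nano” part: recent Proxmox 8.x releases can do device passthrough without touching the file, e.g. something like `pct set 101 -dev0 /dev/dri/renderD128,gid=104` from the host shell (gid being the container’s render group), or via the container’s Resources section in the GUI on newer versions — worth checking against your Proxmox version’s docs.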
My Synology NAS is mounted to the host, but making mount points to the lxc doesn’t actually connect data
Do they show up as resources? I add my mount points at the CLI personally, this is the best way imo:
pct set 100 -mp0 /mnt/pve/NAS/media,mp=/media
This is done from the host, not inside the LXC.
Does your host see the mounted NAS? After you added the mount point, did you fully stop the container and start it up again?
Edit: You can just install curl/wget/etc BTW, it’s just Debian in there.
apt install curl
Edit 2: I must have glossed over the mount part.
Dont add your network storage manually, do it through proxmox as storage, by going to Datacenter > Storage > Add, and enter the details there. This will make things a lot easier.
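For what it’s worth, adding it through the GUI just writes an entry to /etc/pve/storage.cfg, something roughly like this for a CIFS share (example names/IP; the content type you pick doesn’t matter much for media — either way Proxmox mounts the share at /mnt/pve/NAS):

```
cifs: NAS
        server 192.168.1.10
        share media
        username nasuser
        content snippets
```

That /mnt/pve/NAS path on the host is then what you point container mount points at.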
Do they show up as resources? I add my mount points at the CLI personally, this is the best way imo: pct set 100 -mp0 /mnt/pve/NAS/media,mp=/media
I’d love to check that, but you lost me…
So the NAS was added like you suggested; I can see the NAS’s storage listed next to local data. How does one command an lxc or vm to use it though?
This line right here shares it with the LXC, I’ll break it down for you:
pct set 100 -mp0 /mnt/pve/NAS/media,mp=/media
pct is the Proxmox container command; you’re telling it to set the mount point (mp0, mp1, mp2, etc.). The path on the left is the host side, /mnt/pve/yourmountname. The container side is on the right, mp=/your/path/. So inside the container, if you ran ls in the directory /your/path/, it would list the files in /mnt/pve/yourmountname.
The yourmountname part is the name of the storage you added. You can go to the shell at the host level in the GUI, go to /mnt/pve/, then enter ls and you will see the name of your mount.
So much like I was mentioning with the GPU, what you’re doing here is sharing resources with the container, rather than needing to mount the share again inside the container. Which you could do, but I wouldn’t recommend.
Any other questions I’ll be happy to help as best as I can.
Edit: forgot to mention, if you go to the container and go to the resources part, you’ll see “Mount Point 0” and the mount point you made listed there.
Are there different rules for a VM with that command? I made a 2nd NAS share point as NFS (SMB has been failing, I’m desperate, and I don’t know the practical differences between the protocols), and Proxmox accepted the NFS, but the share is saying “unknown.” Regardless, I wanted to see if I could make it work anyway so I tried ‘pct set 102 -mp1 /mnt/pve/NAS2/volume2/docker,mp=/docker’
102 being a VM I set up for docker functions, specifically transferring docker data currently in use to avoid a lapse in service or user data.
Am I doing this in a stupid way? It kinda feels like it
For the record, I prefer NFS
And now I think we may have the answer…
OK, so that command is for LXCs, not for VMs. If you’re doing a full VM, we’d mount NFS directly inside the VM.
Did you make an LXC or a VM for 102?
If it’s an LXC, we can work out the command and figure out what’s going on.
If it’s a VM, we’ll get it mounted with NFS utils, but how depends on which distribution you’ve got running on there (different package names and package managers)
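For a Debian/Ubuntu guest, the inside-the-VM version is short. A sketch with an example IP; the export path here borrows the /volume2/docker path mentioned earlier in the thread, so swap in whatever your NAS actually exports:

```shell
# Run INSIDE the VM, not on the Proxmox host
apt install nfs-common
mkdir -p /mnt/nas-docker
mount -t nfs 192.168.1.10:/volume2/docker /mnt/nas-docker

# To survive reboots, add one line like this to /etc/fstab:
#   192.168.1.10:/volume2/docker  /mnt/nas-docker  nfs  defaults,_netdev  0  0
```

The _netdev option just tells the boot process to wait for networking before trying the mount.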
Ah, that distinction makes sense…I should’ve thought of that
So for the record, my Jellyfin-lxc is 101 (SMB mount, problematic) and my catch-all Docker VM is 102 (haven’t really connected anything, and I don’t care how it’s done as long as performance is fine)
Ok, we can remove it as an SMB mount, but fair warning: it takes a few bits of CLI to do this thoroughly.
- Shut down 101 and 102
- In the Web GUI, go to the JF container, go to resources, and remove that mount point. Take note of where you mounted it! We’re going to mount it back in the same spot.
- Go to the web GUI, go to Storage, select the SMB mount of the NAS, and select Edit - then uncheck Enable.
- With it selected, go ahead and click remove
- For both 101 and 102, let’s make sure they aren’t set to start at boot for now. Go to each of them, and under the Options section you’ll see “Start at Boot”. If it says Yes, change it to No (click Edit or double-click and remove the check from the box).
- Reboot your server
- Let’s check that the mounting service is gone. Go to the host, then Shell, and enter:
systemctl list-units "*.mount"
If you don’t see mnt-pve-thenameofthatshareyoujustremoved.mount, it’s removed.
That said, I like to be sure, so let’s do a few more things.
umount -R /mnt/pve/thatshare
Totally fine if this throws an error. Next, let’s check the mounts file.
cat /proc/mounts
A whooole bunch of stuff will pop up. Do you see your network share listed there? If so, unmount that path with umount — don’t try to edit /proc/mounts itself, it’s a read-only view the kernel generates. Also check for a leftover fstab entry: nano /etc/fstab, remove the line for the share if it’s there, then ctrl+x then y to save.
Ok, you should be all clear. Let’s reboot one more time just to clear things out if you had to make any further changes. If not, let’s re-add.
Go ahead and add the NAS back using NFS in the Storage section like you did previously. You can mount to the same directory you were using before. Once it’s there, go back into the Shell, and let’s do this again:
ls -la /mnt/pve/thenameofyourmount/
Is your data showing up? If so, great! If not, let’s find out what’s going on.
Now let’s add the mount back to your container. You’ll need to add that mount point back in again with:
pct set 100 -mp0 /mnt/pve/NAS/media,mp=/media
(however you had it mounted before in that second step). Now start the container, and go to the console for the container.
ls -la /whereveryoumountedit
If it looks good, your JF container is all set and now working with NFS! Go back to the Options section and enable “Start at Boot” if you’d like it to.
Onto the VM: what distribution is installed there? Debian, Fedora, etc.?
Well, now the Jellyfin LXC is failing to boot: “run_buffer: 571 Script exited with status 2 / lxc_init: 845 failed to run lxc.hook.pre-start for container 101”
But the mount seems stable now. And the VM is Debian 12
Friend, thank you. My users and I greatly appreciate it. You just taught me how to solve one of the biggest problems I’ve been having. Just tested a movie through Jellyfin after using that cli.
Got any pointers for migrating config files from my NAS’s docker containers to Proxmox’s LXCs/VMs?
No worries!
So if you’ve got docker containers going already, you don’t need them to be LXCs.
So why not keep them docker?
Now there are a couple of approaches here. A VM will have a bit higher overhead, but offers much better isolation than lxc. Conversely, lxc is lightweight but with less host isolation.
If we’re talking the *arr stack? Meh, make it an lxc if you want. Hell, make it an lxc with dockge installed, so you can easily tweak your compose files from the web, convert a docker run to compose, etc.
If you have those configs (and their accompanying data) stored on the NAS itself, you don’t have to move them. Let’s look at that command again…
pct set 100 -mp0 /mnt/pve/NAS/media,mp=/media
So let’s say your container data is stored at /opt/dockerstuff/ on your NAS, with subdirectories of dockerapp1 and dockerapp2, and your new LXC is number 101. You have two options:
- Mount the entire directory:
pct set 101 -mp0 /mnt/pve/NAS/opt/dockerstuff,mp=/opt/dockerstuff
- Mount them specifically for each container, to get a bit more granular control:
pct set 101 -mp0 /mnt/pve/NAS/opt/dockerstuff/dockerapp1,mp=/opt/dockerstuff/dockerapp1
pct set 101 -mp1 /mnt/pve/NAS/opt/dockerstuff/dockerapp2,mp=/opt/dockerstuff/dockerapp2
Either will get you going.
I think I’m getting a grip on some of the basics here. I was trying to make a new mount for my NAS’s docker data…separate drive and data pool. In the process of repeated attempts to get the SMB mount to get accepted, I noticed my NAS’s storage isn’t working as intended suddenly.
‘cat /etc/pve/storage.cfg’ still shows the NAS, but ‘pvesm status’ says “unable to activate storage…does not exist or is unreachable”
I thought it was related to too much resource usage, but that’s not the case
What do you get putting in:
showmount <ip address of NAS>
“Hosts on 192.168.0.4:” As a novice, I get the feeling that means it’s not working
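A note on that output: showmount with just an IP lists the clients that currently have a mount from that server, so an empty “Hosts on …:” isn’t necessarily fatal. The -e flag lists the server’s actual exports, which is usually what you want. A quick checklist (assumes the nfs-common and rpcbind client tools are installed on the Proxmox host):

```shell
# Ask the NAS which paths it actually exports
showmount -e 192.168.0.4

# If that hangs or errors, check basic reachability and the NFS services
ping -c 1 192.168.0.4
rpcinfo -p 192.168.0.4
```

If showmount -e lists nothing, the export itself (or its allowed-hosts list on the Synology side) is the thing to fix before touching Proxmox storage again.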
https://tteck.github.io/Proxmox/ this is a good place to start. Also highly recommend youtube videos lots of good stuff there.
Yes, I tried a couple of those. They were giving me errors
The linked repository is unmaintained, and some of the scripts are broken as a result.
The scripts have moved to this repository.
Last time I used them they worked fine.
Try the scripts in the new one, and if they still give you errors, let me know and I’ll be happy to try and help you.
Also, please don’t run scripts from the internet without reading through them first. Even from a trusted source. You never know what random people could have written in there. 😅
Any tips on copy-pasting those commands into a console window? Every function I’ve tried has failed, but I’m willing to keep trying
It always works for me to just paste with ctrl+shift+v directly in the terminal window of the web gui.
Interesting, what browser do you use? Sounds like I may have to switch from Firefox if that’s the case because this lack of quality of life is ridiculous
I use Firefox as well, on Linux Mint, not Windows. So if you use Windows, that may be the culprit. I haven’t used it in a long time, so I don’t know if that could be the case.
Well, this is the first step in me eventually dropping Windows (assuming all goes well). I’d like to wipe my main PC and do a dual-boot situation eventually, but migrating services to an actual server takes priority…got kiddos counting on their cartoons, can’t let em down

