Uploading a somewhat large file eats all the RAM.
Steps to reproduce
On Mattermost 4.4.3, upload a file of ~100 MB and watch htop go red.
Expected behavior
Well, that it works.
Okay, first of all, I fully realize this is mainly caused by the fact that I have a pretty modest server. I have deployed Mattermost on a VM with 1.5 GB of RAM and two vCPUs @ 2.60 GHz. I know it's not what the docs recommend, but for 10 people it does the job really well.
Until I try to upload, as I said above, any file of about a hundred megabytes. Mattermost seems to buffer the whole file in RAM before writing it to disk, probably to speed things up when handling several connections at once, which I completely understand.
What I don't understand is that, even with more than a GB of free memory, I'm still unable to upload a 98 MB file. Memory usage still goes crazy: bin/platform climbs past a GB of used memory, my load average jumps to 3 or 4, and eventually the engine times out and aborts the request (the server doesn't crash, though). The end user sees their upload fail, and I see this in the log file:
api/v4/files:SqlFileInfoStore.Save code=500 rid=xn3dbz9u87rhpcchxuwtx1shao uid=4xqp6pd9hfrejnmpea96977z7o ip=192.168.0.254 We couldn’t save the file info [details: context deadline exceeded]
But I don't think it's actually an SQL issue. As I said above, the whole system is overloaded, and that is what causes the MySQL query to time out.
Is there any way to write uploads directly to disk? In a temp file or something? I know it's not the fastest solution, but I need low memory use over speed, especially since I have only about ten users.
Or maybe it's something else entirely and I'm just completely wrong, in which case please help me. xD
Have a good day!