[SOLVED] Uploading a somewhat large file eats all the RAM

Summary

Uploading a somewhat large file eats all the RAM.

Steps to reproduce

On Mattermost 4.4.3, upload a file of ~100 MB and watch htop go red.

Expected behavior

Well, that the upload completes without exhausting the server's memory.

Observed behavior

Hi,

Okay, first of all, I fully realize this issue is mainly caused by the fact that I have a pretty modest server. I have deployed Mattermost on a VM with 1.5 GB of RAM and two vCPUs @ 2.60 GHz. I know it's not what the doc recommends, but for 10 people, it does the job really well.
Until I try to upload, as I said above, any file of about a hundred megabytes. Mattermost seems to first copy the file into RAM and then write it to disk, probably to speed things up when handling several connections at once, which I perfectly agree with.

What I don’t understand is that, even with more than a GB of free memory, I’m still unable to upload a 98 MB file. The server still goes crazy: bin/platform climbs to a full GB of used memory, my load average jumps to 3 or 4, until the engine times out and aborts the request (the server doesn’t crash, though). The end user sees their upload fail, and I see this in the logfile:

api/v4/files:SqlFileInfoStore.Save code=500 rid=xn3dbz9u87rhpcchxuwtx1shao uid=4xqp6pd9hfrejnmpea96977z7o ip=192.168.0.254 We couldn’t save the file info [details: context deadline exceeded]

But I don’t think it’s actually an SQL issue. As I said above, the whole system is overloaded, which only then causes the MySQL process to timeout.

Is there any way to write directly to disk? To a temp file or something? I know it’s not the optimal solution, but I need storage over speed, especially since I only have about ten users.

Or maybe it’s something else entirely and I’m just completely wrong; in that case, please help me. xD

Have a good day

Hi @Tangeek,

I’m not sure what your specific issue would be here, but we definitely don’t handle large files well, so that is likely it. If I remember correctly, we load the file into memory, and if it’s an image, we make a couple of copies (also in memory) to generate a thumbnail and such. That’s part of the reason we use a default file size limit of 50 MB.
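For reference, that limit is the `MaxFileSize` setting (in bytes) under `FileSettings` in `config.json`; if I remember the config layout right, the default looks like this:

```json
{
  "FileSettings": {
    "MaxFileSize": 52428800
  }
}
```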

I can check whether we have a ticket to improve that, but in the meantime, I’d recommend using something like Google Drive or Dropbox and sending links back and forth.

Hi,

Thank you for the honest reply! :slight_smile: Knowing that the file is indeed copied into RAM, I can safely say it’s “just” a memory issue on my part. I actually feel better knowing that; I was afraid it was a major memory leak. I don’t have enough RAM, eh, I can live with that.

If it could be improved, that would be most welcome, but I understand the need to manage uploads that way, so if I’m the only one who has complained about it, consider it low priority. (Maybe “if the file is not a multimedia file we need to analyze, write directly to disk” could be an option or something?)
But I do think it would be nice if this behavior were detailed in the doc; I don’t remember reading it anywhere.

In the meantime, we did just what you recommended: I set up a Seafile server next to Mattermost, so large files can be uploaded there.

Thanks again for the reply. :slight_smile:

Hi @Tangeek,

Pleased your issue is resolved and glad hmhealey could help :slight_smile:

Thanks for the information.