Optimize file attachment memory usage where possible

Start date

Due date

Description

Customer use case: The default maximum file upload size is 50MB. However, users at a large prospect often collaborate on much larger files (up to 1GB).

In our hardware requirements doc, we cite attachment size as a large factor in memory requirements. In many cases, we can optimize our attachment handling by streaming the attachment to S3 or to disk without holding the entire file in memory.

Further conversation in pre-release: https://pre-release.mattermost.com/core/pl/huod3abtmpgoxyp4uatmkyrhyc

QA Test Steps

Upload files. Suggested use cases:
- small images
- large images
- large non-images

I would specifically appreciate more testing against the S3 backend, and with plugins that process file uploads.

Checklist

Activity


Linda MitchellFebruary 13, 2019 at 8:22 PM

Chatted with Jesse about this one, and I agree that the length of time this has been soaking, plus full release tests, are sufficient for general file upload testing.

As for the large files, there's still a change pending to enable "XLarge" files: https://mattermost.atlassian.net/browse/MM-10188

So we can test as needed when that comes through. Closing this ticket; no new release tests needed.

Linda MitchellFebruary 13, 2019 at 5:41 AM

Asking JH (since he helped review the PR) for some help testing.

Lev BroukDecember 16, 2018 at 3:00 PM
Edited

To test the memory and performance improvements, run the following from the mattermost-server repo:

go test -v -run nothing -benchmem -bench UploadFile ./app

The old code paths are still in place; the new code path is UploadFileX.

Linda MitchellMarch 7, 2018 at 6:24 AM

Done

Details

Assignee

QA Assignee

Reporter

Sprint

QA Testing Areas

API
Plugins
Other (write in QA test steps)

Checklist

Created November 8, 2017 at 12:25 AM
Updated February 13, 2019 at 8:22 PM
Resolved December 13, 2018 at 10:59 PM
