Performance issues with large binary data in database
- Subject: Performance issues with large binary data in database
- From: Markus Ruggiero via Webobjects-dev <email@hidden>
- Date: Wed, 11 Sep 2019 11:12:19 +0200
A customer of mine has issues with performance. We have identified that
ec.saveChanges() sometimes takes ages to finish (tens of seconds, sometimes a
minute or more). The application is basically a document hub, nothing too
fancy. But the developer (who is no longer available) once decided to store
all documents in the database (PostgreSQL). The docs are primarily PDFs,
ranging from several hundred kilobytes up to multiple megabytes. The problems
cannot be tied directly to the size of the documents: sometimes a large doc
goes through in a couple of seconds, whereas sometimes a smaller one takes
ages. We have found that the bottleneck must be inside the editing context;
the database statements themselves go through nicely. I think performance gets
lost during EC snapshot handling in conjunction with JVM memory requirements
and garbage collection, but we have not been able to really pinpoint where the
time is lost.
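As a first measurement step, one could wrap the save call and log wall-clock time together with the heap delta, to see whether slow saves correlate with GC pressure. This is a minimal sketch in plain Java with no EOF dependency; the class and method names are hypothetical:

```java
// Hypothetical timing wrapper: measures wall-clock time and approximate
// heap-usage change across a save call. Pass e.g. () -> ec.saveChanges().
public class SaveTimer {
    public static long timedRun(Runnable save) {
        Runtime rt = Runtime.getRuntime();
        long usedBefore = rt.totalMemory() - rt.freeMemory();
        long t0 = System.nanoTime();
        save.run(); // the actual save, e.g. ec.saveChanges()
        long elapsedMs = (System.nanoTime() - t0) / 1_000_000;
        long usedAfter = rt.totalMemory() - rt.freeMemory();
        System.out.println("save took " + elapsedMs + " ms, heap delta "
                + (usedAfter - usedBefore) / 1024 + " KiB");
        return elapsedMs;
    }

    public static void main(String[] args) {
        timedRun(() -> { /* stand-in for ec.saveChanges() */ });
    }
}
```

Running the app with GC logging enabled (e.g. `-verbose:gc`) alongside such timestamps would show whether the long pauses line up with collections.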
I tend to recommend that the customer store the files on disk and keep only
the metadata in the database, relieving the editing context from handling
multi-megabyte snapshots.
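The files-on-disk approach could look roughly like this (a hypothetical sketch; class and field names are illustrative, and it assumes Java 17+ for HexFormat): write each PDF to a content-addressed path on disk and persist only the path, size, and checksum in the database row, so the editing context never snapshots the blob itself.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;

// Hypothetical sketch of "files on disk, metadata in the database":
// the database row would hold only relativePath, size, and sha256.
public class DocumentStore {
    public record DocMeta(String relativePath, long size, String sha256) {}

    private final Path root;

    public DocumentStore(Path root) { this.root = root; }

    public DocMeta store(byte[] pdfBytes) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        String hash = HexFormat.of().formatHex(md.digest(pdfBytes));
        // Shard by hash prefix so no single directory grows unbounded.
        Path dir = root.resolve(hash.substring(0, 2));
        Files.createDirectories(dir);
        Path file = dir.resolve(hash + ".pdf");
        Files.write(file, pdfBytes);
        return new DocMeta(root.relativize(file).toString(),
                pdfBytes.length, hash);
    }
}
```

Content addressing also deduplicates identical uploads for free; the checksum in the metadata row lets you verify the on-disk copy later.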
So my questions to the community are as follows:
- does anyone have experience storing multi-megabyte binary data in the database?
- how would one analyse such a situation (where EOEditingContext et al. is not
debuggable due to lack of source)?
- what would you recommend?
Thanks for any tips.
---markus---