I’m using pg_dump to dump a large (396 GB) PostgreSQL database:
pg_dump --clean --create mydatabase
After running for a day, a statement fails with ERROR: compressed data is corrupt and pg_dump exits.
100% data integrity is not my top priority here. It’s probably just one broken row that’s preventing me from moving or backing up a database that took many weeks to create, and I’m fine with losing that row.
Is there any way to create a dump that ignores such data errors? I found nothing in the pg_dump docs.
If not, how do I find and delete all corrupted rows in my table? Rows in other tables are pointing to them and will need to be deleted as well.
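The only approach I’ve come up with so far is a PL/pgSQL loop that reads every row by ctid and logs the ones that raise an error, something like the sketch below (mytable stands in for the affected table; casting the whole row to text is meant to force decompression of TOASTed columns). Is this a reasonable way to locate the bad rows, or is there a better one?

DO $$
DECLARE
    rec record;
BEGIN
    FOR rec IN SELECT ctid FROM mytable LOOP
        BEGIN
            -- Casting the row to text forces all columns, including
            -- TOASTed/compressed values, to be read and decompressed.
            PERFORM t::text FROM mytable t WHERE t.ctid = rec.ctid;
        EXCEPTION WHEN OTHERS THEN
            RAISE NOTICE 'corrupt row at ctid %: %', rec.ctid, SQLERRM;
        END;
    END LOOP;
END;
$$;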