Note for documentation writers: with this fix the end user can dump huge databases, but the process is still not automatic.
The process to follow is:
- use offset and limit to dump the database in multiple passes, keeping the limit fixed and increasing the offset each pass, until a pass exports no more table rows
- import each of the generated dumps
It probably helps to first export the db schema and then export only the data; that makes the dumps easier to import without conflicts from already-existing tables.
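The paging loop above can be sketched as follows. This is a minimal illustration only, using Python's stdlib sqlite3 and a hypothetical helper name; the real tool's offset/limit options work the same way but its command and flags are not shown here.

```python
import sqlite3

def dump_in_passes(db_path, table, limit=1000):
    """Hypothetical sketch: export table rows in fixed-size batches,
    increasing the offset until a pass returns no rows."""
    conn = sqlite3.connect(db_path)
    offset = 0
    batches = []
    while True:
        rows = conn.execute(
            f"SELECT * FROM {table} LIMIT ? OFFSET ?", (limit, offset)
        ).fetchall()
        if not rows:          # a pass returned nothing: all rows exported
            break
        batches.append(rows)  # in practice, write each batch to its own dump file
        offset += limit       # limit stays fixed, offset advances
    conn.close()
    return batches
```

Each batch would then be imported in order, which mirrors the manual multi-pass procedure described above.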
I would not spend more time trying to make the process more automatic: the user might also run into filesystem limits for big files, have trouble zipping or editing the dumped data, etc.