  eZ Publish / Platform
  EZP-18096

ezsqldumpschema.php out of memory error when dumping big schemas

    Details

    • Type: Bug
    • Status: Open
    • Priority: Medium
    • Resolution: Unresolved
    • Affects Version/s: 4.2.0, 4.3.0, 4.4.0, 4.5.0beta1
    • Fix Version/s: Future
    • Component/s: Database related
    • Labels:
      None

      Description

      Set up a biggish (i.e. real-life) eZ Publish database.

      Run the db dump script in data-only, array-format mode.

      Witness:

      Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 48 bytes) in D:\htdocs\ezp\installs\ezpublish-4.5.0-beta1\lib\ezdb\classes\ezmysqldb.php on line 521
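
      For reference, an invocation of the kind that triggers the error, as a hedged sketch: the option names below are assumptions for illustration, not copied from the script's real help output (check php bin/php/ezsqldumpschema.php --help for the exact spelling).

        # Data-only, array-format dump of a large database.
        # NOTE: --data and --output-array are assumed option names.
        php bin/php/ezsqldumpschema.php --type=mysql --user=ezp --password=secret \
            --data --output-array mydb dump.php

        # Raising memory_limit (php -d is standard PHP CLI) only postpones the
        # failure on big enough data sets, since the script appears to buffer
        # the rows in memory before writing.
        php -d memory_limit=1024M bin/php/ezsqldumpschema.php ...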


          Activity

          Ole Marius Smestad added a comment -

          Fixed in
          master (4.5.0beta2) 2ce01e71cdb8c43272af7cb74e9f3b8bc1db48c2

          Gaetano Giunta added a comment - edited

          The hunk in ezsqldumpschema.php is missing.

          On the other hand, the hunk in ezdbschemainterface.php seems to be in.

          Net effect: the core functionality for this is in, but it is not exposed to the end user.

          Most likely rescheduled to 4.6.

          Gaetano Giunta (Inactive) added a comment -

          New PR: https://github.com/ezsystems/ezpublish-legacy/pull/803
          Gaetano Giunta (Inactive) added a comment -

          Note for documentation writers: with this fix the end user can dump huge databases, but it is still not an automatic process.

          The process to follow is:

          • use offset and limit to dump the db in many passes, keeping limit fixed and increasing offset, until there are no more table rows exported (see the sketch after this comment)
          • import each one of the dumps generated

          It probably helps to first export the db schema and then export only the db data; that makes it easier to import without fear of conflicts for existing tables.

          I would not spend more time trying to make the process more automatic; the user might also run into filesystem limits for big files, or have trouble zipping or editing the dumped data, etc.
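
          A minimal sketch of the multi-pass loop described above, assuming the fix adds --limit and --offset options to ezsqldumpschema.php; the option names and the end-of-data check are assumptions, so verify them against the script's actual help output:

            #!/bin/bash
            # Dump table data in fixed-size slices until a pass exports no rows.
            # ASSUMPTIONS: --data, --output-array, --limit and --offset are
            # illustrative option names, not verified against the script.
            LIMIT=10000
            OFFSET=0
            PASS=0
            while true; do
                OUT="dump_part_${PASS}.php"
                php bin/php/ezsqldumpschema.php --type=mysql --user=ezp --password=secret \
                    --data --output-array --limit=$LIMIT --offset=$OFFSET mydb "$OUT"
                # Heuristic stop condition: a slice with no rows should produce
                # a dump file containing no row arrays.
                grep -q "=> array" "$OUT" || break
                OFFSET=$((OFFSET + LIMIT))
                PASS=$((PASS + 1))
            done

          Each dump_part_N.php file would then be imported in turn; if schema and data were exported separately, the schema dump goes first.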


            People

            • Assignee:
              unknown
              Reporter:
              Gaetano Giunta
    • Votes:
      0
      Watchers:
      1

              Dates

              • Created:
                Updated:
                Resolved: