eZ Publish / Platform
EZP-20285

As a user I would like to have access to an ezpublish::upgrade command

    Details

      Description

      As a user I'm tired of having to do upgrades manually; this is a possible way to get upgrades automated in 5.x.

      We need something like:

      • To extend the SPI Persistence interface and Repository with a method to get the database version:
        • (int) version()
      • To extend Repository, or a new system information API, with a version() method that returns the code version, as generated by either the build or some internal version number*
      • To extend SPI Persistence, Repository and the Bundles that need to handle migrations with:
        • upgrade( $fromVersion, $toVersion )
        • downgrade( $fromVersion, $toVersion )
          • In the case of persistence, $fromVersion always needs to be the same value as returned by version()
      • Add an ezpublish::upgrade console command (see the sketch after this list)
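
      A minimal sketch of what these additions could look like; the interface names (VersionAware, UpgradeHandler) are hypothetical and only illustrate the methods listed above, nothing here is final:

        <?php
        // Hypothetical sketch only; interface names are made up for illustration.

        // Extension to SPI Persistence / Repository: expose the stored (db) version.
        interface VersionAware
        {
            /**
             * @return int The current database version
             */
            public function version();
        }

        // Implemented by SPI Persistence, Repository and any Bundle that needs
        // to handle migrations.
        interface UpgradeHandler
        {
            /**
             * @param int $fromVersion In the case of persistence, must equal version()
             * @param int $toVersion
             */
            public function upgrade( $fromVersion, $toVersion );

            /**
             * @param int $fromVersion
             * @param int $toVersion
             */
            public function downgrade( $fromVersion, $toVersion );
        }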

      How it can work (a command sketch follows this list):
      1. The command checks the Repository version and compares it to the system info version; if they differ, it offers the user an upgrade or downgrade, based on how they compare
      2. Emit the upgrade() / downgrade() call on all bundles that provide them
      3. The Core Bundle should have one, and pass it on to Repository, which in turn calls SPI Persistence
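
      A rough sketch of that flow as a Symfony console command; the class name and the injected services are assumptions, and the single-colon name follows Symfony convention (the ticket writes ezpublish::upgrade):

        <?php
        // Hypothetical sketch; class name and injected services are made up.

        use Symfony\Component\Console\Command\Command;
        use Symfony\Component\Console\Input\InputInterface;
        use Symfony\Component\Console\Output\OutputInterface;

        class UpgradeCommand extends Command
        {
            private $repository;
            private $systemInfo;
            private $upgradeHandlers;

            public function __construct( $repository, $systemInfo, array $upgradeHandlers )
            {
                parent::__construct();
                $this->repository = $repository;
                $this->systemInfo = $systemInfo;
                // Persistence must be the first handler in this list, see note below
                $this->upgradeHandlers = $upgradeHandlers;
            }

            protected function configure()
            {
                $this->setName( 'ezpublish:upgrade' );
            }

            protected function execute( InputInterface $input, OutputInterface $output )
            {
                // 1. Compare db version against code version
                $dbVersion = $this->repository->version();
                $codeVersion = $this->systemInfo->version();

                if ( $dbVersion === $codeVersion )
                {
                    $output->writeln( 'Versions match, nothing to do' );
                    return;
                }

                // 2. + 3. Emit upgrade()/downgrade() on all bundles providing it
                foreach ( $this->upgradeHandlers as $handler )
                {
                    if ( $dbVersion < $codeVersion )
                        $handler->upgrade( $dbVersion, $codeVersion );
                    else
                        $handler->downgrade( $dbVersion, $codeVersion );
                }
            }
        }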

      It is important that persistence is the first component called, as it is in a potentially unstable state if the db and code versions do not match.

      * How the version numbers should look is a bit of an open question; maybe an internal version number that can work across git, community and enterprise builds. But if EE version numbers are used, then all kinds of x.y.z versions where x is higher than 5 need to be accepted by the functions. Additional gotcha: the version numbers might not make sense for Bundles other than the ones from eZ; others will have their own version numbers.
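
      If EE-style x.y.z strings are accepted, PHP's built-in version_compare() already handles arbitrary major versions; a small illustration, with made-up version strings:

        <?php
        // Illustration only; the version strings are made-up examples.
        $dbVersion = '5.1.0';   // e.g. as reported by persistence version()
        $codeVersion = '6.0.0'; // e.g. as reported by the build / system info

        if ( version_compare( $dbVersion, $codeVersion, '<' ) )
            echo "Offer upgrade from $dbVersion to $codeVersion\n";
        elseif ( version_compare( $dbVersion, $codeVersion, '>' ) )
            echo "Offer downgrade from $dbVersion to $codeVersion\n";
        else
            echo "Versions match\n";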

      Additional: It might also be possible to set up composer to do such an upgrade as a post step after dependencies are updated, automating it further.
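
      For example, Composer's real post-update-cmd script hook could trigger the command after every update; a minimal sketch, assuming the command is exposed as ezpublish:upgrade via the eZ Publish 5 console at ezpublish/console:

        {
            "scripts": {
                "post-update-cmd": [
                    "php ezpublish/console ezpublish:upgrade"
                ]
            }
        }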


          Activity

          Roland Benedetti added a comment -

          I like it very much.
          Here you mention upgrades, but would that also be used to install patches / service packs?

          André Rømcke added a comment - edited

          If patches / service packs are delivered via composer, then this would more or less already be solved.

          The missing piece after that would be a read-only and/or offline mode, plus making sure the eZ Publish installation is forced into such a mode during composer updates.

          Roland Benedetti added a comment -

          Thanks, this clarifies a bit, though I am very unclear on the workload.
          I also need to get a better understanding of composer to better follow the discussion.

          André Rømcke added a comment - edited

          One thing I overlooked is "cluster" setups, aka many servers.
          There is a good write-up on upgrading/deploying when scaling horizontally here: http://continuousdelivery.com/patterns/
          ( found via this Norwegian blog: http://lab.digipost.no/pages/nor/110-produksjonssetting_uten_nedetid )

          Short: Like normal, you take down one server at a time, upgrading it (or replacing it with a new, updated instance, or using symlinks to the updated app so you can quickly roll back later), but furthermore you use an "Expand/Contract pattern" for db updates, which separates expanding the schema (new columns, new tables and index changes) from contract changes (db cleanup). So the process is: expand db (pre update), update code (update) and clean up db (post update).
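
          To illustrate the Expand/Contract idea, a minimal SQL sketch with made-up table, column and index names:

            -- Expand (pre update): additive changes only, old code keeps working
            ALTER TABLE some_content ADD COLUMN new_status INT NULL;
            CREATE INDEX some_content_new_status ON some_content ( new_status );

            -- ... update the code on the servers, one by one ...

            -- Contract (post update): cleanup once no server runs the old code
            DROP INDEX some_content_old_status ON some_content;
            ALTER TABLE some_content DROP COLUMN old_status;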

          The tricky part is to find a way to do the pre update step in composer; you would need to call a separate service and provide it with info on the from and to versions before the composer code update is actually done.

          @Roland: This is probably a bit technical, but Composer consists of two files. What you define in composer.json is a dependency and a version of it, for example "ezsystems/ezpublish": "dev-master", "symfony/symfony": "2.2.*", "zetacomponents/webdav": "1.1.3". As you can see, some of these versions are relative, so there is a second file called composer.lock; this locks those versions to a specific point in time (in the form of a git commit hash) every time someone performs an update, and is the source of exact version info every time someone performs an install.
          So this system is very simple, and we will need to build some custom things around it to be able to use it for our needs.
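
          For reference, those example dependencies as they would appear in a composer.json require block:

            {
                "require": {
                    "ezsystems/ezpublish": "dev-master",
                    "symfony/symfony": "2.2.*",
                    "zetacomponents/webdav": "1.1.3"
                }
            }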

          Gaetano Giunta (Inactive) added a comment - edited

          I'm afraid trying to fully automate a rolling upgrade of clusters is impossible.

          First off, when you do the rolling upgrade, to have no downtime you need to alter the load-balancer rules so that incoming traffic is switched to server A while you upgrade server B, then to server B while you upgrade server A.
          And if you are a conscientious user, you will probably want to run tests against the updated server A before proceeding.
          This by itself makes it a pie-in-the-sky target - manual intervention between the tasks executed by the upgrade script is necessary.

          Second: the code will be way too complicated and brittle - just look at how hard it is to write good fault-tolerant resource managers (MySQL, anyone? Not the same use case, but the same type of difficulty: the number of things that can go wrong and of settings that can differ per user setup is huge).

          Third: what we currently need for Legacy is not an automated updater, but a plain and simple PDF manual stating how to actually do a rolling update, including:

          • no site downtime
          • the least amount of cache clearing runs

          As for the db changes: the split in pre/post is good for single-server updates; rolling updates might actually entail less downtime than what you propose, by:

          • dumping the live schema to a 2nd instance
          • updating the 2nd instance
          • having all upgraded servers connect to it, one by one
          • dropping the old instance

          This is of course possible when user sessions are not in the db (see the comments about "mutating-schema-changes" in http://exortech.com/blog/2009/02/01/weekly-release-blog-11-zero-downtime-database-deployment/ ).

            People

            • Assignee: Unassigned
            • Reporter: André Rømcke
            • Votes: 0
            • Watchers: 3
