eZ Publish / Platform
EZP-26118

Cluster with php7 fails when creating images


    Details

    • Type: Story
    • Status: Closed
    • Priority: High
    • Resolution: Fixed
    • Affects Version/s: 1.5.0, 1.4.1, 1.6.0-beta1
    • Fix Version/s: QA tracked issues
    • Component/s: Misc
    • Labels:
    • Environment:

      Ubuntu 16.04
      php 7.0
      cluster with redis
      Env: prod

      Description

      Running a Redis cluster with PHP 7, when I try to create an image I get this error:

      POST http://ezdfs1.ezp/api/ezp/v2/content/objects
      500 Internal Server Error
      
      {"ErrorMessage":{"_media-type":"application\/vnd.ez.api.ErrorMessage+json","errorCode":500,"errorMessage"
      :"Internal Server Error","errorDescription":"A DBAL error occured while writing var\/site\/storage\/images
      \/3\/8\/1\/0\/183-1-eng-GB\/imagem04.jpg"}}
      
      Steps to Reproduce:
      • Log in to the admin interface as an administrator.
      • Create an image. With your browser's developer tools open, publish it.

      On publishing, a notification error appears in the interface, and the developer tools show the error referred to above. If I go to my DFS var dir (/mnt/ezdfs/var/site/storage/images/3/8/1/0/183-1-eng-GB/imagem04.jpg), I can see that the image was nevertheless created.

      If I create objects such as folders or articles, without attachments, there are no errors.
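      The fact that the file shows up under /mnt/ezdfs while the API still returns a DBAL error is consistent with a two-step write: the binarydata handler writes the file to the NFS mount first, and the metadata handler then inserts a row over the DBAL connection. A minimal sketch of that sequence (sqlite3 standing in for MariaDB; the dfs_metadata table and its columns are illustrative, not eZ's actual schema):

```python
import sqlite3
from pathlib import Path

def store_image(db: sqlite3.Connection, mount: Path, rel_path: str, data: bytes) -> None:
    """Write binary data to the shared mount, then record it in the metadata table.

    If the INSERT raises, the binary file is already on disk -- matching the
    symptom above (file exists under /mnt/ezdfs, API still returns 500).
    """
    target = mount / rel_path
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(data)  # step 1: binarydata handler (NFS mount)
    db.execute(
        "INSERT INTO dfs_metadata (name, size) VALUES (?, ?)",  # step 2: metadata handler (DBAL)
        (rel_path, len(data)),
    )
    db.commit()

# usage: in-memory database standing in for the dfs connection
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE dfs_metadata (name TEXT PRIMARY KEY, size INTEGER)")
```

      Calling store_image twice with the same path makes the INSERT fail while leaving the file on disk, which mirrors the symptom above.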

      More info about my cluster environment:

      My cluster consists of 4 servers:

      • Server1 (Varnish, round robin) -
      • Server2 (ezdfs1) (with eZ Platform) -> 10.0.5.2
      • Server3 (ezdfs1) (with eZ Platform) -> 10.0.5.3
      • Server4 (with MariaDB) -> 10.0.5.4

      I have a Redis server on both Server2 (10.0.5.2) and Server3 (10.0.5.3), configured as a cluster (master/slave).

      Regarding sessions, I have in php.ini:

      session.save_handler = redis
      session.save_path = "tcp://10.0.5.2:6379" ; sessions are kept on the first server
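      To rule out basic connectivity problems with the Redis nodes above, a raw RESP PING from each app server is enough (redis-cli -h 10.0.5.2 ping does the same). A minimal stdlib sketch:

```python
import socket

def redis_ping(host: str, port: int = 6379, timeout: float = 2.0) -> bool:
    """Send a raw RESP PING and expect a +PONG reply from the Redis server."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(b"*1\r\n$4\r\nPING\r\n")  # RESP array: one bulk string, "PING"
            return sock.recv(64).startswith(b"+PONG")
    except OSError:
        return False
```

      Run it against both 10.0.5.2 and 10.0.5.3 from each app server to confirm the session and cache endpoints are reachable.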
      

      And I have my eZ Platform configured as follows:

      ezpublish:
          # Repositories configuration, setup default repository to support solr if enabled
          repositories:
              default:
                  storage: ~
                  search:
                      engine: %search_engine%
                      connection: default
      
          # Siteaccess configuration, with one siteaccess per default
          siteaccess:
              list: [site]
              groups:
                  site_group: [site]
              default_siteaccess: site
              match:
                  URIElement: 1
      
          # System settings, grouped by siteaccess and/or siteaccess group
          http_cache:
              # As of 5.4 only use "http"
              # "single_http" and "multiple_http" are deprecated but will still work.
              purge_type: http
          system:
              default:
                  io:
                      metadata_handler: dfs
                      binarydata_handler: nfs
                      url_prefix: "storage"
              site_group:
                  # Pool to use for cache, needs to be different per repository (database).
                  cache_pool_name: '%cache_pool%'
                  # These reflect the current installers, complete installation before you change them. For changing var_dir
                  # it is recommended to install clean, then change setting before you start adding binary content, otherwise you'll
                  # need to manually modify your database data to reflect this to avoid exceptions.
                  var_dir: var/site
                  # System languages. Note that by default, content, content types, and other data are in eng-GB locale,
                  # so removing eng-GB from this list may lead to errors or content not being shown, unless you change
                  # all eng-GB data to other locales first.
                  languages: [eng-GB]
                  http_cache:
                      # Fill in your Varnish server(s) address(es).
                      purge_servers: [http://192.168.2.201:6081]
                  session:
                      name: ~
      
      # new doctrine connection for the dfs legacy_dfs_cluster metadata handler.
      doctrine:
          dbal:
              connections:
                  dfs:
                      driver: pdo_mysql
                      host: 10.0.5.4
                      port: 3306
                      dbname: ezp
                      user: ezp
                      password: "ezp"
                      charset: UTF8
      
      # declare the handlers
      ez_io:
          binarydata_handlers:
              nfs:
                  flysystem:
                      adapter: nfs_adapter
          metadata_handlers:
              dfs:
                  legacy_dfs_cluster:
                      connection: doctrine.dbal.dfs_connection
      
      oneup_flysystem:
          adapters:
              nfs_adapter:
                  local:
                      # The last part, $var_dir$/$storage_dir$, is required for legacy compatibility
                      directory: "/mnt/ezdfs/$var_dir$/$storage_dir$"
      
      stash:
          caches:
              default:
                  drivers: [ Redis ]
                  Redis:
                      servers:
                          -
                              server: 10.0.5.2
                              port: 6379
                          -
                              server: 10.0.5.3
                              port: 6379
      

    People

    • Assignee: Unassigned
    • Reporter: paulo.nunes-obsolete@ez.no Paulo Nunes (Inactive)
    • Votes: 0
    • Watchers: 2
