Data Management / DM-2119

W16 Optimize Secondary Index - Research

    Details

    • Type: Epic
    • Status: Done
    • Resolution: Done
    • Fix Version/s: None
    • Component/s: Qserv
    • Labels:
      None
    • Epic Name:
      W16 Secondary Index
    • Story Points:
      48
    • WBS:
      02C.06.02.03
    • Team:
      Data Access and Database
    • Cycle:
      Winter 2016

      Description

Work on the secondary index (objectId --> chunkId / subChunkId mapping). This needs to be scalable to 40B entries. Since we are planning to ingest all data from DRP in <2 days, building should take <2 days. This epic involves researching applicable technologies (including experimenting with the most promising ones). Deliverable: a proposed technology/architecture along with measured performance at production scale (40B entries).
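A minimal sketch of the mapping table the description refers to, using SQLite as a stand-in for MySQL/InnoDB (the table and column names are illustrative assumptions; the ticket only specifies the objectId --> chunkId / subChunkId mapping):

```python
import sqlite3

# Illustrative secondary-index schema: objectId -> (chunkId, subChunkId).
# Names are assumptions, not taken from the ticket.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sec_index (
        objectId   INTEGER PRIMARY KEY,  -- lookup key
        chunkId    INTEGER NOT NULL,     -- spatial chunk holding the object
        subChunkId INTEGER NOT NULL      -- sub-chunk within that chunk
    )
""")
conn.executemany(
    "INSERT INTO sec_index VALUES (?, ?, ?)",
    [(1001, 7, 42), (1002, 7, 43), (1003, 9, 5)],
)
row = conn.execute(
    "SELECT chunkId, subChunkId FROM sec_index WHERE objectId = ?", (1002,)
).fetchone()
print(row)  # -> (7, 43)
```

At 40B entries the same point-lookup pattern applies; only the storage engine and partitioning change.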

        Attachments

          Issue Links

            Activity

Mike Kelsey added a comment -

            Discovered that performance plots are inaccurate: Labels for "Clock" should have been elapsed wall-clock time, but were instead total CPU usage. Regenerating all performance data with both CPU and clock times done properly.

Mike Kelsey added a comment -

Updated performance plot for secondary index options, with single-client one-by-one access. The plot shows CPU time for initialization and for queries (1M trials), and total wall-clock time for each test. Some interesting features include steps in file access at the 2 GB file limit, and a significant wall-clock penalty from XRootD, where most CPU time is off-loaded to the four server processes.

Mike Kelsey added a comment -

After discussions with Jacek about the single-host (laptop) performance tests, we are dropping the multi-host implementation tests. A dual-processor laptop loads 10B entries into a MySQL/InnoDB database (a single giant table) in 18 hours and serves queries at about 70 QPS. Splitting into multiple smaller tables in the same MySQL instance allows up to ~10k QPS on the same host.
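One way the "multiple smaller tables" split could work, sketched under the assumption of hashing objectId to a fixed number of shard tables (the comment does not specify the partitioning scheme; SQLite stands in for MySQL):

```python
import sqlite3

N_SHARDS = 4  # assumed shard count; the actual test used ~40M-entry tables

conn = sqlite3.connect(":memory:")
for s in range(N_SHARDS):
    conn.execute(
        f"CREATE TABLE sec_index_{s} "
        "(objectId INTEGER PRIMARY KEY, chunkId INTEGER, subChunkId INTEGER)"
    )

def shard_of(object_id: int) -> int:
    # Route each objectId to exactly one shard table.
    return object_id % N_SHARDS

def insert(object_id, chunk_id, sub_chunk_id):
    conn.execute(
        f"INSERT INTO sec_index_{shard_of(object_id)} VALUES (?, ?, ?)",
        (object_id, chunk_id, sub_chunk_id),
    )

def lookup(object_id):
    # Point lookup touches only the one shard that can hold the key.
    return conn.execute(
        f"SELECT chunkId, subChunkId FROM sec_index_{shard_of(object_id)} "
        "WHERE objectId = ?",
        (object_id,),
    ).fetchone()

insert(1002, 7, 43)
print(lookup(1002))  # -> (7, 43)
```

Smaller per-table B-trees keep more of each index hot in memory, which is one plausible reason the split tables reached ~10k QPS versus ~70 QPS for the single giant table.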

Mike Kelsey added a comment -

These plots show the performance for a MySQL/InnoDB database on a dual-core Mac OS X system with 1 TB of spinning disk and 8 GB memory. The plots show wall-clock time for the test job, and so measure principally server performance rather than the client code.

Init (orange with squares) shows the time required to fill an empty database with zeroes using a series of cascaded SELECT ... UNION ALL ..., with one query of up to 40M final entries per table. This requires about 8 µs per entry.
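The cascaded SELECT ... UNION ALL ... technique can be sketched as follows: one INSERT whose source is a chain of unioned constant SELECTs, producing all the rows of a table in a single query (SQLite stand-in; the row count here is tiny for illustration, where the ticket's tests built up to 40M rows per table this way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE t (objectId INTEGER, chunkId INTEGER, subChunkId INTEGER)"
)

# Build one INSERT whose source is a chain of SELECT ... UNION ALL ...
# emitting n rows of zeroes in a single query.
n = 8
union_chain = " UNION ALL ".join(["SELECT 0, 0, 0"] * n)
conn.execute("INSERT INTO t " + union_chain)

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # -> 8
```

Generating rows server-side this way avoids any client/file I/O, which is presumably why Init edges out Load below.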

            Load (blue with triangles) shows the time to fill an empty database from external files with 40M entries each (one file per table). About 1/3 of the time shown for each test was needed to create the external files, the rest was used by MySQL to load the data. The net result (where the input files have to be created before loading) is just slightly slower than Init.

Run (red with squares) shows the time required for 1 million random queries on the specified database. The query rate drops substantially as the database size grows from 10M to 100M entries, but beyond that it is constant at just over 70 QPS.
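The Run test's random point-lookup loop can be sketched like this (SQLite stand-in, far fewer trials than the ticket's 1M so the sketch runs quickly; the QPS figure it prints is for the sketch only, not the ticket's measurement):

```python
import random
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sec_index (objectId INTEGER PRIMARY KEY, "
    "chunkId INTEGER, subChunkId INTEGER)"
)
conn.executemany(
    "INSERT INTO sec_index VALUES (?, ?, ?)",
    ((i, i // 100, i % 100) for i in range(100_000)),
)

# Time uniformly random point lookups, as in the Run test.
n_trials = 10_000
start = time.perf_counter()
for _ in range(n_trials):
    oid = random.randrange(100_000)
    conn.execute(
        "SELECT chunkId, subChunkId FROM sec_index WHERE objectId = ?", (oid,)
    ).fetchone()
elapsed = time.perf_counter() - start
print(f"{n_trials / elapsed:.0f} QPS")
```

The flattening beyond 100M entries in the real test is consistent with lookups becoming disk-seek bound once the index no longer fits in the 8 GB of memory.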

Update (green with circles) shows the time required to overwrite an external file of 10M entries into an existing database. This test dumps the whole file into a single table (no overhead from rebalancing to load multiple smaller tables), and is constant at just 10 µs/entry, no matter how large the database is.
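The Update test's overwrite-into-an-existing-table step maps naturally onto a bulk upsert; a sketch assuming MySQL's REPLACE semantics, here via SQLite's equivalent INSERT OR REPLACE (the actual mechanism used in the test is not stated in the ticket):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sec_index (objectId INTEGER PRIMARY KEY, "
    "chunkId INTEGER, subChunkId INTEGER)"
)
conn.executemany("INSERT INTO sec_index VALUES (?, ?, ?)",
                 [(1, 10, 1), (2, 10, 2)])

# Overwrite existing rows and add new ones in one bulk pass, as the
# Update test does when dumping a 10M-entry file into a single table.
updates = [(1, 20, 5), (3, 20, 6)]
conn.executemany("INSERT OR REPLACE INTO sec_index VALUES (?, ?, ?)", updates)

rows = conn.execute(
    "SELECT objectId, chunkId, subChunkId FROM sec_index ORDER BY objectId"
).fetchall()
print(rows)  # -> [(1, 20, 5), (2, 10, 2), (3, 20, 6)]
```

Since each upsert touches only the rows keyed by the incoming file, the constant 10 µs/entry cost independent of database size is what one would expect from primary-key writes.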

Mike Kelsey added a comment -

Single-host performance tests for the secondary index are complete. They were run on a dual-core MacBook Pro with a 1 TB spinning disk and 8 GB memory. The MySQL implementation using InnoDB should meet the performance requirements on a proper server system (multiple cores, 2-4 TB of SSD storage).


              People

• Assignee: Unassigned
• Reporter: Fritz Mueller
• Reviewers: Fritz Mueller
• Watchers: Jacek Becla, Mike Kelsey
• Votes: 0

                Dates

• Created:
• Updated:
• Resolved:
