Data Management / DM-12523

Load NEOWISE-R Year 1 Single Exposure (L1b) Source Table into PDAC


    Details

    • Type: Story
    • Status: Done
    • Resolution: Done
    • Fix Version/s: None
    • Component/s: None
    • Labels:

      Description

      This ticket documents data preparation and loading for the following WISE catalogs:

      Catalog name PDAC database PDAC table IRSA download page
      NEOWISE-R Year 1 Single Exposure (L1b) Source Table neowiser_year1_00 neowiser_yr1_p1bs_psd http://irsa.ipac.caltech.edu/data/download/neowiser_year1/

      Documentation for loading the catalogs:


            Activity

            gapon Igor Gaponenko added a comment - edited

            Downloading NEOWISE-R Year 1 Single Exposure (L1b) Source Table data


            Table name neowiser_yr1_p1bs_psd
            Temporary data folder (NCSA) /datasets/gapon/wise/neowiser_yr1_p1bs_psd/
            Catalog data (IPAC) https://irsa.ipac.caltech.edu/data/download/neowiser_year1/
            Schema file (IPAC) http://irsa.ipac.caltech.edu/data/download/neowiser_year1/neowiser_yr1_p1bs_psd-schema.txt

            Catalog schema

            The original schema file was downloaded as:

            /datasets/gapon/wise/neowiser_yr1_p1bs_psd/neowiser_yr1_p1bs_psd.txt
            

            Then it was translated into a MySQL schema definition (SQL DDL) file using the txt2schema.py tool from the db_pdac_wise GitHub package:

            % python \
              /datasets/gapon/development/db_pdac_wise/tools/txt2schema.py \
                neowiser_yr1_p1bs_psd.txt \
                > neowiser_yr1_p1bs_psd.sql
            

            Made two modifications to the original schema:

            1. renamed the column dec to decl, as a workaround for an existing limitation of the Qserv SQL parser
            2. manually added the PRIMARY key
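            For illustration, the two edits above could be applied to the generated DDL programmatically; this is a sketch only (the actual edits were made by hand), and the abridged DDL string below is a stand-in for the real file:

```python
import re

def patch_ddl(ddl: str) -> str:
    """Sketch of the two manual schema edits: rename column `dec` to
    `decl` (Qserv SQL parser workaround) and add a PRIMARY KEY on
    `source_id`. The real edits were done by hand in the .sql file."""
    ddl = re.sub(r"`dec`", "`decl`", ddl)
    ddl = re.sub(r"\n\)\s*ENGINE=MyISAM;",
                 ",\n    PRIMARY KEY(`source_id`)\n) ENGINE=MyISAM;",
                 ddl)
    return ddl

# Abridged stand-in for the DDL produced by txt2schema.py.
generated = (
    "CREATE TABLE `neowiser_yr1_p1bs_psd` (\n"
    "   `source_id`    VARCHAR(16)  NOT NULL,\n"
    "        `dec`  DECIMAL(10,7)  DEFAULT NULL\n"
    ") ENGINE=MyISAM;"
)
print(patch_ddl(generated))
```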

            The relevant sections of the final schema file are presented below:

            CREATE TABLE `neowiser_yr1_p1bs_psd` (
             
               `source_id`    VARCHAR(16)  NOT NULL,
                    `decl`  DECIMAL(10,7)  DEFAULT NULL,
             
                PRIMARY KEY(`source_id`)
             
            ) ENGINE=MyISAM;
            

            Saved (TBC) the schema file into the corresponding GitHub package at:

            https://github.com/lsst-dm/db_pdac_wise/blob/master/sql/neowiser_yr1_p1bs_psd.sql
            Downloaded files into the temporary location at NCSA

            The files were placed at the GPFS folder:

            /datasets/gapon/wise/neowiser_yr1_p1bs_psd/data/
            

            The operation succeeded with:

            • 70 compressed files (bz2)
            • 3.7 TB of total data amount
            gapon Igor Gaponenko added a comment - edited

            Processing downloaded files of the catalog

            Uncompressing the files

            The resulting files were placed in the same folder as the compressed ones, replacing the originals. The total amount of data in the folder is 18 TB:

            % du -hs /datasets/gapon/wise/neowiser_yr1_p1bs_psd/data/
            18T	/datasets/gapon/wise/neowiser_yr1_p1bs_psd/data/
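            The decompression command itself is not recorded in this ticket; a minimal Python sketch of the in-place replacement described above (decompress each .bz2 file, then remove the compressed original) might look like:

```python
import bz2
import os
import shutil

def decompress_in_place(folder: str) -> None:
    """Decompress every .bz2 file in `folder`, replacing the compressed
    original with the uncompressed file."""
    for name in sorted(os.listdir(folder)):
        if not name.endswith(".bz2"):
            continue
        src = os.path.join(folder, name)
        dst = src[:-len(".bz2")]
        with bz2.open(src, "rb") as fin, open(dst, "wb") as fout:
            shutil.copyfileobj(fin, fout)
        os.remove(src)  # drop the compressed original
```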
            

            Translating files from the unl format into the TSV format

            This stage is needed because TSV is the only input data format supported by the LSST DB catalog partitioning tools. The outputs are placed at:

            /datasets/gapon/wise/neowiser_yr1_p1bs_psd/tsv
            

            The translation tool unl2tsv is in GitHub at:

            https://github.com/lsst-dm/db_pdac_wise/tree/master/tools

            This operation finished with 18 TB of data within the output folder.
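            For illustration, the per-record translation can be sketched as below; the pipe delimiter and the trailing-delimiter handling are assumptions about the unl format (the authoritative rules live in the actual unl2tsv tool):

```python
def unl_to_tsv(record: str, delimiter: str = "|") -> str:
    """Convert one .unl record into a tab-separated line.

    Assumptions (not taken from the real unl2tsv tool): fields are
    separated by `delimiter`, a record may carry a trailing delimiter,
    and no field contains an escaped delimiter.
    """
    line = record.rstrip("\n")
    if line.endswith(delimiter):
        line = line[:-1]
    return "\t".join(line.split(delimiter)) + "\n"

# Example: one hypothetical three-field record.
print(unl_to_tsv("12345|0.5|abc|\n"))
```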

            gapon Igor Gaponenko added a comment - edited

            Partitioning

            The destination folder for the partitioned data is:

            /datasets/gapon/wise/neowiser_yr1_p1bs_psd/partitioned_x16/chunks
            

            Running the partitioner in 70 streams (one per file):

            % cd /datasets/gapon/wise/neowiser_yr1_p1bs_psd/
            % for f in `ls tsv/`; do \
              nohup srun /datasets/gapon/wise/neowiser_yr1_p1bs_psd/bin/partition.sh ${f::-4} \
              >& /datasets/gapon/wise/neowiser_yr1_p1bs_psd/logs/${f::-4}.partition.log&
            done
            

            Where the script's code is shown below:

            % cat /datasets/gapon/wise/neowiser_yr1_p1bs_psd/bin/partition.sh
            #!/bin/bash
             
            set -e
             
            echo "host: "`hostname`
             
            name=$1
             
            if [ -z "$name" ]; then
                echo "usage: <name>"
                exit 1
            fi
             
            source /datasets/gapon/stack/loadLSST.bash
            setup -N -t qserv-dev qserv_distrib
             
            cd /datasets/gapon/wise/neowiser_yr1_p1bs_psd/
             
            mkdir -p partitioned_x16/chunks/${name}/
            rm    -f partitioned_x16/chunks/${name}/*
            rm    -f partitioned_x16/log/${name}.log
             
            sph-partition \
              --verbose \
              -c partitioned_x16/neowiser_yr1_p1bs_psd.cfg \
              --out.dir=partitioned_x16/chunks/${name}/ \
              --in=tsv/${name}.tsv
            

            Analyzing results

            When the partitioner finished:

            % cd /datasets/gapon/wise/neowiser_yr1_p1bs_psd/
            % du -hs partitioned_x16/chunks
            56T	partitioned_x16/chunks
            

            This operation finished with 56 TB of data within the output folder.

            First, checked the log files to make sure no errors were reported. The files are located at:

            % cd /datasets/gapon/wise/neowiser_yr1_p1bs_psd/log
            % for f in `ls *.partition.log`; do tail -1 $f; done
            

            IMPORTANT: each of these files contains a valuable report from the partitioning tool, which includes (to mention just the two most useful in the current context):

            • the total number of chunks and sub-chunks
            • identifiers of chunks

            Collecting unique chunk identifiers from those reports:

            % cd /datasets/gapon/wise/neowiser_yr1_p1bs_psd/partitioned_x16
            % for f in `ls ../log/*.partition.log`; do
              cat ${f};
            done | grep '"id"' | awk '{print substr($2,0,length($2)-1)}' > all_chunks.txt
             
            % wc -l all_chunks.txt
            492164 all_chunks.txt
             
            % cat all_chunks.txt  | sort -n -u > all_chunks_unique_sorted.txt
            % wc -l all_chunks_unique_sorted.txt
            146332 all_chunks_unique_sorted.txt
            

            OBSERVATION: this number (146332) is very important, as it matches the total number of chunks reported when partitioning other catalogs as per DM-9372. Therefore no additional (new) chunk placement strategy will be needed for this catalog. The last test is just to make sure the chunks have the same identifiers as those of the previously loaded partitioned catalogs. Using the Object-alike table for that:

            % diff \
              /datasets/gapon/wise/allsky_4band_p1bs_psd/partitioned_x16/all_chunks_unique_sorted.txt \
              /datasets/gapon/wise/neowiser_yr1_p1bs_psd/partitioned_x16/all_chunks_unique_sorted.txt
            

            CONCLUSION:

            • this catalog has the same set of chunks as the previously loaded AllWise 4-band one (see DM-10740)
            • the same chunk distribution policy can be used
            gapon Igor Gaponenko added a comment

            Preparing the staging area for chunks

            ATTENTION: since this catalog has exactly the same collection of chunks as allsky_4band_p1bs_psd (see DM-10740), most of the procedures explained earlier apply to this catalog as well. For the very same reason, many details on setting up the staging area are omitted below. Please see the above-mentioned ticket for details if needed.

            General considerations and a setup

            The staging area for the catalog is located at:

            /datasets/gapon/production/wise_catalog_load/production_load/neowiser_yr1_p1bs_psd/
            

            The area will contain symbolic links pointing to the corresponding chunks in the partitioned area. The links will be organized into two layers:

            <host>/<folder>/<chunks>
            

            Where the <folder> names are actually the names (w/o any extension) of the original files downloaded from IRSA/IPAC.
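            A minimal sketch of creating one link in that layout (the function and its arguments are hypothetical; the real links are produced by the generator scripts described in the next section):

```python
import os

def stage_chunk(staging_root: str, host: str, folder: str, chunk_path: str) -> None:
    """Create a symbolic link <staging_root>/<host>/<folder>/<chunk file>
    pointing back at the chunk file in the partitioned area."""
    link_dir = os.path.join(staging_root, host, folder)
    os.makedirs(link_dir, exist_ok=True)
    link = os.path.join(link_dir, os.path.basename(chunk_path))
    if not os.path.lexists(link):
        os.symlink(chunk_path, link)
```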

            Obtaining existing worker-to-chunks map from PDAC

            A suitable map already exists in the GitHub package db_pdac_wise, which is deployed at:

            % ls -l /datasets/gapon/development/db_pdac_wise/wise_worker2chunks/
            -rw-r--r-- 1 gapon grp_202 32197 Jun  4 19:04 lsst-qserv-db01
            -rw-r--r-- 1 gapon grp_202 32199 Jun  4 19:04 lsst-qserv-db02
            -rw-r--r-- 1 gapon grp_202 32199 Jun  4 19:04 lsst-qserv-db03
            ...
            

            The total number of entries matches the total number of chunks (146332):

            % wc -l /datasets/gapon/development/db_pdac_wise/wise_worker2chunks/*
              4878 /datasets/gapon/development/db_pdac_wise/wise_worker2chunks/lsst-qserv-db01
              4878 /datasets/gapon/development/db_pdac_wise/wise_worker2chunks/lsst-qserv-db02
              ..
              4877 /datasets/gapon/development/db_pdac_wise/wise_worker2chunks/lsst-qserv-db29
              4877 /datasets/gapon/development/db_pdac_wise/wise_worker2chunks/lsst-qserv-db30
            146332 total
            

            The link generator will be using the maps from that folder on the next stage.
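            A sketch of how such maps can be read back (assuming one chunk identifier per line and one file per worker node, which is consistent with the wc -l counts above):

```python
import os

def load_worker2chunks(map_dir: str) -> dict:
    """Read the per-worker map files into {worker_host: set(chunk_ids)}.

    Assumption: each file is named after a worker host and holds one
    chunk identifier per line.
    """
    worker2chunks = {}
    for worker in sorted(os.listdir(map_dir)):
        with open(os.path.join(map_dir, worker)) as f:
            worker2chunks[worker] = {int(line) for line in f if line.strip()}
    return worker2chunks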

            Generating scripts to populate area

            Generated a list of all files to be linked. This is a lengthy process due to the large number of chunk files created during the partitioning phase:

            % cd /datasets/gapon/wise/neowiser_yr1_p1bs_psd/partitioned_x16/
            % for d in `ls chunks/`; do
              ls -l chunks/${d}/* | grep .txt; done > all_chunks_and_overlaps.txt
            % wc -l all_chunks_and_overlaps.txt
            980412 all_chunks_and_overlaps.txt
            

            Using a custom version of the generator, which reads the list of chunk files and overlaps created above together with the worker-to-chunks map obtained earlier:

            % scripts/chunk2folder.py folder > scripts/stage_folder.source
            % wc -l  scripts/stage_folder.source
            4261 scripts/stage_folder.source
             
            % scripts/chunk2folder.py index > scripts/stage_index.source
            % wc -l scripts/stage_index.source 
            4201 scripts/stage_index.source
             
            % scripts/chunk2folder.py chunk > scripts/stage_chunk.source
            % wc -l scripts/stage_chunk.source
            1960825 scripts/stage_chunk.source
            

            Populating the area

            Run all 3 scripts sequentially. It is very IMPORTANT to create the folders before setting up the links. There is an undesired latency in the GPFS metadata service which may delay catalog stats reporting, causing the subsequent (link and index) stages to fail if they are mixed together with the folder operations.

            % chmod +x scripts/*.source
            % nohup srun scripts/stage_folder.source >& log/stage_folder.log&
            % nohup srun scripts/stage_index.source >& log/stage_index.log&
            % nohup srun scripts/stage_chunk.source >& log/stage_chunk.log&
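            The folders-before-links ordering can also be enforced programmatically by driving the stages strictly one after another and failing fast (a sketch; the actual runs above were launched via nohup/srun and sequenced manually):

```python
import subprocess

def run_stages(scripts: list) -> None:
    """Run the staging scripts one at a time, in order; abort the whole
    sequence as soon as any stage fails."""
    for script in scripts:
        subprocess.run([script], check=True)  # raises CalledProcessError on failure
```

            Here the folder stage would fully complete (and its GPFS metadata settle) before the index and chunk stages start.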
            

            gapon Igor Gaponenko added a comment - edited

            Loading into PDAC

            The loading protocol is explained in detail at:

            https://confluence.lsstcorp.org/display/DM/Loading+WISE+catalogs+into+PDAC
            Creating database and initializing CSS

            % /bin/sudo -u qserv \
              /datasets/gapon/development/db_pdac_wise/scripts/run_create_database.bash \
               --verbose &>run_create_database.log
             
            % /bin/sudo -u qserv \
              /datasets/gapon/development/db_pdac_wise/scripts/setup_css.bash \
              >& setup_css.log
            

            % source /datasets/gapon/stack/loadLSST.bash
            % setup -t qserv-dev qserv_distrib; setup -j -r /datasets/gapon/development/qserv
            % qserv-admin.py "DUMP EVERYTHING;" | grep neowise
            /DBS/neowiser_yr1_00	READY
            /DBS/neowiser_yr1_00/.packed.json	{"partitioningId":"0000000030","releaseStatus":"RELEASED","storageClass":"L2"} 
            /DBS/neowiser_yr1_00/TABLES	\N
            /DBS/neowiser_yr1_00/TABLES/neowiser_yr1_p1bs_psd	READY
            /DBS/neowiser_yr1_00/TABLES/neowiser_yr1_p1bs_psd/partitioning	\N
            /DBS/neowiser_yr1_00/TABLES/neowiser_yr1_p1bs_psd/partitioning/.packed.json	{"dirColName":"source_id","dirDb":"neowiser_yr1_00","dirTable":"neowiser_yr1_p1bs_psd","latColName":"decl","lonColName":"ra","subChunks":"1"} 
            /DBS/neowiser_yr1_00/TABLES/neowiser_yr1_p1bs_psd/schema	(
            /DBS/neowiser_yr1_00/TABLES/neowiser_yr1_p1bs_psd/sharedScan	\N
            /DBS/neowiser_yr1_00/TABLES/neowiser_yr1_p1bs_psd/sharedScan/.packed.json	{"lockInMem":"1"} 
            

            Configuring MySQL server and Docker container

            These actions are required to ensure each worker is properly configured. See the loading protocol of the other n-band catalogs (DM-10740) for details.

            Loading partitioned tables

            % date
            Tue Nov 21 17:38:46 CST 2017
            % nohup /bin/sudo -u qserv \
              /datasets/gapon/development/db_pdac_wise/scripts/run_load_object.bash \
              --verbose \
              >& run_load_object.log&
            

            Monitoring the progress of the operation with:

            % for n in `seq --format "qserv-db%02.f" 1 30`; do ssh -n $n /bin/sudo -u qserv du -hs /qserv/data/mysql/neowiser_yr1_00; done
            

            Testing results:

            % for f in `ls /tmp/neowiser_yr1_00/log/qserv-db*_load_object.log`; do tail -1 $f; done
            % for f in `ls /tmp/neowiser_yr1_00/log/qserv-db*_load_object.log`; do tail -1 $f; done | wc -l
            

            Partially FAILED for workers 15 and 19 (see the next comment for the fixup).

            gapon Igor Gaponenko added a comment - edited

            Restarting failed load on worker nodes 15, 19 (and later 21)

            Two worker nodes (and, one hour later, another one) were spontaneously restarted after 30 hours of loading (reported in IHS-606), as seen from:

            [gapon@lsst-bastion01 ~]$ date
            Wed Nov 22 20:19:50 CST 2017
             
            [gapon@lsst-bastion01 ~]$ for n in qserv-master01 `seq --format="qserv-db%02.0f" 1 30`; do ssh -n $n echo '`hostname`": "`uptime`'; done
            lsst-qserv-master01:  20:19:53 up 6 days, 9:10, 1 user, load average: 0.00, 0.01, 0.05
            lsst-qserv-db01:  20:19:53 up 6 days, 9:15, 0 users, load average: 0.84, 0.98, 1.04
            lsst-qserv-db02:  20:19:53 up 6 days, 9:15, 0 users, load average: 0.68, 0.93, 1.01
            lsst-qserv-db03:  20:19:54 up 6 days, 9:14, 0 users, load average: 1.03, 1.05, 1.05
            lsst-qserv-db04:  20:19:54 up 6 days, 9:15, 0 users, load average: 1.12, 1.08, 1.05
            lsst-qserv-db05:  20:19:54 up 6 days, 9:15, 0 users, load average: 0.68, 0.97, 1.03
            lsst-qserv-db06:  20:19:54 up 6 days, 9:15, 0 users, load average: 1.07, 1.04, 1.01
            lsst-qserv-db07:  20:19:55 up 6 days, 9:14, 0 users, load average: 0.70, 0.80, 0.88
            lsst-qserv-db08:  20:19:55 up 6 days, 9:15, 0 users, load average: 0.84, 0.93, 0.97
            lsst-qserv-db09:  20:19:55 up 6 days, 9:14, 0 users, load average: 0.93, 0.97, 0.97
            lsst-qserv-db10:  20:19:55 up 6 days, 9:14, 0 users, load average: 0.88, 1.01, 1.04
            lsst-qserv-db11:  20:19:56 up 6 days, 9:14, 0 users, load average: 0.84, 1.07, 1.11
            lsst-qserv-db12:  20:19:56 up 6 days, 9:14, 0 users, load average: 0.97, 1.00, 1.07
            lsst-qserv-db13:  20:19:56 up 6 days, 9:14, 0 users, load average: 0.92, 0.97, 1.00
            lsst-qserv-db14:  20:19:57 up 6 days, 9:13, 0 users, load average: 0.78, 0.90, 0.94
            lsst-qserv-db15:  20:19:57 up 35 min, 0 users, load average: 0.01, 0.15, 0.18
            lsst-qserv-db16:  20:19:57 up 6 days, 9:13, 0 users, load average: 1.10, 1.08, 1.05
            lsst-qserv-db17:  20:19:57 up 6 days, 9:12, 0 users, load average: 1.05, 1.01, 0.98
            lsst-qserv-db18:  20:19:57 up 6 days, 9:13, 0 users, load average: 1.25, 1.09, 1.07
            lsst-qserv-db19:  20:19:58 up 2:03, 0 users, load average: 0.24, 0.26, 0.23
            lsst-qserv-db20:  20:19:58 up 6 days, 9:13, 0 users, load average: 1.19, 1.17, 1.05
            lsst-qserv-db21:  20:19:58 up 6 days, 9:07, 0 users, load average: 0.87, 1.02, 1.01
            lsst-qserv-db22:  20:19:58 up 6 days, 9:12, 0 users, load average: 1.37, 1.13, 0.99
            lsst-qserv-db23:  20:19:59 up 6 days, 9:12, 0 users, load average: 1.42, 1.20, 1.09
            lsst-qserv-db24:  20:19:59 up 6 days, 9:12, 0 users, load average: 1.50, 1.16, 1.06
            lsst-qserv-db25:  20:19:59 up 6 days, 9:12, 0 users, load average: 1.58, 1.12, 1.02
            lsst-qserv-db26:  20:19:59 up 6 days, 9:11, 0 users, load average: 1.53, 1.18, 1.06
            lsst-qserv-db27:  20:20:00 up 6 days, 9:11, 0 users, load average: 1.37, 1.03, 1.00
            lsst-qserv-db28:  20:20:00 up 6 days, 9:10, 0 users, load average: 0.78, 0.85, 0.92
            lsst-qserv-db29:  20:20:00 up 6 days, 9:11, 0 users, load average: 1.36, 1.08, 0.99
            lsst-qserv-db30:  20:20:01 up 6 days, 9:11, 0 users, load average: 0.93, 0.89, 0.95
            

            Killed the loading scripts on the master node by:

            % ps -ef | grep load
            ..
            qserv     3979     1  0 Nov21 ?        00:00:16 ssh -n qserv-db15 /datasets/gapon/development/db_pdac_wise/scripts/load_object.bash '--verbose'
            ..
            % /bin/sudo -u qserv kill 3979
            

            Manually restarted containers on those nodes by:

            % docker rm qserv
            % docker run \
              --detach=true \
              -e "QSERV_MASTER=lsst-qserv-master01" \
              -e "SET_CONTAINER_TIMEZONE=true" \
              -e "CONTAINER_TIMEZONE=America/Chicago" \
              --volume /qserv/data:/qserv/data \
              --volume /qserv/log:/qserv/run/var/log \
              --volume /qserv/tmp:/qserv/run/tmp/ \
              --ulimit memlock=10737418240 \
              --name qserv \
              --net=host \
              qserv/qserv:tickets_DM-11755_worker \
              /bin/bash -c /qserv/scripts/start.sh
            

            Then manually recreated databases on each of those nodes by:

            source /datasets/gapon/stack/loadLSST.bash
            setup -t qserv-dev qserv_distrib; setup -j -r /datasets/gapon/development/qserv
            sudo mysql -A -S/qserv/data/mysql/mysql.sock -uroot -pCHANGEME -e 'DROP DATABASE neowiser_yr1_00; CREATE DATABASE neowiser_yr1_00' 
            

            Finally (from master) restarted the loader on both nodes:

            /bin/sudo -u qserv -i
            nohup ssh -n qserv-db15 /datasets/gapon/development/db_pdac_wise/scripts/load_object.bash --verbose >& /tmp/neowiser_yr1_00/log/qserv-db15_load_object.log&
            nohup ssh -n qserv-db19 /datasets/gapon/development/db_pdac_wise/scripts/load_object.bash --verbose >& /tmp/neowiser_yr1_00/log/qserv-db19_load_object.log&
            

            Hide
            gapon Igor Gaponenko added a comment -

            Finished loading, inspecting results

            Monitoring the progress of the operation with:

            % for n in `seq --format "qserv-db%02.f" 1 30`; do ssh -n $n /bin/sudo -u qserv du -hs /qserv/data/mysql/neowiser_yr1_00; done
            1.1T	/qserv/data/mysql/neowiser_yr1_00
            1.1T	/qserv/data/mysql/neowiser_yr1_00
            1.1T	/qserv/data/mysql/neowiser_yr1_00
            ...
            

            % for f in `ls /tmp/neowiser_yr1_00/log/qserv-db*_load_object.log`; do tail -1 $f; done
            [Thu Nov 23 01:47:56 CST 2017] ** Finished loading **
            [Thu Nov 23 01:07:00 CST 2017] ** Finished loading **
            [Thu Nov 23 00:50:32 CST 2017] ** Finished loading **
            [Thu Nov 23 00:43:23 CST 2017] ** Finished loading **
            [Thu Nov 23 00:53:26 CST 2017] ** Finished loading **
            [Thu Nov 23 01:19:23 CST 2017] ** Finished loading **
            [Thu Nov 23 00:56:39 CST 2017] ** Finished loading **
            [Thu Nov 23 00:24:51 CST 2017] ** Finished loading **
            [Thu Nov 23 00:17:58 CST 2017] ** Finished loading **
            [Thu Nov 23 00:05:41 CST 2017] ** Finished loading **
            [Thu Nov 23 00:04:26 CST 2017] ** Finished loading **
            [Thu Nov 23 00:03:57 CST 2017] ** Finished loading **
            [Thu Nov 23 00:05:15 CST 2017] ** Finished loading **
            [Thu Nov 23 00:21:10 CST 2017] ** Finished loading **
            [Fri Nov 24 03:15:46 CST 2017] ** Finished loading **
            [Thu Nov 23 00:43:10 CST 2017] ** Finished loading **
            [Thu Nov 23 00:58:04 CST 2017] ** Finished loading **
            [Thu Nov 23 00:40:30 CST 2017] ** Finished loading **
            [Fri Nov 24 03:20:22 CST 2017] ** Finished loading **
            [Thu Nov 23 00:39:16 CST 2017] ** Finished loading **
            [Fri Nov 24 09:22:58 CST 2017] ** Finished loading **
            [Thu Nov 23 01:17:13 CST 2017] ** Finished loading **
            [Thu Nov 23 01:26:53 CST 2017] ** Finished loading **
            [Thu Nov 23 01:03:35 CST 2017] ** Finished loading **
            [Thu Nov 23 00:43:19 CST 2017] ** Finished loading **
            [Thu Nov 23 00:41:55 CST 2017] ** Finished loading **
            [Thu Nov 23 00:42:04 CST 2017] ** Finished loading **
            [Thu Nov 23 00:47:22 CST 2017] ** Finished loading **
            [Thu Nov 23 01:08:40 CST 2017] ** Finished loading **
            [Thu Nov 23 01:52:08 CST 2017] ** Finished loading **
            

            % for f in `ls /tmp/neowiser_yr1_00/log/qserv-db*_load_object.log`; do tail -1 $f; done | wc -l
            30
            

            gapon Igor Gaponenko added a comment - - edited

            Setting up the empty chunk list and enabling the database

            NOTE: This is done even before building the secondary index, in order to see how many rows have been loaded so far and whether that number matches expectations.

            The empty chunk list was generated in the same way as for the other WISE catalogs. Installing it:

            % wc -l /datasets/gapon/wise/neowiser_yr1_p1bs_psd/partitioned_x16/empty_neowiser_yr1_00.txt
            853668 /datasets/gapon/wise/neowiser_yr1_p1bs_psd/partitioned_x16/empty_neowiser_yr1_00.txt
             
            % wc -l /datasets/gapon/wise/neowiser_yr1_p1bs_psd/partitioned_x16/all_chunks_unique_sorted.txt
            146332 /datasets/gapon/wise/neowiser_yr1_p1bs_psd/partitioned_x16/all_chunks_unique_sorted.txt
             
            % /bin/sudo -u qserv \
              cp /datasets/gapon/wise/neowiser_yr1_p1bs_psd/partitioned_x16/empty_neowiser_yr1_00.txt \
              /qserv/data/qserv
            
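            As a quick cross-check (a sketch; the two counts above sum to exactly 1,000,000, which is consistent with the empty list being the complement of the populated chunks within the chunk ID space of this partitioning, an assumption about the total chunk count):

```shell
# Sanity check (sketch): empty chunks + populated chunks should together
# cover the whole chunk ID space; here the counts sum to exactly 1,000,000.
empty=853668      # wc -l empty_neowiser_yr1_00.txt
populated=146332  # wc -l all_chunks_unique_sorted.txt
echo $((empty + populated))    # 1000000
```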

            Enabling the new database in Qserv:

            % /bin/sudo -u qserv \
              /datasets/gapon/development/db_pdac_wise/scripts/run_qserv_enable_database.bash \
              --verbose \
              >& run_qserv_enable_database.log&
            

            Restart Qserv and count the number of rows in the new catalog:

            SELECT COUNT(*) FROM neowiser_yr1_00.neowiser_yr1_p1bs_psd;
            +----------------+
            | SUM(QS1_COUNT) |
            +----------------+
            |    18468575586 |
            +----------------+
            1 row in set (8 min 56.44 sec)
            

            CONCLUSION: this number matches the one mentioned in the documentation for the catalog: http://irsa.ipac.caltech.edu/data/download/neowiser_year1/

            gapon Igor Gaponenko added a comment - - edited

            Generating the secondary index

            ATTENTION: since the NVMe-based file system /qserv won't have enough room to accommodate both the TSV files and the secondary index itself, the imported files are redirected into /backup (the SSD-based file system):

            % sudo mkdir  -p /backup/qserv/data/dumps/neowiser_yr1_00
            % sudo chown qserv:qserv /backup/qserv/data/dumps/neowiser_yr1_00
            % sudo ln -s /backup/qserv/data/dumps/neowiser_yr1_00 /qserv/data/dumps
            % ls -l /qserv/data/dumps/
            lrwxrwxrwx 1 qserv qserv 41 Nov 24 12:04 neowiser_yr1_00 -> /backup/qserv/data/dumps/neowiser_yr1_00/
             
            % df -h /qserv
            Filesystem      Size  Used Avail Use% Mounted on
            /dev/sdb1       2.7T  1.4T  1.3T  52% /qserv
             
            % df -h /backup
            Filesystem      Size  Used Avail Use% Mounted on
            /dev/sdc1       3.3T  832G  2.5T  25% /backup
            
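            The numbers above make the motivation concrete (a back-of-the-envelope sketch; the ~1.1T index size is the final qservMeta .ibd size reported further down in this ticket):

```shell
# TSV dumps plus the secondary-index table would overflow the 1.3T free on /qserv
tsv_gb=470       # measured size of the triplet TSV dumps
index_gb=1100    # ~1.1T final qservMeta .ibd file
avail_gb=1300    # "Avail" for /qserv in the df output above
echo "need $((tsv_gb + index_gb))G, have ${avail_gb}G free"    # need 1570G, have 1300G free
```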

            Harvesting triplets

            % nohup \
              /bin/sudo -u qserv \
                /datasets/gapon/development/db_pdac_wise/scripts/run_dump_triplet.bash \
                --verbose \
                >& run_dump_triplet.log&
            

            Monitoring/inspecting the log files and cross-checking the number of per-worker chunks against the number of success reports in the log files:

            % for n in `seq --format "qserv-db%02.f" 1 30`; do
                ssh -n $n 'wc -l /tmp/neowiser_yr1_00/chunks.txt; tail /tmp/neowiser_yr1_00/log/dump_triplet.* | grep Finished | wc -l'; done
            4878 /tmp/neowiser_yr1_00/chunks.txt
            4878
            4878 /tmp/neowiser_yr1_00/chunks.txt
            4878
            ...
            

            Pulling triplets from the worker nodes to master

            Pulling triplet files from all worker nodes into the master w/o loading them:

            % nohup \
              /bin/sudo -u qserv \
                /datasets/gapon/development/db_pdac_wise/scripts/load_secondary_index.bash \
                --verbose \
                --get-remote-triplets \
                --no-triplets-loading \
                >& pull_triplets.log&
            

            Inspecting/monitoring results to see if the number of TSV files matches expectations:

            % wc -l /datasets/gapon/wise/neowiser_yr1_p1bs_psd/partitioned_x16/all_chunks_unique_sorted.txt
            % /bin/sudo -u qserv ls  /qserv/data/dumps/neowiser_yr1_00/ | grep tsv | wc -l
            146332
            % /bin/sudo -u qserv du -hs /qserv/data/dumps/neowiser_yr1_00/
            470G	/qserv/data/dumps/neowiser_yr1_00/
            

            Pre-sorting and merging TSV files

            % /bin/sudo -u qserv -i
            % cd /backup/qserv/data/dumps/
            % mkdir neowiser_yr1_00_tsv64
            % ls neowiser_yr1_00 | grep tsv > neowiser_yr1_00_tsv64.txt
            % wc -l neowiser_yr1_00_tsv64.txt 
            146332 neowiser_yr1_00_tsv64.txt
            % cd neowiser_yr1_00_tsv64
            % split -d -l 1674 ../neowiser_yr1_00_tsv64.txt ''
            % ls -1 | wc -l
            88
            % wc -l * | tail -1
             146332 total
            
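            The choice of 1674 lines per list file is what produces the 88 groups (a sketch of the arithmetic):

```shell
# ceil(146332 / 1674) = 88 list files, each naming at most 1674 TSV files
files=146332
per_group=1674
echo $(( (files + per_group - 1) / per_group ))    # 88
```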

            Sort in 88 streams and store the sorted files in a separate folder:

            % mkdir neowiser_yr1_00_sorted
            % for c in `ls neowiser_yr1_00_tsv64`; do
                for f in `cat neowiser_yr1_00_tsv64/${c}`; do
                    sort neowiser_yr1_00/${f} > neowiser_yr1_00_sorted/${f}
                done&
            done
            
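            One caveat worth noting for this pattern (an assumption on my part; the loading scripts may already handle it): the per-file sort and the later `sort -m` merge must run under the same collation, otherwise the merge silently produces mis-ordered output. Pinning the C locale is the usual safeguard:

```shell
# Hypothetical safeguard: byte-wise ordering is stable only if both the
# per-file sort and the later merge run under the same locale.
export LC_ALL=C
printf 'B\na\n' | sort | head -1    # prints "B": 0x42 < 0x61 in the C locale
```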

            Analyzing results after all 88 jobs finished:

            % date
            % ls -1 neowiser_yr1_00_sorted | wc -l
            146332
            % du -hs neowiser_yr1_00_sorted
            470G	neowiser_yr1_00_sorted
            

            Parallel merge of 88 streams:

            % cd neowiser_yr1_00_sorted
            % mkdir ../neowiser_yr1_00_sorted_merged
            % for f in `ls ../neowiser_yr1_00_tsv64`; do
                mkdir -p ../neowiser_yr1_00_tmp_${f}
                cat ../neowiser_yr1_00_tsv64/${f} | tr '\n' '\0' | sort -m -s -k1 -T ../neowiser_yr1_00_tmp_${f} -o ../neowiser_yr1_00_sorted_merged/${f}.tsv --files0-from=- &
            done
            
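            The `tr '\n' '\0'` pipe converts the file list into NUL-delimited names for `sort --files0-from=-`, which avoids putting 1674 file names on each command line. A minimal self-contained demo of the same merge (GNU sort; hypothetical file names):

```shell
cd "$(mktemp -d)"
printf 'a\nc\n' > part0.tsv
printf 'b\nd\n' > part1.tsv
# merge two already-sorted files via a NUL-delimited list fed on stdin
printf 'part0.tsv\npart1.tsv\n' | tr '\n' '\0' | sort -m --files0-from=- -o merged.tsv
cat merged.tsv    # a, b, c, d - one per line
```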

            Analyzing the status after it finished:

            % ls -1 neowiser_yr1_00_sorted_merged/ | wc -l
            88
            % du -hs neowiser_yr1_00_sorted_merged/
            470G	neowiser_yr1_00_sorted_merged/
            

            All 88 files have reasonable content in their last lines.

            Final merge of the 88 stream files into one large file:

            % mkdir neowiser_yr1_00_tmp
            % nohup sort -m -s -k1 -T neowiser_yr1_00_tmp/  -o neowiser_yr1_00_sorted_merged.tsv neowiser_yr1_00_sorted_merged/* >& final_sort_merge.log&
            

            Analyzing the status after it finished:

            % ls -alh neowiser_yr1_00_sorted_merged.tsv
            -rw-rw-r-- 1 qserv qserv 470G Nov 26 08:49 neowiser_yr1_00_sorted_merged.tsv
            % wc -l neowiser_yr1_00_sorted_merged.tsv
            18468575586 neowiser_yr1_00_sorted_merged.tsv
            

            Splitting the large file into smaller chunks:

            % mkdir neowiser_yr1_00_sorted_merged_split
            % cd neowiser_yr1_00_sorted_merged_split
            % split -d -l 288571494 ../neowiser_yr1_00_sorted_merged.tsv ''
            
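            The split size comes from dividing the total row count into 64 roughly equal files (a sketch of the arithmetic):

```shell
# ceil(18468575586 / 64) = 288571494 lines per file (the last file is shorter)
rows=18468575586
target_files=64
echo $(( (rows + target_files - 1) / target_files ))    # 288571494
```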

            Analyzing the status after it finished:

            % ls -1 neowiser_yr1_00_sorted_merged_split | wc -l 
            % du -hs neowiser_yr1_00_sorted_merged_split
            % wc -l neowiser_yr1_00_sorted_merged_split/* | tail -1
            

            Total number of files: 64
            Average file size: 7.4 GB
            Lines per file: 289M

            Renaming the folders and the newly created 64 TSV files to allow the previously developed tool to work:

            % cd neowiser_yr1_00_sorted_merged_split
            % for f in `ls`; do mv ${f} ${f}.tsv; done
            

            This link is needed because the loader will be looking for the files at a different location than where they actually reside:

            % rm /qserv/data/dumps/neowiser_yr1_00
            % ln -s /backup/qserv/data/dumps/neowiser_yr1_00_sorted_merged_split /qserv/data/dumps/neowiser_yr1_00
            

            Loading:

            % cd /home/gapon
            % nohup /bin/sudo -u qserv \
            /datasets/gapon/development/db_pdac_wise/scripts/load_secondary_index.bash \
            --verbose \
            >& load_secondary_index_load_only.log&
            

            When finished:

            % /bin/sudo -u qserv ls -alh /qserv/data/mysql/qservMeta | grep neowiser_yr1_00
            -rw-rw----  1 qserv qserv 1014 Nov 25 01:00 neowiser_yr1_00__neowiser_yr1_p1bs_psd.frm
            -rw-rw----  1 qserv qserv 1.1T Nov 27 07:44 neowiser_yr1_00__neowiser_yr1_p1bs_psd.ibd
            

            The log files are unremarkable.
            The row count matches the number of objects in tables:

            SELECT COUNT(*) FROM qservMeta.neowiser_yr1_00__neowiser_yr1_p1bs_psd;
            18468575586
            

            gapon Igor Gaponenko added a comment -

            Storage resources utilization in the cluster

            Master

            Primary data file system of the master service (lsst-qserv-master01).

            % df -h /qserv
            Filesystem      Size  Used Avail Use% Mounted on
            /dev/sdb1       2.7T  2.4T  248G  91% /qserv
            

            It's used for storing intermediate results of queries and the secondary index tables (the main consumer of the storage):

            % du -hs /qserv/data/mysql/qservMeta/* | grep ibd
            1.1T	/qserv/data/mysql/qservMeta/neowiser_yr1_00__neowiser_yr1_p1bs_psd.ibd
            17G	/qserv/data/mysql/qservMeta/sdss_stripe82_00__RunDeepSource.ibd
            17G	/qserv/data/mysql/qservMeta/sdss_stripe82_01__RunDeepSource.ibd
            70G	/qserv/data/mysql/qservMeta/wise_00__allwise_cat.ibd
            416G	/qserv/data/mysql/qservMeta/wise_2band_00__allsky_2band_p1bs_psd.ibd
            210G	/qserv/data/mysql/qservMeta/wise_3band_00__allsky_3band_p1bs_psd.ibd
            537G	/qserv/data/mysql/qservMeta/wise_4band_00__allsky_4band_p1bs_psd.ibd
            40G	/qserv/data/mysql/qservMeta/wise_ext_00__allwise_rej.ibd
            

            Worker nodes

            Storage utilization across all databases:

            % for n in `seq --format "qserv-db%02.f" 1 30`; do
              ssh -n $n 'echo `hostname`": "`df -h /qserv/ | grep qserv`'
            done
            lsst-qserv-db01: /dev/sdb1 15T 3.7T 11T 26% /qserv
            lsst-qserv-db02: /dev/sdb1 15T 3.7T 11T 26% /qserv
            lsst-qserv-db03: /dev/sdb1 15T 3.7T 11T 26% /qserv
            lsst-qserv-db04: /dev/sdb1 15T 3.7T 11T 26% /qserv
            lsst-qserv-db05: /dev/sdb1 15T 3.7T 11T 26% /qserv
            lsst-qserv-db06: /dev/sdb1 15T 3.8T 11T 26% /qserv
            lsst-qserv-db07: /dev/sdb1 15T 3.7T 11T 26% /qserv
            lsst-qserv-db08: /dev/sdb1 15T 3.7T 11T 26% /qserv
            lsst-qserv-db09: /dev/sdb1 15T 3.7T 11T 25% /qserv
            lsst-qserv-db10: /dev/sdb1 15T 3.7T 11T 25% /qserv
            lsst-qserv-db11: /dev/sdb1 15T 3.7T 11T 25% /qserv
            lsst-qserv-db12: /dev/sdb1 15T 3.6T 11T 25% /qserv
            lsst-qserv-db13: /dev/sdb1 15T 3.7T 11T 26% /qserv
            lsst-qserv-db14: /dev/sdb1 15T 3.7T 11T 26% /qserv
            lsst-qserv-db15: /dev/sdb1 15T 3.7T 11T 26% /qserv
            lsst-qserv-db16: /dev/sdb1 15T 3.7T 11T 26% /qserv
            lsst-qserv-db17: /dev/sdb1 15T 3.8T 11T 26% /qserv
            lsst-qserv-db18: /dev/sdb1 15T 3.8T 11T 26% /qserv
            lsst-qserv-db19: /dev/sdb1 15T 3.8T 11T 26% /qserv
            lsst-qserv-db20: /dev/sdb1 15T 3.8T 11T 26% /qserv
            lsst-qserv-db21: /dev/sdb1 15T 3.8T 11T 26% /qserv
            lsst-qserv-db22: /dev/sdb1 15T 3.8T 11T 27% /qserv
            lsst-qserv-db23: /dev/sdb1 15T 3.8T 11T 26% /qserv
            lsst-qserv-db24: /dev/sdb1 15T 3.8T 11T 26% /qserv
            lsst-qserv-db25: /dev/sdb1 15T 3.7T 11T 26% /qserv
            lsst-qserv-db26: /dev/sdb1 15T 3.8T 11T 26% /qserv
            lsst-qserv-db27: /dev/sdb1 15T 3.7T 11T 26% /qserv
            lsst-qserv-db28: /dev/sdb1 15T 3.8T 11T 26% /qserv
            lsst-qserv-db29: /dev/sdb1 15T 3.8T 11T 26% /qserv
            lsst-qserv-db30: /dev/sdb1 15T 3.9T 11T 27% /qserv
            

            neowiser_yr1_00

            % for n in `seq --format "qserv-db%02.f" 1 30`; do
              ssh -n $n 'echo `hostname`": "`/bin/sudo -u qserv du -hs /qserv/data/mysql/neowiser_yr1_00 | grep qserv`'
            done
            lsst-qserv-db01: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db02: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db03: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db04: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db05: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db06: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db07: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db08: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db09: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db10: 1021G /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db11: 1018G /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db12: 1020G /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db13: 1022G /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db14: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db15: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db16: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db17: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db18: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db19: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db20: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db21: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db22: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db23: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db24: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db25: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db26: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db27: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db28: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db29: 1.1T /qserv/data/mysql/neowiser_yr1_00
            lsst-qserv-db30: 1.1T /qserv/data/mysql/neowiser_yr1_00
            

            Total: 33 TB

            sdss_stripe82_00

            % for n in `seq --format "qserv-db%02.f" 1 30`; do
              ssh -n $n 'echo `hostname`": "`/bin/sudo -u qserv du -hs /qserv/data/mysql/sdss_stripe82_00 | grep qserv`'
            done
            lsst-qserv-db01: 325G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db02: 334G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db03: 330G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db04: 339G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db05: 343G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db06: 341G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db07: 345G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db08: 335G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db09: 336G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db10: 352G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db11: 323G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db12: 330G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db13: 341G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db14: 329G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db15: 333G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db16: 317G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db17: 334G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db18: 346G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db19: 333G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db20: 347G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db21: 346G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db22: 342G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db23: 342G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db24: 332G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db25: 332G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db26: 328G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db27: 329G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db28: 344G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db29: 341G /qserv/data/mysql/sdss_stripe82_00
            lsst-qserv-db30: 333G /qserv/data/mysql/sdss_stripe82_00
            

            Total: 10 TB

            sdss_stripe82_01

            % for n in `seq --format "qserv-db%02.f" 1 30`; do
              ssh -n $n 'echo `hostname`": "`/bin/sudo -u qserv du -hs /qserv/data/mysql/sdss_stripe82_01 | grep qserv`'
            done
            lsst-qserv-db01: 325G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db02: 334G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db03: 331G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db04: 339G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db05: 344G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db06: 341G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db07: 346G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db08: 335G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db09: 336G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db10: 352G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db11: 323G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db12: 331G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db13: 342G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db14: 330G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db15: 334G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db16: 318G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db17: 335G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db18: 347G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db19: 334G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db20: 350G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db21: 347G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db22: 342G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db23: 343G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db24: 332G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db25: 332G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db26: 329G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db27: 330G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db28: 345G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db29: 341G /qserv/data/mysql/sdss_stripe82_01
            lsst-qserv-db30: 334G /qserv/data/mysql/sdss_stripe82_01
            

            Total: 10 TB

            wise_00

            % for n in `seq --format "qserv-db%02.f" 1 30`; do
              ssh -n $n 'echo `hostname`": "`/bin/sudo -u qserv du -hs /qserv/data/mysql/wise_00 | grep qserv`'
            done
            lsst-qserv-db01: 425G /qserv/data/mysql/wise_00
            lsst-qserv-db02: 456G /qserv/data/mysql/wise_00
            lsst-qserv-db03: 456G /qserv/data/mysql/wise_00
            lsst-qserv-db04: 455G /qserv/data/mysql/wise_00
            lsst-qserv-db05: 454G /qserv/data/mysql/wise_00
            lsst-qserv-db06: 453G /qserv/data/mysql/wise_00
            lsst-qserv-db07: 453G /qserv/data/mysql/wise_00
            lsst-qserv-db08: 453G /qserv/data/mysql/wise_00
            lsst-qserv-db09: 452G /qserv/data/mysql/wise_00
            lsst-qserv-db10: 434G /qserv/data/mysql/wise_00
            lsst-qserv-db11: 423G /qserv/data/mysql/wise_00
            lsst-qserv-db12: 453G /qserv/data/mysql/wise_00
            lsst-qserv-db13: 453G /qserv/data/mysql/wise_00
            lsst-qserv-db14: 454G /qserv/data/mysql/wise_00
            lsst-qserv-db15: 453G /qserv/data/mysql/wise_00
            lsst-qserv-db16: 454G /qserv/data/mysql/wise_00
            lsst-qserv-db17: 455G /qserv/data/mysql/wise_00
            lsst-qserv-db18: 455G /qserv/data/mysql/wise_00
            lsst-qserv-db19: 455G /qserv/data/mysql/wise_00
            lsst-qserv-db20: 455G /qserv/data/mysql/wise_00
            lsst-qserv-db21: 421G /qserv/data/mysql/wise_00
            lsst-qserv-db22: 456G /qserv/data/mysql/wise_00
            lsst-qserv-db23: 416G /qserv/data/mysql/wise_00
            lsst-qserv-db24: 454G /qserv/data/mysql/wise_00
            lsst-qserv-db25: 455G /qserv/data/mysql/wise_00
            lsst-qserv-db26: 456G /qserv/data/mysql/wise_00
            lsst-qserv-db27: 419G /qserv/data/mysql/wise_00
            lsst-qserv-db28: 456G /qserv/data/mysql/wise_00
            lsst-qserv-db29: 455G /qserv/data/mysql/wise_00
            lsst-qserv-db30: 456G /qserv/data/mysql/wise_00
            

            Total: 13 TB

            wise_2band_00

            % for n in `seq --format "qserv-db%02.f" 1 30`; do
              ssh -n $n 'echo `hostname`": "`/bin/sudo -u qserv du -hs /qserv/data/mysql/wise_2band_00 | grep qserv`'
            done
            lsst-qserv-db01: 419G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db02: 412G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db03: 408G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db04: 405G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db05: 403G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db06: 402G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db07: 400G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db08: 398G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db09: 394G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db10: 392G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db11: 393G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db12: 394G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db13: 394G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db14: 394G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db15: 394G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db16: 400G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db17: 403G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db18: 399G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db19: 398G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db20: 399G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db21: 401G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db22: 406G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db23: 408G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db24: 404G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db25: 400G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db26: 399G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db27: 400G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db28: 404G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db29: 410G /qserv/data/mysql/wise_2band_00
            lsst-qserv-db30: 418G /qserv/data/mysql/wise_2band_00
            

            Total: 12 TB

            wise_3band_00

            % for n in `seq --format "qserv-db%02.f" 1 30`; do
              ssh -n $n 'echo `hostname`": "`/bin/sudo -u qserv du -hs /qserv/data/mysql/wise_3band_00 | grep qserv`'
            done
            lsst-qserv-db01: 258G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db02: 251G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db03: 251G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db04: 254G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db05: 260G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db06: 268G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db07: 261G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db08: 254G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db09: 250G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db10: 249G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db11: 248G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db12: 248G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db13: 247G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db14: 249G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db15: 252G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db16: 256G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db17: 263G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db18: 263G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db19: 258G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db20: 252G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db21: 252G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db22: 263G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db23: 267G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db24: 259G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db25: 259G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db26: 258G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db27: 257G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db28: 257G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db29: 259G /qserv/data/mysql/wise_3band_00
            lsst-qserv-db30: 263G /qserv/data/mysql/wise_3band_00
            

            Total: 7.5 TB

            wise_4band_00

            % for n in `seq --format "qserv-db%02.f" 1 30`; do
              ssh -n $n 'echo `hostname`": "`/bin/sudo -u qserv du -hs /qserv/data/mysql/wise_4band_00 | grep qserv`'
            done
            lsst-qserv-db01: 818G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db02: 804G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db03: 797G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db04: 791G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db05: 791G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db06: 798G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db07: 792G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db08: 787G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db09: 780G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db10: 778G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db11: 776G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db12: 778G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db13: 782G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db14: 783G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db15: 782G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db16: 786G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db17: 794G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db18: 795G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db19: 797G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db20: 798G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db21: 802G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db22: 816G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db23: 811G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db24: 799G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db25: 795G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db26: 794G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db27: 794G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db28: 798G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db29: 806G /qserv/data/mysql/wise_4band_00
            lsst-qserv-db30: 822G /qserv/data/mysql/wise_4band_00
            

            Total: 24 TB

            wise_ext_00 (rejected objects)

            % for n in `seq --format "qserv-db%02.f" 1 30`; do
              ssh -n $n 'echo `hostname`": "`/bin/sudo -u qserv du -hs /qserv/data/mysql/wise_ext_00 | grep qserv`'
            done
            lsst-qserv-db01: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db02: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db03: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db04: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db05: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db06: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db07: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db08: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db09: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db10: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db11: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db12: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db13: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db14: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db15: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db16: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db17: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db18: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db19: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db20: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db21: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db22: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db23: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db24: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db25: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db26: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db27: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db28: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db29: 53G /qserv/data/mysql/wise_ext_00
            lsst-qserv-db30: 53G /qserv/data/mysql/wise_ext_00
            

            Total: 1.5 TB
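The per-database totals above were tallied by hand from the `du` output. A small script can sum such listings directly; `total_tb` is a hypothetical helper, and it assumes the exact "host: SIZE path" format shown above with binary units (`du -h`: 1G = 2**30 bytes), so the result is in TiB, consistent with the rounded totals.

```python
import re

def total_tb(listing: str) -> float:
    """Sum human-readable du sizes (e.g. "335G", "1.1T") and return TiB."""
    factor = {"G": 2 ** 30, "T": 2 ** 40}
    total = 0.0
    # Match a number followed immediately by a G or T unit suffix.
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)([GT])\b", listing):
        total += float(value) * factor[unit]
    return total / 2 ** 40

sample = (
    "lsst-qserv-db01: 1.1T /qserv/data/mysql/neowiser_yr1_00\n"
    "lsst-qserv-db02: 1021G /qserv/data/mysql/neowiser_yr1_00\n"
)
print(f"{total_tb(sample):.2f} TiB")  # ~2.10 TiB
```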


              People

              Assignee:
              gapon Igor Gaponenko
              Reporter:
              gapon Igor Gaponenko
              Watchers:
              Fritz Mueller, Gregory Dubois-Felsmann, Igor Gaponenko
