Loading catalogs into PDAC database sdss_stripe82_01
Using scripts and configurations from:
/datasets/gapon/development/db_pdac_stripe82
All operations are performed from the master node lsst-qserv-master01 of the PDAC cluster.
Set up CSS
This stage requires running two scripts:
% /bin/sudo -u qserv /datasets/gapon/development/db_pdac_stripe82/scripts/setup_css.bash >& setup_css.log

% /bin/sudo -u qserv /datasets/gapon/development/db_pdac_stripe82/scripts/setup_css_nonpart.bash >& setup_css_nonpart.log
Verifying results:
% source /datasets/gapon/stack/loadLSST.bash
% setup -t qserv-dev qserv_distrib; setup -j -r /datasets/gapon/development/qserv
% qserv-admin.py "DUMP EVERYTHING;" | grep sdss_stripe82_01
/DBS/sdss_stripe82_01 READY
/DBS/sdss_stripe82_01/.packed.json {"partitioningId":"0000000026","releaseStatus":"RELEASED","storageClass":"L2"}
/DBS/sdss_stripe82_01/TABLES \N
/DBS/sdss_stripe82_01/TABLES/DeepCoadd READY
/DBS/sdss_stripe82_01/TABLES/DeepCoadd/schema (
/DBS/sdss_stripe82_01/TABLES/DeepCoadd_Metadata READY
/DBS/sdss_stripe82_01/TABLES/DeepCoadd_Metadata/schema (
/DBS/sdss_stripe82_01/TABLES/DeepCoadd_To_Htm10 READY
/DBS/sdss_stripe82_01/TABLES/DeepCoadd_To_Htm10/schema (
/DBS/sdss_stripe82_01/TABLES/Filter READY
/DBS/sdss_stripe82_01/TABLES/Filter/schema (
/DBS/sdss_stripe82_01/TABLES/LeapSeconds READY
/DBS/sdss_stripe82_01/TABLES/LeapSeconds/schema (
/DBS/sdss_stripe82_01/TABLES/RunDeepForcedSource READY
/DBS/sdss_stripe82_01/TABLES/RunDeepForcedSource/partitioning \N
/DBS/sdss_stripe82_01/TABLES/RunDeepForcedSource/partitioning/.packed.json {"dirColName":"objectId","dirDb":"sdss_stripe82_01","dirTable":"RunDeepSource","latColName":"coord_decl","lonColName":"coord_ra","subChunks":"0"}
/DBS/sdss_stripe82_01/TABLES/RunDeepForcedSource/schema (
/DBS/sdss_stripe82_01/TABLES/RunDeepForcedSource/sharedScan \N
/DBS/sdss_stripe82_01/TABLES/RunDeepForcedSource/sharedScan/.packed.json {"lockInMem":"1"}
/DBS/sdss_stripe82_01/TABLES/RunDeepSource READY
/DBS/sdss_stripe82_01/TABLES/RunDeepSource/partitioning \N
/DBS/sdss_stripe82_01/TABLES/RunDeepSource/partitioning/.packed.json {"dirColName":"id","dirDb":"sdss_stripe82_01","dirTable":"RunDeepSource","latColName":"coord_decl","lonColName":"coord_ra","subChunks":"1"}
/DBS/sdss_stripe82_01/TABLES/RunDeepSource/schema (
/DBS/sdss_stripe82_01/TABLES/RunDeepSource/sharedScan \N
/DBS/sdss_stripe82_01/TABLES/RunDeepSource/sharedScan/.packed.json {"lockInMem":"1"}
/DBS/sdss_stripe82_01/TABLES/Science_Ccd_Exposure READY
/DBS/sdss_stripe82_01/TABLES/Science_Ccd_Exposure/schema (
/DBS/sdss_stripe82_01/TABLES/Science_Ccd_Exposure_Metadata READY
/DBS/sdss_stripe82_01/TABLES/Science_Ccd_Exposure_Metadata/schema (
/DBS/sdss_stripe82_01/TABLES/Science_Ccd_Exposure_NoFile READY
/DBS/sdss_stripe82_01/TABLES/Science_Ccd_Exposure_NoFile/schema (
/DBS/sdss_stripe82_01/TABLES/Science_Ccd_Exposure_To_Htm10 READY
/DBS/sdss_stripe82_01/TABLES/Science_Ccd_Exposure_To_Htm10/schema (
/DBS/sdss_stripe82_01/TABLES/ZZZ_Db_Description READY
/DBS/sdss_stripe82_01/TABLES/ZZZ_Db_Description/schema (
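The visual check of the dump can also be scripted. The following sketch (with a small sample standing in for the real dump, and the file name `/tmp/css_dump.txt` as an assumption) counts table entries that are not in the READY state:

```shell
# Sketch: confirm every table registered in CSS for the database is READY.
# Assumes the output of qserv-admin.py "DUMP EVERYTHING;" was saved to a file;
# the heredoc below is a stand-in sample for the real dump.
cat > /tmp/css_dump.txt <<'EOF'
/DBS/sdss_stripe82_01/TABLES/DeepCoadd READY
/DBS/sdss_stripe82_01/TABLES/Filter READY
/DBS/sdss_stripe82_01/TABLES/RunDeepSource READY
EOF

# Match only top-level table keys (no trailing /schema etc.) whose status
# is not READY; expect an empty result.
NOT_READY=$(awk '$1 ~ /^\/DBS\/sdss_stripe82_01\/TABLES\/[^\/]+$/ && $2 != "READY"' /tmp/css_dump.txt | wc -l)
echo "tables not READY: ${NOT_READY}"
```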
Create the database on all nodes of the cluster
Script:
% /bin/sudo -u qserv /datasets/gapon/development/db_pdac_stripe82/scripts/run_create_database.bash \
    --delete \
    --verbose \
    >& run_create_database.log&
NOTE: the --delete option deletes the database (and all relevant data) if it already exists.
Inspected log files (on the master node):
% ls -al /tmp/sdss_stripe82_01/log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db01_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db02_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db03_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db04_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db05_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db06_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db07_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db08_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db09_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db10_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db11_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db12_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db13_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db14_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db15_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db16_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db17_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db18_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db19_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db20_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db21_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db22_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db23_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db24_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db25_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db26_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db27_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db28_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db29_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-db30_create_database.log
-rw-r--r-- 1 qserv qserv 133 May 18 20:37 qserv-master01_create_database.log
CONCLUSION: all looks normal. Each log file reports:
deleting database: sdss_stripe82_01
creating database: sdss_stripe82_01
configuring access privileges for database: sdss_stripe82_01
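Checking each log by eye works, but with 31 nodes it is easy to miss one. A scripted version of the same check might look like this (the directory `/tmp/sdss_stripe82_01_check` and the sample log are stand-ins for illustration):

```shell
# Sketch: verify that every per-node create_database log reports all three
# expected steps shown above. A sample log stands in for a real one.
mkdir -p /tmp/sdss_stripe82_01_check
cat > /tmp/sdss_stripe82_01_check/qserv-db01_create_database.log <<'EOF'
deleting database: sdss_stripe82_01
creating database: sdss_stripe82_01
configuring access privileges for database: sdss_stripe82_01
EOF

BAD=0
for f in /tmp/sdss_stripe82_01_check/*_create_database.log; do
    for step in 'deleting database' 'creating database' 'configuring access privileges'; do
        # Flag any log that is missing one of the expected steps.
        grep -q "$step" "$f" || { echo "missing '$step' in $f"; BAD=1; }
    done
done
echo "bad logs: ${BAD}"
```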
Configuring Docker containers
IMPORTANT: this step ensures that the containers have sufficient temporary space to process the multi-GB TSV files loaded into the databases via the wmgr Web service.
Check the version of the container running on the nodes; it will be needed to configure the container management scripts before restarting the containers with the desired configuration parameters later in this procedure:
% docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9f684092314d qserv/qserv:tickets_DM-10283_master "/bin/sh -c /qserv/sc" 3 weeks ago Up 3 weeks qserv
Check which mount points are configured:
% docker inspect qserv
..
"Mounts": [
    {
        "Source": "/qserv/data",
        "Destination": "/qserv/data",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    },
    {
        "Source": "/qserv/log",
        "Destination": "/qserv/run/var/log",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    }
],
..
PROBLEM FOUND: the mount point for /qserv/run/tmp is missing. It needs to be configured as explained below.
First, edit the environment setup file to ensure the following parameters are set:
% cat /datasets/gapon/development/qserv/admin/tools/docker/deployment/ncsa/env.sh
..
BRANCH=tickets_DM-10283

# Data directory location on docker host, optional
HOST_DATA_DIR=/qserv/data

# Log directory location on docker host, optional
HOST_LOG_DIR=/qserv/log

# Temporary directory location on docker host, optional
HOST_TMP_DIR=/qserv/tmp
Then restart the container with these parameters:
% cd /datasets/gapon/development/qserv/admin/tools/docker/deployment/ncsa/
% ./stop.sh
% ./run.sh
Inspect the container status:
% docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
511a591e000d qserv/qserv:tickets_DM-10283_master "/bin/sh -c /qserv/sc" 25 seconds ago Up 22 seconds qserv

% docker inspect qserv
..
"Mounts": [
    {
        "Source": "/qserv/data",
        "Destination": "/qserv/data",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    },
    {
        "Source": "/qserv/log",
        "Destination": "/qserv/run/var/log",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    },
    {
        "Source": "/qserv/tmp",
        "Destination": "/qserv/run/tmp",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    }
],
..
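The presence of the new mount can also be checked mechanically rather than by reading the full inspect output. A sketch (a saved JSON fragment stands in for the live `docker inspect qserv` output):

```shell
# Sketch: confirm the /qserv/run/tmp mount is now present. In production this
# would parse the output of "docker inspect qserv"; here a saved fragment
# stands in for it.
cat > /tmp/qserv_mounts.json <<'EOF'
[{"Mounts": [
  {"Source": "/qserv/data", "Destination": "/qserv/data"},
  {"Source": "/qserv/log", "Destination": "/qserv/run/var/log"},
  {"Source": "/qserv/tmp", "Destination": "/qserv/run/tmp"}
]}]
EOF

if grep -q '"Destination": "/qserv/run/tmp"' /tmp/qserv_mounts.json; then
    TMP_MOUNTED=yes
else
    TMP_MOUNTED=no
fi
echo "tmp mount present: ${TMP_MOUNTED}"
```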
Bump the number of MySQL connections
This increases the max_connections limit on all MySQL/MariaDB servers beyond the default of 512, which may be too low when loading many chunks in parallel.
Test the number of connections and the current limits on each node:
% /bin/sudo -u qserv /datasets/gapon/development/db_pdac_stripe82/scripts/run_show_processlist.bash --total
lsst-qserv-master01: 8 / 512
lsst-qserv-db01: 2 / 512
lsst-qserv-db02: 2 / 512
lsst-qserv-db03: 2 / 512
lsst-qserv-db04: 2 / 512
lsst-qserv-db05: 2 / 512
lsst-qserv-db06: 2 / 512
lsst-qserv-db07: 2 / 512
lsst-qserv-db08: 2 / 512
lsst-qserv-db09: 2 / 512
lsst-qserv-db10: 2 / 512
lsst-qserv-db11: 2 / 512
lsst-qserv-db12: 2 / 512
lsst-qserv-db13: 2 / 512
lsst-qserv-db14: 2 / 512
lsst-qserv-db15: 2 / 512
lsst-qserv-db16: 2 / 512
lsst-qserv-db17: 2 / 512
lsst-qserv-db18: 2 / 512
lsst-qserv-db19: 2 / 512
lsst-qserv-db20: 2 / 512
lsst-qserv-db21: 2 / 512
lsst-qserv-db22: 2 / 512
lsst-qserv-db23: 2 / 512
lsst-qserv-db24: 2 / 512
lsst-qserv-db25: 2 / 512
lsst-qserv-db26: 2 / 512
lsst-qserv-db27: 2 / 512
lsst-qserv-db28: 2 / 512
lsst-qserv-db29: 2 / 512
lsst-qserv-db30: 2 / 512
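The "used / limit" listing above lends itself to an automated check. A sketch that flags any node approaching its connection limit (the sample input and the 90% threshold are illustration choices, not part of the original procedure):

```shell
# Sketch: flag any node whose connection count is close to its limit.
# Input format matches the run_show_processlist.bash --total output above;
# the 90% threshold is arbitrary.
cat > /tmp/connections.txt <<'EOF'
lsst-qserv-master01: 8 / 512
lsst-qserv-db01: 2 / 512
lsst-qserv-db02: 490 / 512
EOF

# Split each line on ':', '/' and spaces: $1=host, $2=used, $3=limit.
NEAR_LIMIT=$(awk -F'[:/ ]+' '$2 > 0.9 * $3 { print $1 }' /tmp/connections.txt)
echo "nodes near the connection limit: ${NEAR_LIMIT:-none}"
```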
Bump the limit:
% /bin/sudo -u qserv /datasets/gapon/development/db_pdac_stripe82/scripts/run_set_max_connections.bash
Load partitioned table RunDeepSource
Script:
% nohup /bin/sudo -u qserv /datasets/gapon/development/db_pdac_stripe82/scripts/run_load_object.bash --verbose >& run_load_object.log&
Inspecting log files after the completion of the operation to make sure all went well (the log files are located on the master node):
% tail /tmp/sdss_stripe82_01/log/qserv-db*_load_object.log | grep 'Finished'
[Thu May 18 23:46:04 CDT 2017] ** Finished loading **
[Thu May 18 23:45:56 CDT 2017] ** Finished loading **
[Thu May 18 23:45:39 CDT 2017] ** Finished loading **
[Thu May 18 23:45:59 CDT 2017] ** Finished loading **
[Thu May 18 23:46:22 CDT 2017] ** Finished loading **
[Thu May 18 23:46:25 CDT 2017] ** Finished loading **
[Thu May 18 23:45:53 CDT 2017] ** Finished loading **
[Thu May 18 23:47:14 CDT 2017] ** Finished loading **
[Thu May 18 23:45:40 CDT 2017] ** Finished loading **
[Thu May 18 23:46:24 CDT 2017] ** Finished loading **
[Thu May 18 23:45:37 CDT 2017] ** Finished loading **
[Thu May 18 23:45:54 CDT 2017] ** Finished loading **
[Thu May 18 23:45:48 CDT 2017] ** Finished loading **
[Thu May 18 23:45:25 CDT 2017] ** Finished loading **
[Thu May 18 23:45:44 CDT 2017] ** Finished loading **
[Thu May 18 23:45:24 CDT 2017] ** Finished loading **
[Thu May 18 23:48:56 CDT 2017] ** Finished loading **
[Thu May 18 23:46:21 CDT 2017] ** Finished loading **
[Thu May 18 23:46:26 CDT 2017] ** Finished loading **
[Thu May 18 23:46:48 CDT 2017] ** Finished loading **
[Thu May 18 23:46:50 CDT 2017] ** Finished loading **
[Thu May 18 23:46:17 CDT 2017] ** Finished loading **
[Thu May 18 23:47:27 CDT 2017] ** Finished loading **
[Thu May 18 23:46:11 CDT 2017] ** Finished loading **
[Thu May 18 23:45:39 CDT 2017] ** Finished loading **
[Thu May 18 23:46:04 CDT 2017] ** Finished loading **
[Thu May 18 23:46:19 CDT 2017] ** Finished loading **
[Thu May 18 23:46:04 CDT 2017] ** Finished loading **
[Thu May 18 23:46:23 CDT 2017] ** Finished loading **
[Thu May 18 23:46:00 CDT 2017] ** Finished loading **
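Rather than counting the "Finished loading" lines by hand, the check can be reduced to comparing the number of logs that contain the marker against the number of workers. A sketch (sample logs under `/tmp/load_check` stand in for the real per-worker logs; the real run would expect 30):

```shell
# Sketch: confirm that each worker log reports "Finished loading".
# Sample logs stand in for /tmp/sdss_stripe82_01/log/qserv-db*_load_object.log.
mkdir -p /tmp/load_check
for i in 01 02 03; do
    echo '[Thu May 18 23:46:04 CDT 2017] ** Finished loading **' \
        > /tmp/load_check/qserv-db${i}_load_object.log
done

EXPECTED=3   # would be 30 on the real cluster
FINISHED=$(grep -l 'Finished loading' /tmp/load_check/qserv-db*_load_object.log | wc -l)
echo "workers finished: ${FINISHED} / ${EXPECTED}"
```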
Also checking disk space utilization for the new database on the worker nodes:
% for worker in `seq --format="qserv-db%02g" 1 30`; do
      ssh -n $worker 'echo `hostname`: `/bin/sudo -u qserv du -hs /qserv/data/mysql/sdss_stripe82_01 | grep qserv`'
  done
lsst-qserv-db01: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db02: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db03: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db04: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db05: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db06: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db07: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db08: 11G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db09: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db10: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db11: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db12: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db13: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db14: 11G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db15: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db16: 11G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db17: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db18: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db19: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db20: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db21: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db22: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db23: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db24: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db25: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db26: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db27: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db28: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db29: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db30: 12G /qserv/data/mysql/sdss_stripe82_01
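The per-worker listing can be totaled (and eyeballed for outliers) with a small awk pass over the same output. A sketch, with a three-line sample standing in for the 30-node listing:

```shell
# Sketch: total the per-worker sizes from the du listing above.
# The values are the "NNG" figures as printed by the ssh/du loop;
# the sample file is a stand-in for the captured output.
cat > /tmp/du_sizes.txt <<'EOF'
lsst-qserv-db01: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db02: 12G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db03: 11G /qserv/data/mysql/sdss_stripe82_01
EOF

# Strip the trailing "G" and sum the second column.
TOTAL=$(awk '{ gsub(/G$/, "", $2); sum += $2 } END { print sum }' /tmp/du_sizes.txt)
echo "total across workers: ${TOTAL}G"
```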
Building the secondary index
This operation has two phases:
- harvesting (objectId,chunkId,subChunkId) triplets from all object (RunDeepSource) tables across all worker nodes; the results are stored locally as TSV files (one file per object table) on each worker node
- transferring the TSV files from the worker nodes to the master node and loading them into the secondary index table
This script performs the first phase:
% nohup \
    /bin/sudo -u qserv \
    /datasets/gapon/development/db_pdac_stripe82/scripts/run_dump_triplet.bash \
    --verbose \
    >& run_dump_triplet.log&
The second (loading) phase is implemented by this script:
% nohup \
    /bin/sudo -u qserv \
    /datasets/gapon/development/db_pdac_stripe82/scripts/load_secondary_index.bash \
    --verbose \
    --get-remote-triplets \
    >& load_secondary_index.log&
IMPORTANT: the --get-remote-triplets option is required to pull the TSV files with the previously dumped triplets from the worker nodes. Otherwise previously cached files will be assumed, and if no files are found in the master's local cache the resulting secondary index table will be empty.
A simple test for the number of triplets after the completion of the file retrieval phase of the index loading operation:
% wc -l /qserv/data/dumps/sdss_stripe82_01/*.tsv
....
186671762 total
This number should also match the number of objects within the RunDeepSource table.
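That consistency check amounts to comparing two counts. A minimal sketch: the object count would come from a SELECT COUNT(*) against RunDeepSource; here both numbers are hardwired to the total reported above purely for illustration.

```shell
# Sketch: compare the triplet total from the TSV dumps with the object count.
# Both values are hardwired stand-ins; the real check would take the first
# from "wc -l" and the second from a COUNT(*) query on RunDeepSource.
TRIPLETS=186671762
OBJECT_COUNT=186671762

if [ "$TRIPLETS" -eq "$OBJECT_COUNT" ]; then
    echo "secondary index is consistent"
else
    echo "MISMATCH: ${TRIPLETS} triplets vs ${OBJECT_COUNT} objects"
fi
```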
Load partitioned table RunDeepForcedSource
Script:
% nohup /bin/sudo -u qserv /datasets/gapon/development/db_pdac_stripe82/scripts/run_load_source.bash --verbose >& run_load_source.log&
Inspecting log files after the completion of the operation to make sure all went well (the log files are located on the master node):
% tail /tmp/sdss_stripe82_01/log/qserv-db*_load_source.log | grep 'Finished'
[Sat May 20 11:58:25 CDT 2017] ** Finished loading **
[Sat May 20 12:22:32 CDT 2017] ** Finished loading **
[Sat May 20 12:20:07 CDT 2017] ** Finished loading **
[Sat May 20 13:10:17 CDT 2017] ** Finished loading **
[Sat May 20 13:07:19 CDT 2017] ** Finished loading **
[Sat May 20 12:46:31 CDT 2017] ** Finished loading **
[Sat May 20 13:20:50 CDT 2017] ** Finished loading **
[Sat May 20 12:36:40 CDT 2017] ** Finished loading **
[Sat May 20 13:00:53 CDT 2017] ** Finished loading **
[Sat May 20 13:17:21 CDT 2017] ** Finished loading **
[Sat May 20 12:03:53 CDT 2017] ** Finished loading **
[Sat May 20 12:29:24 CDT 2017] ** Finished loading **
[Sat May 20 14:17:38 CDT 2017] ** Finished loading **
[Sat May 20 12:22:56 CDT 2017] ** Finished loading **
[Sat May 20 12:47:27 CDT 2017] ** Finished loading **
[Sat May 20 11:48:24 CDT 2017] ** Finished loading **
[Sat May 20 12:51:53 CDT 2017] ** Finished loading **
[Sat May 20 13:03:10 CDT 2017] ** Finished loading **
[Sat May 20 12:25:56 CDT 2017] ** Finished loading **
[Sat May 20 13:42:33 CDT 2017] ** Finished loading **
[Sat May 20 13:08:54 CDT 2017] ** Finished loading **
[Sat May 20 12:55:21 CDT 2017] ** Finished loading **
[Sat May 20 13:25:40 CDT 2017] ** Finished loading **
[Sat May 20 12:22:02 CDT 2017] ** Finished loading **
[Sat May 20 13:00:17 CDT 2017] ** Finished loading **
[Sat May 20 12:07:51 CDT 2017] ** Finished loading **
[Sat May 20 12:21:10 CDT 2017] ** Finished loading **
[Sat May 20 12:46:56 CDT 2017] ** Finished loading **
[Sat May 20 12:55:06 CDT 2017] ** Finished loading **
[Sat May 20 12:45:45 CDT 2017] ** Finished loading **
Also checking disk space utilization for the new database on the worker nodes:
% for worker in `seq --format="qserv-db%02g" 1 30`; do
      ssh -n $worker 'echo `hostname`: `/bin/sudo -u qserv du -hs /qserv/data/mysql/sdss_stripe82_01 | grep qserv`'
  done
lsst-qserv-db01: 325G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db02: 334G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db03: 331G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db04: 339G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db05: 344G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db06: 341G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db07: 346G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db08: 335G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db09: 336G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db10: 352G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db11: 323G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db12: 331G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db13: 342G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db14: 330G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db15: 334G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db16: 318G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db17: 335G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db18: 347G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db19: 334G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db20: 350G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db21: 347G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db22: 342G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db23: 343G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db24: 332G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db25: 332G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db26: 329G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db27: 330G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db28: 345G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db29: 341G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db30: 334G /qserv/data/mysql/sdss_stripe82_01
Loading non-partitioned tables
Running this script from the master node:
% nohup /bin/sudo -u qserv \
    /datasets/gapon/development/db_pdac_stripe82/scripts/run_load_nonpart.bash \
    --verbose \
    >& run_load_nonpart.log&
Checking the log files (on master) after the completion of the loading operation:
% tail /tmp/sdss_stripe82_01/log/qserv-db*_load_nonpart.log | grep 'Finished'
[Fri May 19 19:42:48 CDT 2017] ** Finished loading **
[Fri May 19 19:41:06 CDT 2017] ** Finished loading **
[Fri May 19 19:42:02 CDT 2017] ** Finished loading **
[Fri May 19 19:41:54 CDT 2017] ** Finished loading **
[Fri May 19 19:42:45 CDT 2017] ** Finished loading **
[Fri May 19 19:42:10 CDT 2017] ** Finished loading **
[Fri May 19 19:42:14 CDT 2017] ** Finished loading **
[Fri May 19 19:43:03 CDT 2017] ** Finished loading **
[Fri May 19 19:40:32 CDT 2017] ** Finished loading **
[Fri May 19 19:41:18 CDT 2017] ** Finished loading **
[Fri May 19 19:42:40 CDT 2017] ** Finished loading **
[Fri May 19 19:42:20 CDT 2017] ** Finished loading **
[Fri May 19 19:42:47 CDT 2017] ** Finished loading **
[Fri May 19 19:40:02 CDT 2017] ** Finished loading **
[Fri May 19 19:40:41 CDT 2017] ** Finished loading **
[Fri May 19 19:41:58 CDT 2017] ** Finished loading **
[Fri May 19 19:41:56 CDT 2017] ** Finished loading **
[Fri May 19 19:41:34 CDT 2017] ** Finished loading **
[Fri May 19 19:40:36 CDT 2017] ** Finished loading **
[Fri May 19 19:43:17 CDT 2017] ** Finished loading **
[Fri May 19 19:43:10 CDT 2017] ** Finished loading **
[Fri May 19 19:41:37 CDT 2017] ** Finished loading **
[Fri May 19 19:41:47 CDT 2017] ** Finished loading **
[Fri May 19 19:43:11 CDT 2017] ** Finished loading **
[Fri May 19 19:41:31 CDT 2017] ** Finished loading **
[Fri May 19 19:42:13 CDT 2017] ** Finished loading **
[Fri May 19 19:42:01 CDT 2017] ** Finished loading **
[Fri May 19 19:40:23 CDT 2017] ** Finished loading **
[Fri May 19 19:41:56 CDT 2017] ** Finished loading **
[Fri May 19 19:40:38 CDT 2017] ** Finished loading **
And the new disk space utilization across all worker nodes:
% for worker in `seq --format="qserv-db%02g" 1 30`; do
      ssh -n $worker 'echo `hostname`: `/bin/sudo -u qserv du -hs /qserv/data/mysql/sdss_stripe82_01 | grep qserv`'
  done
lsst-qserv-db01: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db02: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db03: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db04: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db05: 33G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db06: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db07: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db08: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db09: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db10: 33G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db11: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db12: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db13: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db14: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db15: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db16: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db17: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db18: 33G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db19: 33G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db20: 33G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db21: 33G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db22: 33G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db23: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db24: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db25: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db26: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db27: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db28: 32G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db29: 33G /qserv/data/mysql/sdss_stripe82_01
lsst-qserv-db30: 32G /qserv/data/mysql/sdss_stripe82_01
Setting up the empty chunk list
Since the new version of the database has the same set of chunks, the same empty chunk list file can be reused (doing this from the master node):
% /bin/sudo -u qserv ls -l /qserv/data/qserv/ | grep stripe82
-rw-r--r-- 1 qserv qserv 6881330 Nov 2 2016 empty_sdss_stripe82_00.txt

% /bin/sudo -u qserv cp \
    /qserv/data/qserv/empty_sdss_stripe82_00.txt \
    /qserv/data/qserv/empty_sdss_stripe82_01.txt

% /bin/sudo -u qserv ls -l /qserv/data/qserv/ | grep stripe82
-rw-r--r-- 1 qserv qserv 6881330 Nov 2 2016 empty_sdss_stripe82_00.txt
-rw-r--r-- 1 qserv qserv 6881330 May 19 11:50 empty_sdss_stripe82_01.txt
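Matching the two file sizes in the listing is the implicit sanity check here; it can be made explicit with cmp. A sketch using temporary stand-in files (the paths and the sample chunk numbers are invented for illustration, not taken from the real lists):

```shell
# Sketch: verify the copied empty chunk list is byte-identical to the original.
# Temporary files with made-up chunk numbers stand in for the real lists
# under /qserv/data/qserv/.
printf '57866\n57867\n' > /tmp/empty_sdss_stripe82_00.txt
cp /tmp/empty_sdss_stripe82_00.txt /tmp/empty_sdss_stripe82_01.txt

# cmp -s is silent and returns 0 only if the files are identical.
if cmp -s /tmp/empty_sdss_stripe82_00.txt /tmp/empty_sdss_stripe82_01.txt; then
    COPY_OK=yes
else
    COPY_OK=no
fi
echo "copy identical: ${COPY_OK}"
```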
Enabling the new database in Qserv
This is the last stage of the database loading protocol, which must be completed before proceeding to the final tests. The database is enabled in Qserv by running this script (from the master node):
% /bin/sudo -u qserv \
    /datasets/gapon/development/db_pdac_stripe82/scripts/run_qserv_enable_database.bash \
    --verbose \
    >& run_qserv_enable_database.log&
Restarting Qserv
The database was merged on the right boundary. According to the tests, the earlier reported problem (DM-8239) of the "hole" in patch=366,0 has been fixed. Closing this issue.