Details
- Type: RFC
- Status: Implemented
- Resolution: Done
- Component/s: DM
- Labels: None
Description
By default, our builds are not optimised (-O0), which forces everyone who doesn't want to wait until the heat death of the universe to set SCONSFLAGS="opt=3"; but other packages that are built with scons may not recognise this flag. This default is also contrary to standard practice for open-source software, where builds are optimised by default. I propose changing the default optimisation level from the current opt=0 to opt=3. It's a very simple change in sconsUtils:
--- a/python/lsst/sconsUtils/state.py
+++ b/python/lsst/sconsUtils/state.py
@@ -98,7 +98,7 @@ def _initVariables():
     SCons.Script.BoolVariable('force', 'Set to force possibly dangerous behaviours', False),
     ('optfile', 'Specify a file to read default options from', None),
     ('prefix', 'Specify the install destination', None),
-    SCons.Script.EnumVariable('opt', 'Set the optimisation level', 0,
+    SCons.Script.EnumVariable('opt', 'Set the optimisation level', 3,
                               allowed_values=('0', '1', '2', '3')),
     SCons.Script.EnumVariable('profile', 'Compile/link for profiler', 0,
                               allowed_values=('0', '1', 'pg', 'gcov')),
Would it be worth using a lower optimisation level (e.g. 1 or 2) to keep debugging feasible? I guess the trade-off is general run time for everyone versus people who want to use gdb having to rebuild lots of things without optimisation.
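One point worth noting in that trade-off: the default only applies when no explicit value is given, so a developer who wants gdb-friendly builds can still pass opt=0 per invocation. Here is a minimal, hypothetical Python sketch (not SCons code; the function names are invented for illustration) of how a command-line override takes precedence over the proposed default and maps to the usual gcc/clang flag:

```python
# Hypothetical model of SCons-style variable resolution; ALLOWED_LEVELS
# mirrors the allowed_values tuple in the diff above.
ALLOWED_LEVELS = ('0', '1', '2', '3')

def resolve_opt(cli_args, default='3'):
    """Return the effective optimisation level: the last opt=N argument
    on the command line wins; otherwise fall back to the default."""
    level = default
    for arg in cli_args:
        if arg.startswith('opt='):
            value = arg.split('=', 1)[1]
            if value not in ALLOWED_LEVELS:
                raise ValueError(f'invalid opt level: {value!r}')
            level = value
    return level

def compiler_flag(level):
    """Map an opt level to the corresponding gcc/clang flag, e.g. '-O3'."""
    return f'-O{level}'
```

Under this model, `scons` alone would build at -O3, while `scons opt=0` on the packages being debugged restores unoptimised builds, so the cost of the change falls on the occasional debugger rather than on every user.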