.:scripts:home
Hierarchical Processing Scripts
The graphic below shows the structure of the process_all scripts in the ADPAA library. Each script calls the ones beneath it (for example, the process_all_dir script calls the process_all_polcast3 script, which in turn calls the process_all script, and so on).
 process_all_structure.png - http://aerosol.atmos.und.edu/ADPAA/process_all_structure.png
Master Processing Scripts
process_all_dir
This script simply goes through every PostProcessing directory and processes the *.sea file. Edits are also applied. To view a list of the command-line arguments that work with process_all_dir, just enter process_all_dir on the command line.
When running the process_all_dir script, it is a requirement to be in the general time period (YYYY) directory; if the script is run in any other directory, nothing will happen.
The following is the list of recognized project names and directories:
 oracles2016 - Fall 2016 ORACLES Project (UND Directory Structure)
 ophir2016 - Spring 2016 Ophir Project (UND Directory Structure)
 olympex - Fall 2015 Washington Project (UND Directory Structure)
 CAPE2015 - July/August 2015 Florida Project (UND Directory Structure)
 UTC2015 - Spring 2015 North Dakota Project (UND Directory Structure)
 UTC2014 - Fall 2014 North Dakota Project (UND Directory Structure)
 iphex - Summer 2014 North Carolina Project (UND Directory Structure)
 POLCAST4 - Summer 2012 North Dakota Project (RAL Directory Structure)
 gcpex - Winter 2012 Georgian Bay Project (UND Directory Structure)
 MC3E - Spring 2011 Oklahoma Project (UND Directory Structure)
 Goodrich - 2010/2011/2012 North Dakota Project (UND Directory Structure)
 POLCAST3 - Summer 2010 North Dakota Project (RAL Directory Structure)
 SaudiArabia_Spring2009 - Spring 2009 Saudi Arabia Project (RAL Directory Structure)
 Mali_Summer2008 - Summer 2008 Mali Project (RAL Directory Structure)
 SaudiArabia_Summer08 - Winter 2007/2008 Saudi Arabia Project (UND Directory Structure)
 POLCAST2 - Summer 2008 North Dakota Project (RAL Directory Structure)
 Mali_Summer2006 - Summer 2006 Mali Project (RAL Directory Structure)
 Mali_Summer2007 - Summer 2007 Mali Project (RAL Directory Structure)
 SaudiArabia_Winter0708 - Winter 2007/2008 Saudi Arabia Project (RAL Directory Structure)
 Saudi - Winter 2007/2008 Saudi Arabia Project (UND Directory Structure)
 Mali - July/August/September/October 2007 (UND Directory Structure)
 Harvesting - June/July/August 2007
 Sikorsky3 - September/October 2005
 TAMDAR_Turbulence - August/September 2005
 Sikorsky2 - January/February 2005
 L3Com - November/December 2004
 MPACE - September/October 2004
 IOP1 - June/July 2004
 Sikorsky - March/April 2004
 WISP4 - February/March 2004
 THORpex - November/December 2004
 NACP - May/June 2003
 Crystal - July 2002
 Kwajex - August/September 1999
EX: process_all_dir POLCAST3
Applications: In order to change the program so that cloud bases and temperatures could be determined for the 2010 seeding flights by the Cessna340_N37360 aircraft, it was necessary to find the time intervals when the aircraft was near a target. This is due to the lack of equipment on the aircraft that would allow it to track cloud bases during flight. The post-flight process used to determine these intervals is outlined below:
- Open the flight data for the date in question under /nas/ral/NorthDakota/Summer2010/Aircraft/Cessna340_N37360/FlightData, enter the Post_Processing directory, and use cplot to open the .pol3a or .pol file
- Plot Latitude versus Longitude in cplot
- Locate the areas that the aircraft seems to circle on the chart
- Use a combination of Tools->Select Time Interval and Control->Time Interval in cplot to narrow the flight down to just the circling times of the aircraft. As a rule of thumb, the aircraft must circle for a minimum of 15 minutes in order for that region to be considered a target.
- Now that you have found the time interval for the target, switch your x and y axes in cplot to Pressure_Alt and Air_Temp, respectively. This will produce a rather random-looking plot, but it will allow you to find the average cloud height and temperature.
- Use Tools->Statistics to determine the approximate cloud height and temperature.
Single Field Project Scripts
process_all_*
For example, process_all_saudi calls all programs to process aircraft data for the Saudi Arabia 2007 project. You can test a process_all_* script using the TestData files, for example:
cd /usr/local/ADPAA/src/TestData/FlightData/20140429_152103 && process_all_iphex
Platform Scripts
aimmsprocessing_saudi
Handles and converts all of the AIMMS data that was saved on USB drives to UND Modified NASA format.
 aimmsprocessing_saudi
Note: Needs to be executed from a FlightData directory and processes data within the "AIMMSData" directory, e.g. SaudiArabia/Spring09/Aircraft/KingAir_N825ST/FlightData/20090323_114454
Data Level Processing Scripts
AIMMS USB File (*.a?? and *.r??) Scripts
convert_adptonasa
Converts AIMMS adp.out to UND Modified NASA format.
convert_adptonasa <inputfile>
Note: The time associated with these files is GPS time. This script currently only converts GPS time to UTC for 2009. For data from any other year, make sure that the time offset is correct.
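As an illustration of the offset involved (this is not the convert_adptonasa implementation, and the helper below is hypothetical): GPS time runs ahead of UTC by the accumulated leap-second count, which was 15 s throughout 2009.
 # Hypothetical sketch: remove the GPS-to-UTC offset from a seconds-from-midnight value.
 # The 15 s offset applies to 2009; check the leap-second count for other years.
 GPS_MINUS_UTC_2009 = 15  # seconds
 
 def gps_sfm_to_utc_sfm(gps_sfm, gps_minus_utc=GPS_MINUS_UTC_2009):
     return (gps_sfm - gps_minus_utc) % 86400.0
 
 print(gps_sfm_to_utc_sfm(55263.0))  # 55248.0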
convert_aimmstonasa
Converts AIMMS raw data files (*.aim) created during the Saudi Arabia 2009 project to UND Modified NASA format.
 convert_aimmstonasa file=inputfile
Note: inputfile must have the *.aim extension. This indicates the data file created by the concatenation of all of the *.a?? files. Files are assumed to be in the form of ????????.aim.
Note: The *.aim files were created by the concatenation of all the *.a?? files which were saved to the USB drive during flights. For example:
 cat 03231137.a?? > 03231137.aim
convert_aimtonasa
Converts AIMMS data files (YY_MM_DD_HH_MM_SS.aim.txt) to UND Modified NASA format.
convert_aimtonasa <inputfile>
M300 Data File (*.sea) Scripts
extract_tables
This script will extract the tables in a given *.sea file. The syntax for this is as follows:
extract_tables <-vm> input_file
getstart_info
This subroutine determines the start time and date from a *.sea file. The syntax for this is:
getstart_info <-sfm> <-end> <-v> <-vm> input_file
process_raw
Process_raw does Level 1 data processing. The process_raw script takes the *.sea binary file produced by the M300/M200 aircraft acquisition system and creates instrument-specific ASCII files. The instrument-specific files can have edits applied and be averaged to complete the Level 1 processing. To view the syntax for the process_raw script, use the -h option:
process_raw -h
Example Syntax
process_raw {$ADPAA_DIR}/src/TestData/FlightData/20140429_152103/PostProcessing/14_04_29_15_21_03.sea
Note that the debug options (-d, -dd, -ddd) provide different levels of debug information. The debug options activate (turn on) print statements in the code to provide information as the code runs. These print statements are helpful when there is an error or issue with the processing of the *.sea file and for determining what is causing the error.
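For example (an illustrative invocation; the -d option and the test file path are both documented above), the first level of debug output can be enabled with:
 process_raw -d {$ADPAA_DIR}/src/TestData/FlightData/20140429_152103/PostProcessing/14_04_29_15_21_03.sea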
process_all
The process_all script does Level 1-4 data processing. It starts with the data acquisition files (for example, a *.sea M300 data acquisition file) and produces all extracted instrument data files (*.raw files) contained within the data acquisition file. The syntax for the process_all script is:
process_all [options] inputfile
Options:
 --fast      Skips long duration processing.
 --final     The data processing is to produce the final version of the data set.
 -h, --help  Print command syntax and options.
 --L1        Do not process Level 1 files.
 --L2        Do not process Level 2 files.
 --L3        Do not process Level 3 files.
 --logfile   Creates logfiles in addition to standard data files.
inputfile - *.sea raw data file or any YY_MM_DD_HH_MM_SS.* based file name.
There should be no errors listed in the created YY_MM_DD_HH_MM_SS.log.postprocessing file upon completion of the process_all job. If there are issues with the processing, the *.log.postprocessing file contains notes on where the error occurred. The default is to process files at all data levels, starting with level 1 and processing up to level 4. The --L? options are provided to speed up the data processing by enabling skipping of lower data levels. Typically, once the data has been processed once, the --L? options can be used to execute only the scripts that process higher-level data files by excluding lower data levels. For example, the --L1 option can be used to exclude level 1 processing, which involves extraction of the data from the data acquisition file. Such lower-level data exclusion is useful when testing processing scripts at higher data levels, where you want to check the impact on all processed files, which typically includes the generation of the science file(s).
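For example (an illustrative invocation assembled from the documented options and the test file used above), a rerun that skips the level 1 extraction would look like:
 process_all --L1 {$ADPAA_DIR}/src/TestData/FlightData/20140429_152103/PostProcessing/14_04_29_15_21_03.sea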
NASA/UND ASCII (1001) Input/Output Scripts
IDL
Input - ${ADPAA_DIR}/src/idl_lib/read_nasa.pro
Constants - ${ADPAA_DIR}/src/idl_lib/constants.pro
MATLAB
Input - ${ADPAA_DIR}/src/matlab_lib/nasafilein.m
Python
Package - ${ADPAA_DIR}/src/python_lib/adpaa.py
Input - ${ADPAA_DIR}/src/python_lib/readfile.py
Note: It is recommended to call this package using the 'ReadFile' method within the ADPAA Python package instead of calling this package directly.
Output - ${ADPAA_DIR}/src/python_lib/writefile.py
Note: It is recommended to call this package using the 'WriteFile' method within the ADPAA Python package instead of calling this package directly.
Constants - ${ADPAA_DIR}/src/python_lib/constants.py
Note: It is recommended to call this package using the 'constants' method within the ADPAA Python package instead of calling this package directly.
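A minimal usage sketch tying the notes above together, assuming adpaa.py provides an ADPAA class whose ReadFile and WriteFile methods wrap readfile.py and writefile.py as described; the class usage and file names here are illustrative only:
 # Hedged sketch of reading and writing a UND "NASA" formatted file through the
 # ADPAA Python package; see adpaa.py for the actual interface.
 from adpaa import ADPAA
 
 nasa_file = ADPAA()
 nasa_file.ReadFile('14_04_29_15_21_03.basic.1Hz')      # read an existing file
 # ... inspect or modify the object's data here ...
 nasa_file.WriteFile('14_04_29_15_21_03.modified.1Hz')  # write out a new file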
SciLab
Input - ${ADPAA_DIR}/src/sci_lab/nasafilein.sce
NASA/UND ASCII (1001) Conversion Scripts
List of scripts used in ADPAA to convert data files to or from the UND NASA ASCII format. Use --help to obtain syntax information for the commands. The header information should provide usage examples and information about using each script. To avoid duplicate documentation, only the name of each script and its main purpose is given here as a high-level overview; details are provided in the scripts themselves.
convert2acpos
convert_oid2nasa
Converts a file in the Optical Ice Detector format into the NASA/UND ASCII format.
convert_undtoICARTT
Converts a file in the NASA/UND ASCII (1001) format to the 2008 ICARTT format.
convert_undtoICARTT2013
Converts a file in the NASA/UND ASCII (1001) format to the 2013 ICARTT format.
UND 'NASA' ASCII (1001) Modification Scripts
List of scripts used in ADPAA to modify data files in the UND NASA ASCII format. Use --help to obtain syntax information for the commands. The header information should provide usage examples and information about using each script. To avoid duplicate documentation, only the name of each script and its main purpose is given here as a high-level overview; details are provided in the scripts themselves.
addedit
addedit is a Perl program that creates an edit file.
apply_edits
The apply_edits script is used to create a clean file which can be used with cplot.
avgfields
Averages fields from a source file into a target file.
combine_files
The combine_files script combines parted files, such as *.sau, *.wmi, and *.pol files, that are created when a data acquisition system is restarted during a research flight. The script creates files contained within the "YYYYMMDD_?/Combined" directory.
counts_2ds_hvps3
The purpose is to merge counts data from specified channels of the 2DS and HVPS3 into a single 1 Hz file.
counts_cdp_2ds_hvps3
The purpose is to merge counts data from specified channels of the CDP, 2DS, and HVPS3 into a single 1 Hz file. These channels are:
drift_correction
drift_correction is a Python program that corrects for cloud drift due to wind to show the hypothetical path a plane would follow if clouds did not drift with the wind. This can be particularly useful when trying to determine whether an aircraft is passing through the same cloud more than once. The syntax for running the code can be viewed with the --help option.
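As a conceptual illustration only (a hypothetical sketch of one plausible formulation, not necessarily the method drift_correction implements): aircraft positions can be shifted into a frame moving with the cloud by subtracting the displacement of the mean wind since a reference time.
 # Hypothetical sketch of a wind-drift correction; inputs and units are illustrative.
 import numpy as np
 
 def drift_correct(x_m, y_m, t_s, u_wind, v_wind, t_ref=None):
     """Shift positions (m east/north) into a cloud-relative frame.
     u_wind, v_wind: mean wind components (m/s); t_s: time (s)."""
     t_ref = t_s[0] if t_ref is None else t_ref
     dt = np.asarray(t_s, dtype=float) - t_ref
     x_corr = np.asarray(x_m, dtype=float) - u_wind * dt  # remove eastward drift
     y_corr = np.asarray(y_m, dtype=float) - v_wind * dt  # remove northward drift
     return x_corr, y_corr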
fileMath
This script performs mathematical operations on a UND "NASA" formatted file.
filteroutliers
This script will eliminate outlier data outside of two standard deviations in the UND "NASA" file. filteroutliers.py uses a Z-score test with rolling windows of a specified number of points to mask data as the Missing Value Code (MVC) in order to clean up digital noise in the data. The filter can be applied to a single parameter, multiple parameters separated by commas, or all parameters available in the file.
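A minimal sketch of the rolling z-score masking described above; the window size, the two-standard-deviation threshold, and the MVC value shown here are assumptions for illustration, not the script's actual defaults:
 # Hedged sketch: mask points more than `threshold` standard deviations from the
 # mean of a window centered on each point, replacing them with the MVC.
 import numpy as np
 
 def mask_outliers(values, window=50, threshold=2.0, mvc=999999.9999):
     values = np.asarray(values, dtype=float)
     cleaned = values.copy()
     half = window // 2
     for i in range(values.size):
         segment = values[max(0, i - half):i + half + 1]
         mean, std = segment.mean(), segment.std()
         if std > 0 and abs(values[i] - mean) > threshold * std:
             cleaned[i] = mvc  # flag as Missing Value Code
     return cleaned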
listparas
This program can be used to determine what parameters are required by UND "NASA" formatted files.
merge_2ds_hvps3
The purpose is to merge spectra from specified channels of the 2DS and HVPS3 into a single 1 Hz file. These channels are:
merge_cdp_2ds_hvps3
The purpose is to merge spectra from specified channels of the CDP, 2DS, and HVPS3 into a single 1 Hz file.
mergefield
The purpose of the mergefield program is to merge parameters of one data file with another. The syntax of the program is as follows:
merge_spectra
The purpose of the merge_spectra program is to merge the particle size spectrum produced by one probe with the spectrum produced by another probe. The syntax of the program is as follows:
pbp_idl2und.py
pbp_idl2und.py converts the raw particle-by-particle file output from the process_soda2 code into the UND NASA format. The script was written for use with 2DS data, but it can be used for other probe data as well, such as HVPS (as long as there is an option to generate particle-by-particle files in the soda2 code). To use the script, use the following syntax:
pcaspscat
The pcaspscat program creates a *.550nm.scat.raw file (from whatever *.conc.pcasp.raw file is in the directory the program is being executed from). This file displays atmospheric scattering, absorption, backscattering, and asymmetry parameters, all based on the PCASP channels. These parameters are all based on a wavelength of 550 nm.
process_raw_file
The process_raw_file program takes the input file and creates a .1Hz file. This utilizes the avgfields function, which averages the fields from the source file into a target file. In this case, the target file is the newly created .1Hz file. The syntax for this program is as follows:
sfm2hms
Converts time from seconds from midnight (sfm) to the hour:minute:second time format.
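A minimal illustration of the conversion sfm2hms performs (not the script itself):
 def sfm_to_hms(sfm):
     """Convert seconds from midnight to an HH:MM:SS string."""
     hours, remainder = divmod(int(sfm), 3600)
     minutes, seconds = divmod(remainder, 60)
     return "%02d:%02d:%02d" % (hours, minutes, seconds)
 
 print(sfm_to_hms(55263))  # 15:21:03, the start time of the TestData flight above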
subset
The subset program is used to create a data file containing the data from another file that satisfies a given numerical criterion. For example, say you want all data from the March 20, 2009 Saudi file that had LWC greater than 1 g/m^3, or all data with temperatures between -10 and -5 degrees Celsius; the subset program can be used to create a file with that data.
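Purely as an illustration of the kind of numerical selection involved (this is not the subset program's syntax; the arrays and values are hypothetical), the LWC example above corresponds to a simple boolean mask:
 import numpy as np
 
 # Hypothetical arrays, e.g. read via the ADPAA Python package (values illustrative).
 time = np.array([100.0, 101.0, 102.0, 103.0])  # seconds from midnight
 lwc = np.array([0.2, 1.3, 0.8, 1.7])           # liquid water content (g/m^3)
 
 keep = lwc > 1.0                  # the numerical criterion
 print(time[keep], lwc[keep])      # only the rows that satisfy it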
NASA/UND ASCII (1001) Analysis Scripts
List of scripts used in ADPAA to analyze data files in the UND NASA ASCII format. Use --help to obtain syntax information for the commands. The header information should provide usage examples and information about using each script. To avoid duplicate documentation, only the name of each script and its main purpose is given here as a high-level overview; details are provided in the scripts themselves.
calculate_b_coef
Used to calculate the backscatter coefficient using the equation found in "The Encyclopedia of Atmospheric Sciences" (Zhang et al. 2015). A file is generated containing 1 Hz backscatter coefficient data (*.b_coef.1Hz) using a merged concentration file generated by merge_2ds_hvps3.py (*.merge.2DS.HVPS3.1Hz), a merged counts file generated by counts_2ds_hvps3.py (*.counts.2DS.HVPS3.1Hz), and a basic file (*.basic.1Hz) for temperature data to discriminate between water and ice. The backscatter efficiencies used in the coefficient equation are for the 905 nm wavelength.
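For orientation only, a schematic of the generic way a backscatter coefficient is built up from a binned size distribution; the actual equation, the 905 nm backscatter efficiencies, and the unit conventions come from Zhang et al. (2015) and the script itself, so everything below is an assumption for illustration:
 # Schematic: beta = sum over bins of N_i * Q_b(D_i) * (pi/4) * D_i^2, where N_i is
 # the number concentration in bin i, D_i the bin midpoint diameter, and Q_b the
 # phase-dependent backscatter efficiency. Units depend on how Q_b is defined.
 import math
 
 def backscatter_coefficient(concentration, diameters, q_back):
     return sum(n * q * (math.pi / 4.0) * d ** 2
                for n, d, q in zip(concentration, diameters, q_back))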
classify_habit
A Python script which uses 1 Hz data of aircraft temperature and particle aspect ratios to produce a data file containing the most likely ice habit. Infile1 contains the basic data and infile2 the aspect ratios.
cplot
To start the main visualization program.
cplot data_file
cplot2
cplot2 is similar to cplot; however, IDL's new graphics are used for the display, which enables more interactive changes to the plots.
To start the main visualization program.
cplot2 data_file
To calculate statistics over a time period: Run cplot2 on the data file, and click OK. In the top toolbar, select Control and set the Time Interval. Then select the variables to be used for the x and y axes. Then select Tools > Statistics.
plot2dc
To bring up 2dc probe images, the plot2dc program may be used. To execute simply use this syntax:
plot2dc 2DC_file
Images from the *.2dc file will be brought up for analysis.
twods_conc2bulk_ice.py (Level 4)
Calculates bulk parameters from optical array probes (namely the 2D-S, HVPS3, and CIP) and the Nevzorov probe (if available), such as ice water content, liquid-equivalent diameter, and reflectivity factor. Also calculates and outputs spectrum files of ice water content, reflectivity factor, and liquid-equivalent diameter.
twods_conc2bulk.py [--spec_files] [start=d [--liquid-equivalent]] [error=counts_file] [--quadrature] [tolerance=t] [nevwc_file] OAP_conc_file