NASA Astrophysics Data System (ADS)
Goodrich, D. C.; Tan, J.; Petersen, W. A.; Unkrich, C. C.; Demaria, E. M.; Hazenberg, P.; Lakshmi, V.
2017-12-01
Precipitation profiles from the GPM Core Observatory Dual-frequency Precipitation Radar (DPR) form part of the a priori database used in the GPM Goddard Profiling (GPROF) algorithm's passive microwave radiometer retrievals of rainfall. The GPROF retrievals are in turn used as high-quality precipitation estimates in gridded products such as IMERG. Because land surfaces are highly variable and highly emissive, GPROF performs precipitation retrievals as a function of surface class. As such, different surface types may possess different error characteristics, especially over arid regions where high-quality ground measurements are often lacking. Importantly, the emissive properties of land also mean that GPROF rainfall estimates are driven primarily by the higher-frequency radiometer channels (e.g., > 89 GHz), where precipitation signals are most sensitive to the coupling between ice-phase processes and rainfall production. In this study, we evaluate the rainfall estimates from the Ku-band channel of the DPR as well as GPROF estimates from various passive microwave sensors. Our evaluation is conducted at the level of individual satellite pixels (5 to 15 km in diameter), against a dense network of weighing rain gauges (90 in 150 km2) in the USDA-ARS Walnut Gulch Experimental Watershed and Long-Term Agroecosystem Research (LTAR) site in southeastern Arizona. The multiple gauges in each satellite pixel and precise accumulation about the overpass time allow a spatially and temporally representative comparison between the satellite estimates and the ground reference. Over Walnut Gulch, both the Ku-band and GPROF estimates struggle to delineate between rain and no rain: probabilities of detection are relatively high, but false alarm ratios are also high. The rain intensities possess a negative bias across nearly all sensors. It is likely that the storm types, arid conditions, and highly variable precipitation regime present a challenge to both rainfall retrieval algorithms.
An array of ground-based sensors is being deployed during the 2017 monsoon season to better understand possible reasons for this discrepancy.
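The detection skill measures cited above (probability of detection and false alarm ratio) derive from a standard rain/no-rain contingency table; a minimal sketch, with hypothetical counts rather than the Walnut Gulch results:

```python
def detection_stats(hits, misses, false_alarms):
    """POD and FAR from a 2x2 rain / no-rain contingency table."""
    pod = hits / (hits + misses)                # fraction of gauge-observed rain events detected
    far = false_alarms / (hits + false_alarms)  # fraction of satellite detections with no gauge rain
    return pod, far

# hypothetical pixel-level counts, for illustration only
pod, far = detection_stats(hits=80, misses=20, false_alarms=40)
print(f"POD = {pod:.2f}, FAR = {far:.2f}")  # POD = 0.80, FAR = 0.33
```

A "high POD but high FAR" result, as reported here, means most gauge-observed rain events are captured but a large share of satellite detections are spurious.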
Inter-comparison of the EUMETSAT H-SAF and NASA PPS precipitation products over Western Europe.
NASA Astrophysics Data System (ADS)
Kidd, Chris; Panegrossi, Giulia; Ringerud, Sarah; Stocker, Erich
2017-04-01
The development of precipitation retrieval techniques utilising passive microwave satellite observations has achieved a good degree of maturity through the use of physically-based schemes. The DMSP Special Sensor Microwave Imager/Sounder (SSMIS) has been the mainstay of passive microwave observations over the last 13 years, forming the basis of many satellite precipitation products, including NASA's Precipitation Processing System (PPS) and EUMETSAT's Hydrological Satellite Application Facility (H-SAF). The NASA PPS product utilises the Goddard Profiling (GPROF; currently 2014v2-0) retrieval scheme, which provides a physically consistent retrieval through the use of coincident active/passive microwave retrievals from the Global Precipitation Measurement (GPM) mission core satellite. The GPM combined algorithm retrieves hydrometeor profiles optimized for consistency with both the Dual-frequency Precipitation Radar (DPR) and the GPM Microwave Imager (GMI); these profiles form the basis of the GPROF database, which can be utilised for any constellation radiometer within the framework of a Bayesian retrieval scheme. The H-SAF product (PR-OBS-1 v1.7) is based on a physically-based Bayesian technique in which the a priori information is provided by a Cloud Dynamic Radiation Database (CDRD). Meteorological parameter constraints, derived from synthetic dynamical-thermodynamical-hydrological meteorological profile variables, are combined with multi-hydrometeor microphysical profiles and multispectral PMW brightness temperature vectors into a specialized a priori knowledge database underpinning and guiding the algorithm's Bayesian retrieval solver. This paper will present the results of an inter-comparison of the NASA PPS GPROF and EUMETSAT H-SAF PR-OBS-1 products over Western Europe for the period from 1 January 2015 through 31 December 2016. Surface radar data are taken from the UKMO-derived Nimrod European radar product, available at 15-minute/5-km resolution.
Initial results show that overall the correlations between the two satellite precipitation products and the surface radar precipitation estimates are similar, particularly for cases where there is extensive precipitation; however, the H-SAF product tends to have poorer correlations in situations where rain is light or limited in extent. Similarly, RMSEs for the GPROF scheme tend to be smaller than those of the H-SAF retrievals. The difference in performance can be traced to the identification of precipitation: the GPROF2014v2-0 scheme overestimates the occurrence and extent of precipitation, generating a significant amount of light precipitation, while the H-SAF scheme has a lower precipitation threshold of about 0.25 mm h-1 and overestimates moderate and higher precipitation intensities.
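The correlation and RMSE statistics described above are computed over matched satellite-radar pairs; a minimal sketch, using hypothetical rain rates rather than the actual product values:

```python
import numpy as np

def corr_rmse(sat, radar):
    """Pearson correlation and root-mean-square error between two matched rain-rate series."""
    sat, radar = np.asarray(sat, float), np.asarray(radar, float)
    r = float(np.corrcoef(sat, radar)[0, 1])          # linear correlation
    rmse = float(np.sqrt(np.mean((sat - radar) ** 2)))  # typical magnitude of the error
    return r, rmse

# hypothetical matched pairs (mm/h); not the actual GPROF / H-SAF / Nimrod values
radar = [0.0, 0.2, 1.5, 4.0, 0.0, 2.5]
gprof = [0.1, 0.4, 1.2, 3.5, 0.3, 2.0]
r, rmse = corr_rmse(gprof, radar)
print(f"r = {r:.3f}, RMSE = {rmse:.3f} mm/h")
```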
Advances in Satellite Microwave Precipitation Retrieval Algorithms Over Land
NASA Astrophysics Data System (ADS)
Wang, N. Y.; You, Y.; Ferraro, R. R.
2015-12-01
Precipitation plays a key role in the Earth's climate system, particularly in its water and energy balance. Satellite microwave (MW) observations of precipitation provide a viable means to achieve global measurement of precipitation with sufficient sampling density and accuracy. However, obtaining accurate precipitation information over land from satellite MW observations is a challenging problem. The Goddard Profiling (GPROF) algorithm for the Global Precipitation Measurement (GPM) mission is built around the Bayesian formulation (Evans et al., 1995; Kummerow et al., 1996). GPROF uses the likelihood function and the prior probability distribution function to calculate the expected value of the precipitation rate, given the observed brightness temperatures. It is particularly convenient to draw samples for the prior PDF from a predefined database of observations or models. The GPROF algorithm does not search all database entries but only the subset thought to correspond to the actual observation. The GPM GPROF V1 database focuses on stratification by surface emissivity class, land surface temperature, and total precipitable water. However, there is much uncertainty as to what is the optimal information needed to subset the database for different conditions. To this end, we conduct a database stratification study using the National Mosaic and Multi-Sensor Quantitative Precipitation Estimation, Special Sensor Microwave Imager/Sounder (SSMIS) and Advanced Technology Microwave Sounder (ATMS) observations, and reanalysis data from the Modern-Era Retrospective Analysis for Research and Applications (MERRA). Our database study (You et al., 2015) shows that environmental factors such as surface elevation, relative humidity, storm vertical structure and height, and ice thickness can help in stratifying a single large database into smaller and more homogeneous subsets, in which the surface conditions and precipitation vertical profiles are similar.
It is found that the probability of detection (POD) increases by about 8% and 12% when using stratified databases for rainfall and snowfall detection, respectively. In addition, by considering the relative humidity in the lower troposphere and the vertical velocity at 700 hPa in the precipitation detection process, the POD for snowfall detection is further increased by 20.4%, from 56.0% to 76.4%.
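The database stratification idea described above — splitting one large a priori database into more homogeneous subsets keyed on environmental factors — can be sketched as follows; the field names and bin edges here are illustrative assumptions, not the operational ones:

```python
# Sketch of a priori database stratification: each entry is keyed on a few
# environmental factors, and a retrieval would then search only the subset
# matching the observed environment. Keys and thresholds are hypothetical.

def stratum_key(entry):
    elev_bin = "high" if entry["elevation_m"] > 1000 else "low"
    rh_bin = "moist" if entry["rel_humidity"] > 0.6 else "dry"
    return (entry["surface_class"], elev_bin, rh_bin)

def stratify(database):
    subsets = {}
    for entry in database:
        subsets.setdefault(stratum_key(entry), []).append(entry)
    return subsets

database = [
    {"surface_class": "land", "elevation_m": 1400, "rel_humidity": 0.3, "rain": 0.0},
    {"surface_class": "land", "elevation_m": 200, "rel_humidity": 0.8, "rain": 2.1},
    {"surface_class": "ocean", "elevation_m": 0, "rel_humidity": 0.7, "rain": 0.5},
]
subsets = stratify(database)
print({key: len(entries) for key, entries in subsets.items()})
```

The POD gains reported above come from searching such a matched, more homogeneous subset instead of the full database.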
Retrieved Vertical Profiles of Latent Heat Release Using TRMM Rainfall Products
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Lang, S.; Olson, W. S.; Meneghini, R.; Yang, S.; Simpson, J.; Kummerow, C.; Smith, E.
2000-01-01
This paper represents the first attempt to use TRMM rainfall information to estimate the four-dimensional latent heating structure over the global tropics, for February 1998. The mean latent heating profiles over six oceanic regions (TOGA COARE IFA, Central Pacific, S. Pacific Convergence Zone, East Pacific, Indian Ocean and Atlantic Ocean) and three continental regions (S. America, Central Africa and Australia) are estimated and studied. Heating profiles obtained from diagnostic budget studies over a broad range of geographic locations are used to provide comparisons and indirect validation for the algorithm-estimated heating profiles. Three different latent heating algorithms, the Goddard Convective-Stratiform Heating (CSH), the Goddard Profiling (GPROF) heating, and the Hydrometeor Heating (HH) algorithms, are applied and their results intercompared. The horizontal distributions or patterns of latent heat release from the three heating retrieval methods are quite similar: all can identify the areas of major convective activity (i.e., a well-defined ITCZ in the Pacific, a distinct SPCZ) in the global tropics. The magnitudes of their estimated latent heat release are also in reasonable agreement with each other and with those determined from diagnostic budget studies. However, the major difference among the three heating retrieval algorithms is the altitude of the maximum heating level. The CSH-estimated heating profiles show only one maximum heating level, whose altitude varies with the convective activity at each geographic location; these features are in good agreement with diagnostic budget studies. By contrast, two maximum heating levels were found using the GPROF heating and HH algorithms. The latent heating profiles estimated by all three methods cannot show cooling between active convective events.
We also examined the impact of different TMI (Multi-channel Passive Microwave Sensor) and PR (Precipitation Radar) rainfall information on latent heating structures.
Rainfall Estimates from the TMI and the SSM/I
NASA Technical Reports Server (NTRS)
Hong, Ye; Kummerow, Christian D.; Olson, William S.; Viltard, Nicolas
1999-01-01
The Tropical Rainfall Measuring Mission (TRMM), a joint Japan-U.S. Earth-observing satellite, was successfully launched from Japan on November 27, 1997. The main purpose of TRMM is to measure rainfall quantitatively over the tropics for climate and weather research. One of the three rainfall-measuring instruments aboard TRMM is the high-resolution TRMM Microwave Imager (TMI). The TMI instrument is essentially a copy of the SSM/I with a dual-polarized pair of 10.7 GHz channels added to increase the dynamic range of rainfall estimates. In addition, the TMI uses a 21.3 GHz water vapor absorption channel, as opposed to 22.235 GHz in the SSM/I, to avoid saturation in the tropics. This paper will present instantaneous rain rates estimated from coincident TMI and SSM/I observations. The algorithm for estimating instantaneous rainfall rates from both sensors is the Goddard Profiling algorithm (GPROF), a physically based, multichannel rainfall retrieval algorithm. The algorithm is very portable and can be used with various sensors of differing channels and resolutions. A comparison of rain rates estimated from TMI and SSM/I over the same rain regions will be performed, and the results of the comparison and insight into the retrieval algorithm will be given.
NASA Technical Reports Server (NTRS)
Kidd, Chris; Matsui, Toshi; Chern, Jiundar; Mohr, Karen; Kummerow, Christian; Randel, Dave
2015-01-01
The estimation of precipitation across the globe from satellite sensors provides a key resource in the observation and understanding of our climate system. Estimates from all pertinent satellite observations are critical in providing the necessary temporal sampling. However, consistency in these estimates from instruments with different frequencies and resolutions is critical. This paper details the physically based retrieval scheme to estimate precipitation from cross-track (XT) passive microwave (PM) sensors on board the constellation satellites of the Global Precipitation Measurement (GPM) mission. Here the Goddard profiling algorithm (GPROF), a physically based Bayesian scheme developed for conically scanning (CS) sensors, is adapted for use with XT PM sensors. The present XT GPROF scheme utilizes a model-generated database to overcome issues encountered with an observational database as used by the CS scheme. The model database ensures greater consistency across meteorological regimes and surface types by providing a more comprehensive set of precipitation profiles. The database is corrected for bias against the CS database to ensure consistency in the final product. Statistical comparisons over western Europe and the United States show that the XT GPROF estimates are comparable with those from the CS scheme. Indeed, the XT estimates have higher correlations against surface radar data, while maintaining similar root-mean-square errors. Latitudinal profiles of precipitation show the XT estimates are generally comparable with the CS estimates, although in the southern midlatitudes the peak precipitation is shifted equatorward while over the Arctic large differences are seen between the XT and the CS retrievals.
Gpm Level 1 Science Requirements: Science and Performance Viewed from the Ground
NASA Technical Reports Server (NTRS)
Petersen, W.; Kirstetter, P.; Wolff, D.; Kidd, C.; Tokay, A.; Chandrasekar, V.; Grecu, M.; Huffman, G.; Jackson, G. S.
2016-01-01
GPM meets Level 1 science requirements for rain estimation based on the strong performance of its radar algorithms. Changes in the V5 GPROF algorithm should correct errors in V4 and will likely resolve GPROF performance issues relative to L1 requirements. L1 FOV snow detection is largely verified, but at an unknown SWE rate threshold (likely < 0.5-1 mm/hr liquid equivalent). Work is ongoing to improve SWE rate estimation for both satellite and GV remote sensing.
TRMM Version 7 Level 3 Gridded Monthly Accumulations of GPROF Precipitation Retrievals
NASA Technical Reports Server (NTRS)
Stocker, E. F.; Kelley, O. A.
2012-01-01
In July 2011, improved versions of the retrieval algorithms were approved for TRMM. All data starting with June 2011 are produced only with the version 7 code. At the same time, version 7 reprocessing of all TRMM mission data was started, and by the end of August 2011 the 14+ years of reprocessed mission data were available online to users. This reprocessing provided the opportunity to redo and enhance an analysis of V7 impacts on Level 3 data accumulations that was presented at the 2010 EGU General Assembly. This paper will discuss the impact of algorithm changes made in the GPROF retrieval on the Level 2 swath products. Perhaps the most important change in that retrieval was the replacement of a model-based a priori database with one created from Precipitation Radar (PR) and TMI brightness temperature (Tb) data. The radar plays a major role in V7 GPROF (GPROF2010) in determining the existence of rain, and the Level 2 retrieval algorithm also introduced a field providing the probability of rain. This combined use of the PR has some impact on the retrievals: it created areas, particularly over ocean, where low-probability precipitation is retrieved whereas in version 6 these areas contained zero rain rates. This paper will discuss how these impacts carry over to the space/time-averaged monthly products that use the GPROF retrievals. The Level 3 products discussed are the gridded text product 3G68 and the standard 3A12 and 3B31 products. The paper provides an overview of the changes and an explanation of how the Level 3 products dealt with the change in the retrieval approach. Using the 0.25 deg x 0.25 deg grid, the paper will show that agreement between the swath product and Level 3 remains very high. It will also present comparisons of V6 and V7 GPROF retrievals as seen both at the swath level and in the Level 3 time/space-gridded accumulations, showing that the various Level 3 products based on GPROF Level 2 retrievals are in close agreement.
The paper concludes by outlining some of the challenges of the TRMM version 7 level 3 products.
NASA Technical Reports Server (NTRS)
Negri, Andrew J.; Anagnostou, Emmanouil; Adler, Robert F.
1999-01-01
Over 10 years of continuous data from the Special Sensor Microwave Imager (SSM/I) aboard a series of Defense Department satellites have made it possible to construct regional rainfall climatologies at high spatial resolution. Using the Goddard Profiling Algorithm (GPROF), monthly estimates of precipitation were made over northern Brazil, including the Amazon Basin, from 1987 to 1998. GPROF is a physical approach to passive microwave precipitation retrieval which uses the Goddard Cumulus Ensemble (cloud) model to establish prior probability densities of precipitation structures. Precipitation fields from GPROF were stratified into morning and evening satellite overpasses and accumulated at monthly intervals at 0.5 degree spatial resolution. Important diurnal effects were noted in the analysis, the most pronounced being a land/sea-breeze circulation along the northern coast of Brazil and a mountain/valley circulation along the Andes. There were also indications of morning rainfall maxima along the major rivers and evening maxima between the rivers. The addition of simultaneous geosynchronous infrared (IR) data leads to the current technique, which takes advantage of the 30-minute sampling and 4 km spatial resolution of the infrared channel and the better physics of the microwave retrieval. The resultant IR method is subsequently used to derive the diurnal variability of rainfall over the Amazon Basin and, further, to investigate the relative contributions from its convective and stratiform components.
Rapid Prototyping of Application Specific Signal Processors (RASSP)
1993-12-23
NASA Technical Reports Server (NTRS)
Stocker, Erich Franz; Kelley, Owen
2017-01-01
This presentation will summarize the changes in the products for the GPM V05 reprocessing cycle. It will concentrate on the gridded text product from the core-satellite retrievals; however, all aspects of the GPROF GMI changes in this product apply equally to the other two gridded text products. The GPM mission reprocessed its products in May 2017 as part of a continuing improvement of precipitation retrievals. This led to important improvements in the retrievals and therefore also necessitated reprocessing the gridded text products. The V05 GPROF changes not only improved the retrievals but substantially altered the format, which compelled changes to the gridded text products. Especially important in this regard is the GPROF2017 (used in V05) change away from reporting the fractions of the total precipitation rate occurring as convection or in the liquid phase. Instead, GPROF2017, and therefore the V05 gridded text products, report the rate of convective precipitation in mm/hr, and the GPROF2017 algorithm now reports the frozen precipitation rate in mm/hr rather than the fraction of total precipitation that is liquid. Because the aim of the gridded text product is to remain simple, the radar and combined results will also change in V05 to reflect this change in the GMI retrieval. The presentation provides an analysis of these changes as well as a comparison with the swath products from which the hourly text grids were derived.
The Goddard Profiling Algorithm (GPROF): Description and Current Applications
NASA Technical Reports Server (NTRS)
Olson, William S.; Yang, Song; Stout, John E.; Grecu, Mircea
2004-01-01
Atmospheric scientists use different methods for interpreting satellite data. In the early days of satellite meteorology, the analysis of cloud pictures from satellites was primarily subjective. As computer technology improved, satellite pictures could be processed digitally, and mathematical algorithms were developed and applied to the digital images in different wavelength bands to extract information about the atmosphere in an objective way. The kind of mathematical algorithm one applies to satellite data may depend on the complexity of the physical processes that lead to the observed image, and how much information is contained in the satellite images both spatially and at different wavelengths. Imagery from satellite-borne passive microwave radiometers has limited horizontal resolution, and the observed microwave radiances are the result of complex physical processes that are not easily modeled. For this reason, a type of algorithm called a Bayesian estimation method is utilized to interpret passive microwave imagery in an objective, yet computationally efficient manner.
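The Bayesian estimation method described above computes an expected rain rate as a likelihood-weighted average over a database of candidate profiles; a minimal sketch, assuming a Gaussian observation error and purely illustrative database values:

```python
import numpy as np

# Toy a priori database: simulated brightness-temperature (Tb) vectors for
# two channels, each paired with a rain rate. Values are illustrative only.
db_tb = np.array([[250.0, 230.0],
                  [240.0, 220.0],
                  [260.0, 245.0]])
db_rain = np.array([5.0, 8.0, 1.0])   # mm/h, paired with each Tb vector
sigma = 2.0                           # assumed channel noise (K)

def bayes_rain(obs_tb):
    """Posterior-mean rain rate given observed Tb, Gaussian likelihood."""
    loglik = -0.5 * np.sum(((db_tb - obs_tb) / sigma) ** 2, axis=1)
    w = np.exp(loglik - loglik.max())  # subtract max for numerical stability
    return float(np.sum(w * db_rain) / np.sum(w))

print(bayes_rain(np.array([242.0, 222.0])))  # dominated by the 2nd entry, near 8 mm/h
```

The weighting makes the estimate computationally cheap: no iterative inversion is needed, only a comparison of the observation against precomputed database entries.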
NASA Technical Reports Server (NTRS)
Stocker, Erich; Kelley, Owen; Kummerow, Christian; Chou, Joyce; Woltz, Lawrence
2010-01-01
TRMM has three level 3 (space/time-averaged) data products that aggregate level 2 TRMM Microwave Imager (TMI) GPROF precipitation retrievals. These three products are TRMM 3A12, a monthly accumulation of 2A12, the GPROF swath retrieval product; TRMM 3B31, a monthly accumulation of 2A12 and of 2B31, the combined retrieval product that uses both Precipitation Radar (PR) and TMI data; and 3G68 and its variants, which provide hourly retrievals for TMI, PR and combined. The 3G68 products are packaged as daily files but provide hourly information at 0.5 deg x 0.5 deg resolution globally, 0.25 deg x 0.25 deg globally, or 0.1 deg x 0.1 deg over Africa, Australia and South America. This paper will present early information on the changes in the V7 TMI GPROF level 2 retrievals that have an impact on the level 3 accumulations. It provides an analysis of the effect the 2A12 GPROF changes have on the 3G68 products, along with a comparison between the TRMM level 3 products that use the TMI GPROF swath retrievals.
Ground Validation Assessments of GPM Core Observatory Science Requirements
NASA Astrophysics Data System (ADS)
Petersen, Walt; Huffman, George; Kidd, Chris; Skofronick-Jackson, Gail
2017-04-01
NASA Global Precipitation Measurement (GPM) Mission science requirements define specific measurement error standards for retrieved precipitation parameters such as rain rate, raindrop size distribution, and falling snow detection on instantaneous temporal scales and spatial resolutions ranging from effective instrument fields of view [FOV], to grid scales of 50 km x 50 km. Quantitative evaluation of these requirements intrinsically relies on GPM precipitation retrieval algorithm performance in myriad precipitation regimes (and hence, assumptions related to physics) and on the quality of ground-validation (GV) data being used to assess the satellite products. We will review GPM GV products, their quality, and their application to assessing GPM science requirements, interleaving measurement and precipitation physical considerations applicable to the approaches used. Core GV data products used to assess GPM satellite products include 1) two minute and 30-minute rain gauge bias-adjusted radar rain rate products and precipitation types (rain/snow) adapted/modified from the NOAA/OU multi-radar multi-sensor (MRMS) product over the continental U.S.; 2) Polarimetric radar estimates of rain rate over the ocean collected using the K-Pol radar at Kwajalein Atoll in the Marshall Islands and the Middleton Island WSR-88D radar located in the Gulf of Alaska; and 3) Multi-regime, field campaign and site-specific disdrometer-measured rain/snow size distribution (DSD), phase and fallspeed information used to derive polarimetric radar-based DSD retrievals and snow water equivalent rates (SWER) for comparison to coincident GPM-estimated DSD and precipitation rates/types, respectively. Within the limits of GV-product uncertainty we demonstrate that the GPM Core satellite meets its basic mission science requirements for a variety of precipitation regimes. 
For the liquid phase, we find that GPM radar-based products are particularly successful in meeting bias and random-error requirements associated with retrievals of rain rate and the required +/- 0.5 millimeter error bounds for mass-weighted mean drop diameter. Version 4 (V4) GMI GPROF radiometer-based rain-rate products exhibit reasonable agreement with GV, but do not completely meet mission science requirements over the continental U.S. for lighter rain rates (e.g., 1 mm/hr) due to excessive random error (~75%). Importantly, substantial corrections were made to the V4 GPROF algorithm, and preliminary analysis of Version 5 (V5) rain products indicates more robust performance relative to GV. For the frozen phase and a modest GPM requirement to "demonstrate detection of snowfall", DPR products do successfully identify snowfall within the sensitivity and beam-sampling limits of the DPR instrument (~12 dBZ lower limit; lowest clutter-free bins). Similarly, the GPROF algorithm successfully "detects" falling snow and delineates it from liquid precipitation. However, the GV approach to computing falling-snow "detection" statistics is intrinsically tied to GPROF Bayesian algorithm-based thresholds of precipitation "detection" and model-analysis temperature, and is not sufficiently tied to SWER. Hence we will also discuss ongoing work to establish the lower-threshold SWER for "detection" using combined GV radar, gauge and disdrometer-based case studies.
Development of microwave rainfall retrieval algorithm for climate applications
NASA Astrophysics Data System (ADS)
KIM, J. H.; Shin, D. B.
2014-12-01
With satellite datasets accumulated over decades, satellite-based data can contribute to sustained climate applications. Level 3 products from microwave sensors for climate applications can be obtained from several algorithms. For example, the Microwave Emission brightness Temperature Histogram (METH) algorithm produces level 3 rainfall directly, whereas the Goddard Profiling (GPROF) algorithm first generates instantaneous rainfall, and a temporal and spatial averaging process then leads to level 3 products. The rainfall algorithm developed in this study follows a similar approach of averaging instantaneous rainfall. However, the algorithm is designed to produce instantaneous rainfall at an optimal resolution showing reduced non-linearity in the brightness temperature (TB)-rain rate (R) relations. It is found that this resolution tends to effectively utilize the emission channels, whose footprints are relatively larger than those of the scattering channels. The algorithm is mainly composed of a priori databases (DBs) and a Bayesian inversion module. The DB contains massive pairs of simulated microwave TBs and rain rates, obtained from WRF (version 3.4) and RTTOV (version 11.1) simulations. To improve the accuracy and efficiency of the retrieval process, a data-mining technique is additionally considered: the entire DB is classified into eight types based on Köppen climate classification criteria using reanalysis data. Among these sub-DBs, the one presenting the most similar physical characteristics is selected by considering the thermodynamics of the input data. When the Bayesian inversion is applied to the selected DB, instantaneous rain rates at 6-hour intervals are retrieved. The retrieved monthly mean rainfalls are statistically compared with CMAP and GPCP, respectively.
Monte Carlo dose calculation using a cell processor based PlayStation 3 system
NASA Astrophysics Data System (ADS)
Chow, James C. L.; Lam, Phil; Jaffray, David A.
2012-02-01
This study investigates the performance of the EGSnrc computer code coupled with Cell-based hardware in Monte Carlo simulation of radiation dose in radiotherapy. Performance evaluations of two processor-intensive functions, namely HOWNEAR and RANMAR_GET in the EGSnrc code, were carried out based on the 20-80 rule (Pareto principle). The execution speeds of the two functions were measured with the profiler gprof, recording the number of executions and the total time spent in each function. A testing architecture designed for the Cell processor was implemented in the evaluation using a PlayStation 3 (PS3) system. The evaluation results show that the algorithms examined are readily parallelizable on the Cell platform, provided that an architectural change of the EGSnrc is made. However, as the EGSnrc performance was limited by the PowerPC Processing Element in the PS3, a PC coupled with graphics processing units (GPGPU) may provide a more viable avenue for acceleration.
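The profiling step described above — using gprof to isolate the few functions dominating runtime — can be mimicked in Python with the standard-library cProfile; the toy functions below are mere stand-ins for HOWNEAR and RANMAR_GET, not the EGSnrc code:

```python
import cProfile
import io
import pstats

def distance_check(n):
    """Stand-in for a geometry routine such as HOWNEAR."""
    s = 0.0
    for i in range(n):
        s += (i * 0.5) ** 0.5
    return s

def rng_fill(n):
    """Stand-in for a random-number routine such as RANMAR_GET."""
    x, out = 12345, []
    for _ in range(n):
        x = (1103515245 * x + 12345) % 2**31  # simple LCG, illustrative only
        out.append(x)
    return out

pr = cProfile.Profile()
pr.enable()
distance_check(200_000)
rng_fill(200_000)
pr.disable()

# Sort by cumulative time, as one would read a gprof flat profile,
# to see which few functions dominate the runtime (the 20-80 rule).
s = io.StringIO()
pstats.Stats(pr, stream=s).sort_stats("cumulative").print_stats(5)
print(s.getvalue())
```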
OpenMP-accelerated SWAT simulation using Intel C and FORTRAN compilers: Development and benchmark
NASA Astrophysics Data System (ADS)
Ki, Seo Jin; Sugimura, Tak; Kim, Albert S.
2015-02-01
We developed a practical method to accelerate execution of Soil and Water Assessment Tool (SWAT) using open (free) computational resources. The SWAT source code (rev 622) was recompiled using a non-commercial Intel FORTRAN compiler in Ubuntu 12.04 LTS Linux platform, and newly named iOMP-SWAT in this study. GNU utilities of make, gprof, and diff were used to develop the iOMP-SWAT package, profile memory usage, and check identicalness of parallel and serial simulations. Among 302 SWAT subroutines, the slowest routines were identified using GNU gprof, and later modified using Open Multiple Processing (OpenMP) library in an 8-core shared memory system. In addition, a C wrapping function was used to rapidly set large arrays to zero by cross compiling with the original SWAT FORTRAN package. A universal speedup ratio of 2.3 was achieved using input data sets of a large number of hydrological response units. As we specifically focus on acceleration of a single SWAT run, the use of iOMP-SWAT for parameter calibrations will significantly improve the performance of SWAT optimization.
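The reported speedup of 2.3 on 8 cores can be read through Amdahl's law as a rough consistency check; solving for the implied parallel fraction is illustrative only and says nothing about SWAT's actual code structure:

```python
# Amdahl's law: S = 1 / ((1 - p) + p / n), where p is the fraction of the
# runtime that was parallelized and n the number of cores. Inverting it
# for p gives the parallel fraction implied by a measured speedup.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

def parallel_fraction(speedup, n):
    return (1.0 - 1.0 / speedup) * n / (n - 1)

p = parallel_fraction(2.3, 8)
print(f"implied parallel fraction: {p:.2f}")  # about 0.65
assert abs(amdahl_speedup(p, 8) - 2.3) < 1e-9
```

Read this way, a 2.3x speedup on 8 cores is consistent with roughly two-thirds of the runtime having been parallelized, which matches the paper's strategy of accelerating only the slowest subroutines.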
NASA Astrophysics Data System (ADS)
Guilloteau, C.; Foufoula-Georgiou, E.; Kummerow, C.; Kirstetter, P. E.
2017-12-01
A multiscale approach is used to compare precipitation fields retrieved from GMI using the latest version of the GPROF algorithm (GPROF-2017) to the DPR fields all over the globe. Using a wavelet-based spectral analysis, which renders the multiscale decompositions of the original fields independent of each other spatially and across scales, we quantitatively assess the various scales of variability of the retrieved fields, and thus define the spatially variable "effective resolution" (ER) of the retrievals. Globally, a strong agreement is found between passive microwave and radar patterns at scales coarser than 80 km. Over oceans the patterns match down to the 20 km scale. Over land, comparison statistics are spatially heterogeneous: in most areas a strong discrepancy is observed between passive microwave and radar patterns at scales finer than 40-80 km. The comparison is also supported by ground-based observations over the continental US derived from the NOAA/NSSL MRMS suite of products. While larger discrepancies over land than over oceans are classically explained by the complex surface emissivity of land perturbing the passive microwave retrieval, other factors are investigated here, such as intricate differences in storm structure over oceans and land. Differences in the statistical properties (PDF of intensities and spatial organization) of precipitation fields over land and oceans are assessed from radar data, as well as differences in the relation between the 89 GHz brightness temperature and precipitation. Moreover, the multiscale approach allows quantifying the part of the discrepancies caused by mismatches in the location of intense cells and by instrument-related geometric effects. The objective is to diagnose shortcomings of current retrieval algorithms so that targeted improvements can be made to achieve over land the same retrieval performance as over oceans.
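The multiscale idea above can be sketched with a simple block-average decomposition: coarse-grain a field by 2x2 averaging and record the detail variance removed at each scale. (The study uses a proper wavelet transform; this Haar-like stand-in is a simplification, and the synthetic field below is not actual precipitation data.)

```python
import numpy as np

def coarsen(field):
    """Average non-overlapping 2x2 blocks of a 2-D field."""
    h, w = field.shape
    return field.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def detail_variance_by_scale(field, levels=3):
    """Variance of the detail removed at each successive coarsening step.
    Comparing these per-scale variances between two fields indicates the
    finest scale at which they still agree - the 'effective resolution'."""
    variances = []
    for _ in range(levels):
        coarse = coarsen(field)
        # detail = what the block averaging removed at this scale
        detail = field - np.kron(coarse, np.ones((2, 2)))
        variances.append(float(detail.var()))
        field = coarse
    return variances

rng = np.random.default_rng(0)
field = rng.gamma(2.0, 1.0, size=(64, 64))  # synthetic rain-like field
print(detail_variance_by_scale(field))
```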
Recent Improvements in Estimating Convective and Stratiform Rainfall in Amazonia
NASA Technical Reports Server (NTRS)
Negri, Andrew J.
1999-01-01
In this paper we present results from the application of a satellite infrared (IR) technique for estimating rainfall over northern South America. Our main objectives are to examine the diurnal variability of rainfall and to investigate the relative contributions from the convective and stratiform components. We apply the technique of Anagnostou et al. (1999). In simple functional form, the estimated rain area A(sub rain) may be expressed as: A(sub rain) = f(A(sub mode), T(sub mode)), where T(sub mode) is the mode temperature of a cloud defined by 253 K, and A(sub mode) is the area encompassed by T(sub mode). The technique was trained by a regression between coincident microwave estimates from the Goddard Profiling (GPROF) algorithm (Kummerow et al., 1996) applied to SSM/I data and GOES IR (11 micron) observations. The apportionment of the rainfall into convective and stratiform components is based on the microwave technique described by Anagnostou and Kummerow (1997). The convective area from this technique was regressed against an IR structure parameter (the Convective Index) defined by Anagnostou et al. (1999). Finally, rain rates are assigned to A(sub mode) proportional to (253 - temperature), with different rates for the convective and stratiform components.
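A minimal sketch of the rate-assignment step just described: pixels colder than the 253 K mode temperature are treated as raining, with rain rate proportional to (253 - T) and a larger coefficient for convective pixels. The convective threshold and the two coefficients here are illustrative placeholders, not the trained regression values.

```python
import numpy as np

def ir_rain_rate(tb_ir, t_mode=253.0, t_conv=220.0, k_conv=0.30, k_strat=0.06):
    """Schematic IR rain-rate assignment: rate is proportional to
    (t_mode - T) for pixels colder than t_mode, with a larger
    proportionality constant for (crudely flagged) convective pixels.
    Thresholds and coefficients are hypothetical."""
    tb = np.asarray(tb_ir, dtype=float)
    raining = tb < t_mode
    k = np.where(tb < t_conv, k_conv, k_strat)  # convective vs stratiform rate
    return np.where(raining, k * (t_mode - tb), 0.0)  # mm/h
```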
A Vertical Census of Precipitation Characteristics using Ground-based Dual-polarimetric Radar Data
NASA Astrophysics Data System (ADS)
Wolff, D. B.; Petersen, W. A.; Marks, D. A.; Pippitt, J. L.; Tokay, A.; Gatlin, P. N.
2017-12-01
Characterization of the vertical structure and variability of precipitation and the resultant microphysics is critical in providing physical validation of space-based precipitation retrievals. In support of NASA's Global Precipitation Measurement (GPM) mission Ground Validation (GV) program, NASA has invested in a state-of-the-art dual-polarimetric radar known as NPOL. NPOL is routinely deployed on the Delmarva Peninsula in support of NASA's GPM Precipitation Research Facility (PRF). NPOL has also served as the backbone of several GPM field campaigns in Oklahoma, Iowa, South Carolina and, most recently, in the Olympic Mountains in Washington state. When precipitation is present, NPOL obtains very high-resolution vertical profiles of radar observations (e.g., reflectivity (ZH) and differential reflectivity (ZDR)), from which important particle size distribution parameters are retrieved, such as the mass-weighted mean diameter (Dm) and the normalized intercept parameter (Nw). These data are then averaged horizontally to match the nadir resolution of the Dual-frequency Precipitation Radar (DPR; 5 km) on board the GPM satellite. The GPM DPR, Combined, and radiometer algorithms (such as GPROF) rely on functional relationships built from assumed parametric relationships and/or retrieved parameter profiles and spatial distributions of particle size distribution (PSD), water content, and hydrometeor phase within a given sample volume. Thus, the NPOL-retrieved profiles provide an excellent tool for characterizing the vertical profile structure and variability during GPM overpasses. In this study, we will use many such overpass comparisons to quantify an estimate of the true sub-IFOV variability as a function of hydrometeor and rain type (convective or stratiform).
This presentation will discuss the development of a relational database to help provide a census of the vertical structure of precipitation via analysis and correlation of reflectivity, differential reflectivity, mass-weighted mean drop diameter, and the normalized intercept parameter of the gamma drop size distribution.
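As a schematic of the kind of dual-pol PSD retrieval mentioned above, Dm is often estimated from ZDR with a power law. The function below is a hypothetical sketch of that step only; the coefficients a and b are placeholders and not the NPOL-calibrated values.

```python
import numpy as np

def dm_from_zdr(zdr_db, a=1.6, b=0.5):
    """Hypothetical power-law estimator of the mass-weighted mean drop
    diameter Dm (mm) from differential reflectivity ZDR (dB).
    Coefficients are illustrative placeholders, not calibrated values."""
    return a * np.maximum(np.asarray(zdr_db, dtype=float), 0.0) ** b
```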
NASA Astrophysics Data System (ADS)
Derin, Y.; Anagnostou, E. N.; Anagnostou, M.; Kalogiros, J. A.; Casella, D.; Marra, A. C.; Panegrossi, G.; Sanò, P.
2017-12-01
Difficulties in representing high rainfall variability over mountainous areas with ground-based sensors make satellite remote sensing techniques attractive for hydrologic studies over these regions. Even though satellite-based rainfall measurements are quasi-global and available at high spatial resolution, these products have uncertainties that necessitate the use of error characterization and correction procedures based upon more accurate in situ rainfall measurements. Such measurements can be obtained from field campaigns facilitated by research-quality sensors such as locally deployed weather radar and in situ weather stations. This study uses such high-quality, high-resolution rainfall estimates derived from dual-polarization X-band radar (XPOL) observations from three field experiments, on the Mid-Atlantic US East Coast (NASA IPHEX experiment), the Olympic Peninsula of Washington State (NASA OLYMPEX experiment), and the Mediterranean, to characterize the errors of multiple passive microwave (PMW) sensor retrievals. The study first conducts an independent error analysis of the XPOL radar reference rainfall fields against the in situ rain gauge and disdrometer observations available from the field experiments. Then the study evaluates different PMW precipitation products using the XPOL datasets as ground reference (GR) over the three aforementioned complex-terrain study areas. We extracted matchups of PMW/GR rainfall based on a matching methodology that identifies GR volume scans coincident with PMW field-of-view sampling volumes, and scaled the GR parameters to the satellite products' nominal spatial resolution.
The following PMW precipitation retrieval algorithms are evaluated: the NASA Goddard PROFiling algorithm (GPROF), standard and climatology-based products (versions 3, 4, and 5) from four PMW sensors (SSMIS, MHS, GMI, and AMSR2), and the precipitation products based on the Cloud Dynamics and Radiation Database (CDRD) algorithm for SSMIS and the Passive microwave Neural network Precipitation Retrieval (PNPR) algorithm for AMSU/MHS, developed at ISAC-CNR within the EUMETSAT H-SAF. We will present error analysis results for the different PMW rainfall retrievals and discuss dependences on precipitation type, elevation, and precipitation microphysics (derived from XPOL).
NASA Technical Reports Server (NTRS)
Fisher, Brad; Wolff, David B.
2010-01-01
Passive and active microwave rain sensors onboard earth-orbiting satellites estimate monthly rainfall from the instantaneous rain statistics collected during satellite overpasses. It is well known that climate-scale rain estimates from meteorological satellites incur sampling errors resulting from the process of discrete temporal sampling and statistical averaging. Sampling and retrieval errors ultimately become entangled in the estimation of the mean monthly rain rate. The sampling component of the error budget effectively introduces statistical noise into climate-scale rain estimates that obscures the error component associated with the instantaneous rain retrieval. Estimating the accuracy of the retrievals on monthly scales therefore necessitates a decomposition of the total error budget into sampling and retrieval error quantities. This paper presents results from a statistical evaluation of the sampling and retrieval errors for five different space-borne rain sensors on board nine orbiting satellites. Using an error decomposition methodology developed by one of the authors, sampling and retrieval errors were estimated at 0.25° resolution within 150 km of ground-based weather radars located at Kwajalein, Marshall Islands and Melbourne, Florida. Error and bias statistics were calculated according to the land, ocean, and coast classifications of the surface terrain mask developed for the Goddard Profiling (GPROF) rain algorithm. Variations in the comparative error statistics are attributed to various factors related to differences in the swath geometry of each rain sensor, the orbital and instrument characteristics of the satellite, and the regional climatology. The most significant result from this study is that each of the satellites incurred negative long-term oceanic retrieval biases of 10 to 30%.
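The decomposition idea can be sketched as follows: the ground reference subsampled at overpass times isolates the sampling error, the satellite-minus-reference difference at those same times isolates the retrieval error, and the two sum to the total error. This is a minimal illustration of the concept, not the authors' exact methodology.

```python
import numpy as np

def decompose_monthly_error(ref_full, ref_at_overpass, sat_at_overpass):
    """Illustrative sampling/retrieval error decomposition.
    ref_full: full time series of the ground reference (the "truth");
    ref_at_overpass: reference seen only at satellite overpass times;
    sat_at_overpass: satellite retrievals at those same times."""
    truth = np.mean(ref_full)                         # "true" monthly mean
    sampling = np.mean(ref_at_overpass) - truth       # temporal sampling error
    retrieval = np.mean(sat_at_overpass) - np.mean(ref_at_overpass)
    total = np.mean(sat_at_overpass) - truth          # = sampling + retrieval
    return sampling, retrieval, total
```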
A Detailed Examination of the GPM Core Satellite Gridded Text Product
NASA Technical Reports Server (NTRS)
Stocker, Erich Franz; Kelley, Owen A.; Kummerow, C.; Huffman, George; Olson, William S.; Kwiatowski, John M.
2015-01-01
The Global Precipitation Measurement (GPM) mission quarter-degree gridded-text product has a similar file format and a similar purpose to the Tropical Rainfall Measuring Mission (TRMM) 3G68 quarter-degree product. The GPM text-grid format is an hourly summary of surface precipitation retrievals from various GPM instruments and combinations of GPM instruments. The GMI Goddard Profiling (GPROF) retrieval provides the widest swath (800 km) and performs the retrieval using the GPM Microwave Imager (GMI). The Ku radar provides the widest radar swath (250 km) and also provides continuity with the TRMM Ku Precipitation Radar. GPM's Ku+Ka band matched swath (125 km) provides a dual-frequency precipitation retrieval. The "combined" retrieval (125 km swath) provides a multi-instrument precipitation retrieval based on the GMI, the DPR Ku radar, and the DPR Ka radar. While the data are reported in hourly grids, all hours for a day are packaged into a single text file that is gzipped to reduce file size and to speed up downloading. The data are reported on a 0.25° x 0.25° grid.
NASA Technical Reports Server (NTRS)
Liu, Z.; Ostrenga, D.; Vollmer, B.; Kempler, S.; Deshong, B.; Greene, M.
2015-01-01
The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) hosts and distributes GPM data within the NASA Earth Observation System Data Information System (EOSDIS). The GES DISC is also home to the data archive for the GPM predecessor, the Tropical Rainfall Measuring Mission (TRMM). Over the past 17 years, the GES DISC has served the scientific as well as other communities with TRMM data and user-friendly services. During the GPM era, the GES DISC will continue to provide user-friendly data services and customer support to users around the world. GPM products currently and to-be available:
- Level-1 GPM Microwave Imager (GMI) and partner radiometer products, DPR products
- Level-2 Goddard Profiling Algorithm (GPROF) GMI and partner products, DPR products
- Level-3 daily and monthly products, DPR products
- Integrated Multi-satellitE Retrievals for GPM (IMERG) products (early, late, and final)
A dedicated Web portal (including user guides, etc.) has been developed for GPM data (http://disc.sci.gsfc.nasa.gov/gpm). Data services that are currently and to-be available include the Google-like Mirador (http://mirador.gsfc.nasa.gov/) for data search and access; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion into various formats (e.g., netCDF, HDF, KML (for Google Earth), ASCII); exploration, visualization, and statistical online analysis through Giovanni (http://giovanni.gsfc.nasa.gov); generation of value-added products; parameter and spatial subsetting; time aggregation; regridding; data version control and provenance; documentation; science support for proper data usage, FAQ, help desk; and monitoring services (e.g., Current Conditions) for applications. The Unified User Interface (UUI) is the next step in the evolution of the GES DISC web site.
It attempts to provide seamless access to data, information, and services through a single interface without sending the user to different applications or URLs (e.g., search, access, subset, Giovanni, documents).
NASA Astrophysics Data System (ADS)
Wang, N. Y.; You, Y.; Ferraro, R. R.; Guch, I.
2014-12-01
Microwave satellite remote sensing of precipitation over land is a challenging problem due to the highly variable land surface emissivity, which, if not properly accounted for, can be much greater than the precipitation signal itself, especially in light rain/snow conditions. Additionally, surfaces such as arid land, deserts, and snow cover have brightness temperature characteristics similar to precipitation. Ongoing work by NASA's GPM microwave radiometer team is constructing databases for the GPROF algorithm through a variety of means; however, there is much uncertainty as to what the optimal information is for the wide array of sensors in the GPM constellation, including examination of regional conditions. The at-launch database focuses on stratification by emissivity class, surface temperature, and total precipitable water (TPW). We will perform sensitivity studies to determine the potential role of environmental factors such as land surface temperature, surface elevation, and relative humidity, and of storm morphology such as storm vertical structure, height, and ice thickness, to improve precipitation estimation over land, including rain and snow. In other words, what information outside of the satellite radiances can help describe the background and the subsequent departures from it that mark actively precipitating regions? It is likely that this information will be a function of the various precipitation regimes. Statistical methods such as Principal Component Analysis (PCA) will be utilized in this task. Databases from a variety of sources are being constructed. They include existing satellite microwave measurements of precipitating and non-precipitating conditions, ground radar precipitation rate estimates, surface emissivity climatology from satellites, and surface temperature and TPW from NWP reanalysis.
Results from the analysis of these databases with respect to the microwave precipitation sensitivity to the variety of environmental conditions in different climate regimes will be discussed.
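One way to carry out the PCA-based screening mentioned above is an SVD on a matrix of environmental predictors; the sketch below is a generic illustration (the column variables named in the comment are examples from the abstract, and everything else is an assumption about how one might set it up).

```python
import numpy as np

def leading_env_modes(env, n_modes=2):
    """PCA via SVD on a (samples x variables) matrix of environmental
    predictors (e.g. columns for surface temperature, TPW, elevation,
    storm height). Returns the leading modes and the fraction of total
    variance each explains."""
    X = env - env.mean(axis=0)                  # center each predictor
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    frac = s**2 / np.sum(s**2)                  # explained-variance fractions
    return Vt[:n_modes], frac[:n_modes]
```

A dominant shared mode across predictors shows up as a leading component explaining most of the variance, flagging which environmental combinations carry independent information.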
NASA Technical Reports Server (NTRS)
Ostrenga, D.; Liu, Z.; Vollmer, B.; Teng, W.; Kempler, S.
2014-01-01
On February 27, 2014, the NASA Global Precipitation Measurement (GPM) mission was launched to provide the next-generation global observations of rain and snow (http://pmm.nasa.gov/GPM). The GPM mission consists of an international network of satellites in which a GPM Core Observatory satellite carries both active and passive microwave instruments to measure precipitation and serve as a reference standard, to unify precipitation measurements from a constellation of other research and operational satellites. The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) hosts and distributes GPM data within the NASA Earth Observation System Data Information System (EOSDIS). The GES DISC is home to the data archive for the GPM predecessor, the Tropical Rainfall Measuring Mission (TRMM). Over the past 16 years, the GES DISC has served the scientific as well as other communities with TRMM data and user-friendly services. During the GPM era, the GES DISC will continue to provide user-friendly data services and customer support to users around the world. GPM products currently and to-be available include the following:
- Level-1 GPM Microwave Imager (GMI) and partner radiometer products
- Level-2 Goddard Profiling Algorithm (GPROF) GMI and partner products
- Level-3 daily and monthly products
- Integrated Multi-satellitE Retrievals for GPM (IMERG) products (early, late, and final)
A dedicated Web portal (including user guides, etc.) has been developed for GPM data (http://disc.sci.gsfc.nasa.gov/gpm).
Data services that are currently and to-be available include the Google-like Mirador (http://mirador.gsfc.nasa.gov/) for data search and access; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion into various formats (e.g., netCDF, HDF, KML (for Google Earth), ASCII); exploration, visualization, and statistical online analysis through Giovanni (http://giovanni.gsfc.nasa.gov); generation of value-added products; parameter and spatial subsetting; time aggregation; regridding; data version control and provenance; documentation; science support for proper data usage, FAQ, help desk; and monitoring services (e.g., Current Conditions) for applications.
NASA Technical Reports Server (NTRS)
Liu, Zhong; Ostrenga, D.; Vollmer, B.; Deshong, B.; Greene, M.; Teng, W.; Kempler, S. J.
2015-01-01
On February 27, 2014, the NASA Global Precipitation Measurement (GPM) mission was launched to provide the next-generation global observations of rain and snow (http://pmm.nasa.gov/GPM). The GPM mission consists of an international network of satellites in which a GPM Core Observatory satellite carries both active and passive microwave instruments to measure precipitation and serve as a reference standard, to unify precipitation measurements from a constellation of other research and operational satellites. The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) hosts and distributes GPM data within the NASA Earth Observation System Data Information System (EOSDIS). The GES DISC is home to the data archive for the GPM predecessor, the Tropical Rainfall Measuring Mission (TRMM). Over the past 16 years, the GES DISC has served the scientific as well as other communities with TRMM data and user-friendly services. During the GPM era, the GES DISC will continue to provide user-friendly data services and customer support to users around the world. GPM products currently and to-be available include the following:
1. Level-1 GPM Microwave Imager (GMI) and partner radiometer products.
2. Goddard Profiling Algorithm (GPROF) GMI and partner products.
3. Integrated Multi-satellitE Retrievals for GPM (IMERG) products (early, late, and final).
A dedicated Web portal (including user guides, etc.) has been developed for GPM data (http://disc.sci.gsfc.nasa.gov/gpm).
Data services that are currently and to-be available include the Google-like Mirador (http://mirador.gsfc.nasa.gov/) for data search and access; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion into various formats (e.g., netCDF, HDF, KML (for Google Earth), ASCII); exploration, visualization, and statistical online analysis through Giovanni (http://giovanni.gsfc.nasa.gov); generation of value-added products; parameter and spatial subsetting; time aggregation; regridding; data version control and provenance; documentation; science support for proper data usage, FAQ, help desk; and monitoring services (e.g., Current Conditions) for applications. In this presentation, we will present GPM data products and services with examples.
NASA Astrophysics Data System (ADS)
Ostrenga, D.; Liu, Z.; Vollmer, B.; Teng, W. L.; Kempler, S. J.
2014-12-01
On February 27, 2014, the NASA Global Precipitation Measurement (GPM) mission was launched to provide the next-generation global observations of rain and snow (http://pmm.nasa.gov/GPM). The GPM mission consists of an international network of satellites in which a GPM "Core Observatory" satellite carries both active and passive microwave instruments to measure precipitation and serve as a reference standard, to unify precipitation measurements from a constellation of other research and operational satellites. The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) hosts and distributes GPM data within the NASA Earth Observation System Data Information System (EOSDIS). The GES DISC is home to the data archive for the GPM predecessor, the Tropical Rainfall Measuring Mission (TRMM). Over the past 16 years, the GES DISC has served the scientific as well as other communities with TRMM data and user-friendly services. During the GPM era, the GES DISC will continue to provide user-friendly data services and customer support to users around the world. GPM products currently and to-be available include the following:
- Level-1 GPM Microwave Imager (GMI) and partner radiometer products
- Goddard Profiling Algorithm (GPROF) GMI and partner products
- Integrated Multi-satellitE Retrievals for GPM (IMERG) products (early, late, and final)
A dedicated Web portal (including user guides, etc.) has been developed for GPM data (http://disc.sci.gsfc.nasa.gov/gpm).
Data services that are currently and to-be available include Google-like Mirador (http://mirador.gsfc.nasa.gov/) for data search and access; data access through various Web services (e.g., OPeNDAP, GDS, WMS, WCS); conversion into various formats (e.g., netCDF, HDF, KML (for Google Earth), ASCII); exploration, visualization, and statistical online analysis through Giovanni (http://giovanni.gsfc.nasa.gov); generation of value-added products; parameter and spatial subsetting; time aggregation; regridding; data version control and provenance; documentation; science support for proper data usage, FAQ, help desk; monitoring services (e.g. Current Conditions) for applications. In this presentation, we will present GPM data products and services with examples.
NASA Astrophysics Data System (ADS)
Loughman, Robert; Bhartia, Pawan K.; Chen, Zhong; Xu, Philippe; Nyaku, Ernest; Taha, Ghassan
2018-05-01
The theoretical basis of the Ozone Mapping and Profiler Suite (OMPS) Limb Profiler (LP) Version 1 aerosol extinction retrieval algorithm is presented. The algorithm uses an assumed bimodal lognormal aerosol size distribution to retrieve aerosol extinction profiles at 675 nm from OMPS LP radiance measurements. A first-guess aerosol extinction profile is updated by iteration using the Chahine nonlinear relaxation method, based on comparisons between the measured radiance profile at 675 nm and the radiance profile calculated by the Gauss-Seidel limb-scattering (GSLS) radiative transfer model for a spherical-shell atmosphere. This algorithm is discussed in the context of previous limb-scattering aerosol extinction retrieval algorithms, and the most significant error sources are enumerated. The retrieval algorithm is limited primarily by uncertainty about the aerosol phase function. Horizontal variations in aerosol extinction, which violate the spherical-shell atmosphere assumed in the version 1 algorithm, may also limit the quality of the retrieved aerosol extinction profiles significantly.
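The Chahine relaxation named above can be sketched in a toy form: each profile level is scaled by the ratio of measured to modeled radiance at the measurement most sensitive to it. The one-to-one level-to-radiance mapping and the trivial forward model below are simplifying assumptions; the operational algorithm uses the full GSLS radiative transfer model.

```python
import numpy as np

def chahine_relax(x_first_guess, y_measured, forward_model, n_iter=20):
    """Bare-bones Chahine nonlinear relaxation: multiply each profile
    level by the ratio of measured to modeled radiance, iterating until
    the forward model reproduces the measurements (assumes a roughly
    one-to-one, monotonic level-to-radiance sensitivity)."""
    x = np.array(x_first_guess, dtype=float)
    for _ in range(n_iter):
        y_model = forward_model(x)
        x *= y_measured / y_model   # relaxation update
    return x
```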
Day 1 for the Integrated Multi-Satellite Retrievals for GPM (IMERG) Data Sets
NASA Astrophysics Data System (ADS)
Huffman, G. J.; Bolvin, D. T.; Braithwaite, D.; Hsu, K. L.; Joyce, R.; Kidd, C.; Sorooshian, S.; Xie, P.
2014-12-01
The Integrated Multi-satellitE Retrievals for GPM (IMERG) is designed to compute the best time series of (nearly) global precipitation from "all" precipitation-relevant satellites and global surface precipitation gauge analyses. IMERG was developed to use GPM Core Observatory data as a reference for the international constellation of satellites of opportunity that constitute the GPM virtual constellation. Computationally, IMERG is a unified U.S. algorithm drawing on strengths of the three contributing groups, whose previous work includes: 1) the TRMM Multi-satellite Precipitation Analysis (TMPA); 2) the CPC Morphing algorithm with Kalman Filtering (K-CMORPH); and 3) the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks using a Cloud Classification System (PERSIANN-CCS). We review the IMERG design, development, testing, and current status. IMERG provides 0.1° x 0.1° half-hourly data, and will be run at multiple latencies, providing successively more accurate estimates 4 hours, 8 hours, and 2 months after observation time. In Day 1 the spatial extent is 60°N-S, for the period March 2014 to the present. In subsequent reprocessing the data will extend to fully global coverage for the period 1998 to the present. Both the set of input retrievals and the IMERG system are substantially different from those used in previous U.S. products. The input passive microwave data are all being produced with GPROF2014, which is substantially upgraded compared to previous versions. For the first time, this includes microwave sounders. Accordingly, there is a strong need to carefully check the initial test data sets for performance. IMERG output will be illustrated using pre-operational test data, including the variety of supporting fields, such as the merged-microwave and infrared estimates and the precipitation type. Finally, we will summarize the expected release of various output products and the subsequent reprocessing sequence.
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Koch, Grady J.; Kavaya, Michael J.; Ray, Taylor J.
2013-01-01
Two versions of airborne wind profiling algorithms for the pulsed 2-micron coherent Doppler lidar system at NASA Langley Research Center in Virginia are presented. Each algorithm utilizes a different number of line-of-sight (LOS) lidar returns while compensating for the adverse effects of the differing coordinate systems of the aircraft and the Earth. One of the two algorithms, APOLO (Airborne Wind Profiling Algorithm for Doppler Wind Lidar), estimates wind products using two LOSs; the other utilizes five LOSs. The airborne lidar data were acquired during NASA's Genesis and Rapid Intensification Processes (GRIP) campaign in 2010. The wind profile products from the two algorithms are compared with dropsonde data to validate their results.
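The two-LOS idea can be illustrated with a toy geometry: neglecting vertical motion and assuming aircraft attitude is already compensated, two radial velocities at different azimuths give a 2x2 linear system for the horizontal wind. This is a schematic of the concept, not the APOLO code.

```python
import numpy as np

def wind_from_two_los(vr1, vr2, az1_deg, az2_deg):
    """Solve for horizontal wind (u east, v north) from two radial
    velocities measured at different beam azimuths, assuming a purely
    horizontal beam projection: vr = u*sin(az) + v*cos(az)."""
    a1, a2 = np.radians(az1_deg), np.radians(az2_deg)
    A = np.array([[np.sin(a1), np.cos(a1)],
                  [np.sin(a2), np.cos(a2)]])
    u, v = np.linalg.solve(A, np.array([vr1, vr2]))
    return u, v
```

Using more than two LOS directions (as in the five-LOS version) overdetermines the same system and is typically solved by least squares.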
NASA Technical Reports Server (NTRS)
Liu, Zhong; Ostrenga, D.; Vollmer, B.; Deshong, B.; MacRitchie, K.; Greene, M.; Kempler, S.
2016-01-01
Precipitation is an important dataset in hydrometeorological research and applications such as flood modeling, drought monitoring, etc. On February 27, 2014, the NASA Global Precipitation Measurement (GPM) mission was launched to provide the next-generation global observations of rain and snow (http://pmm.nasa.gov/GPM). The GPM mission consists of an international network of satellites in which a GPM Core Observatory satellite carries both active and passive microwave instruments to measure precipitation and serve as a reference standard, to unify precipitation measurements from a constellation of other research and operational satellites. The NASA Goddard Earth Sciences (GES) Data and Information Services Center (DISC) hosts and distributes GPM data. The GES DISC is home to the data archive for the GPM predecessor, the Tropical Rainfall Measuring Mission (TRMM). GPM products currently available include the following:
1. Level-1 GPM Microwave Imager (GMI) and partner radiometer products
2. Goddard Profiling Algorithm (GPROF) GMI and partner products (Level-2 and Level-3)
3. GPM dual-frequency precipitation radar and combined products (Level-2 and Level-3)
4. Integrated Multi-satellitE Retrievals for GPM (IMERG) products (early, late, and final run)
GPM data can be accessed through a number of data services (e.g., Simple Subset Wizard, OPeNDAP, WMS, WCS, ftp, etc.). The newly released Unified User Interface (UUI) is a single interface that provides users seamless access to data, information, and services.
For example, a search for precipitation products will return not only TRMM and GPM products but also other global precipitation products such as MERRA (Modern Era Retrospective-Analysis for Research and Applications), GLDAS (Global Land Data Assimilation System), etc. New features and capabilities have recently been added to GIOVANNI to allow exploring and inter-comparing GPM IMERG (Integrated Multi-satellitE Retrievals for GPM) half-hourly and monthly precipitation products as well as other precipitation products such as TRMM, MERRA, NLDAS, GLDAS, etc. GIOVANNI is a web-based tool developed by the GES DISC to visualize and analyze Earth science data without having to download the data and software. During the GPM era, the GES DISC will continue to develop and provide data services for supporting applications. We will update and enhance existing TRMM applications (Current Conditions, the USDA Crop Explorer, etc.) with the higher-spatial-resolution IMERG products. In this presentation, we will present GPM data products and services with examples.
Highlights of TOMS Version 9 Total Ozone Algorithm
NASA Technical Reports Server (NTRS)
Bhartia, Pawan; Haffner, David
2012-01-01
The fundamental basis of the TOMS total ozone algorithm was developed some 45 years ago by Dave and Mateer. It was designed to estimate total ozone from satellite measurements of the backscattered UV radiances at a few discrete wavelengths in the Huggins ozone absorption band (310-340 nm). Over the years, as the need for higher accuracy in measuring total ozone from space has increased, several improvements to the basic algorithm have been made. They include: better correction for the effects of aerosols and clouds, an improved method to account for the variation in shape of ozone profiles with season, latitude, and total ozone, and a multi-wavelength correction for remaining profile shape errors. These improvements have made it possible to retrieve total ozone with just 3 spectral channels of moderate spectral resolution (approx. 1 nm) with accuracy comparable to state-of-the-art spectral fitting algorithms like DOAS that require high-spectral-resolution measurements at a large number of wavelengths. One of the deficiencies of the TOMS algorithm has been that it doesn't provide an error estimate. This is a particular problem at high latitudes, where the profile shape errors become significant and vary with latitude, season, total ozone, and instrument viewing geometry. The primary objective of the TOMS V9 algorithm is to account for these effects in estimating the error bars. This is done by a straightforward implementation of the Rodgers optimal estimation method using a priori ozone profiles and their error covariance matrices constructed using Aura MLS and ozonesonde data. The algorithm produces a vertical ozone profile that contains 1-2.5 pieces of information (degrees of freedom of signal) depending upon solar zenith angle (SZA). The profile is integrated to obtain the total column. We provide information that shows the altitude range in which the profile is best determined by the measurements. One can use this information in data assimilation and analysis.
A side benefit of this algorithm is that it is considerably simpler than the present algorithm, which uses a database of 1512 profiles to retrieve total ozone. These profiles are tedious to construct and modify. Though the TOMS V9 algorithm is conceptually similar to the SBUV V8 algorithm developed about a decade ago, the two differ in detail. The TOMS algorithm uses 3 wavelengths to retrieve the profile while the SBUV algorithm uses 6-9 wavelengths, so TOMS provides less profile information. However, both algorithms have comparable total ozone information, and TOMS V9 can be easily adapted to use additional wavelengths from instruments like GOME, OMI, and OMPS to provide better profile information at smaller SZAs. The other significant difference between the two algorithms is that while the SBUV algorithm has been optimized for deriving monthly zonal means by making an appropriate choice of the a priori error covariance matrix, the TOMS algorithm has been optimized for tracking short-term variability using month- and latitude-dependent covariance matrices.
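The Rodgers optimal-estimation step underlying the V9 approach can be sketched in its linear form: the retrieval weights the measurement and the a priori by their covariances, and the trace of the averaging kernel gives the degrees of freedom of signal quoted in the abstract. This is a textbook linear sketch, not the operational (iterative, nonlinear) code.

```python
import numpy as np

def oe_retrieval(y, K, x_a, S_a, S_e):
    """Linear Rodgers optimal estimation: combine measurement vector y
    (with error covariance S_e, Jacobian K) and a priori x_a (with
    covariance S_a). Returns the retrieved state and the degrees of
    freedom of signal, trace(A), from the averaging kernel A."""
    Sa_inv = np.linalg.inv(S_a)
    Se_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(Sa_inv + K.T @ Se_inv @ K)   # posterior covariance
    x_hat = x_a + S_hat @ K.T @ Se_inv @ (y - K @ x_a)
    A = S_hat @ K.T @ Se_inv @ K                       # averaging kernel
    return x_hat, np.trace(A)
```

With very precise measurements the retrieval tracks y and the DFS approaches the state dimension; with noisy measurements it relaxes toward the a priori and the DFS drops, which is exactly the 1-2.5 DFS behavior the abstract describes.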
Jürgens, Tim; Clark, Nicholas R; Lecluyse, Wendy; Meddis, Ray
2016-01-01
To use a computer model of impaired hearing to explore the effects of a physiologically inspired hearing-aid algorithm on a range of psychoacoustic measures. A computer model of a hypothetical impaired listener's hearing was constructed by adjusting parameters of a computer model of normal hearing. Absolute thresholds, estimates of compression, and frequency selectivity (summarized as a hearing profile) were assessed using this model with and without pre-processing the stimuli by a hearing-aid algorithm. The influence of different settings of the algorithm on the impaired profile was investigated. To validate the model predictions, the effect of the algorithm on the hearing profiles of human impaired listeners was measured. A computer model simulating impaired hearing (total absence of basilar membrane compression) was used, and three hearing-impaired listeners participated. The hearing profiles of the model and the listeners showed substantial changes when the test stimuli were pre-processed by the hearing-aid algorithm. These changes consisted of lower absolute thresholds, steeper temporal masking curves, and sharper psychophysical tuning curves. The hearing-aid algorithm brought the impaired hearing profile of the model closer to a normal hearing profile. Qualitatively similar results were found with the impaired listeners' hearing profiles.
A retrieval algorithm of hydrometeor profile for submillimeter-wave radiometer
NASA Astrophysics Data System (ADS)
Liu, Yuli; Buehler, Stefan; Liu, Heguang
2017-04-01
Vertical profiles of particle microphysics are vital for estimating climate feedbacks. This paper proposes a new algorithm to retrieve profiles of hydrometeor parameters (i.e., ice, snow, rain, liquid cloud, graupel) based on passive submillimeter-wave measurements. These parameters include water content and particle size. The first part of the algorithm builds the database and retrieves the integrated quantities. The database is built with the Atmospheric Radiative Transfer Simulator (ARTS), which uses atmospheric data to simulate the corresponding brightness temperatures. A neural network, trained on the precalculated database, is developed to retrieve the water path for each type of particle. The second part of the algorithm analyses the statistical relationship between water paths and vertical parameter profiles. Because of the strong dependence between vertical layers in the profiles, the Principal Component Analysis (PCA) technique is applied. The third part of the algorithm uses the forward model explicitly to retrieve the hydrometeor profiles. A cost function is calculated in each iteration, and the Differential Evolution (DE) algorithm is used to adjust the parameter values during the evolutionary process. The performance of this algorithm is to be verified against both the simulation database and measurement data, by comparing retrieved profiles with the initial ones. Results show that this algorithm can retrieve the hydrometeor profiles efficiently. The combination of ARTS and an optimization algorithm yields much better results than the commonly used database approach. Meanwhile, the fact that ARTS can be used explicitly in the retrieval process shows great potential for solving other retrieval problems.
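The third step's evolutionary search can be illustrated with a minimal sketch. This is not the paper's ARTS-based retrieval: the forward model below is a stand-in linear map, and the control parameters (`pop`, `gens`, `F`, `CR`) are illustrative defaults for a basic DE/rand/1/bin scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(x, y_obs, forward):
    # Sum-of-squares misfit between simulated and observed "brightness temperatures"
    return float(np.sum((forward(x) - y_obs) ** 2))

def de_retrieve(forward, y_obs, bounds, pop=20, gens=200, F=0.8, CR=0.9):
    """Basic DE/rand/1/bin minimisation of the retrieval cost function."""
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    dim = len(bounds)
    X = lo + rng.random((pop, dim)) * (hi - lo)        # initial population
    f = np.array([cost(x, y_obs, forward) for x in X])
    for _ in range(gens):
        for i in range(pop):
            idx = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            a, b, c = X[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)  # mutation + bound handling
            mask = rng.random(dim) < CR                # binomial crossover
            mask[rng.integers(dim)] = True             # keep at least one mutant gene
            trial = np.where(mask, mutant, X[i])
            f_trial = cost(trial, y_obs, forward)
            if f_trial < f[i]:                         # greedy selection
                X[i], f[i] = trial, f_trial
    return X[np.argmin(f)]

# Stand-in forward model: a fixed linear map from 3 profile parameters
# to 5 simulated channels (the real forward model would be ARTS).
A = rng.standard_normal((5, 3))
x_true = np.array([0.5, -0.2, 0.8])
y_obs = A @ x_true
x_hat = de_retrieve(lambda x: A @ x, y_obs, bounds=[(-1.0, 1.0)] * 3)
```

The greedy selection step guarantees the best cost never worsens, which is why DE is attractive when the forward model is an expensive black box.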
Java implementation of Class Association Rule algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tamura, Makio
2007-08-30
Java implementation of three Class Association Rule mining algorithms: NETCAR, CARapriori, and clustering-based rule mining. NETCAR is a novel algorithm developed by Makio Tamura. The algorithm is discussed in a paper, UCRL-JRNL-232466-DRAFT, which will be published in a peer-reviewed scientific journal. The software is used to extract combinations of genes relevant to a phenotype from a phylogenetic profile and a phenotype profile. A phylogenetic profile is represented by a binary matrix and a phenotype profile is represented by a binary vector. The present application of this software is in genome analysis; however, it could be applied more generally.
Banakh, V A; Marakasov, D A
2007-08-01
Reconstruction of a wind profile based on the statistics of plane-wave intensity fluctuations in a turbulent atmosphere is considered. The algorithm for wind profile retrieval from the spatiotemporal spectrum of plane-wave weak intensity fluctuations is described, and the results of end-to-end computer experiments on wind profiling based on the developed algorithm are presented. It is shown that the reconstructing algorithm allows retrieval of a wind profile from turbulent plane-wave intensity fluctuations with acceptable accuracy.
Behavioral Profiling of Scada Network Traffic Using Machine Learning Algorithms
2014-03-27
Behavioral Profiling of SCADA Network Traffic Using Machine Learning Algorithms. Thesis, Jessica R. Werling, Captain, USAF, AFIT-ENG-14-M-81.
Digital sorting of complex tissues for cell type-specific gene expression profiles.
Zhong, Yi; Wan, Ying-Wooi; Pang, Kaifang; Chow, Lionel M L; Liu, Zhandong
2013-03-07
Cellular heterogeneity is present in almost all gene expression profiles. However, transcriptome analysis of tissue specimens often ignores the cellular heterogeneity present in these samples. Standard deconvolution algorithms require prior knowledge of the cell type frequencies within a tissue or their in vitro expression profiles. Furthermore, these algorithms tend to report biased estimations. Here, we describe a Digital Sorting Algorithm (DSA) for extracting cell type-specific gene expression profiles from mixed tissue samples that is unbiased and does not require prior knowledge of cell type frequencies. The results suggest that DSA is a specific and sensitive algorithm for gene expression profile deconvolution and will be useful in studying individual cell types of complex tissues.
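As a toy illustration of the deconvolution idea (not the published DSA implementation), the sketch below simulates two cell types mixed in known proportions, recovers the mixing fractions from marker genes up to a per-type scale, rescales them so that each sample's fractions sum to one, and then solves for the cell-type-specific profiles by least squares. All dimensions, marker assignments, and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated ground truth: 6 genes x 2 cell types, 5 mixed samples.
S_true = rng.random((6, 2)) + 0.5               # cell-type-specific profiles
S_true[0, 1] = 0.0                              # gene 0: marker for type A only
S_true[1, 0] = 0.0                              # gene 1: marker for type B only
F_true = rng.dirichlet([1.0, 1.0], size=5).T    # 2 x 5 fractions, columns sum to 1
M = S_true @ F_true                             # observed mixed-tissue expression

# Step 1: each marker row is proportional to its cell type's fraction profile.
R = M[:2] / M[:2].sum(axis=1, keepdims=True)    # rows ~ F_true rows, unknown scale

# Step 2: recover the per-type scales by requiring fractions to sum to 1
# in every sample:  R.T @ s = 1  (least squares).
s, *_ = np.linalg.lstsq(R.T, np.ones(5), rcond=None)
F_hat = R * s[:, None]

# Step 3: with frequencies fixed, the full profiles follow by least squares,
# since M ~ S @ F_hat for every gene.
S_hat = M @ np.linalg.pinv(F_hat)
```

In the noise-free case this recovers both the fractions and the profiles exactly; with real data, the least-squares steps return the best-fit estimates instead.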
NASA Technical Reports Server (NTRS)
Chu, W. P.
1977-01-01
Spacecraft remote sensing of stratospheric aerosol and ozone vertical profiles using the solar occultation experiment has been analyzed. A computer algorithm has been developed in which a two-step inversion of the simulated data can be performed. The radiometric data are first inverted into a vertical extinction profile using a linear inversion algorithm. Then the multiwavelength extinction profiles are solved with a nonlinear least-squares algorithm to produce aerosol and ozone vertical profiles. Examples of inversion results are shown illustrating the resolution and noise sensitivity of the inversion algorithms.
NASA Technical Reports Server (NTRS)
Groce, J. L.; Izumi, K. H.; Markham, C. H.; Schwab, R. W.; Thompson, J. L.
1986-01-01
The Local Flow Management/Profile Descent (LFM/PD) algorithm designed for the NASA Transport System Research Vehicle program is described. The algorithm provides fuel-efficient altitude and airspeed profiles consistent with ATC restrictions in a time-based metering environment over a fixed ground track. The model design constraints include accommodation of both published profile descent procedures and unpublished profile descents, incorporation of fuel efficiency as a flight profile criterion, operation within the performance capabilities of the Boeing 737-100 airplane with JT8D-7 engines, and conformity to standard air traffic navigation and control procedures. Holding and path stretching capabilities are included for long delay situations.
Computation of nonparametric convex hazard estimators via profile methods.
Jankowski, Hanna K; Wellner, Jon A
2009-05-01
This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps: First the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.
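The second step's search over the antimode can be sketched as follows. Here a ternary-search variant of bisection is applied to a stand-in quasi-concave function; the toy `profile` lambda is an assumption standing in for the actual profiled log-likelihood.

```python
def ternary_max(f, lo, hi, tol=1e-8):
    """Locate the maximiser of a quasi-concave (unimodal) function on [lo, hi].

    Each iteration discards the third of the interval that cannot
    contain the maximum, so the bracket shrinks geometrically."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            lo = m1          # maximiser cannot lie in [lo, m1]
        else:
            hi = m2          # maximiser cannot lie in [m2, hi]
    return 0.5 * (lo + hi)

# Stand-in for the profiled log-likelihood as a function of the antimode:
profile = lambda a: -(a - 2.7) ** 2
a_star = ternary_max(profile, 0.0, 10.0)
```

Quasi-concavity is exactly the property that makes this bracketing valid: a single function comparison per side suffices to discard part of the interval.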
Optimal Decentralized Protocol for Electric Vehicle Charging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gan, LW; Topcu, U; Low, SH
We propose a decentralized algorithm to optimally schedule electric vehicle (EV) charging. The algorithm exploits the elasticity of electric vehicle loads to fill the valleys in electric load profiles. We first formulate the EV charging scheduling problem as an optimal control problem, whose objective is to impose a generalized notion of valley-filling, and study properties of optimal charging profiles. We then give a decentralized algorithm to iteratively solve the optimal control problem. In each iteration, EVs update their charging profiles according to the control signal broadcast by the utility company, and the utility company alters the control signal to guide their updates. The algorithm converges to optimal charging profiles (that are as "flat" as they can possibly be) irrespective of the specifications (e.g., maximum charging rate and deadline) of EVs, even if EVs do not necessarily update their charging profiles in every iteration, and use a potentially outdated control signal when they update. Moreover, the algorithm only requires each EV to solve its local problem, hence its implementation requires low computation capability. We also extend the algorithm to track a given load profile and to real-time implementation.
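A heavily simplified sketch of the valley-filling loop follows. This is not the paper's exact update rule (which damps each EV's best response with a quadratic penalty term); here a plain averaging step plays that role, and the load numbers, horizon, and rate caps are made-up.

```python
import numpy as np

T = 6
base = np.array([3.0, 2.0, 1.0, 1.0, 2.0, 3.0])   # non-EV load, with a valley
energy = [2.0, 1.5]                               # energy each EV must receive
r_max = 1.0                                       # per-slot charging-rate cap
R = [np.full(T, e / T) for e in energy]           # start from flat profiles

def best_response(price, e):
    """Cheapest feasible profile: greedily place energy in the cheapest slots."""
    r = np.zeros(T)
    for t in np.argsort(price):
        r[t] = min(r_max, e - r.sum())
        if r.sum() >= e - 1e-12:
            break
    return r

for _ in range(50):
    price = base + sum(R)                 # broadcast control signal ~ aggregate load
    # Averaging the old profile with the best response damps oscillations,
    # standing in for the paper's quadratic regularisation.
    R = [0.5 * R[i] + 0.5 * best_response(price, energy[i]) for i in range(len(R))]

final = base + sum(R)                     # valley slots get filled first
```

Each EV only needs the broadcast signal and its own constraints, which is the sense in which the scheme is decentralized.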
NASA Astrophysics Data System (ADS)
Siomos, Nikolaos; Filoglou, Maria; Poupkou, Anastasia; Liora, Natalia; Dimopoulos, Spyros; Melas, Dimitris; Chaikovsky, Anatoli; Balis, Dimitris
2015-04-01
Vertical profiles of the aerosol mass concentration derived by a retrieval algorithm that uses combined sunphotometer and LIDAR data (LIRIC) were used in order to validate the mass concentration profiles estimated by the air quality model CAMx. LIDAR and CIMEL measurements of the Laboratory of Atmospheric Physics of the Aristotle University of Thessaloniki were used for this validation. The aerosol mass concentration profiles of the fine and coarse mode derived by CAMx were compared with the respective profiles derived by the retrieval algorithm. For the coarse-mode particles, forecasts of the Saharan dust transport model BSC-DREAM8bV2 were also taken into account. Each of the retrieval algorithm's profiles was matched to the model profile with the best agreement within a time window of four hours before and after the central measurement. OPAC, a software package that can provide optical properties of aerosol mixtures, was also employed to calculate the Ångström exponent and lidar ratio values at 355 nm and 532 nm for each of the model's profiles, allowing a comparison with the Ångström exponent and lidar ratio values derived by the retrieval algorithm for each measurement. The comparisons between the fine-mode aerosol concentration profiles resulted in good agreement between CAMx and the retrieval algorithm, with the vertical mean bias error never exceeding 7 μg/m3. Concerning the coarse-mode aerosol concentration profiles, both CAMx and BSC-DREAM8bV2 severely underestimate the values, although in cases of Saharan dust transport events there is agreement between the profiles of the BSC-DREAM8bV2 model and the retrieval algorithm.
NASA Technical Reports Server (NTRS)
Curtis, Scott; Huffman, George; Nelkin, Eric
1999-01-01
Satellite estimates and gauge observations of precipitation are useful in understanding the water cycle, analyzing climatic variability, and validating climate models. The Global Precipitation Climatology Project (GPCP) released a community merged precipitation data set for the period July 1987 through the present, and has recently extended that data set back to 1986. One objective of this study is to use GPCP estimates to describe and quantify the seasonal variation of precipitation, with emphasis on the Asian summer monsoon. Another focus is the 1997-98 El Niño Southern Oscillation (ENSO) and associated extreme precipitation events. The summer monsoon tends to be drier than normal in El Niño years. This was not observed for 1997 or 1998, while for 1997 the NCEP model produced the largest summer rain rates over India in years. This inconsistency will be examined. The average annual global precipitation rate is 2.7 mm/day as estimated by GPCP, which is similar to values computed from long-term climatologies. From 30°N to 30°S the average precipitation rate is 2.7 mm/day over land, with a maximum in the annual cycle occurring in February-March, when the Amazon basin receives abundant rainfall. The average precipitation rate is 3.1 mm/day over the tropical oceans, with a peak earlier in the season (November-December), corresponding with the transition from a strong Pacific Intertropical Convergence Zone (ITCZ) from June to November to a strong South Pacific Convergence Zone (SPCZ) from December to March. In the seasonal evolution, the Asian summer monsoon stands out with rains in excess of 15 mm/day off the coast of Burma in June. The GPROF pentad data also capture the onset of the tropical Pacific rainfall patterns associated with the 1997-98 ENSO. From February to October 1997 at least four rain-producing systems traveled from west to east in the equatorial corridor. A rapid transition from El Niño to La Niña conditions occurred in May-June 1998.
GPCP and GPROF were used to construct precipitation-based ENSO indices to monitor El Niños (EL) and La Niñas (LI).
Quasi-Global Precipitation as Depicted in the GPCPV2.2 and TMPA V7
NASA Technical Reports Server (NTRS)
Huffman, George J.; Bolvin, David T.; Nelkin, Eric J.; Adler, Robert F.
2012-01-01
After a lengthy incubation period, the year 2012 saw the release of the Global Precipitation Climatology Project (GPCP) Version 2.2 monthly dataset and the TRMM Multi-satellite Precipitation Analysis (TMPA) Version 7. One primary feature of the new data sets is that DMSP SSMIS data are now used, which entailed a great deal of development work to overcome calibration issues. In addition, GPCP V2.2 included a slight upgrade to the gauge analysis input datasets, particularly over China, while TMPA V7 saw more substantial upgrades: 1) The gauge analysis record in Version 6 used the (older) GPCP monitoring product through April 2005 and the CAMS analysis thereafter, which introduced an inhomogeneity; Version 7 uses the Version 6 GPCC Full analysis, switching to the Version 4 Monitoring analysis thereafter. 2) The inhomogeneously processed AMSU record in Version 6 is uniformly processed in Version 7. 3) The TMI and SSMI input data have been upgraded to the GPROF2010 algorithm. The global-change, water cycle, and other user communities are acutely interested in how these data sets compare, as consistency between differently processed, long-term, quasi-global data sets provides some assurance that the statistics computed from them provide a good representation of the atmosphere's behavior. Within resolution differences, the two data sets agree well over land, as the gauge data (which tend to dominate the land results) are the same in both. Over ocean the results differ more because the satellite products used for calibration are based on very different algorithms and the dominant input data sets are different. The time series of tropical (30°N-30°S) ocean average precipitation shows that the TMPA V7 follows the TMI-PR Combined Product calibrator, although running approximately 5% higher on average. The GPCP and TMPA time series are fairly consistent, although the GPCP runs approximately 10% lower than the TMPA, and has a somewhat larger interannual variation.
As well, the GPCP and TMPA interannual variations have an apparent phase shift, with GPCP running a few months later. Additional diagnostics will include mean maps and selected scatter plots.
GPM Ground Validation: Pre to Post-Launch Era
NASA Astrophysics Data System (ADS)
Petersen, Walt; Skofronick-Jackson, Gail; Huffman, George
2015-04-01
NASA GPM Ground Validation (GV) activities have transitioned from the pre- to the post-launch era. Prior to launch, direct validation networks and associated partner institutions were identified world-wide, covering a plethora of precipitation regimes. In the U.S., direct GV efforts focused on use of new operational products such as the NOAA Multi-Radar Multi-Sensor suite (MRMS) for TRMM validation and GPM radiometer algorithm database development. In the post-launch era, MRMS products including precipitation rate, accumulation, types, and data quality are being routinely generated to facilitate statistical GV of instantaneous (e.g., Level II orbit) and merged (e.g., IMERG) GPM products. Toward assessing precipitation column impacts on product uncertainties, range-gate to pixel-level validation of both Dual-Frequency Precipitation Radar (DPR) and GPM Microwave Imager data is performed using GPM Validation Network (VN) ground radar and satellite data processing software. VN software ingests quality-controlled volumetric radar datasets and geo-matches those data to coincident DPR and radiometer Level-II data. When combined, MRMS and VN datasets enable more comprehensive interpretation of both ground- and satellite-based estimation uncertainties. To support physical validation efforts, eight (one) field campaigns have been conducted in the pre- (post-) launch era. The campaigns span regimes from northern-latitude cold-season snow to warm tropical rain. Most recently the Integrated Precipitation and Hydrology Experiment (IPHEx) took place in the mountains of North Carolina and involved combined airborne and ground-based measurements of orographic precipitation and hydrologic processes underneath the GPM Core satellite. One more U.S. GV field campaign (OLYMPEX) is planned for late 2015 and will address cold-season precipitation estimation, process and hydrology in the orographic and oceanic domains of western Washington State.
Finally, continuous direct and physical validation measurements are also being conducted at the NASA Wallops Flight Facility multi-radar, gauge, and disdrometer facility located in coastal Virginia. This presentation will summarize the evolution of the NASA GPM GV program from the pre- to post-launch eras and place focus on evaluation of year-1 post-launch GPM satellite datasets including Level II GPROF, DPR and Combined algorithms, and Level III IMERG products.
Use of a genetic algorithm to improve the rail profile on Stockholm underground
NASA Astrophysics Data System (ADS)
Persson, Ingemar; Nilsson, Rickard; Bik, Ulf; Lundgren, Magnus; Iwnicki, Simon
2010-12-01
In this paper, a genetic algorithm optimisation method has been used to develop an improved rail profile for Stockholm underground. An inverted penalty index based on a number of key performance parameters was generated as a fitness function and vehicle dynamics simulations were carried out with the multibody simulation package Gensys. The effectiveness of each profile produced by the genetic algorithm was assessed using the roulette wheel method. The method has been applied to the rail profile on the Stockholm underground, where problems with rolling contact fatigue on wheels and rails are currently managed by grinding. From a starting point of the original BV50 and the UIC60 rail profiles, an optimised rail profile with some shoulder relief has been produced. The optimised profile seems similar to measured rail profiles on the Stockholm underground network and although initial grinding is required, maintenance of the profile will probably not require further grinding.
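The roulette-wheel step referenced above can be sketched as follows. The penalty values are hypothetical; the study's actual fitness combined several vehicle-dynamics performance parameters into an inverted penalty index.

```python
import numpy as np

rng = np.random.default_rng(2)

def roulette_select(fitness, n, rng):
    """Sample n parent indices with probability proportional to fitness."""
    p = np.asarray(fitness, dtype=float)
    p = p / p.sum()                         # normalise to a probability wheel
    return rng.choice(len(p), size=n, p=p)

# Fitness as an inverted penalty index: lower penalty -> higher fitness.
penalty = np.array([4.0, 2.0, 1.0, 8.0])    # hypothetical profile penalties
fitness = 1.0 / penalty
parents = roulette_select(fitness, 1000, rng)
```

Fitter rail profiles are selected more often as parents, but poor profiles retain a nonzero chance, which preserves diversity in the population.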
Airborne Doppler Wind Lidar Post Data Processing Software DAPS-LV
NASA Technical Reports Server (NTRS)
Kavaya, Michael J. (Inventor); Beyon, Jeffrey Y. (Inventor); Koch, Grady J. (Inventor)
2015-01-01
Systems, methods, and devices of the present invention enable post processing of airborne Doppler wind LIDAR data. In an embodiment, airborne Doppler wind LIDAR data software written in LabVIEW may be provided and may run two versions of different airborne wind profiling algorithms. A first algorithm may be the Airborne Wind Profiling Algorithm for Doppler Wind LIDAR ("APOLO") using airborne wind LIDAR data from two orthogonal directions to estimate wind parameters, and a second algorithm may be a five direction based method using pseudo inverse functions to estimate wind parameters. The various embodiments may enable wind profiles to be compared using different algorithms, may enable wind profile data for long haul color displays to be generated, may display long haul color displays, and/or may enable archiving of data at user-selectable altitudes over a long observation period for data distribution and population.
Treatment Algorithms Based on Tumor Molecular Profiling: The Essence of Precision Medicine Trials.
Le Tourneau, Christophe; Kamal, Maud; Tsimberidou, Apostolia-Maria; Bedard, Philippe; Pierron, Gaëlle; Callens, Céline; Rouleau, Etienne; Vincent-Salomon, Anne; Servant, Nicolas; Alt, Marie; Rouzier, Roman; Paoletti, Xavier; Delattre, Olivier; Bièche, Ivan
2016-04-01
With the advent of high-throughput molecular technologies, several precision medicine (PM) studies are currently ongoing that include molecular screening programs and PM clinical trials. Molecular profiling programs establish the molecular profile of patients' tumors with the aim to guide therapy based on identified molecular alterations. The aim of prospective PM clinical trials is to assess the clinical utility of tumor molecular profiling and to determine whether treatment selection based on molecular alterations produces superior outcomes compared with unselected treatment. These trials use treatment algorithms to assign patients to specific targeted therapies based on tumor molecular alterations. These algorithms should be governed by fixed rules to ensure standardization and reproducibility. Here, we summarize key molecular, biological, and technical criteria that, in our view, should be addressed when establishing treatment algorithms based on tumor molecular profiling for PM trials. © The Author 2015. Published by Oxford University Press.
Predicting ozone profile shape from satellite UV spectra
NASA Astrophysics Data System (ADS)
Xu, Jian; Loyola, Diego; Romahn, Fabian; Doicu, Adrian
2017-04-01
Identifying the ozone profile shape is a critical yet challenging task for the accurate reconstruction of vertical distributions of atmospheric ozone, which are relevant to climate change and air quality. Motivated by the need to develop an approach to reliably and efficiently estimate vertical information of ozone, and inspired by the success of machine learning techniques, this work proposes a new algorithm for deriving ozone profile shapes from ultraviolet (UV) absorption spectra recorded by satellite instruments, e.g. the GOME series and the future Sentinel missions. The proposed algorithm formulates this particular inverse problem in a classification framework rather than a conventional inversion one, and places an emphasis on effectively characterizing various profile shapes using machine learning techniques. Furthermore, a comparison is performed between ozone profiles estimated from real GOME-2 data by our algorithm and by the classical retrieval algorithm (the Optimal Estimation Method).
A numerical algorithm of tooth profile of non-circular cylindrical gear
NASA Astrophysics Data System (ADS)
Wang, Xuan
2017-08-01
Non-circular cylindrical gear (NCCG) is a common form of non-circular gear. Unlike for a circular gear, the tooth profile equation of an NCCG cannot be obtained in closed form, so a numerical algorithm is needed to calculate its tooth profile. For this reason, this paper presents a simple and highly efficient numerical algorithm to obtain the tooth profile of an NCCG. Firstly, the mathematical model of the tooth profile envelope of the NCCG is established based on the principle of gear shaping, and the tooth profile envelope is obtained. Secondly, the polar radius and polar angle of the shaper cutter tooth profile are chosen as the criteria by which the points of the NCCG tooth cogging can be screened out. Finally, the boundary of the tooth cogging points is extracted by a distance criterion, and correspondingly the tooth profile of the NCCG is obtained.
Wind profiling based on the optical beam intensity statistics in a turbulent atmosphere.
Banakh, Victor A; Marakasov, Dimitrii A
2007-10-01
Reconstruction of the wind profile from the statistics of intensity fluctuations of an optical beam propagating in a turbulent atmosphere is considered. The equations for the spatiotemporal correlation function and the spectrum of weak intensity fluctuations of a Gaussian beam are obtained. The algorithms of wind profile retrieval from the spatiotemporal intensity spectrum are described and the results of end-to-end computer experiments on wind profiling based on the developed algorithms are presented. It is shown that the developed algorithms allow retrieval of the wind profile from the turbulent optical beam intensity fluctuations with acceptable accuracy in many practically feasible laser measurements set up in the atmosphere.
A proof of the DBRF-MEGN method, an algorithm for deducing minimum equivalent gene networks
2011-01-01
Background We previously developed the DBRF-MEGN (difference-based regulation finding-minimum equivalent gene network) method, which deduces the most parsimonious signed directed graphs (SDGs) consistent with expression profiles of single-gene deletion mutants. However, until the present study, we have not presented the details of the method's algorithm or a proof of the algorithm. Results We describe in detail the algorithm of the DBRF-MEGN method and prove that the algorithm deduces all of the exact solutions of the most parsimonious SDGs consistent with expression profiles of gene deletion mutants. Conclusions The DBRF-MEGN method provides all of the exact solutions of the most parsimonious SDGs consistent with expression profiles of gene deletion mutants. PMID:21699737
Information content of ozone retrieval algorithms
NASA Technical Reports Server (NTRS)
Rodgers, C.; Bhartia, P. K.; Chu, W. P.; Curran, R.; Deluisi, J.; Gille, J. C.; Hudson, R.; Mateer, C.; Rusch, D.; Thomas, R. J.
1989-01-01
The algorithms that were used for production processing by the major suppliers of ozone data are characterized to show quantitatively: how the retrieved profile is related to the actual profile (this characterizes the altitude range and vertical resolution of the data); the nature of systematic errors in the retrieved profiles, including their vertical structure and relation to uncertain instrumental parameters; how trends in the real ozone are reflected in trends in the retrieved ozone profile; and how trends in other quantities (both instrumental and atmospheric) might appear as trends in the ozone profile. No serious deficiencies were found in the algorithms used in generating the major available ozone data sets. As the measurements are all indirect in some way, and the retrieved profiles have different characteristics, data from different instruments are not directly comparable.
Algorithm Estimates Microwave Water-Vapor Delay
NASA Technical Reports Server (NTRS)
Robinson, Steven E.
1989-01-01
Accuracy equals or exceeds that of conventional linear algorithms. The "profile" algorithm is an improved algorithm that uses water-vapor-radiometer data to produce estimates of microwave delays caused by water vapor in the troposphere. It does not require site-specific and weather-dependent empirical parameters other than standard meteorological data, latitude, and altitude for use in conjunction with published standard atmospheric data. The basic premise of the profile algorithm is that the wet-path delay is closely approximated by the solution to a simplified version of the nonlinear delay problem, generated numerically from each radiometer observation and simultaneous meteorological data.
Four-dimensional guidance algorithms for aircraft in an air traffic control environment
NASA Technical Reports Server (NTRS)
Pecsvaradi, T.
1975-01-01
Theoretical development and computer implementation of three guidance algorithms are presented. From a small set of input parameters the algorithms generate the ground track, altitude profile, and speed profile required to implement an experimental 4-D guidance system. Given a sequence of waypoints that define a nominal flight path, the first algorithm generates a realistic, flyable ground track consisting of a sequence of straight line segments and circular arcs. Each circular turn is constrained by the minimum turning radius of the aircraft. The ground track and the specified waypoint altitudes are used as inputs to the second algorithm which generates the altitude profile. The altitude profile consists of piecewise constant flight path angle segments, each segment lying within specified upper and lower bounds. The third algorithm generates a feasible speed profile subject to constraints on the rate of change in speed, permissible speed ranges, and effects of wind. Flight path parameters are then combined into a chronological sequence to form the 4-D guidance vectors. These vectors can be used to drive the autopilot/autothrottle of the aircraft so that a 4-D flight path could be tracked completely automatically; or these vectors may be used to drive the flight director and other cockpit displays, thereby enabling the pilot to track a 4-D flight path manually.
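The first algorithm's construction of circular turns between straight segments relies on standard fillet geometry. The sketch below is not the NASA implementation; it computes, for a hypothetical waypoint sequence, the signed turn angle at a waypoint, the distance before the waypoint at which the turn must begin (turn anticipation), and the arc length flown, given a minimum turning radius.

```python
import math

def turn_geometry(p0, p1, p2, radius):
    """Circular-arc fillet at waypoint p1 joining two straight legs.

    Returns the signed turn angle (rad), the distance from p1 at which
    the arc begins/ends along each leg, and the arc length."""
    h_in = math.atan2(p1[1] - p0[1], p1[0] - p0[0])    # heading into p1
    h_out = math.atan2(p2[1] - p1[1], p2[0] - p1[0])   # heading out of p1
    dpsi = (h_out - h_in + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
    lead = radius * math.tan(abs(dpsi) / 2.0)          # tangent-point distance
    arc_len = radius * abs(dpsi)
    return dpsi, lead, arc_len

# Hypothetical 90-degree turn with a 2 km minimum turning radius
dpsi, lead, arc = turn_geometry((0.0, 0.0), (10.0, 0.0), (10.0, 10.0), 2.0)
```

Sharper turns or smaller radii increase the lead distance, which is why each circular turn must be checked against the aircraft's minimum turning radius.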
GPM Mission Gridded Text Products Providing Surface Precipitation Retrievals
NASA Astrophysics Data System (ADS)
Stocker, Erich Franz; Kelley, Owen; Huffman, George; Kummerow, Christian
2015-04-01
In February 2015, the Global Precipitation Measurement (GPM) mission core satellite will complete its first year in space. The core satellite carries a conically scanning microwave imager called the GPM Microwave Imager (GMI), which also has 166 GHz and 183 GHz frequency channels. The GPM core satellite also carries a dual-frequency radar (DPR), which operates at a Ku frequency (similar to the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar) and a new Ka frequency. The Precipitation Processing System (PPS) is producing swath-based instantaneous precipitation retrievals from GMI, both radars including a dual-frequency product, and a combined GMI/DPR precipitation retrieval. These Level 2 products are written in the HDF5 format and have many additional parameters beyond surface precipitation that are organized into appropriate groups. While these retrieval algorithms were developed prior to launch and are not optimal, they are producing very creditable retrievals. It is appropriate for a wide group of users to have access to the GPM retrievals. However, for researchers requiring only surface precipitation, these L2 swath products can appear to be very intimidating, and they certainly do contain many more variables than the average researcher needs. Some researchers desire only surface retrievals stored in a simple, easily accessible format. In response, PPS has begun to produce gridded text-based products that contain just the most widely used variables for each instrument (surface rainfall rate, fraction liquid, fraction convective) in a single line for each grid box that contains one or more observations. This paper will describe the gridded data products that are being produced and provide an overview of their content.
Currently two types of gridded products are being produced: (1) surface precipitation retrievals from the core satellite instruments - GMI, DPR, and combined GMI/DPR; (2) surface precipitation retrievals for the partner constellation satellites. Both of these gridded products are generated on a 0.25° x 0.25° hourly grid, which is packaged into daily ASCII files that can be downloaded from the PPS FTP site. To reduce the download size, the files are compressed using the gzip utility. This paper will focus on presenting high-level details about the gridded text product being generated from the instruments on the GPM core satellite. But summary information will also be presented about the partner radiometer gridded product. All retrievals for the partner radiometers are done using the GPROF2014 algorithm, using as input the PPS-generated inter-calibrated 1C product for each radiometer.
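The binning step behind such a gridded product can be sketched as follows. This is an assumed, simplified scheme, not the PPS product definition (which may differ in grid registration and bookkeeping): pixel-level rain rates are averaged into 0.25° boxes, and boxes with no observations are marked missing.

```python
import numpy as np

def grid_quarter_degree(lats, lons, rates):
    """Average pixel-level rain rates (mm/h) onto a global 0.25-degree grid.

    Boxes with no observations are returned as NaN. Cell edges at
    multiples of 0.25 degrees are an assumption of this sketch."""
    ny, nx = 720, 1440
    iy = np.clip(((lats + 90.0) / 0.25).astype(int), 0, ny - 1)
    ix = np.clip(((lons + 180.0) / 0.25).astype(int), 0, nx - 1)
    total = np.zeros((ny, nx))
    count = np.zeros((ny, nx))
    np.add.at(total, (iy, ix), rates)   # unbuffered accumulation per box
    np.add.at(count, (iy, ix), 1)
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)

# Three hypothetical pixel retrievals: the first two fall in the same box.
grid = grid_quarter_degree(np.array([0.1, 0.1, 40.0]),
                           np.array([10.1, 10.1, -100.0]),
                           np.array([2.0, 4.0, 1.0]))
```

Writing only the occupied boxes, one line per box, then yields the kind of compact ASCII product described above.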
NASA Technical Reports Server (NTRS)
Lawton, Pat
2004-01-01
The objective of this work was to support the design of improved IUE NEWSIPS high-dispersion extraction algorithms. The purpose of this work was to evaluate use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, evaluate various extraction algorithms, and design algorithms for the evaluation of IUE high-dispersion spectra. It was concluded that the use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
An iterative algorithm for calculating stylus radius unambiguously
NASA Astrophysics Data System (ADS)
Vorburger, T. V.; Zheng, A.; Renegar, T. B.; Song, J.-F.; Ma, L.
2011-08-01
The stylus radius is an important specification for stylus instruments and is commonly provided by instrument manufacturers. However, it is difficult to measure the stylus radius unambiguously. Accurate profiles of the stylus tip may be obtained by profiling over an object sharper than itself, such as a razor blade. However, the stylus profile thus obtained is a partial arc, and unless the shape of the stylus tip is a perfect sphere or circle, the effective value of the radius depends on the length of the tip profile over which the radius is determined. We have developed an iterative, least squares algorithm aimed to determine the effective least squares stylus radius unambiguously. So far, the algorithm converges to reasonable results for the least squares stylus radius. We suggest that the algorithm be considered for adoption in documentary standards describing the properties of stylus instruments.
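The abstract does not spell out the fitting step such an iterative algorithm repeats; a minimal sketch of a least-squares circle fit to a partial arc (the Kåsa algebraic method, chosen here for illustration, not necessarily the method of the paper) might look like:

```python
import math

def circle_fit(pts):
    """Least-squares circle fit (Kasa algebraic method); returns (xc, yc, R)."""
    n = len(pts)
    xb = sum(p[0] for p in pts) / n
    yb = sum(p[1] for p in pts) / n
    u = [p[0] - xb for p in pts]
    v = [p[1] - yb for p in pts]
    Suu = sum(a * a for a in u)
    Svv = sum(b * b for b in v)
    Suv = sum(a * b for a, b in zip(u, v))
    Suuu = sum(a ** 3 for a in u)
    Svvv = sum(b ** 3 for b in v)
    Suvv = sum(a * b * b for a, b in zip(u, v))
    Svuu = sum(b * a * a for a, b in zip(u, v))
    # Normal equations for the centre offset (uc, vc) in centred coordinates
    rhs1 = (Suuu + Suvv) / 2.0
    rhs2 = (Svvv + Svuu) / 2.0
    det = Suu * Svv - Suv * Suv
    uc = (rhs1 * Svv - rhs2 * Suv) / det
    vc = (rhs2 * Suu - rhs1 * Suv) / det
    R = math.sqrt(uc * uc + vc * vc + (Suu + Svv) / n)
    return xb + uc, yb + vc, R

# A short arc (about 1 radian) of a radius-2 circle centred at (1, 1):
# for noise-free data the fit recovers the true radius even from a partial arc.
arc = [(1 + 2 * math.cos(t), 1 + 2 * math.sin(t))
       for t in (i * 0.05 - 0.5 for i in range(21))]
xc, yc, R = circle_fit(arc)
```

The ambiguity described in the abstract arises when the tip is not truly circular: the fitted R then depends on the arc length supplied, which is what the iterative scheme is designed to resolve.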
NASA Astrophysics Data System (ADS)
Kirchengast, Gottfried; Li, Ying; Scherllin-Pirscher, Barbara; Schwärz, Marc; Schwarz, Jakob; Nielsen, Johannes K.
2017-04-01
The GNSS radio occultation (RO) technique is an important remote sensing technique for obtaining thermodynamic profiles of temperature, humidity, and pressure in the Earth's troposphere. However, due to refraction effects of both dry ambient air and water vapor in the troposphere, retrieval of accurate thermodynamic profiles at these lower altitudes is challenging and requires suitable background information in addition to the RO refractivity information. Here we introduce a new moist air retrieval algorithm aiming to improve the quality and robustness of retrieving temperature, humidity and pressure profiles in moist air tropospheric conditions. The new algorithm consists of four steps: (1) use of prescribed specific humidity and its uncertainty to retrieve temperature and its associated uncertainty; (2) use of prescribed temperature and its uncertainty to retrieve specific humidity and its associated uncertainty; (3) use of the previous results to estimate final temperature and specific humidity profiles through optimal estimation; (4) determination of air pressure and density profiles from the results obtained before. The new algorithm does not require elaborated matrix inversions which are otherwise widely used in 1D-Var retrieval algorithms, and it allows a transparent uncertainty propagation, whereby the uncertainties of prescribed variables are dynamically estimated accounting for their spatial and temporal variations. Estimated random uncertainties are calculated by constructing error covariance matrices from co-located ECMWF short-range forecast and corresponding analysis profiles. Systematic uncertainties are estimated by empirical modeling. The influence of regarding or disregarding vertical error correlations is quantified. The new scheme is implemented with static input uncertainty profiles in WEGC's current OPSv5.6 processing system and with full scope in WEGC's next-generation system, the Reference Occultation Processing System (rOPS). 
Results from both WEGC systems, current OPSv5.6 and next-generation rOPS, are shown and discussed, based on both insights from individual profiles and statistical ensembles, and compared to moist air retrieval results from the UCAR Boulder and ROM-SAF Copenhagen centers. The results show that the new algorithmic scheme improves the temperature, humidity and pressure retrieval performance, in particular also the robustness including for integrated uncertainty estimation for large-scale applications, over the previous algorithms. The new rOPS-implemented algorithm will therefore be used in the first large-scale reprocessing towards a tropospheric climate data record 2001-2016 by the rOPS, including its integrated uncertainty propagation.
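Step (3) of the scheme above is an optimal-estimation combination of the two intermediate retrievals. In scalar, per-level form, and ignoring the vertical error correlations the full system quantifies, the minimum-variance combination reduces to inverse-variance weighting, sketched here:

```python
def optimal_combine(x_a, sig_a, x_b, sig_b):
    """Minimum-variance (inverse-variance weighted) combination of two
    independent estimates of the same quantity; returns the combined
    value and its propagated uncertainty."""
    wa, wb = 1.0 / sig_a ** 2, 1.0 / sig_b ** 2
    x = (wa * x_a + wb * x_b) / (wa + wb)
    return x, (wa + wb) ** -0.5

# Combine two temperature estimates at one level (values illustrative):
t, sig = optimal_combine(280.0, 1.0, 282.0, 1.0)
```

With equal input uncertainties the result is the simple mean with uncertainty reduced by a factor of sqrt(2), which illustrates why the dynamically estimated input uncertainties matter: they set the relative weights level by level.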
Xu, Z N
2014-12-01
In this study, an error analysis is performed to study real water drop images and the corresponding numerically generated water drop profiles for three widely used static contact angle algorithms: the circle- and ellipse-fitting algorithms and the axisymmetric drop shape analysis-profile (ADSA-P) algorithm. The results demonstrate the accuracy of the numerically generated drop profiles based on the Laplace equation. A significant number of water drop profiles with different volumes, contact angles, and noise levels are generated, and the influences of the three factors on the accuracies of the three algorithms are systematically investigated. The results reveal that the above-mentioned three algorithms are complementary. In fact, the circle- and ellipse-fitting algorithms show low errors and are highly resistant to noise for water drops with small/medium volumes and contact angles, while for water drops with large volumes and contact angles only the ADSA-P algorithm can meet the accuracy requirement. However, this algorithm introduces significant errors in the case of small volumes and contact angles because of its high sensitivity to noise. The critical water drop volumes of the circle- and ellipse-fitting algorithms corresponding to a certain contact angle error are obtained through a significant amount of computation. To improve the precision of the static contact angle measurement, a more accurate algorithm based on a combination of the three algorithms is proposed. Following a systematic investigation, the algorithm selection rule is described in detail, maintaining the advantages of the three algorithms while overcoming their deficiencies. In general, static contact angles over the entire hydrophobicity range can be accurately evaluated using the proposed algorithm, and the erroneous judgments that static contact angle measurements are otherwise prone to are avoided.
The proposed algorithm is validated by a static contact angle evaluation of real and numerically generated water drop images with different hydrophobicity values and volumes.
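Once the circle-fitting branch has estimated a circle of radius R whose centre sits at height h above the solid baseline, the contact angle follows from spherical-cap geometry (cos θ = -h/R). A sketch of that final step, with the sign convention as an explicit assumption:

```python
import math

def contact_angle_deg(h, R):
    """Contact angle (degrees) of a circular drop profile of radius R whose
    centre sits at height h above the solid baseline.  Spherical-cap
    geometry gives cos(theta) = -h/R: h = 0 is a hemisphere (90 deg),
    h > 0 (centre above the baseline) is hydrophobic (> 90 deg), and
    h < 0 is hydrophilic (< 90 deg)."""
    return math.degrees(math.acos(-h / R))

theta_hemi = contact_angle_deg(0.0, 1.0)         # hemisphere
theta_hydrophobic = contact_angle_deg(0.5, 1.0)  # centre above baseline
theta_hydrophilic = contact_angle_deg(-0.5, 1.0) # centre below baseline
```

This geometric step is common to the circle-fitting family of algorithms; the ellipse-fitting and ADSA-P branches replace the circle with an ellipse or a numerically integrated Laplace profile, respectively.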
Retrieving cloudy atmosphere parameters from RPG-HATPRO radiometer data
NASA Astrophysics Data System (ADS)
Kostsov, V. S.
2015-03-01
An algorithm for simultaneously determining both tropospheric temperature and humidity profiles and cloud liquid water content from ground-based measurements of microwave radiation is presented. A special feature of this algorithm is that it combines different types of measurements and different a priori information on the sought parameters. The features of its use in processing RPG-HATPRO radiometer data obtained in the course of atmospheric remote sensing experiments carried out by specialists from the Faculty of Physics of St. Petersburg State University are discussed. The results of a comparison of both temperature and humidity profiles obtained using a ground-based microwave remote sensing method with those obtained from radiosonde data are analyzed. It is shown that this combined algorithm is comparable (in accuracy) to the classical method of statistical regularization in determining temperature profiles; however, this algorithm demonstrates better accuracy (when compared to the method of statistical regularization) in determining humidity profiles.
Two-pass imputation algorithm for missing value estimation in gene expression time series.
Tsiporkova, Elena; Boeva, Veselka
2007-10-01
Gene expression microarray experiments frequently generate datasets with multiple values missing. However, most of the analysis, mining, and classification methods for gene expression data require a complete matrix of gene array values. Therefore, the accurate estimation of missing values in such datasets has been recognized as an important issue, and several imputation algorithms have already been proposed to the biological community. Most of these approaches, however, are not particularly suitable for time series expression profiles. In view of this, we propose a novel imputation algorithm, which is specially suited for the estimation of missing values in gene expression time series data. The algorithm utilizes Dynamic Time Warping (DTW) distance in order to measure the similarity between time expression profiles, and subsequently selects for each gene expression profile with missing values a dedicated set of candidate profiles for estimation. Three different DTW-based imputation (DTWimpute) algorithms have been considered: position-wise, neighborhood-wise, and two-pass imputation. These have initially been prototyped in Perl, and their accuracy has been evaluated on yeast expression time series data using several different parameter settings. The experiments have shown that the two-pass algorithm consistently outperforms, in particular for datasets with a higher level of missing entries, the neighborhood-wise and the position-wise algorithms. The performance of the two-pass DTWimpute algorithm has further been benchmarked against the weighted K-Nearest Neighbors algorithm, which is widely used in the biological community; the former algorithm has appeared superior to the latter one. Motivated by these findings, indicating clearly the added value of the DTW techniques for missing value estimation in time series data, we have built an optimized C++ implementation of the two-pass DTWimpute algorithm. 
The software also provides for a choice between three different initial rough imputation methods.
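The similarity measure and candidate-selection step described above can be sketched as follows; `candidate_profiles` and its interface are illustrative, not the DTWimpute API:

```python
def dtw_distance(a, b):
    """Classic O(n*m) dynamic-programming DTW with absolute-difference cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def candidate_profiles(target, complete_profiles, k=3):
    """Rank complete expression profiles by DTW distance to the target
    profile and keep the k nearest as imputation candidates."""
    ranked = sorted(complete_profiles, key=lambda p: dtw_distance(target, p))
    return ranked[:k]
```

Unlike Euclidean distance, DTW tolerates local time shifts: a profile that reaches the same expression levels one time point later still scores as a close match, which is why it suits time series imputation.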
Accelerated Profile HMM Searches
Eddy, Sean R.
2011-01-01
Profile hidden Markov models (profile HMMs) and probabilistic inference methods have made important contributions to the theory of sequence database homology search. However, practical use of profile HMM methods has been hindered by the computational expense of existing software implementations. Here I describe an acceleration heuristic for profile HMMs, the “multiple segment Viterbi” (MSV) algorithm. The MSV algorithm computes an optimal sum of multiple ungapped local alignment segments using a striped vector-parallel approach previously described for fast Smith/Waterman alignment. MSV scores follow the same statistical distribution as gapped optimal local alignment scores, allowing rapid evaluation of significance of an MSV score and thus facilitating its use as a heuristic filter. I also describe a 20-fold acceleration of the standard profile HMM Forward/Backward algorithms using a method I call “sparse rescaling”. These methods are assembled in a pipeline in which high-scoring MSV hits are passed on for reanalysis with the full HMM Forward/Backward algorithm. This accelerated pipeline is implemented in the freely available HMMER3 software package. Performance benchmarks show that the use of the heuristic MSV filter sacrifices negligible sensitivity compared to unaccelerated profile HMM searches. HMMER3 is substantially more sensitive and 100- to 1000-fold faster than HMMER2. HMMER3 is now about as fast as BLAST for protein searches. PMID:22039361
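The core of the MSV recurrence can be illustrated in scalar, single-segment form: score ungapped diagonals of the profile-versus-sequence matrix with a running sum that resets at zero, as in local alignment. The simplification to a single segment is mine; HMMER3's actual MSV filter scores an optimal sum of multiple segments in a striped SIMD layout.

```python
def best_ungapped_score(profile, target):
    """Best single ungapped local alignment score between a position-specific
    score profile (one dict of residue -> score per model position) and a
    target sequence: a Kadane-style running sum along each diagonal, reset
    at zero as in local alignment."""
    n, m = len(profile), len(target)
    best = 0.0
    for d in range(-(n - 1), m):          # one pass per diagonal
        run = 0.0
        for i in range(n):
            j = i + d
            if 0 <= j < m:
                run = max(0.0, run + profile[i].get(target[j], -1.0))
                best = max(best, run)
    return best

# Toy two-position profile that rewards the motif "AC" (+2 per match,
# -1 for any other residue)
prof = [{"A": 2.0}, {"C": 2.0}]
score = best_ungapped_score(prof, "GACG")
```

Because no gap states are evaluated, each cell costs a handful of max/add operations, which is what makes the filter cheap enough to run on every database sequence before the full Forward/Backward pass.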
NASA Astrophysics Data System (ADS)
Kudo, Rei; Nishizawa, Tomoaki; Aoyagi, Toshinori
2016-07-01
The SKYLIDAR algorithm was developed to estimate vertical profiles of aerosol optical properties from sky radiometer (SKYNET) and lidar (AD-Net) measurements. The solar heating rate was also estimated from the SKYLIDAR retrievals. The algorithm consists of two retrieval steps: (1) columnar properties are retrieved from the sky radiometer measurements and the vertically mean depolarization ratio obtained from the lidar measurements and (2) vertical profiles are retrieved from the lidar measurements and the results of the first step. The derived parameters are the vertical profiles of the size distribution, refractive index (real and imaginary parts), extinction coefficient, single-scattering albedo, and asymmetry factor. Sensitivity tests were conducted by applying the SKYLIDAR algorithm to the simulated sky radiometer and lidar data for vertical profiles of three different aerosols, continental average, transported dust, and pollution aerosols. The vertical profiles of the size distribution, extinction coefficient, and asymmetry factor were well estimated in all cases. The vertical profiles of the refractive index and single-scattering albedo of transported dust, but not those of transported pollution aerosol, were well estimated. To demonstrate the performance and validity of the SKYLIDAR algorithm, we applied the SKYLIDAR algorithm to the actual measurements at Tsukuba, Japan. The detailed vertical structures of the aerosol optical properties and solar heating rate of transported dust and smoke were investigated. Examination of the relationship between the solar heating rate and the aerosol optical properties showed that the vertical profile of the asymmetry factor played an important role in creating vertical variation in the solar heating rate. 
We then compared the columnar optical properties retrieved with the SKYLIDAR algorithm to those produced with the more established scheme SKYRAD.PACK, and the surface solar irradiance calculated from the SKYLIDAR retrievals was compared with pyranometer measurements. The results showed good agreement: the columnar values of the SKYLIDAR retrievals agreed with reliable SKYRAD.PACK retrievals, and the SKYLIDAR retrievals were sufficiently accurate to evaluate the surface solar irradiance.
Ascent guidance algorithm using lidar wind measurements
NASA Technical Reports Server (NTRS)
Cramer, Evin J.; Bradt, Jerre E.; Hardtla, John W.
1990-01-01
The formulation of a general nonlinear programming guidance algorithm that incorporates wind measurements in the computation of ascent guidance steering commands is discussed. A nonlinear programming (NLP) algorithm that is designed to solve a very general problem has the potential to address the diversity demanded by future launch systems. Using B-splines for the command functional form allows the NLP algorithm to adjust the shape of the command profile to achieve optimal performance. The algorithm flexibility is demonstrated by simulation of ascent with dynamic loading constraints through a set of random wind profiles with and without wind sensing capability.
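The B-spline command parameterization described above can be sketched with the Cox-de Boor recursion; the knot vector and coefficient count below are illustrative, not values from the guidance study:

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value at t of the i-th B-spline basis of order k."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k - 1] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k - 1] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    if knots[i + k] != knots[i + 1]:
        right = ((knots[i + k] - t) / (knots[i + k] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

def command(t, coeffs, knots, order=4):
    """Steering command at time t as a cubic B-spline curve; an NLP solver
    would adjust `coeffs` to reshape the whole command profile."""
    return sum(c * bspline_basis(i, order, t, knots)
               for i, c in enumerate(coeffs))

# Clamped cubic knot vector on [0, 3] with 6 control coefficients
knots = [0, 0, 0, 0, 1, 2, 3, 3, 3, 3]
flat = command(1.5, [5.0] * 6, knots)  # constant coeffs -> constant command
```

The appeal for guidance is that a handful of coefficients controls a smooth profile over the whole ascent, so the optimizer's decision vector stays small while the command shape remains flexible.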
NASA Technical Reports Server (NTRS)
Shige, S.; Takayabu, Y.; Tao, W.-K.
2007-01-01
The global hydrological cycle is central to the Earth's climate system, with rainfall and the physics of precipitation formation acting as the key links in the cycle. Two-thirds of global rainfall occurs in the tropics, with the associated latent heating (LH) accounting for three-fourths of the total heat energy available to the Earth's atmosphere. In the last decade, it has been established that standard products of LH from satellite measurements, particularly TRMM measurements, would be a valuable resource for scientific research and applications. Such products would enable new insights and investigations concerning the complexities of convection system life cycles, the diabatic heating controls and feedbacks related to mesosynoptic circulations and their forecasting, the relationship of tropical patterns of LH to the global circulation and climate, and strategies for improving cloud parameterizations in environmental prediction models. Moreover, LH and the water vapor budget (called the apparent moisture sink, or Q2) are closely related. This paper presents the development of an algorithm for retrieving Q2 using the TRMM precipitation radar. Since there is no direct measurement of LH and Q2, validation of the algorithm usually applies a method called a consistency check. Consistency checking, involving Cloud Resolving Model (CRM)-generated LH and Q2 profiles and the algorithm-reconstructed profiles, is a useful step in evaluating the performance of a given algorithm. In this process, the CRM simulation of a time-dependent precipitation process (a multiple-day time series) is used to obtain the required input parameters for a given algorithm. The algorithm is then used to reconstruct the heating and moisture profiles that the CRM simulation originally produced, and finally both sets of estimates (model and algorithm) are compared with each other.
The results indicate that discrepancies between the reconstructed and CRM-simulated profiles for Q2, especially at low levels, are larger than those for latent heating. The larger discrepancies in Q2 at low levels are due to moistening in non-precipitating regions that the algorithm cannot reconstruct. Nevertheless, the algorithm-reconstructed total Q2 profiles are in good agreement with the CRM-simulated ones.
Atmospheric constituent density profiles from full disk solar occultation experiments
NASA Technical Reports Server (NTRS)
Lumpe, J. D.; Chang, C. S.; Strickland, D. J.
1991-01-01
Mathematical methods are described which permit the derivation of number density profiles of atmospheric constituents from solar occultation measurements. The algorithm is first applied to measurements corresponding to an arbitrary solar-intensity distribution to calculate the normalized absorption profile. The application of a Fourier transform to the integral equation yields a precise expression for the corresponding number density, and the solution is employed with the data given in the form of Laguerre polynomials. The algorithm is employed to calculate the results for the case of a uniform distribution of solar intensity, and the results demonstrate the convergence properties of the method. The algorithm can effectively reproduce representative model density profiles with constant and altitude-dependent scale heights.
Cross-wind profiling based on the scattered wave scintillation in a telescope focus.
Banakh, V A; Marakasov, D A; Vorontsov, M A
2007-11-20
The problem of wind profile reconstruction from the scintillation of an optical wave scattered off a rough surface in a telescope focal plane is considered. Both the expression for the spatiotemporal correlation function and an algorithm for reconstructing cross-wind velocity and direction profiles from the spatiotemporal spectrum of intensity of an optical wave scattered by a diffuse target in a turbulent atmosphere are presented. Computer simulations performed under conditions of weak optical turbulence show that wind profiles can be reconstructed by the developed algorithm.
NASA Technical Reports Server (NTRS)
Liu, Xu; Larar, Allen M.; Zhou, Daniel K.; Kizer, Susan H.; Wu, Wan; Barnet, Christopher; Divakarla, Murty; Guo, Guang; Blackwell, Bill; Smith, William L.;
2011-01-01
Different methods for retrieving atmospheric profiles in the presence of clouds from hyperspectral satellite remote sensing data will be described. We will present results from the JPSS cloud-clearing algorithm and NASA Langley cloud retrieval algorithm.
Alshamlan, Hala; Badr, Ghada; Alohali, Yousef
2015-01-01
An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying the ABC algorithm to the analysis of a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm, minimum redundancy maximum relevance (mRMR), and combine it with an ABC algorithm, mRMR-ABC, to select informative genes from microarray profiles. The new approach is based on a support vector machine (SVM) algorithm to measure the classification accuracy for selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques. We reimplemented two of these techniques for the sake of a fair comparison using the same parameters. These two techniques are mRMR combined with a genetic algorithm (mRMR-GA) and mRMR combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results show that the proposed mRMR-ABC algorithm achieves accurate classification performance using a small number of predictive genes when tested on these datasets and compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems. PMID:25961028
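The mRMR filtering step above can be sketched as a greedy selection loop. The paper scores relevance and redundancy with mutual information; absolute Pearson correlation stands in for it here, so this is a simplified sketch rather than the published method:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def mrmr_select(features, target, k):
    """Greedy mRMR: at each step pick the feature maximising
    relevance(feature, target) minus its mean redundancy with the
    features already chosen."""
    chosen, remaining = [], list(range(len(features)))
    while remaining and len(chosen) < k:
        def score(i):
            rel = abs(pearson(features[i], target))
            red = (sum(abs(pearson(features[i], features[j])) for j in chosen)
                   / len(chosen)) if chosen else 0.0
            return rel - red
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

genes = [[1, 2, 3, 4, 5],    # perfectly relevant to the class labels
         [2, 2, 3, 1, 4],    # weakly relevant
         [5, 5, 4, 4, 3]]    # anti-correlated with the first gene
picked = mrmr_select(genes, [1, 2, 3, 4, 5], 2)
```

In the hybrid scheme, a filter pass like this prunes the gene list first, and the ABC search then explores subsets of the surviving genes, scoring each subset by SVM classification accuracy.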
NASA Astrophysics Data System (ADS)
Bucsela, E. J.; Perring, A. E.; Cohen, R. C.; Boersma, K. F.; Celarier, E. A.; Gleason, J. F.; Wenig, M. O.; Bertram, T. H.; Wooldridge, P. J.; Dirksen, R.; Veefkind, J. P.
2008-08-01
We present an analysis of in situ NO2 measurements from aircraft experiments between summer 2004 and spring 2006. The data are from the INTEX-A, PAVE, and INTEX-B campaigns and constitute the most comprehensive set of tropospheric NO2 profiles to date. Profile shapes from INTEX-A and PAVE are found to be qualitatively similar to annual mean profiles from the GEOS-Chem model. Using profiles from the INTEX-B campaign, we perform error-weighted linear regressions to compare the Ozone Monitoring Instrument (OMI) tropospheric NO2 columns from the near-real-time product (NRT) and standard product (SP) with the integrated in situ columns. Results indicate that the OMI SP algorithm yields NO2 amounts lower than the in situ columns by a factor of 0.86 (±0.2) and that NO2 amounts from the NRT algorithm are higher than the in situ data by a factor of 1.68 (±0.6). The correlation between the satellite and in situ data is good (r = 0.83) for both algorithms. Using averaging kernels, the influence of the algorithm's a priori profiles on the satellite retrieval is explored. Results imply that air mass factors from the a priori profiles are on average slightly larger (~10%) than those from the measured profiles, but the differences are not significant.
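Since the comparison above is reported as a single multiplicative factor, one minimal form of the error-weighted regression is a weighted fit through the origin; both the through-origin form and the example weights below are assumptions for illustration, not the study's exact regression:

```python
def weighted_slope(x, y, w):
    """Error-weighted least-squares slope for a fit through the origin,
    y ~ b*x, with weights w (e.g. inverse variances):
    b = sum(w*x*y) / sum(w*x*x)."""
    num = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    den = sum(wi * xi * xi for wi, xi in zip(w, x))
    return num / den

insitu = [2.0, 4.0, 6.0, 8.0]   # integrated in situ columns (arbitrary units)
sat = [1.7, 3.5, 5.1, 6.9]      # corresponding satellite columns
b = weighted_slope(insitu, sat, [1.0, 1.0, 0.5, 0.5])
```

A slope below one, as here, corresponds to the satellite product reading low relative to the in situ columns, matching the sense of the SP factor of 0.86 reported above.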
Estimating Planetary Boundary Layer Heights from NOAA Profiler Network Wind Profiler Data
NASA Technical Reports Server (NTRS)
Molod, Andrea M.; Salmun, H.; Dempsey, M
2015-01-01
An algorithm was developed to estimate planetary boundary layer (PBL) heights from hourly archived wind profiler data from the NOAA Profiler Network (NPN) sites located throughout the central United States. Unlike previous studies, the present algorithm has been applied to a long record of publicly available wind profiler signal backscatter data. Under clear conditions, summertime averaged hourly time series of PBL heights compare well with Richardson-number based estimates at the few NPN stations with hourly temperature measurements. Comparisons with clear sky reanalysis based estimates show that the wind profiler PBL heights are lower by approximately 250-500 m. The geographical distribution of daily maximum PBL heights corresponds well with the expected distribution based on patterns of surface temperature and soil moisture. Wind profiler PBL heights were also estimated under mostly cloudy conditions, and are generally higher than both the Richardson number based and reanalysis PBL heights, resulting in a smaller clear-cloudy condition difference. The algorithm presented here was shown to provide a reliable summertime climatology of daytime hourly PBL heights throughout the central United States.
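A common way to extract a PBL height from a backscatter profile is to locate the sharpest decrease of signal with height; the sketch below uses that maximum-negative-gradient criterion, which is an illustrative stand-in for the NPN algorithm's fuller set of tests (e.g. cloud screening):

```python
def pbl_height(z, backscatter):
    """Estimate the PBL top as the altitude of the sharpest decrease in
    profiler signal backscatter with height (maximum negative vertical
    gradient), returned as the midpoint of the bounding gates."""
    grads = [(backscatter[i + 1] - backscatter[i]) / (z[i + 1] - z[i])
             for i in range(len(z) - 1)]
    k = min(range(len(grads)), key=grads.__getitem__)  # most negative gradient
    return 0.5 * (z[k] + z[k + 1])

z = [250.0, 750.0, 1250.0, 1750.0, 2250.0]  # gate heights (m AGL)
snr = [12.0, 11.0, 10.5, 4.0, 3.5]          # backscatter profile (arbitrary units)
h = pbl_height(z, snr)
```

The physical basis is that backscatter tracks humidity and turbulence, both of which drop sharply across the PBL top entrainment zone.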
Determining the near-surface current profile from measurements of the wave dispersion relation
NASA Astrophysics Data System (ADS)
Smeltzer, Benjamin; Maxwell, Peter; Aesøy, Eirik; Ellingsen, Simen
2017-11-01
The current-induced Doppler shifts of waves can yield information about the background mean flow, providing an attractive method of inferring the current profile in the upper layer of the ocean. We present measurements of waves propagating on shear currents in a laboratory water channel, as well as theoretical investigations of inversion techniques for determining the vertical current structure. Spatial and temporal measurements of the free surface profile obtained using a synthetic Schlieren method are analyzed to determine the wave dispersion relation and Doppler shifts as a function of wavelength. The vertical current profile can then be inferred from the Doppler shifts using an inversion algorithm. Most existing algorithms rely on a priori assumptions of the shape of the current profile, and developing a method that uses less stringent assumptions is a focus of this study, allowing for measurement of more general current profiles. The accuracy of current inversion algorithms is evaluated by comparison to measurements of the mean flow profile from particle image velocimetry (PIV), and a discussion of the sensitivity to errors in the Doppler shifts is presented.
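Inversion algorithms of this kind invert a forward model for the measured Doppler shifts. In the widely used Stewart & Joy approximation, the shift for wavenumber k is the depth-average of U(z) weighted by 2k*exp(2kz); a quadrature sketch of that forward model (the discretization is illustrative):

```python
import math

def doppler_shift(k, z, U):
    """Current-induced Doppler shift for wavenumber k as the weighted
    depth-average of U(z) with weight 2k*exp(2kz), z <= 0 measured
    upward toward the surface (Stewart & Joy approximation), evaluated
    with trapezoidal quadrature and normalised by the weight integral."""
    num = den = 0.0
    for i in range(len(z) - 1):
        dz = z[i + 1] - z[i]
        w0 = 2.0 * k * math.exp(2.0 * k * z[i])
        w1 = 2.0 * k * math.exp(2.0 * k * z[i + 1])
        num += 0.5 * (w0 * U[i] + w1 * U[i + 1]) * dz
        den += 0.5 * (w0 + w1) * dz
    return num / den

depths = [-2.0 + 0.1 * i for i in range(21)]   # z from -2 m up to the surface
shift_uniform = doppler_shift(5.0, depths, [0.3] * 21)
```

Because short waves weight the near-surface flow and long waves feel deeper currents, measuring shifts across many wavelengths constrains U(z) at multiple depths, which is what the inversion exploits.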
Conditional clustering of temporal expression profiles
Wang, Ling; Montano, Monty; Rarick, Matt; Sebastiani, Paola
2008-01-01
Background Many microarray experiments produce temporal profiles in different biological conditions, but common cluster techniques are not able to analyze the data conditional on the biological conditions. Results This article presents a novel technique to cluster data from time course microarray experiments performed across several experimental conditions. Our algorithm uses polynomial models to describe the gene expression patterns over time, a full Bayesian approach with proper conjugate priors to make the algorithm invariant to linear transformations, and an iterative procedure to identify genes that have a common temporal expression profile across two or more experimental conditions, and genes that have a unique temporal profile in a specific condition. Conclusion We use simulated data to evaluate the effectiveness of this new algorithm in finding the correct number of clusters and in identifying genes with common and unique profiles. We also use the algorithm to characterize the response of human T cells to stimulation of antigen-receptor signaling, using gene expression temporal profiles measured in six different biological conditions, and we identify common and unique genes. These studies suggest that the methodology proposed here is useful in identifying and distinguishing uniquely stimulated genes from commonly stimulated genes in response to variable stimuli. Software for using this clustering method is available from the project home page. PMID:18334028
Development of an Aircraft Approach and Departure Atmospheric Profile Generation Algorithm
NASA Technical Reports Server (NTRS)
Buck, Bill K.; Velotas, Steven G.; Rutishauser, David K. (Technical Monitor)
2004-01-01
In support of NASA's Virtual Airspace Modeling and Simulation (VAMS) project, an effort was initiated to develop and test techniques for extracting meteorological data from landing and departing aircraft, and for building altitude-based profiles of key meteorological parameters from these data. The generated atmospheric profiles will be used as inputs to NASA's Aircraft Vortex Spacing System (AVOSS) Prediction Algorithm (APA) for benefits and trade analysis. A Wake Vortex Advisory System (WakeVAS) is being developed to apply weather and wake prediction and sensing technologies with procedures to reduce current wake separation criteria, when safe and appropriate, to increase airport operational efficiency. The purpose of this report is to document the initial theory and design of the Aircraft Approach and Departure Atmospheric Profile Generation Algorithm.
A New Algorithm for Detecting Cloud Height using OMPS/LP Measurements
NASA Technical Reports Server (NTRS)
Chen, Zhong; DeLand, Matthew; Bhartia, Pawan K.
2016-01-01
The Ozone Mapping and Profiler Suite Limb Profiler (OMPS/LP) ozone product requires the determination of cloud height for each event to establish the lower boundary of the profile for the retrieval algorithm. We have created a revised cloud detection algorithm for LP measurements that uses the spectral dependence of the vertical gradient in radiance between two wavelengths in the visible and near-IR spectral regions. This approach provides better discrimination between clouds and aerosols than results obtained using a single wavelength. Observed LP cloud height values show good agreement with coincident Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) measurements.
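The two-wavelength discrimination described above can be sketched as a downward scan for the first level whose vertical radiance gradient is both strong and spectrally flat; the thresholds and data below are illustrative placeholders, not the operational OMPS/LP values:

```python
def cloud_top_height(z, grad_vis, grad_nir, thresh=0.5, ratio_tol=0.3):
    """Scan downward from the top of the profile for the first level where
    the vertical radiance gradient is strong at BOTH wavelengths and nearly
    equal between them (spectrally flat gradients suggest cloud; aerosol
    gradients are more wavelength-dependent).  Returns None if no level
    qualifies (clear scene)."""
    for i in range(len(z) - 1, -1, -1):
        if (grad_vis[i] > thresh and grad_nir[i] > thresh
                and abs(grad_vis[i] / grad_nir[i] - 1.0) < ratio_tol):
            return z[i]
    return None

z_km = [5.0, 8.0, 11.0, 14.0]
g_vis = [0.9, 0.6, 0.1, 0.0]   # strong gradients near 5 and 8 km...
g_nir = [0.85, 0.2, 0.1, 0.0]  # ...but only the 5 km layer is spectrally flat
top = cloud_top_height(z_km, g_vis, g_nir)
```

In this toy profile the 8 km layer is rejected as aerosol because its gradient appears only in the visible channel, while the spectrally flat 5 km layer is flagged as cloud.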
NASA Technical Reports Server (NTRS)
Bauman, William H., III
2014-01-01
NASA's LSP customers and the future SLS program rely on observations of upper-level winds for steering, loads, and trajectory calculations for the launch vehicle's flight. On the day of launch, the 45th Weather Squadron (45 WS) Launch Weather Officers (LWOs) monitor the upper-level winds and provide forecasts to the launch team via the AMU-developed LSP Upper Winds tool for launches at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station. This tool displays wind speed and direction profiles from rawinsondes released during launch operations, from the 45th Space Wing (45 SW) 915-MHz Doppler Radar Wind Profilers (DRWPs) and the KSC 50-MHz DRWP, and from output of numerical weather prediction models. The goal of this task was to splice the wind speed and direction profiles from the 45 SW 915-MHz DRWPs and the KSC 50-MHz DRWP at altitudes where the wind profiles overlap to create a smooth profile. In the first version of the LSP Upper Winds tool, the top of the 915-MHz DRWP wind profile and the bottom of the 50-MHz DRWP profile were not spliced, sometimes creating a discontinuity in the profile. The Marshall Space Flight Center (MSFC) Natural Environments Branch (NE) created algorithms to splice the wind profiles from the two sensors to generate an archive of vertically complete wind profiles for the SLS program. The AMU worked with MSFC NE personnel to implement these algorithms in the LSP Upper Winds tool to provide a continuous spliced wind profile. The AMU transitioned the MSFC NE algorithms to interpolate and fill gaps in the data, implement a Gaussian weighting function to produce 50-m altitude intervals for each sensor, and splice the data together from both DRWPs. They did so by porting the MSFC NE code written with MATLAB software into Microsoft Excel Visual Basic for Applications (VBA).
After testing the new algorithms in stand-alone VBA modules, the AMU replaced the existing VBA code in the LSP Upper Winds tool with the new algorithms. They then tested the code in the LSP Upper Winds tool with archived data. The tool will be delivered to the 45 WS after the 50-MHz DRWP upgrade is complete and the tool is tested with real-time data. The 50-MHz DRWP upgrade is expected to be finished in October 2014.
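The splicing step described above can be sketched as a weighted blend across the overlap region once both profiles are on a common altitude grid; the linear ramp and overlap bounds below are illustrative choices, not the MSFC NE algorithm's exact weighting:

```python
def splice(z, v_low, v_high, z0, z1):
    """Merge two wind profiles already interpolated to a common grid z:
    below z0 use the low-altitude (915-MHz) values, above z1 the
    high-altitude (50-MHz) values, and ramp the weight linearly across
    the overlap so the merged profile is continuous."""
    out = []
    for zi, lo, hi in zip(z, v_low, v_high):
        if zi <= z0:
            out.append(lo)
        elif zi >= z1:
            out.append(hi)
        else:
            w = (zi - z0) / (z1 - z0)
            out.append((1.0 - w) * lo + w * hi)
    return out

grid = [1000.0, 1500.0, 2000.0, 2500.0, 3000.0]   # altitudes (m)
merged = splice(grid, [10.0] * 5, [20.0] * 5, 1500.0, 2500.0)
```

Even with the exaggerated 10 m/s disagreement in this example, the ramp removes the step discontinuity that motivated the task, which is the property the spliced archive needs.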
Distributed Sensing and Shape Control of Piezoelectric Bimorph Mirrors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redmond, James M.; Barney, Patrick S.; Henson, Tammy D.
1999-07-28
As part of a collaborative effort between Sandia National Laboratories and the University of Kentucky to develop a deployable mirror for remote sensing applications, research in shape sensing and control algorithms that leverage the distributed nature of electron gun excitation for piezoelectric bimorph mirrors is summarized. A coarse shape sensing technique is developed that uses light rays reflected from the sample surface to provide discrete slope measurements. Estimates of surface profiles are obtained with a cubic spline curve-fitting algorithm. Experiments on a PZT bimorph illustrate appropriate deformation trends as a function of excitation voltage. A parallel effort to effect desired shape changes through electron gun excitation is also summarized. A one-dimensional model-based algorithm is developed to correct profile errors in bimorph beams. A more useful two-dimensional algorithm is also developed that relies on measured voltage-curvature sensitivities to provide corrective excitation profiles for the top and bottom surfaces of bimorph plates. The two algorithms are illustrated using finite element models of PZT bimorph structures subjected to arbitrary disturbances. Corrective excitation profiles that yield desired parabolic forms are computed and are shown to provide the necessary corrective action.
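The slope-to-profile step can be sketched generically: discrete slope samples are integrated into a height profile. For brevity this sketch uses trapezoidal integration in place of the paper's cubic-spline fit, and the sampled surface is a hypothetical test case.

```python
def profile_from_slopes(xs, slopes, y0=0.0):
    """Integrate discrete surface-slope measurements dy/dx into a
    height profile using the trapezoidal rule (a simplified stand-in
    for a cubic-spline curve fit)."""
    ys = [y0]
    for i in range(1, len(xs)):
        dx = xs[i] - xs[i - 1]
        ys.append(ys[-1] + 0.5 * (slopes[i] + slopes[i - 1]) * dx)
    return ys
```

Because the trapezoidal rule is exact for linearly varying slopes, a parabolic surface (linear slope) is recovered exactly up to floating-point error; a spline fit additionally smooths measurement noise between samples.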
NASA Technical Reports Server (NTRS)
Shepherd, J. Marshall; Einaudi, Franco (Technical Monitor)
2000-01-01
The Tropical Rainfall Measuring Mission (TRMM) as a part of NASA's Earth System Enterprise is the first mission dedicated to measuring tropical rainfall through microwave and visible sensors, and includes the first spaceborne rain radar. Tropical rainfall comprises two-thirds of global rainfall. It is also the primary distributor of heat through the atmosphere's circulation. It is this circulation that defines Earth's weather and climate. Understanding rainfall and its variability is crucial to understanding and predicting global climate change. Weather and climate models need an accurate assessment of the latent heating released as tropical rainfall occurs. Currently, cloud model-based algorithms are used to derive latent heating based on rainfall structure. Ultimately, these algorithms can be applied to actual data from TRMM. This study investigates key underlying assumptions used in developing the latent heating algorithms. For example, the standard algorithm is highly dependent on a system's rainfall amount and structure. It also depends on an a priori database of model-derived latent heating profiles based on the aforementioned rainfall characteristics. Unanswered questions remain concerning the sensitivity of latent heating profiles to environmental conditions (both thermodynamic and kinematic), regionality, and seasonality. This study investigates and quantifies such sensitivities and seeks to determine the optimal latent heating profile database based on the results. Ultimately, the study seeks to produce an optimized latent heating algorithm based not only on rainfall structure but also hydrometeor profiles.
Stochastic characterization of phase detection algorithms in phase-shifting interferometry
Munteanu, Florin
2016-11-01
Phase-shifting interferometry (PSI) is the preferred non-contact method for profiling surfaces at the sub-nanometer level. Based on monochromatic light interference, the method computes the surface profile from a set of interferograms collected at separate stepping positions. Errors in the estimated profile are introduced when these positions are not located correctly. To cope with this problem, various algorithms that minimize the effects of certain types of stepping errors (linear, sinusoidal, etc.) have been developed. Despite the relatively large number of algorithms suggested in the literature, there is no unified way of characterizing their performance when additional unaccounted random errors are present. Here, we suggest a procedure for quantifying the expected behavior of each algorithm in the presence of independent and identically distributed (i.i.d.) random stepping errors, which can occur in addition to the systematic errors for which the algorithm has been designed. The usefulness of this method derives from its ability to guide the selection of the best algorithm for specific measurement situations.
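The kind of stochastic characterization described can be illustrated with a small Monte Carlo experiment: the standard four-step algorithm is applied to synthetic interferograms whose nominal π/2 phase steps are perturbed by i.i.d. Gaussian errors, and the RMS phase error is accumulated. The signal amplitudes and trial counts below are illustrative assumptions, not values from the paper.

```python
import math
import random

def four_step_phase(frames):
    """Standard four-step PSI estimator for nominal steps 0, pi/2, pi, 3*pi/2."""
    i1, i2, i3, i4 = frames
    return math.atan2(i4 - i2, i1 - i3)

def rms_phase_error(true_phase, step_sigma, trials=2000, seed=1):
    """RMS error of the estimated phase under i.i.d. Gaussian stepping errors."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        frames = []
        for k in range(4):
            step = k * math.pi / 2 + rng.gauss(0.0, step_sigma)
            frames.append(1.0 + 0.5 * math.cos(true_phase + step))  # A=1, B=0.5
        err = four_step_phase(frames) - true_phase
        err = math.atan2(math.sin(err), math.cos(err))  # wrap to [-pi, pi]
        total += err * err
    return math.sqrt(total / trials)
```

Sweeping `step_sigma` yields an error-versus-noise curve for the algorithm; repeating the sweep for other estimators (Hariharan five-step, etc.) gives the unified comparison the abstract argues for.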
Real-time feedback control of the plasma density profile on ASDEX Upgrade
NASA Astrophysics Data System (ADS)
Mlynek, A.; Reich, M.; Giannone, L.; Treutterer, W.; Behler, K.; Blank, H.; Buhler, A.; Cole, R.; Eixenberger, H.; Fischer, R.; Lohs, A.; Lüddecke, K.; Merkel, R.; Neu, G.; Ryter, F.; Zasche, D.; ASDEX Upgrade Team
2011-04-01
The spatial distribution of density in a fusion experiment is of significant importance, as it enters into numerous analyses and contributes to the fusion performance. Reconstruction of the density profile is therefore commonly done in offline data analysis. In this paper, we present an algorithm which reconstructs the density profile in real time from the data of the submillimetre interferometer and the magnetic equilibrium. We compare the obtained results to the profiles yielded by a numerically more complex offline algorithm. Furthermore, we present recent ASDEX Upgrade experiments in which we used the real-time density profile for active feedback control of the shape of the density profile.
NASA Technical Reports Server (NTRS)
Han, Jongil; Arya, S. Pal; Shaohua, Shen; Lin, Yuh-Lang; Proctor, Fred H. (Technical Monitor)
2000-01-01
Algorithms are developed to extract atmospheric boundary layer profiles for turbulence kinetic energy (TKE) and energy dissipation rate (EDR), with data from a meteorological tower as input. The profiles are based on similarity theory and scalings for the atmospheric boundary layer. The calculated profiles of EDR and TKE are required to match the observed values at 5 and 40 m. The algorithms are coded for operational use and yield plausible profiles over the diurnal variation of the atmospheric boundary layer.
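The two-point matching condition can be sketched generically: an assumed similarity shape function is rescaled with two free parameters so that the resulting profile passes exactly through the tower observations at 5 and 40 m. The shape function and the observed values below are illustrative assumptions, not the operational similarity forms.

```python
def match_two_points(f, z1, obs1, z2, obs2):
    """Solve p(z) = a*f(z) + b so that p(z1) = obs1 and p(z2) = obs2."""
    f1, f2 = f(z1), f(z2)
    a = (obs1 - obs2) / (f1 - f2)
    b = obs1 - a * f1
    return lambda z: a * f(z) + b

# Illustrative surface-layer shape for dissipation rate: f(z) = 1/z,
# matched to hypothetical EDR observations at 5 m and 40 m.
edr = match_two_points(lambda z: 1.0 / z, 5.0, 2.0e-3, 40.0, 4.0e-4)
```

Any smooth similarity shape can be substituted for `f`; the two matching heights pin down the free scaling so the profile agrees with the tower data by construction.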
A simple algorithm for beam profile diagnostics using a thermographic camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katagiri, Ken; Hojo, Satoru; Honma, Toshihiro
2014-03-15
A new algorithm for digital image processing apparatuses is developed to evaluate profiles of high-intensity DC beams from temperature images of irradiated thin foils. Numerical analyses are performed to examine the reliability of the algorithm. To simulate the temperature images acquired by a thermographic camera, temperature distributions are numerically calculated for 20 MeV proton beams with different parameters. Noise in the temperature images which is added by the camera sensor is also simulated to account for its effect. Using the algorithm, beam profiles are evaluated from the simulated temperature images and compared with exact solutions. We find that niobium is an appropriate material for the thin foil used in the diagnostic system. We also confirm that the algorithm is adaptable over a wide beam current range of 0.11-214 μA, even when employing a general-purpose thermographic camera with rather high noise (ΔT_NETD ≃ 0.3 K; NETD: noise-equivalent temperature difference).
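A toy version of the profile-evaluation problem: a Gaussian temperature distribution is sampled along a line, sensor noise at roughly the quoted NETD level is added, and the beam centroid and RMS width are recovered from first and second moments above a background threshold. This is a deliberately simplified sketch, not the paper's algorithm, and every number in it is an illustrative assumption.

```python
import math
import random

def moments_profile(xs, temps, background=0.0):
    """Centroid and RMS width of a 1-D temperature profile from its
    first and second moments, after subtracting a background level."""
    w = [max(t - background, 0.0) for t in temps]
    s = sum(w)
    mean = sum(x * wi for x, wi in zip(xs, w)) / s
    var = sum(wi * (x - mean) ** 2 for x, wi in zip(xs, w)) / s
    return mean, math.sqrt(var)

rng = random.Random(0)
xs = [i * 0.1 for i in range(-100, 101)]           # position, mm
true_center, true_sigma = 1.0, 2.0
temps = [5.0 * math.exp(-0.5 * ((x - true_center) / true_sigma) ** 2)
         + rng.gauss(0.0, 0.3) for x in xs]        # ~0.3 K NETD-like noise
center, sigma = moments_profile(xs, temps, background=0.5)
```

Thresholding at a level above the noise floor keeps far-field noise pixels from dominating the second moment, at the cost of slightly truncating the beam tails; the trade-off mirrors the noise-sensitivity question studied in the paper.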
Empirical algorithms for ocean optics parameters
NASA Astrophysics Data System (ADS)
Smart, Jeffrey H.
2007-06-01
As part of the Worldwide Ocean Optics Database (WOOD) Project, The Johns Hopkins University Applied Physics Laboratory has developed and evaluated a variety of empirical models that can predict ocean optical properties, such as profiles of the beam attenuation coefficient computed from profiles of the diffuse attenuation coefficient. In this paper, we briefly summarize published empirical optical algorithms and assess their accuracy for estimating derived profiles. We also provide new algorithms and discuss their applicability for deriving optical profiles based on data collected from a variety of locations, including the Yellow Sea, the Sea of Japan, and the North Atlantic Ocean. We show that the scattering coefficient (b) can be computed from the beam attenuation coefficient (c) to about 10% accuracy. The availability of such relatively accurate predictions is important in the many situations where the set of data is incomplete.
SU-F-T-431: Dosimetric Validation of Acuros XB Algorithm for Photon Dose Calculation in Water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, L; Yadav, G; Kishore, V
2016-06-15
Purpose: To validate the Acuros XB algorithm implemented in the Eclipse treatment planning system version 11 (Varian Medical Systems, Inc., Palo Alto, CA, USA) for photon dose calculation. Methods: Acuros XB is a linear Boltzmann transport equation (LBTE) solver that solves the LBTE explicitly and gives results equivalent to Monte Carlo. A 6-MV photon beam from a Varian Clinac-iX (2300CD) was used for the dosimetric validation of Acuros XB. Percentage depth dose (PDD) and profile (at dmax, 5, 10, 20, and 30 cm) measurements were performed in water for field sizes of 2×2, 4×4, 6×6, 10×10, 20×20, 30×30, and 40×40 cm². Acuros XB results were compared against measurements and against the anisotropic analytical algorithm (AAA). Results: Acuros XB shows good agreement with measurements and was comparable to the AAA algorithm. PDDs and profiles show less than one percent difference from measurements, and from the PDDs and profiles calculated by the AAA algorithm, for all field sizes. For the TPS-calculated gamma error histograms, the average gamma errors in PDD curves before and after dmax were 0.28 and 0.15 for Acuros XB and 0.24 and 0.17 for AAA, respectively; the average gamma errors in profile curves in the central, penumbra, and outside-field regions were 0.17, 0.21, and 0.42 for Acuros XB and 0.10, 0.22, and 0.35 for AAA, respectively. Conclusion: The dosimetric validation of the Acuros XB algorithm in water was satisfactory. Acuros XB has the potential to perform photon dose calculation with high accuracy, which is desirable for the modern radiotherapy environment.
Genetic Algorithm for Opto-thermal Skin Hydration Depth Profiling Measurements
NASA Astrophysics Data System (ADS)
Cui, Y.; Xiao, Perry; Imhof, R. E.
2013-09-01
Stratum corneum is the outermost skin layer, and the water content in stratum corneum plays a key role in skin cosmetic properties as well as skin barrier functions. However, to measure the water content, especially the water concentration depth profile, within stratum corneum is very difficult. Opto-thermal emission radiometry, or OTTER, is a promising technique that can be used for such measurements. In this paper, a study on stratum corneum hydration depth profiling by using a genetic algorithm (GA) is presented. The pros and cons of a GA compared against other inverse algorithms such as neural networks, maximum entropy, conjugate gradient, and singular value decomposition will be discussed first. Then, it will be shown how to use existing knowledge to optimize a GA for analyzing the opto-thermal signals. Finally, these latest GA results on hydration depth profiling of stratum corneum under different conditions, as well as on the penetration profiles of externally applied solvents, will be shown.
NASA Astrophysics Data System (ADS)
Toropov, S. Yu; Toropov, V. S.
2018-05-01
To design trenchless pipeline passages more accurately, a technique has been developed for calculating the passage profile based on the specific parameters of the horizontal directional drilling (HDD) rig, including the range of possible drilling angles and the list of compatible drill pipe sets. The paper presents the algorithm for calculating the parameters of the trenchless passage profile. The algorithm takes into account the features of HDD technology, namely its three distinct production stages. The authors account for the fact that the passage profile is formed at the first stage of construction, that is, while drilling the pilot well. The algorithm calculates the profile from the parameters of the drill pipes used and the angles of their deviation relative to each other during pilot drilling. This approach allows the designed profile to be unambiguously calibrated to the capabilities of the HDD rig and of the auxiliary and navigation equipment used in the construction process.
An Alternative Retrieval Algorithm for the Ozone Mapping and Profiler Suite Limb Profiler
2012-05-01
…behavior of aerosol extinction from the upper troposphere through the stratosphere is critical for retrieving ozone in this region. Aerosol scattering is…
Optimization-Based Model Fitting for Latent Class and Latent Profile Analyses
ERIC Educational Resources Information Center
Huang, Guan-Hua; Wang, Su-Mei; Hsu, Chung-Chu
2011-01-01
Statisticians typically estimate the parameters of latent class and latent profile models using the Expectation-Maximization algorithm. This paper proposes an alternative two-stage approach to model fitting. The first stage uses the modified k-means and hierarchical clustering algorithms to identify the latent classes that best satisfy the…
X-ray Photoelectron Spectroscopy of High-κ Dielectrics
NASA Astrophysics Data System (ADS)
Mathew, A.; Demirkan, K.; Wang, C.-G.; Wilk, G. D.; Watson, D. G.; Opila, R. L.
2005-09-01
Photoelectron spectroscopy is a powerful technique for the analysis of gate dielectrics because it can determine the elemental composition, the chemical states, and the compositional depth profiles non-destructively. The sampling depth, determined by the escape depth of the photoelectrons, is comparable to the thickness of current gate oxides. A maximum entropy algorithm was used to convert photoelectron collection angle dependence of the spectra to compositional depth profiles. A nitrided hafnium silicate film is used to demonstrate the utility of the technique. The algorithm balances deviations from a simple assumed depth profile against a calculated depth profile that best fits the angular dependence of the photoelectron spectra. A flow chart of the program is included in this paper. The development of the profile is also shown as the program is iterated. Limitations of the technique include the electron escape depths and elemental sensitivity factors used to calculate the profile. The technique is also limited to profiles that extend to the depth of approximately twice the escape depth. These limitations restrict conclusions to comparison among a family of similar samples. Absolute conclusions about depths and concentrations must be used cautiously. Current work to improve the algorithm is also described.
Generate stepper motor linear speed profile in real time
NASA Astrophysics Data System (ADS)
Stoychitch, M. Y.
2018-01-01
In this paper we consider the problem of realizing a linear speed profile for stepper motors in real time. We consider the general case in which the speed changes during the acceleration and deceleration phases are different. A new and practical trajectory-planning algorithm is given. Real-time speed control algorithms suitable for implementation on microcontrollers and FPGA circuits are proposed. A practical realization of one of these algorithms on the Arduino platform is also given.
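One common way to generate such a profile (a sketch, not necessarily the paper's algorithm): for a move of N steps with acceleration a1 and deceleration a2, the speed at each step boundary follows from constant-acceleration kinematics, v² = 2·a·d, and the controller emits the corresponding inter-step delay. If the move is too short to reach the commanded speed, the peak is lowered so the profile becomes triangular. All parameter values are illustrative.

```python
import math

def linear_speed_profile(n_steps, v_max, a1, a2, d_step=1.0):
    """Per-step delays (s) for a trapezoidal speed profile with distinct
    acceleration a1 and deceleration a2 (units of steps/s^2 when d_step=1)."""
    # Steps needed to reach v_max and to stop from it: v^2 = 2*a*d.
    n_acc = v_max ** 2 / (2.0 * a1 * d_step)
    n_dec = v_max ** 2 / (2.0 * a2 * d_step)
    if n_acc + n_dec > n_steps:
        # Move too short for v_max: triangular profile with a lower peak.
        v_peak = math.sqrt(2.0 * a1 * a2 * n_steps * d_step / (a1 + a2))
    else:
        v_peak = v_max
    n_acc = int(v_peak ** 2 / (2.0 * a1 * d_step))
    n_dec = int(v_peak ** 2 / (2.0 * a2 * d_step))
    delays = []
    for i in range(n_steps):
        if i < n_acc:                    # accelerating
            v = math.sqrt(2.0 * a1 * (i + 1) * d_step)
        elif i >= n_steps - n_dec:       # decelerating
            v = math.sqrt(2.0 * a2 * (n_steps - i) * d_step)
        else:                            # cruising at peak speed
            v = v_peak
        delays.append(d_step / v)
    return delays
```

On a microcontroller the square roots are usually replaced by an incremental approximation so each delay can be computed between step pulses; precomputing the table, as here, trades memory for interrupt-time simplicity.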
Ontological Problem-Solving Framework for Dynamically Configuring Sensor Systems and Algorithms
Qualls, Joseph; Russomanno, David J.
2011-01-01
The deployment of ubiquitous sensor systems and algorithms has led to many challenges, such as matching sensor systems to compatible algorithms which are capable of satisfying a task. Compounding the challenges is the lack of the requisite knowledge models needed to discover sensors and algorithms and to subsequently integrate their capabilities to satisfy a specific task. A novel ontological problem-solving framework has been designed to match sensors to compatible algorithms to form synthesized systems, which are capable of satisfying a task and then assigning the synthesized systems to high-level missions. The approach designed for the ontological problem-solving framework has been instantiated in the context of a persistence surveillance prototype environment, which includes profiling sensor systems and algorithms to demonstrate proof-of-concept principles. Even though the problem-solving approach was instantiated with profiling sensor systems and algorithms, the ontological framework may be useful with other heterogeneous sensing-system environments. PMID:22163793
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Guoping; D'Azevedo, Ed F; Zhang, Fan
2010-01-01
Calibration of groundwater models involves hundreds to thousands of forward solutions, each of which may solve many transient coupled nonlinear partial differential equations, resulting in a computationally intensive problem. We describe a hybrid MPI/OpenMP approach that exploits two levels of parallelism in software and hardware to reduce calibration time on multi-core computers. HydroGeoChem 5.0 (HGC5) is parallelized using OpenMP for direct solutions for a reactive transport model application and a field-scale coupled flow and transport model application. In the reactive transport model, a single parallelizable loop is identified, using GPROF, to account for over 97% of the total computational time. Adding a few lines of OpenMP compiler directives to the loop yields a speedup of about 10 on a 16-core compute node. For the field-scale model, parallelizable loops in 14 of 174 HGC5 subroutines that require 99% of the execution time are identified. As these loops are parallelized incrementally, the scalability is found to be limited by a loop in which Cray PAT detects cache miss rates of over 90%. With this loop rewritten, a speedup similar to that of the first application is achieved. The OpenMP-parallelized code can be run efficiently on multiple workstations in a network, or on multiple compute nodes of a cluster as slaves under parallel PEST, to speed up model calibration. To run calibration on clusters as a single task, the Levenberg-Marquardt algorithm is added to HGC5, with the Jacobian calculation and lambda search parallelized using MPI. With this hybrid approach, 100-200 compute cores are used to reduce the calibration time from weeks to a few hours for these two applications. This approach is applicable to most existing groundwater model codes for many applications.
Accelerating Information Retrieval from Profile Hidden Markov Model Databases.
Tamimi, Ahmad; Ashhab, Yaqoub; Tamimi, Hashem
2016-01-01
Profile Hidden Markov Model (Profile-HMM) is an efficient statistical approach to represent protein families. Currently, several databases maintain valuable protein sequence information as profile-HMMs. There is an increasing interest to improve the efficiency of searching Profile-HMM databases to detect sequence-profile or profile-profile homology. However, most efforts to enhance searching efficiency have been focusing on improving the alignment algorithms. Although the performance of these algorithms is fairly acceptable, the growing size of these databases, as well as the increasing demand for using batch query searching approach, are strong motivations that call for further enhancement of information retrieval from profile-HMM databases. This work presents a heuristic method to accelerate the current profile-HMM homology searching approaches. The method works by cluster-based remodeling of the database to reduce the search space, rather than focusing on the alignment algorithms. Using different clustering techniques, 4284 TIGRFAMs profiles were clustered based on their similarities. A representative for each cluster was assigned. To enhance sensitivity, we proposed an extended step that allows overlapping among clusters. A validation benchmark of 6000 randomly selected protein sequences was used to query the clustered profiles. To evaluate the efficiency of our approach, speed and recall values were measured and compared with the sequential search approach. Using hierarchical, k-means, and connected component clustering techniques followed by the extended overlapping step, we obtained an average reduction in time of 41%, and an average recall of 96%. Our results demonstrate that representation of profile-HMMs using a clustering-based approach can significantly accelerate data retrieval from profile-HMM databases.
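The cluster-based search idea can be sketched generically. Here database entries are represented as plain feature vectors rather than actual profile-HMMs, clustered with ordinary k-means, and a query is compared only against the members of the clusters whose representatives it is closest to; all data, the distance measure, and the function names are illustrative assumptions.

```python
import math
import random

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means clustering; returns (centroids, assignment list)."""
    rng = random.Random(seed)
    cents = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        assign = [min(range(k), key=lambda j: dist(p, cents[j])) for p in points]
        for j in range(k):
            members = [p for p, a in zip(points, assign) if a == j]
            if members:
                cents[j] = [sum(col) / len(members) for col in zip(*members)]
    return cents, assign

def clustered_search(query, points, cents, assign, n_probe=1):
    """Search only the n_probe clusters whose representatives are nearest,
    instead of scanning the whole database sequentially."""
    probe = sorted(range(len(cents)), key=lambda j: dist(query, cents[j]))[:n_probe]
    candidates = [i for i, a in enumerate(assign) if a in probe]
    return min(candidates, key=lambda i: dist(query, points[i]))
```

Raising `n_probe` plays the same role as the paper's overlapping-cluster extension: it trades some of the speedup for higher recall when a true match sits near a cluster boundary.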
Improved OSIRIS NO2 retrieval algorithm: description and validation
NASA Astrophysics Data System (ADS)
Sioris, Christopher E.; Rieger, Landon A.; Lloyd, Nicholas D.; Bourassa, Adam E.; Roth, Chris Z.; Degenstein, Douglas A.; Camy-Peyret, Claude; Pfeilsticker, Klaus; Berthet, Gwenaël; Catoire, Valéry; Goutail, Florence; Pommereau, Jean-Pierre; McLinden, Chris A.
2017-03-01
A new retrieval algorithm for OSIRIS (Optical Spectrograph and Infrared Imager System) nitrogen dioxide (NO2) profiles is described and validated. The algorithm relies on spectral fitting to obtain slant column densities of NO2, followed by inversion using an algebraic reconstruction technique and the SaskTran spherical radiative transfer model (RTM) to obtain vertical profiles of local number density. The validation covers different latitudes (tropical to polar), years (2002-2012), all seasons (winter, spring, summer, and autumn), different concentrations of nitrogen dioxide (from denoxified polar vortex to polar summer), a range of solar zenith angles (68.6-90.5°), and altitudes between 10.5 and 39 km, thereby covering the full retrieval range of a typical OSIRIS NO2 profile. The use of a larger spectral fitting window than used in previous retrievals reduces retrieval uncertainties and the scatter in the retrieved profiles due to noisy radiances. Improvements are also demonstrated through the validation in terms of bias reduction at 15-17 km relative to the OSIRIS operational v3.0 algorithm. The diurnal variation of NO2 along the line of sight is included in a fully spherical multiple scattering RTM for the first time. Using this forward model with built-in photochemistry, the scatter of the differences relative to the correlative balloon NO2 profile data is reduced.
Generation of optimum vertical profiles for an advanced flight management system
NASA Technical Reports Server (NTRS)
Sorensen, J. A.; Waters, M. H.
1981-01-01
Algorithms for generating minimum fuel or minimum cost vertical profiles are derived and examined. The option for fixing the time of flight is included in the concepts developed. These algorithms form the basis for the design of an advanced on-board flight management system. The variations in the optimum vertical profiles (resulting from these concepts) due to variations in wind, takeoff mass, and range-to-destination are presented. Fuel savings due to optimum climb, free cruise altitude, and absorbing delays enroute are examined.
Ant colony system algorithm for the optimization of beer fermentation control.
Xiao, Jie; Zhou, Ze-Kui; Zhang, Guang-Xin
2004-12-01
Beer fermentation is a dynamic process that must be guided along a temperature profile to obtain the desired results. An ant colony system algorithm was applied to optimize the kinetic model of this process. Over a fixed fermentation period, a series of different temperature profiles of the mixture was constructed, and an optimal one was chosen. The optimal temperature profile maximized the final ethanol production while minimizing the byproduct concentration and spoilage risk. Satisfactory results were obtained without much computational effort.
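A minimal sketch of the ant-colony construction loop: each ant builds a temperature profile segment by segment, choosing among discrete temperature levels with probabilities weighted by pheromone, and the pheromone on the best-so-far profile is reinforced after each iteration. The fermentation model is replaced here by a toy quadratic objective, and every constant (levels, targets, colony size) is an illustrative assumption.

```python
import random

LEVELS = [8.0, 10.0, 12.0, 14.0]          # candidate temperatures per segment
TARGET = [14.0, 12.0, 12.0, 10.0, 8.0]    # stand-in "ideal" cooling schedule

def score(profile):
    """Toy objective standing in for the fermentation kinetic model:
    0 is best, more negative is worse."""
    return -sum((t - g) ** 2 for t, g in zip(profile, TARGET))

def ant_colony(n_ants=20, iters=60, rho=0.1, seed=7):
    rng = random.Random(seed)
    tau = [[1.0] * len(LEVELS) for _ in TARGET]   # pheromone[segment][level]
    best, best_score = None, float("-inf")
    for _ in range(iters):
        for _ant in range(n_ants):
            profile = [LEVELS[rng.choices(range(len(LEVELS)),
                                          weights=tau[seg])[0]]
                       for seg in range(len(TARGET))]
            s = score(profile)
            if s > best_score:
                best, best_score = profile, s
        for seg in range(len(TARGET)):            # evaporate, then reinforce
            for j in range(len(LEVELS)):
                tau[seg][j] *= (1.0 - rho)
            tau[seg][LEVELS.index(best[seg])] += 1.0
    return best, best_score
```

In the actual application the `score` call would run the fermentation kinetic model over the candidate temperature profile and penalize byproducts and spoilage risk alongside the ethanol yield.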
SASS Applied to Optimum Work Roll Profile Selection in the Hot Rolling of Wide Steel
NASA Astrophysics Data System (ADS)
Nolle, Lars
The quality of steel strip produced in a wide strip rolling mill depends heavily on the careful selection of initial ground work roll profiles for each of the mill stands in the finishing train. In the past, these profiles were determined by human experts, based on their knowledge and experience. In previous work, the profiles were successfully optimised using a self-organising migration algorithm (SOMA). In this research, SASS, a novel heuristic optimisation algorithm that has only one control parameter, has been used to find the optimum profiles for a simulated rolling mill. The strip quality produced using the profiles found by SASS is compared with results from previous work and with the quality produced using the original profile specifications. The best set of profiles found by SASS clearly outperformed the original set and performed as well as SOMA, without the need to find a suitable set of control parameters.
Angular filter refractometry analysis using simulated annealing.
Angland, P; Haberberger, D; Ivancic, S T; Froula, D H
2017-10-01
Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas [Haberberger et al., Phys. Plasmas 21, 056304 (2014)]. A new method of analysis for AFR images was developed using an annealing algorithm to iteratively converge upon a solution. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison with the measured image is optimized. The optimization and the statistical uncertainty calculation are based on minimization of the χ² test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5%-20% in the region of interest.
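The annealing loop can be sketched as follows. Instead of an eight-parameter synthetic AFR image, this toy version fits a two-parameter exponential density profile to data by minimizing χ², perturbing the parameters multiplicatively and accepting moves with the Metropolis criterion under a geometric cooling schedule. All parameter values, the profile form, and the noise level are illustrative assumptions.

```python
import math
import random

def chi2(params, xs, data, sigma=0.05):
    """Chi-squared misfit between an exponential model and the data."""
    n0, scale = params
    return sum(((n0 * math.exp(-x / scale) - d) / sigma) ** 2
               for x, d in zip(xs, data))

def anneal(xs, data, start, t0=100.0, cooling=0.995, iters=4000, seed=3):
    """Simulated annealing: perturb parameters, accept with Metropolis rule."""
    rng = random.Random(seed)
    cur = list(start)
    cur_cost = chi2(cur, xs, data)
    best, best_cost = list(cur), cur_cost
    t = t0
    for _ in range(iters):
        cand = [p * (1.0 + rng.gauss(0.0, 0.05)) for p in cur]
        cost = chi2(cand, xs, data)
        if cost < cur_cost or rng.random() < math.exp((cur_cost - cost) / t):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = list(cand), cost
        t *= cooling   # geometric cooling schedule
    return best, best_cost
```

The uncertainty estimate described in the abstract then follows from exploring how far each parameter can move before χ² rises by a chosen threshold about the converged minimum.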
Evaluation and Application of Satellite-Based Latent Heating Profile Estimation Methods
NASA Technical Reports Server (NTRS)
Olson, William S.; Grecu, Mircea; Yang, Song; Tao, Wei-Kuo
2004-01-01
In recent years, methods for estimating atmospheric latent heating vertical structure from both passive and active microwave remote sensing have matured to the point where quantitative evaluation of these methods is the next logical step. Two approaches for heating algorithm evaluation are proposed: First, application of heating algorithms to synthetic data, based upon cloud-resolving model simulations, can be used to test the internal consistency of heating estimates in the absence of systematic errors in physical assumptions. Second, comparisons of satellite-retrieved vertical heating structures to independent ground-based estimates, such as rawinsonde-derived analyses of heating, provide an additional test. The two approaches are complementary, since systematic errors in heating indicated by the second approach may be confirmed by the first. A passive microwave and a combined passive/active microwave heating retrieval algorithm are evaluated using the described approaches. In general, the passive microwave algorithm heating profile estimates are subject to biases due to the limited vertical heating structure information contained in the passive microwave observations. These biases may be partly overcome by including more environment-specific a priori information in the algorithm's database of candidate solution profiles. The combined passive/active microwave algorithm utilizes the much higher-resolution vertical structure information provided by spaceborne radar data to produce less biased estimates; however, the global spatio-temporal sampling by spaceborne radar is limited. In the present study, the passive/active microwave algorithm is used to construct a more physically consistent and environment-specific set of candidate solution profiles for the passive microwave algorithm and to help evaluate errors in the passive algorithm's heating estimates.
Although satellite estimates of latent heating are based upon instantaneous, footprint- scale data, suppression of random errors requires averaging to at least half-degree resolution. Analysis of mesoscale and larger space-time scale phenomena based upon passive and passive/active microwave heating estimates from TRMM, SSMI, and AMSR data will be presented at the conference.
GFIT2: an experimental algorithm for vertical profile retrieval from near-IR spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Connor, Brian J.; Sherlock, Vanessa; Toon, Geoff
2016-08-02
An algorithm for retrieval of vertical profiles from ground-based spectra in the near IR is described and tested. Known as GFIT2, the algorithm is primarily intended for CO2, and is used exclusively for CO2 in this paper. Retrieval of CO2 vertical profiles from ground-based spectra is theoretically possible, would be very beneficial for carbon cycle studies and the validation of satellite measurements, and has been the focus of much research in recent years. GFIT2 is tested by application both to synthetic spectra and to measurements at two Total Carbon Column Observing Network (TCCON) sites. We demonstrate that there are approximately 3 degrees of freedom for the CO2 profile, and the algorithm performs as expected on synthetic spectra. We show that the accuracy of retrievals of CO2 from measurements in the 1.61 μm (6220 cm⁻¹) spectral band is limited by small uncertainties in calculation of the atmospheric spectrum. We investigate several techniques to minimize the effect of these uncertainties. These techniques are somewhat effective, but to date have not been demonstrated to produce CO2 profile retrievals with sufficient precision for applications to carbon dynamics. We finish by discussing ongoing research which may allow CO2 profile retrievals with sufficient accuracy to significantly improve the scientific value of the measurements over that achieved with column retrievals.
Inter-method Performance Study of Tumor Volumetry Assessment on Computed Tomography Test-retest Data
Buckler, Andrew J.; Danagoulian, Jovanna; Johnson, Kjell; Peskin, Adele; Gavrielides, Marios A.; Petrick, Nicholas; Obuchowski, Nancy A.; Beaumont, Hubert; Hadjiiski, Lubomir; Jarecha, Rudresh; Kuhnigk, Jan-Martin; Mantri, Ninad; McNitt-Gray, Michael; Moltz, Jan Hendrik; Nyiri, Gergely; Peterson, Sam; Tervé, Pierre; Tietjen, Christian; von Lavante, Etienne; Ma, Xiaonan; Pierre, Samantha St.; Athelogou, Maria
2015-01-01
Rationale and Objectives: Tumor volume change has potential as a biomarker for diagnosis, therapy planning, and treatment response. Precision was evaluated and compared among semi-automated lung tumor volume measurement algorithms applied to clinical thoracic CT datasets. The results inform approaches and testing requirements for establishing conformance with the Quantitative Imaging Biomarker Alliance (QIBA) CT Volumetry Profile. Materials and Methods: Industry and academic groups participated in a challenge study. Intra-algorithm repeatability and inter-algorithm reproducibility were estimated. Relative magnitudes of various sources of variability were estimated using a linear mixed-effects model. Segmentation boundaries were compared to provide a basis on which developers can optimize algorithm performance. Results: Intra-algorithm repeatability ranged from 13% (best performing) to 100% (worst performing), with most algorithms demonstrating improved repeatability as tumor size increased. Inter-algorithm reproducibility was determined in three partitions and found to be 58% for the four best-performing groups, 70% for the set of groups meeting repeatability requirements, and 84% when all groups but the worst performer were included. The best-performing partition performed markedly better on tumors with equivalent diameters above 40 mm. Larger tumors benefited from human editing, but smaller tumors did not. One-fifth to one-half of the total variability came from sources independent of the algorithms. Segmentation boundaries differed substantially, not just in overall volume but in detail. Conclusions: Nine of the twelve participating algorithms pass precision requirements similar to those indicated in the QIBA Profile, with the caveat that the current study was not designed to explicitly evaluate algorithm Profile conformance. Change in tumor volume can be measured with confidence to within ±14% using any of these nine algorithms on tumor sizes above 10 mm. 
No partition of the algorithms was able to meet the QIBA requirements for interchangeability down to 10 mm, though the partition composed of the best-performing algorithms did meet this requirement above a tumor size of approximately 40 mm. PMID:26376841
Variability in Tropospheric Ozone over China Derived from Assimilated GOME-2 Ozone Profiles
NASA Astrophysics Data System (ADS)
van Peet, J. C. A.; van der A, R. J.; Kelder, H. M.
2016-08-01
A tropospheric ozone dataset is derived from assimilated GOME-2 ozone profiles for 2008. Ozone profiles are retrieved with the OPERA algorithm, using the optimal estimation method. The retrievals are done at a spatial resolution of 160×160 km on 16 layers ranging from the surface up to 0.01 hPa. By using the averaging kernels in the data assimilation, the algorithm maintains the high-resolution vertical structure of the model while being constrained by observations with a lower vertical resolution.
A Comparative Study of Interval Management Control Law Capabilities
NASA Technical Reports Server (NTRS)
Barmore, Bryan E.; Smith, Colin L.; Palmer, Susan O.; Abbott, Terence S.
2012-01-01
This paper presents a new tool designed to allow rapid development and testing of different control algorithms for airborne spacing. This tool, the Interval Management Modeling and Spacing Tool (IM MAST), is a fast-time, low-fidelity tool created to model the approach of aircraft to a runway, with a focus on their interactions with each other. Errors can be induced between pairs of aircraft by varying initial positions, winds, speed profiles, and altitude profiles. Results to date show that only a few of the algorithms tested had poor behavior in the arrival and approach environment. The majority of the algorithms showed only minimal variation in performance under the test conditions. Trajectory-based algorithms showed high susceptibility to wind-forecast errors, while performing marginally better than the other algorithms under other conditions. Trajectory-based algorithms have a sizable advantage, however, of being able to perform relative spacing operations between aircraft on different arrival routes and flight profiles without employing ghosting methods. This comes at the cost of substantially increased complexity. Additionally, it was shown that earlier initiation of relative spacing operations provided more time for corrections to be made without any significant problems in the spacing operation itself. Initiating spacing farther out, however, would require more of the aircraft to begin spacing before they merge onto a common route.
Reflectivity retrieval in a networked radar environment
NASA Astrophysics Data System (ADS)
Lim, Sanghun
Monitoring of precipitation using a high-frequency radar system such as X-band is becoming increasingly popular due to its lower cost compared to its S-band counterpart. Networks of meteorological radar systems at higher frequencies are being pursued for targeted applications such as coverage over a city or a small basin. However, at higher frequencies, the impact of attenuation due to precipitation needs to be resolved for successful implementation. In this research, new attenuation correction algorithms are introduced to compensate for the attenuation caused by the rain medium. In order to design X-band radar systems as well as evaluate algorithm development, it is useful to have simultaneous X-band observations with and without the impact of path attenuation. One way to obtain such a data set is through theoretical models. Methodologies for generating realistic range profiles of radar variables at attenuating frequencies such as X-band for the rain medium are presented here. Fundamental microphysical properties of precipitation, namely size and shape distribution information, are used to generate realistic profiles of X-band starting with S-band observations. Conditioning the simulation on S-band radar measurements maintains the natural distribution of microphysical parameters associated with rainfall. In this research, data taken by the CSU-CHILL radar and the National Center for Atmospheric Research S-POL radar are used to simulate X-band radar variables. Three procedures to simulate the radar variables at X-band and sample applications are presented. A new attenuation correction algorithm based on profiles of reflectivity, differential reflectivity, and differential propagation phase shift is presented. A solution for specific attenuation retrieval in the rain medium is proposed that solves the integral equations for reflectivity and differential reflectivity with a cumulative differential propagation phase shift constraint. 
The conventional rain profiling algorithms that connect reflectivity and specific attenuation can retrieve specific attenuation values along the radar path assuming a constant intercept parameter of the normalized drop size distribution. However, in convective storms, the drop size distribution parameters can have significant variation along the path. In this research, a dual-polarization rain profiling algorithm for horizontal-looking radars incorporating reflectivity as well as differential reflectivity profiles is developed. The dual-polarization rain profiling algorithm has been evaluated with X-band radar observations simulated from drop size distribution derived from high-resolution S-band measurements collected by the CSU-CHILL radar. The analysis shows that the dual-polarization rain profiling algorithm provides significant improvement over the current algorithms. A methodology for reflectivity and attenuation retrieval for rain medium in a networked radar environment is described. Electromagnetic waves backscattered from a common volume in networked radar systems are attenuated differently along the different paths. A solution for the specific attenuation distribution is proposed by solving the integral equation for reflectivity. The set of governing integral equations describing the backscatter and propagation of common resolution volume are solved simultaneously with constraints on total path attenuation. The proposed algorithm is evaluated based on simulated X-band radar observations synthesized from S-band measurements collected by the CSU-CHILL radar. Retrieved reflectivity and specific attenuation using the proposed method show good agreement with simulated reflectivity and specific attenuation.
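The kind of reflectivity-attenuation coupling these algorithms solve can be illustrated with a minimal gate-by-gate correction in the Hitschfeld-Bordan spirit, where specific attenuation follows a power law in the corrected reflectivity. The coefficients a and b below are illustrative placeholders; the dissertation's constrained dual-polarization solution is far more robust than this unconstrained sketch.

```python
import numpy as np

def correct_attenuation(z_m_dbz, dr_km, a=3e-4, b=0.78):
    """Gate-by-gate attenuation correction sketch. z_m_dbz is the measured
    reflectivity profile (dBZ) at range spacing dr_km; specific attenuation
    k (dB/km) is tied to the corrected linear reflectivity via k = a * Z**b.
    Returns the corrected profile and total two-way path attenuation (dB)."""
    z_m_dbz = np.asarray(z_m_dbz, dtype=float)
    z_c = np.empty_like(z_m_dbz)
    path_atten = 0.0                                  # accumulated two-way attenuation
    for i, z_m in enumerate(z_m_dbz):
        z_c[i] = z_m + path_atten                     # restore attenuation so far
        k = a * (10.0 ** (z_c[i] / 10.0)) ** b        # power-law k-Z relation, dB/km
        path_atten += 2.0 * k * dr_km                 # two-way increment over this gate
    return z_c, path_atten
```

Because the correction feeds on its own output, errors grow with range, which is why the algorithms above constrain the solution with differential propagation phase or total path attenuation.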
NASA Astrophysics Data System (ADS)
Haffner, D. P.; McPeters, R. D.; Bhartia, P. K.; Labow, G. J.
2015-12-01
The TOMS V9 total ozone algorithm will be applied to the OMPS Nadir Mapper instrument to supersede the existing V8.6 data product in operational processing and re-processing for public release. Because the quality of the V8.6 data is already quite high, the enhancements in V9 mainly involve additional information provided by the retrieval and simplifications to the algorithm. The design of the V9 algorithm has been influenced both by improvements in our knowledge of atmospheric effects, such as those of clouds made possible by studies with OMI, and by limitations in the V8 algorithms applied to both OMI and OMPS. But the namesake instruments of the TOMS algorithm are substantially more limited in their spectral and noise characteristics, and a requirement of our algorithm is that it also apply to these discrete-band spectrometers, which date back to 1978. To achieve continuity across all these instruments, the TOMS V9 algorithm continues to use radiances in discrete bands, but now uses Rodgers optimal estimation to retrieve a coarse profile and provide uncertainties for each retrieval. The algorithm remains capable of achieving high-accuracy results with a small number of discrete wavelengths, and in extreme cases, such as unusual profile shapes and high solar zenith angles, the quality of the retrievals is improved. Although designed to use a limited number of wavelengths, the algorithm can also utilize additional wavelengths from hyperspectral sensors like OMPS to augment the retrieval's error detection and information content; examples include SO2 detection and correction of the Ring effect on atmospheric radiances. We discuss these and other aspects of the V9 algorithm as it will be applied to OMPS, and mention potential improvements which aim to take advantage of a synergy between the OMPS Limb Profiler and Nadir Mapper to further improve the quality of total ozone from the OMPS instrument.
NASA Technical Reports Server (NTRS)
Chu, W. P.; Chiou, E. W.; Larsen, J. C.; Thomason, L. W.; Rind, D.; Buglia, J. J.; Oltmans, S.; Mccormick, M. P.; Mcmaster, L. M.
1993-01-01
The operational inversion algorithm used for the retrieval of the water-vapor vertical profiles from the Stratospheric Aerosol and Gas Experiment II (SAGE II) occultation data is presented. Unlike the algorithm used for the retrieval of aerosol, O3, and NO2, the water-vapor retrieval algorithm accounts for the nonlinear relationship between the concentration versus the broad-band absorption characteristics of water vapor. Problems related to the accuracy of the computational scheme, the accuracy of the removal of other interfering species, and the expected uncertainty of the retrieved profile are examined. Results are presented on the error analysis of the SAGE II water vapor retrieval, indicating that the SAGE II instrument produced good quality water vapor data.
Analyzing gene expression time-courses based on multi-resolution shape mixture model.
Li, Ying; He, Ye; Zhang, Yu
2016-11-01
Biological processes are dynamic molecular processes that unfold over time. Time-course gene expression experiments provide opportunities to explore patterns of gene expression change over time and to understand the dynamic behavior of gene expression, which is crucial for studying the development and progression of biology and disease. Analysis of gene expression time-course profiles has not been fully exploited so far and remains a challenging problem. We propose a novel shape-based mixture model clustering method for gene expression time-course profiles to identify significant gene groups. Based on multi-resolution fractal features and a mixture clustering model, we propose a multi-resolution shape mixture model algorithm. The multi-resolution fractal features are computed by wavelet decomposition, which captures patterns of change in gene expression over time at different resolutions. Our multi-resolution shape mixture model is a probabilistic framework which offers a more natural and robust way of clustering time-course gene expression. We assessed the performance of our algorithm on yeast time-course gene expression profiles against several popular clustering methods for gene expression profiles. The gene groups identified by the different methods were evaluated by enrichment analysis of biological pathways and of known protein-protein interactions from experimental evidence. The gene groups identified by our algorithm have stronger biological significance. In summary, a novel multi-resolution shape mixture model algorithm based on multi-resolution fractal features is proposed. It provides a new perspective and an alternative tool for visualization and analysis of time-course gene expression profiles. The R and Matlab programs are available upon request.
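The multi-resolution split behind such features can be sketched with a hand-rolled Haar decomposition: each level separates a time-course profile into a coarser trend and detail coefficients describing shape change at that scale. This is an illustrative stand-in; the paper's fractal features are derived differently from the wavelet coefficients.

```python
import numpy as np

def haar_multiresolution_features(profile, levels=3):
    """Mean absolute Haar detail coefficient per decomposition level,
    fine scales first. A sketch of multi-resolution shape features for a
    time-course expression profile (not the authors' exact features)."""
    x = np.asarray(profile, dtype=float)
    feats = []
    for _ in range(levels):
        n = (len(x) // 2) * 2                            # truncate to even length
        if n < 2:
            break
        approx = (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)    # low-pass: trend
        detail = (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)    # high-pass: local shape change
        feats.append(float(np.mean(np.abs(detail))))
        x = approx                                       # recurse on the trend
    return np.array(feats)
```

A flat profile yields all-zero features, while a rapidly oscillating one loads the finest scale; feature vectors of this kind can then be fed to a mixture-model clusterer.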
NASA Technical Reports Server (NTRS)
Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.
2006-01-01
Rainfall rate estimates from spaceborne microwave radiometers are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain-rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). In Part I of this series, improvements of the TMI algorithm that are required to introduce latent heating as an additional algorithm product are described. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5°-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over earlier algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly 2.5°-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain-rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with (a) additional contextual information brought to the estimation problem and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous 0.5°-resolution rain-rate estimates appears to be consistent with the levels of error determined from TMI comparisons with collocated radar. Error model modifications for nonraining situations will be required, however. Sampling error represents only a portion of the total error in monthly 2.5°-resolution TMI estimates; the remaining error is attributed to random and systematic algorithm errors arising from the physical inconsistency and/or nonrepresentativeness of cloud-resolving-model-simulated profiles that support the algorithm.
NASA Astrophysics Data System (ADS)
Zhu, Lianqing; Chen, Yunfang; Chen, Qingshan; Meng, Hao
2011-05-01
According to the minimum zone condition, a method for evaluating the profile error of an Archimedes helicoid surface based on a genetic algorithm (GA) is proposed. The mathematical model of the surface is provided, and the unknown parameters in the equation of the surface are acquired through the least-squares method. The principle of the GA is explained. The profile error of the Archimedes helicoid surface is then obtained through GA optimization. To validate the proposed method, the profile error of an Archimedes helicoid surface, an Archimedes cylindrical worm (ZA worm) surface, is evaluated. The results show that the proposed method correctly evaluates the profile error of Archimedes helicoid surfaces and satisfies the evaluation standard of the minimum zone method. It can be applied to the measured profile-error data of complex surfaces obtained by coordinate measuring machines (CMMs).
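A generic real-coded GA of the kind used for minimum-zone evaluation can be sketched as follows. The operators (tournament selection, blend crossover, Gaussian mutation) and the toy straightness objective in the usage note are illustrative assumptions, not the paper's helicoid formulation.

```python
import numpy as np

def genetic_minimize(objective, bounds, pop_size=60, gens=120, seed=0):
    """Minimize objective(params) over box bounds with a plain real-coded GA:
    tournament selection, blend crossover, Gaussian mutation, and elitism."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(gens):
        fit = np.apply_along_axis(objective, 1, pop)
        i, j = rng.integers(pop_size, size=(2, pop_size))          # tournament pairs
        parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
        alpha = rng.uniform(size=(pop_size, len(lo)))              # blend crossover
        children = alpha * parents + (1 - alpha) * parents[::-1]
        children += rng.normal(scale=0.05 * (hi - lo), size=children.shape)
        children = np.clip(children, lo, hi)
        children[0] = pop[np.argmin(fit)]                          # elitism: keep best
        pop = children
    fit = np.apply_along_axis(objective, 1, pop)
    return pop[np.argmin(fit)], float(fit.min())
```

For a minimum-zone fit, the objective is the zone width: the spread (max minus min) of point deviations from the candidate reference surface, here demonstrated with a reference line as a stand-in for the helicoid.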
Luck, Margaux; Schmitt, Caroline; Talbi, Neila; Gouya, Laurent; Caradeuc, Cédric; Puy, Hervé; Bertho, Gildas; Pallet, Nicolas
2018-01-01
Metabolomic profiling combines nuclear magnetic resonance (NMR) spectroscopy with supervised statistical analysis to better understand the mechanisms of a disease. In this study, urinary metabolic profiling of individuals with porphyrias was performed to predict different types of the disease and to propose new pathophysiological hypotheses. Urine 1H-NMR spectra of 73 patients with asymptomatic acute intermittent porphyria (aAIP) and familial or sporadic porphyria cutanea tarda (f/sPCT) were compared using a supervised rule-mining algorithm. NMR spectrum buckets (bins), corresponding to rules, were extracted, and a logistic regression was trained. The results generated by our rule-mining algorithm were consistent with those obtained using partial least squares discriminant analysis (PLS-DA), and the predictive performance of the model was significant. The buckets identified by the algorithm corresponded to metabolites involved in glycolysis and energy-conversion pathways, notably acetate, citrate, and pyruvate, which were found at higher concentrations in the urine of aAIP patients compared with PCT patients. Metabolic profiling did not discriminate sPCT from fPCT patients. These results suggest that metabolic reprogramming occurs in aAIP individuals, even in the absence of overt symptoms, and support a relationship between heme synthesis and mitochondrial energy metabolism.
Design of the OMPS limb sensor correction algorithm
NASA Astrophysics Data System (ADS)
Jaross, Glen; McPeters, Richard; Seftor, Colin; Kowitt, Mark
The Sensor Data Records (SDRs) for the Ozone Mapping and Profiler Suite (OMPS) on NPOESS (National Polar-orbiting Operational Environmental Satellite System) contain geolocated and calibrated radiances, and are similar to the Level 1 data of the NASA Earth Observing System and other programs. The SDR algorithms (one for each of the three OMPS focal planes) are the processes by which the Raw Data Records (RDRs) from the OMPS sensors are converted into records that contain all data necessary for ozone retrievals. Consequently, the algorithms must correct and calibrate Earth signals, geolocate the data, and identify and ingest collocated ancillary data. As with other limb sensors, ozone profile retrievals are relatively insensitive to calibration errors due to the use of altitude normalization and wavelength pairing. But the profile retrievals as they pertain to OMPS are not immune to sensor changes. In particular, the OMPS limb sensor images an altitude range of >100 km and a spectral range of 290-1000 nm on its detector. Uncorrected sensor degradation and spectral registration drifts can lead to changes in the measured radiance profile, which in turn affect the ozone trend measurement. Since OMPS is intended for long-term monitoring, sensor calibration is a specific concern. The calibration is maintained via the ground data processing: all sensor calibration data, including direct solar measurements, are brought down in the raw data and processed separately by the SDR algorithms. One of the sensor corrections performed by the algorithm is the correction for stray light. The imaging spectrometer and the unique focal plane design of OMPS make these corrections particularly challenging and important. Following an overview of the algorithm flow, we briefly describe the sensor stray-light characterization and the correction approach used in the code.
NASA Technical Reports Server (NTRS)
Stocker, Erich Franz; Kelley, O.; Kummerow, C.; Huffman, G.; Olson, W.; Kwiatkowski, J.
2015-01-01
In February 2015, the Global Precipitation Measurement (GPM) mission core satellite will complete its first year in space. The core satellite carries a conically scanning microwave imager called the GPM Microwave Imager (GMI), which also has 166 GHz and 183 GHz frequency channels. The GPM core satellite also carries a dual-frequency radar (DPR) which operates at Ku frequency, similar to the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar, and at a new Ka frequency. The Precipitation Processing System (PPS) is producing swath-based instantaneous precipitation retrievals from GMI, from both radars (including a dual-frequency product), and a combined GMI-DPR precipitation retrieval. These Level 2 products are written in the HDF5 format and have many additional parameters beyond surface precipitation that are organized into appropriate groups. While these retrieval algorithms were developed prior to launch and are not optimal, they are producing very credible retrievals. It is appropriate for a wide group of users to have access to the GPM retrievals. However, for researchers requiring only surface precipitation, these L2 swath products can appear intimidating, and they certainly contain many more variables than the average researcher needs. Some researchers desire only surface retrievals stored in a simple, easily accessible format. In response, PPS has begun to produce gridded text-based products that contain just the most widely used variables for each instrument (surface rainfall rate, fraction liquid, fraction convective) in a single line for each grid box that contains one or more observations. This paper will describe the gridded data products that are being produced and provide an overview of their content. 
Currently two types of gridded products are being produced: (1) surface precipitation retrievals from the core satellite instruments GMI, DPR, and combined GMI-DPR; and (2) surface precipitation retrievals for the partner constellation satellites. Both of these gridded products are generated on a 0.25 degree x 0.25 degree hourly grid, which is packaged into daily ASCII (American Standard Code for Information Interchange) files that can be downloaded from the PPS FTP (File Transfer Protocol) site. To reduce the download size, the files are compressed using the gzip utility. This paper will focus on presenting high-level details about the gridded text product being generated from the instruments on the GPM core satellite, but summary information will also be presented about the partner radiometer gridded product. All retrievals for the partner radiometers are done using the GPROF2014 algorithm, using as input the PPS-generated inter-calibrated 1C product for each radiometer.
Yang, Xiaoxia; Wang, Jia; Sun, Jun; Liu, Rong
2015-01-01
Protein-nucleic acid interactions are central to various fundamental biological processes. Automated methods capable of reliably identifying DNA- and RNA-binding residues in protein sequence are assuming ever-increasing importance. The majority of current algorithms rely on feature-based prediction, but their accuracy remains to be further improved. Here we propose a sequence-based hybrid algorithm SNBRFinder (Sequence-based Nucleic acid-Binding Residue Finder) by merging a feature predictor SNBRFinderF and a template predictor SNBRFinderT. SNBRFinderF was established using the support vector machine whose inputs include sequence profile and other complementary sequence descriptors, while SNBRFinderT was implemented with the sequence alignment algorithm based on profile hidden Markov models to capture the weakly homologous template of query sequence. Experimental results show that SNBRFinderF was clearly superior to the commonly used sequence profile-based predictor and SNBRFinderT can achieve comparable performance to the structure-based template methods. Leveraging the complementary relationship between these two predictors, SNBRFinder reasonably improved the performance of both DNA- and RNA-binding residue predictions. More importantly, the sequence-based hybrid prediction reached competitive performance relative to our previous structure-based counterpart. Our extensive and stringent comparisons show that SNBRFinder has obvious advantages over the existing sequence-based prediction algorithms. The value of our algorithm is highlighted by establishing an easy-to-use web server that is freely accessible at http://ibi.hzau.edu.cn/SNBRFinder.
Optimal control of CPR procedure using hemodynamic circulation model
Lenhart, Suzanne M.; Protopopescu, Vladimir A.; Jung, Eunok
2007-12-25
A method for determining a chest pressure profile for cardiopulmonary resuscitation (CPR) includes the steps of representing a hemodynamic circulation model based on a plurality of difference equations for a patient, applying an optimal control (OC) algorithm to the circulation model, and determining a chest pressure profile. The chest pressure profile defines a timing pattern of externally applied pressure to a chest of the patient to maximize blood flow through the patient. A CPR device includes a chest compressor, a controller communicably connected to the chest compressor, and a computer communicably connected to the controller. The computer determines the chest pressure profile by applying an OC algorithm to a hemodynamic circulation model based on the plurality of difference equations.
Angland, P.; Haberberger, D.; Ivancic, S. T.; ...
2017-10-30
Here, a new method of analysis for angular filter refractometry (AFR) images was developed to characterize laser-produced, long-scale-length plasmas, using an annealing algorithm to iteratively converge upon a solution. AFR is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison with the measured image is optimized. The optimization and the statistical uncertainty calculation are based on minimization of the χ² test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5-10% in the region of interest.
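The anneal-and-compare loop can be sketched generically: perturb the parameter vector, recompute χ², and accept uphill moves with a temperature-dependent probability. The cooling schedule, step size, and the quadratic χ² used in the usage note are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def anneal(chi2, x0, step, t0=1.0, cooling=0.995, iters=4000, seed=0):
    """Simulated annealing: minimize chi2(x) over a real parameter vector.
    Downhill moves are always accepted; uphill moves are accepted with
    Boltzmann probability exp(-dchi2 / T) under geometric cooling."""
    rng = np.random.default_rng(seed)
    x, fx, t = np.array(x0, dtype=float), float(chi2(x0)), t0
    best_x, best_f = x.copy(), fx
    for _ in range(iters):
        cand = x + rng.normal(scale=step, size=x.shape)   # perturb parameters
        fc = float(chi2(cand))
        if fc < fx or rng.random() < np.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        t *= cooling                                      # geometric cooling
    return best_x, best_f
```

In the AFR application, chi2 would compare a synthetic image generated from the eight density-profile parameters against the measured image; here a simple quadratic bowl stands in for that forward model.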
Schumann, A; Priegnitz, M; Schoene, S; Enghardt, W; Rohling, H; Fiedler, F
2016-10-07
Range verification and dose monitoring in proton therapy are considered highly desirable. Different methods have been developed worldwide, such as particle therapy positron emission tomography (PT-PET) and prompt gamma imaging (PGI). In general, these methods allow for a verification of the proton range; however, quantification of the dose from these measurements remains challenging. For the first time, we present an approach for estimating the dose from prompt γ-ray emission profiles. It combines a filtering procedure based on Gaussian-powerlaw convolution with an evolutionary algorithm. By convolving depth-dose profiles with an appropriate filter kernel, prompt γ-ray depth profiles are obtained. In order to reverse this step, the evolutionary algorithm is applied. The feasibility of this approach is demonstrated for a spread-out Bragg peak in a water target.
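The forward (filtering) step, convolving a depth-dose profile with a kernel to produce a prompt γ-ray-like depth profile, can be sketched as follows. The kernel's exact Gaussian-powerlaw form and the parameter values here are illustrative assumptions; the paper's kernel differs, and its evolutionary inversion of this step is not shown.

```python
import numpy as np

def prompt_gamma_profile(depth_dose, dz, sigma, p):
    """Convolve a depth-dose profile (sampled at spacing dz) with an assumed
    Gaussian-powerlaw kernel to mimic a prompt gamma depth profile. The kernel
    is normalized so the integral of the profile is preserved."""
    z = np.arange(-5 * sigma, 5 * sigma + dz, dz)                 # kernel support
    kernel = np.exp(-0.5 * (z / sigma) ** 2) * (1.0 + np.abs(z)) ** (-p)
    kernel /= kernel.sum()                                        # preserve integral
    return np.convolve(depth_dose, kernel, mode="same")
```

Recovering the depth-dose profile from a measured γ-ray profile then amounts to inverting this convolution, the ill-posed step the authors address with an evolutionary algorithm.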
Spacecraft Angular State Estimation After Sensor Failure
NASA Technical Reports Server (NTRS)
Bauer, Frank (Technical Monitor); BarItzhack, Itzhack Y.; Harman, Richard R.
2002-01-01
This work describes two algorithms for computing the angular rate and attitude in case of a gyro failure in a spacecraft (SC) with a special mission profile. The source of the problem is presented, two algorithms are suggested, an observability study is carried out, and the efficiency of the algorithms is demonstrated.
NASA Technical Reports Server (NTRS)
Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.
2004-01-01
Rainfall rate estimates from spaceborne microwave radiometers are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain-rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). Part I of this study describes improvements in the TMI algorithm that are required to introduce cloud latent heating and drying as additional algorithm products. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5°-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over earlier algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm, and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly, 2.5°-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with (a) additional contextual information brought to the estimation problem and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous, 0.5°-resolution rain-rate estimates appears to be consistent with the levels of error determined from TMI comparisons to collocated radar. Error model modifications for non-raining situations will be required, however. Sampling error appears to represent only a fraction of the total error in monthly, 2.5°-resolution TMI estimates; the remaining error is attributed to physical inconsistency or non-representativeness of cloud-resolving-model-simulated profiles supporting the algorithm.
FitSearch: a robust way to interpret a yeast fitness profile in terms of drug's mode-of-action.
Lee, Minho; Han, Sangjo; Chang, Hyeshik; Kwak, Youn-Sig; Weller, David M; Kim, Dongsup
2013-01-01
Yeast deletion-mutant collections have been successfully used to infer the mode-of-action of drugs, especially by profiling chemical-genetic and genetic-genetic interactions on a genome-wide scale. Although tens of thousands of those profiles are publicly available, the lack of an accurate method for mining such data has been a major bottleneck for more widespread use of these useful resources. For general usage of those public resources, we designed FitRankDB as a general repository of fitness profiles, and developed a new search algorithm, FitSearch, for identifying the profiles that have a high similarity score with statistical significance for a given fitness profile. We demonstrated that our new repository and algorithm are highly beneficial to researchers who are attempting to form hypotheses about the unknown modes-of-action of bioactive compounds, regardless of the type of experiment performed with a yeast deletion-mutant collection or the measurement platform used, especially non-chip-based platforms. We showed that our new database and algorithm are useful when attempting to construct a hypothesis regarding the unknown function of a bioactive compound through small-scale experiments with a yeast deletion collection in a platform-independent manner. FitRankDB and FitSearch enhance the ease of searching public yeast fitness profiles and obtaining insights into unknown mechanisms of action of drugs. FitSearch is freely available at http://fitsearch.kaist.ac.kr.
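The core operation described above — scoring a query fitness profile against a stored profile "with statistical significance" — can be sketched as a correlation score with an empirical permutation p-value. This is an illustrative stand-in, not FitSearch's actual scoring function; the function names and the choice of Pearson correlation are assumptions.

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length fitness profiles."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def similarity_with_pvalue(query, reference, n_perm=1000, seed=0):
    """Score a query profile against a repository profile and attach an
    empirical p-value from a permutation null distribution (illustrative
    sketch of similarity search with statistical significance)."""
    rng = random.Random(seed)
    observed = pearson(query, reference)
    shuffled = list(query)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)  # each shuffle yields a uniform random permutation
        if pearson(shuffled, reference) >= observed:
            hits += 1
    # add-one correction keeps the p-value strictly positive
    return observed, (hits + 1) / (n_perm + 1)
```

A repository search would simply rank all stored profiles by this score and report those passing a significance threshold.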
FitSearch: a robust way to interpret a yeast fitness profile in terms of drug's mode-of-action
2013-01-01
Background Yeast deletion-mutant collections have been successfully used to infer the mode-of-action of drugs, especially by profiling chemical-genetic and genetic-genetic interactions on a genome-wide scale. Although tens of thousands of those profiles are publicly available, the lack of an accurate method for mining such data has been a major bottleneck for more widespread use of these useful resources. Results For general usage of those public resources, we designed FitRankDB as a general repository of fitness profiles, and developed a new search algorithm, FitSearch, for identifying the profiles that have a high similarity score with statistical significance for a given fitness profile. We demonstrated that our new repository and algorithm are highly beneficial to researchers who are attempting to form hypotheses about the unknown modes-of-action of bioactive compounds, regardless of the type of experiment performed with a yeast deletion-mutant collection or the measurement platform used, especially non-chip-based platforms. Conclusions We showed that our new database and algorithm are useful when attempting to construct a hypothesis regarding the unknown function of a bioactive compound through small-scale experiments with a yeast deletion collection in a platform-independent manner. FitRankDB and FitSearch enhance the ease of searching public yeast fitness profiles and obtaining insights into unknown mechanisms of action of drugs. FitSearch is freely available at http://fitsearch.kaist.ac.kr. PMID:23368702
NASA Astrophysics Data System (ADS)
Zhou, Chaojie; Ding, Xiaohua; Zhang, Jie; Yang, Jungang; Ma, Qiang
2017-12-01
While global oceanic surface information with large-scale, real-time, high-resolution data is collected by satellite remote sensing instrumentation, three-dimensional (3D) observations are usually obtained from in situ measurements, but with minimal coverage and spatial resolution. To meet the needs of 3D ocean investigations, we have developed a new algorithm to reconstruct the 3D ocean temperature field based on Array for Real-time Geostrophic Oceanography (Argo) profiles and sea surface temperature (SST) data. The Argo temperature profiles are first optimally fitted to generate a series of temperature functions of depth, so that the vertical temperature structure is represented continuously. By calculating the derivatives of the fitted functions, the vertical temperature gradient of the Argo profiles can be computed at an arbitrary depth. A gridded 3D temperature gradient field is then found by applying inverse distance weighting interpolation in the horizontal direction. Combined with the processed SST, the 3D temperature field is reconstructed below the surface using the gridded temperature gradient. Finally, to confirm the effectiveness of the algorithm, an experiment in the Pacific Ocean south of Japan is conducted, for which a 3D temperature field is generated. Compared with other similar gridded products, the reconstructed 3D temperature field derived by the proposed algorithm achieves satisfactory accuracy, with correlation coefficients of 0.99, and a higher spatial resolution (0.25° × 0.25°) that captures smaller-scale characteristics. Both the accuracy and the superiority of the algorithm are thus validated.
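The three steps of the reconstruction — fit each profile as a continuous function of depth, differentiate it, interpolate the gradients horizontally by inverse distance weighting, and integrate downward from the SST — can be sketched as follows. This is a minimal illustration under assumed choices (a low-order polynomial fit, a simple IDW power law); the paper's "optimal fitting" is more elaborate.

```python
import numpy as np

def fit_profile(depths, temps, deg=3):
    """Fit one Argo profile with a polynomial so that the vertical structure
    (and, via the derivative, the vertical gradient) is available at any depth."""
    coeffs = np.polyfit(depths, temps, deg)
    f = np.poly1d(coeffs)
    return f, f.deriv()

def idw(values, dists, power=2):
    """Inverse distance weighting of profile-derived gradients at a grid point."""
    w = 1.0 / np.maximum(dists, 1e-6) ** power
    return float(np.sum(w * values) / np.sum(w))

def reconstruct_column(sst, gradients, dz):
    """Integrate the interpolated vertical gradient downward from the SST."""
    temps = [sst]
    for g in gradients:
        temps.append(temps[-1] + g * dz)
    return temps
```

For a linear test profile the fitted derivative recovers the true lapse rate, and the reconstructed column cools with depth as expected.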
NASA Astrophysics Data System (ADS)
Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.
2015-08-01
We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS)-based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAllenging Minisatellite Payload (CHAMP) and Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction of random errors (standard deviations) of optimized bending angles down to about half of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
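The statistical optimization step described above combines a background bending-angle profile with the observed profile, weighted by their error covariance matrices. A minimal sketch of that optimal linear combination (the standard form x = x_b + B (B + O)^{-1} (y − x_b); the paper's contribution is the dynamic estimation of B and O, which is not reproduced here) is:

```python
import numpy as np

def statistical_optimization(x_b, y, B, O):
    """Optimally combine a background profile x_b with an observed profile y,
    given background (B) and observation (O) error covariance matrices."""
    gain = B @ np.linalg.inv(B + O)
    return x_b + gain @ (y - x_b)
```

With equal background and observation uncertainty the result sits midway between the two profiles; as the background uncertainty grows, the estimate moves toward the observation.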
Transposon identification using profile HMMs
2010-01-01
Background Transposons are "jumping genes" that account for large quantities of repetitive content in genomes. They are known to affect transcriptional regulation in several different ways, and are implicated in many human diseases. Transposons are related to microRNAs and viruses, and many genes, pseudogenes, and gene promoters are derived from transposons or have origins in transposon-induced duplication. Modeling transposon-derived genomic content is difficult because they are poorly conserved. Profile hidden Markov models (profile HMMs), widely used for protein sequence family modeling, are rarely used for modeling DNA sequence families. The algorithm commonly used to estimate the parameters of profile HMMs, Baum-Welch, is prone to prematurely converge to local optima. The DNA domain is especially problematic for the Baum-Welch algorithm, since it has only four letters as opposed to the twenty residues of the amino acid alphabet. Results We demonstrate with a simulation study and with an application to modeling the MIR family of transposons that two recently introduced methods, Conditional Baum-Welch and Dynamic Model Surgery, achieve better estimates of the parameters of profile HMMs across a range of conditions. Conclusions We argue that these new algorithms expand the range of potential applications of profile HMMs to many important DNA sequence family modeling problems, including that of searching for and modeling the virus-like transposons that are found in all known genomes. PMID:20158867
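The estimation difficulty discussed above centers on Baum-Welch, an EM procedure whose E-step is built on the forward algorithm. A minimal scaled forward recursion, computing the log-likelihood of a symbol sequence under an HMM — the quantity Baum-Welch locally maximizes — is sketched below; it is generic HMM code, not the paper's Conditional Baum-Welch or Dynamic Model Surgery.

```python
import numpy as np

def forward_loglik(obs, pi, A, E):
    """Log-likelihood of an observed symbol sequence under an HMM with
    initial probabilities pi, transition matrix A, and emission matrix E.
    Per-step rescaling keeps the recursion numerically stable."""
    alpha = pi * E[:, obs[0]]
    c = alpha.sum()
    logp = np.log(c)
    alpha = alpha / c
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * E[:, obs[t]]
        c = alpha.sum()
        logp += np.log(c)
        alpha = alpha / c
    return logp
```

For DNA models, `obs` would index a four-letter alphabet — the small alphabet that, as the abstract notes, makes the likelihood surface especially prone to local optima.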
NASA Astrophysics Data System (ADS)
Stupina, T.; Koulakov, I.; Kopp, H.
2009-04-01
We consider questions of creating structural models and resolution assessment in tomographic inversion of wide-angle active seismic profiling data. For our investigations, we use the PROFIT (Profile Forward and Inverse Tomographic modeling) algorithm which was tested earlier with different datasets. Here we consider offshore seismic profiling data from three areas (Chile, Java and Central Pacific). Two of the study areas are characterized by subduction zones whereas the third data set covers a seamount province. We have explored different algorithmic issues concerning the quality of the solution, such as (1) resolution assessment using different sizes and complexity of synthetic anomalies; (2) grid spacing effects; (3) amplitude damping and smoothing; (4) criteria for rejection of outliers; (5) quantitative criteria for comparing models. Having determined optimal algorithmic parameters for the observed seismic profiling data we have created structural synthetic models which reproduce the results of the observed data inversion. For the Chilean and Java subduction zones our results show similar patterns: a relatively thin sediment layer on the oceanic plate, thicker inhomogeneous sediments in the overlying plate and a large area of very strong low velocity anomalies in the accretionary wedge. For two seamounts in the Pacific we observe high velocity anomalies in the crust which can be interpreted as frozen channels inside the dormant volcano cones. Along both profiles we obtain considerable crustal thickening beneath the seamounts.
Visco, Carlo; Li, Yan; Xu-Monette, Zijun Y.; Miranda, Roberto N.; Green, Tina M.; Li, Yong; Tzankov, Alexander; Wen, Wei; Liu, Wei-min; Kahl, Brad S.; d’Amore, Emanuele S. G.; Montes-Moreno, Santiago; Dybkær, Karen; Chiu, April; Tam, Wayne; Orazi, Attilio; Zu, Youli; Bhagat, Govind; Winter, Jane N.; Wang, Huan-You; O’Neill, Stacey; Dunphy, Cherie H.; Hsi, Eric D.; Zhao, X. Frank; Go, Ronald S.; Choi, William W. L.; Zhou, Fan; Czader, Magdalena; Tong, Jiefeng; Zhao, Xiaoying; van Krieken, J. Han; Huang, Qing; Ai, Weiyun; Etzell, Joan; Ponzoni, Maurilio; Ferreri, Andres J. M.; Piris, Miguel A.; Møller, Michael B.; Bueso-Ramos, Carlos E.; Medeiros, L. Jeffrey; Wu, Lin; Young, Ken H.
2013-01-01
Gene expression profiling (GEP) has stratified diffuse large B-cell lymphoma (DLBCL) into molecular subgroups that correspond to different stages of lymphocyte development, namely germinal center B-cell-like and activated B-cell-like. This classification has prognostic significance, but GEP is expensive and not readily applicable in daily practice, which has led to immunohistochemical algorithms being proposed as surrogates for GEP analysis. We assembled tissue microarrays from 475 de novo DLBCL patients who were treated with rituximab-CHOP chemotherapy. All cases were successfully profiled by GEP on formalin-fixed, paraffin-embedded tissue samples. Sections were stained with antibodies reactive with CD10, GCET1, FOXP1, MUM1, and BCL6, and cases were classified following a rationale of sequential steps of B-cell differentiation. Cutoffs for each marker were obtained using receiver operating characteristic curves, obviating the need for any arbitrary method. An algorithm based on the expression of CD10, FOXP1, and BCL6 was developed that had a simpler structure than other recently proposed algorithms and 92.6% concordance with GEP. In multivariate analysis, both the International Prognostic Index and our proposed algorithm were significant independent predictors of progression-free and overall survival. In conclusion, this algorithm effectively predicts the prognosis of DLBCL patients, matching GEP subgroups in the era of rituximab therapy. PMID:22437443
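A sequential three-marker algorithm of this kind reduces to a short decision cascade. The sketch below uses placeholder cutoffs and a plausible marker order for illustration only — the paper derives its actual cutoffs from ROC curves, and they are not reproduced here.

```python
def classify_dlbcl(cd10_pct, foxp1_pct, bcl6_pct,
                   cd10_cut=30, foxp1_cut=60, bcl6_cut=30):
    """Illustrative sequential CD10/FOXP1/BCL6 cascade classifying a case as
    germinal center B-cell-like (GCB) or activated B-cell-like (ABC).
    Cutoffs and ordering are assumptions, not the published values."""
    if cd10_pct >= cd10_cut:      # CD10-positive cases -> GCB
        return "GCB"
    if foxp1_pct >= foxp1_cut:    # CD10-negative, FOXP1-positive -> ABC
        return "ABC"
    return "GCB" if bcl6_pct >= bcl6_cut else "ABC"
```

The appeal of such a cascade is that each stain is only consulted when the previous one is uninformative, keeping the panel small.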
Design and Implementation of the Automated Rendezvous Targeting Algorithms for Orion
NASA Technical Reports Server (NTRS)
DSouza, Christopher; Weeks, Michael
2010-01-01
The Orion vehicle will be designed to perform several rendezvous missions: rendezvous with the ISS in Low Earth Orbit (LEO), rendezvous with the EDS/Altair in LEO, a contingency rendezvous with the ascent stage of the Altair in Low Lunar Orbit (LLO), and a contingency rendezvous in LLO with the ascent and descent stages in the case of an aborted lunar landing. Each of these scenarios imposes different operational, timing, and performance constraints on the GNC system. To this end, a suite of on-board guidance and targeting algorithms has been designed to meet the requirement to perform the rendezvous independent of communications with the ground. This capability is particularly relevant for the lunar missions, some of which may occur on the far side of the moon. This paper will describe these algorithms, which are structured and arranged so as to be flexible and able to safely perform a wide variety of rendezvous trajectories. The goal of the algorithms is not merely to fly one specific type of canned rendezvous profile. Rather, the suite was designed from the start to be general enough that any type of trajectory profile can be flown (e.g., a coelliptic profile, a stable-orbit rendezvous profile, or an expedited LLO rendezvous profile), all using the same suite of rendezvous algorithms. Each of these profiles makes use of maneuver types which have been designed with the dual goals of robustness and performance. They are designed to converge quickly under dispersed conditions and to perform many of the functions performed on the ground today. The targeting algorithms consist of a phasing maneuver (NC), an altitude adjust maneuver (NH), a plane change maneuver (NPC), a coelliptic maneuver (NSR), a Lambert targeted maneuver, and several multiple-burn targeted maneuvers which combine one or more of these algorithms.
The derivation and implementation of each of these algorithms will be discussed in detail, as well as the Rendezvous Targeting "wrapper" which sequentially ties them all together into a single onboard targeting tool that can produce a final integrated rendezvous trajectory. In a similar fashion, the various guidance modes available for flying out each of these maneuvers will be discussed. This paradigm of having the onboard guidance and targeting capability described above differs from the way the Space Shuttle has operated thus far, so these differences, in terms of operations and of ground and crew intervention, will also be discussed. However, the general framework, in which mission designers on the ground first perform all mission design and planning functions and then uplink the burn plan to the vehicle, ensures that the ground will remain involved to ensure safety and reliability. The only real difference is which of these functions will be done onboard versus on the ground as is done currently. Finally, this paper will describe the performance of each of these algorithms individually, as well as the entire suite of algorithms as applied to the Orion ISS and EDS/Altair rendezvous missions in LEO. These algorithms have been incorporated in both a linear covariance environment and a Monte Carlo environment, and the results of these dispersion analyses will be presented as well.
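To give a feel for how phasing and altitude-adjust burns like NC and NH are sized, the sketch below computes the classic two-impulse (Hohmann-type) transfer between circular orbits. This is a textbook simplification, not the Orion flight algorithms, which handle dispersed, non-circular, multi-burn cases; the function name and the use of a coplanar circular-orbit model are assumptions.

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def hohmann_dv(r1, r2, mu=MU_EARTH):
    """Total delta-v (m/s) of a two-impulse transfer between circular
    orbits of radii r1 and r2 (m) -- a simplified stand-in for sizing
    altitude-adjust burns in a rendezvous sequence."""
    v1 = math.sqrt(mu / r1)                     # circular speed at r1
    v2 = math.sqrt(mu / r2)                     # circular speed at r2
    a_t = 0.5 * (r1 + r2)                       # transfer ellipse semi-major axis
    dv1 = abs(math.sqrt(mu * (2 / r1 - 1 / a_t)) - v1)  # departure burn
    dv2 = abs(v2 - math.sqrt(mu * (2 / r2 - 1 / a_t)))  # arrival burn
    return dv1 + dv2
```

A 100 km altitude raise in LEO costs a few tens of m/s, while a LEO-to-geosynchronous transfer costs roughly 3.9 km/s, which matches the familiar textbook figures.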
A Bayesian approach to microwave precipitation profile retrieval
NASA Technical Reports Server (NTRS)
Evans, K. Franklin; Turk, Joseph; Wong, Takmeng; Stephens, Graeme L.
1995-01-01
A multichannel passive microwave precipitation retrieval algorithm is developed. Bayes theorem is used to combine statistical information from numerical cloud models with forward radiative transfer modeling. A multivariate lognormal prior probability distribution contains the covariance information about hydrometeor distribution that resolves the nonuniqueness inherent in the inversion process. Hydrometeor profiles are retrieved by maximizing the posterior probability density for each vector of observations. The hydrometeor profile retrieval method is tested with data from the Advanced Microwave Precipitation Radiometer (10, 19, 37, and 85 GHz) of convection over ocean and land in Florida. The CP-2 multiparameter radar data are used to verify the retrieved profiles. The results show that the method can retrieve approximate hydrometeor profiles, with larger errors over land than water. There is considerably greater accuracy in the retrieval of integrated hydrometeor contents than of profiles. Many of the retrieval errors are traced to problems with the cloud model microphysical information, and future improvements to the algorithm are suggested.
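The maximization step described above — pick the hydrometeor profile whose posterior density, a Gaussian radiance likelihood times a multivariate lognormal prior, is largest — can be sketched as a discrete search over a candidate set. The toy forward model, discrete database, and function names below are assumptions for illustration; the paper works with cloud-model statistics and full radiative transfer.

```python
import numpy as np

def map_retrieval(obs, candidates, forward, obs_cov, prior_mean_log, prior_cov_log):
    """Return the candidate hydrometeor profile maximizing the log posterior:
    Gaussian likelihood of the radiance residual plus a multivariate
    lognormal prior on the (positive) hydrometeor amounts."""
    inv_R = np.linalg.inv(obs_cov)
    inv_S = np.linalg.inv(prior_cov_log)
    best, best_lp = None, -np.inf
    for x in candidates:
        r = obs - forward(x)               # radiance residual
        d = np.log(x) - prior_mean_log     # log-space deviation from prior mean
        lp = -0.5 * (r @ inv_R @ r) - 0.5 * (d @ inv_S @ d)
        if lp > best_lp:
            best, best_lp = x, lp
    return best
```

The lognormal prior is what injects the covariance information that, as the abstract notes, resolves the non-uniqueness of the inversion.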
Backfilling with guarantees granted upon job submission.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leung, Vitus Joseph; Bunde, David P.; Lindsay, Alexander M.
2011-01-01
In this paper, we present scheduling algorithms that simultaneously support guaranteed starting times and favor jobs with system-desired traits. To achieve the first of these goals, our algorithms keep a profile with potential starting times for every unfinished job and never move these starting times later, just as in Conservative Backfilling. To achieve the second, they exploit previously unrecognized flexibility in the handling of holes opened in this profile when jobs finish early. We find that, with one choice of job selection function, our algorithms can consistently yield a lower average waiting time than Conservative Backfilling while still providing a guaranteed start time to each job as it arrives. In fact, in most cases, the algorithms give a lower average waiting time than the more aggressive EASY backfilling algorithm, which does not provide guaranteed start times. Alternately, with a different choice of job selection function, our algorithms can focus the benefit on the widest submitted jobs, the reason for the existence of parallel systems. In this case, these jobs experience significantly lower waiting time than Conservative Backfilling with minimal impact on other jobs.
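The reservation step that underlies Conservative Backfilling — scan the usage profile for the earliest start at which a job's processors remain free for its whole duration — can be sketched as below. The profile representation (a sorted list of `(time, procs_in_use)` steps, the last extending indefinitely) is an assumption for illustration; the paper's hole-management policies are elided.

```python
def earliest_start(profile, duration, procs, total_procs):
    """Earliest guaranteed start time for a job needing `procs` processors
    for `duration`, given a usage profile of (time, procs_in_use) steps.
    Candidate starts are the profile step times; a start is feasible if the
    job fits in every step overlapping its execution window."""
    for i, (t, _) in enumerate(profile):
        ok = all(used + procs <= total_procs
                 for (s, used) in profile[i:]
                 if s < t + duration)
        if ok:
            return t
    return profile[-1][0]
```

Once a start is chosen, a real scheduler would insert the job into the profile so later arrivals see the updated reservations — the structure whose "holes" the paper's algorithms exploit when jobs finish early.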
Rainflow Algorithm-Based Lifetime Estimation of Power Semiconductors in Utility Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
GopiReddy, Lakshmi Reddy; Tolbert, Leon M.; Ozpineci, Burak
Rainflow algorithms are one of the popular counting methods used in fatigue and failure analysis in conjunction with semiconductor lifetime estimation models. However, the rainflow algorithm used in power semiconductor reliability does not consider the time-dependent mean temperature calculation. The equivalent temperature calculation proposed by Nagode et al. is applied to semiconductor lifetime estimation in this paper. A month-long arc furnace load profile is used as a test profile to estimate temperatures in insulated-gate bipolar transistors (IGBTs) in a STATCOM for reactive compensation of load. In conclusion, the degradation in the life of the IGBT power device is predicted based on time-dependent temperature calculation.
Rainflow Algorithm-Based Lifetime Estimation of Power Semiconductors in Utility Applications
GopiReddy, Lakshmi Reddy; Tolbert, Leon M.; Ozpineci, Burak; ...
2015-07-15
Rainflow algorithms are one of the popular counting methods used in fatigue and failure analysis in conjunction with semiconductor lifetime estimation models. However, the rainflow algorithm used in power semiconductor reliability does not consider the time-dependent mean temperature calculation. The equivalent temperature calculation proposed by Nagode et al. is applied to semiconductor lifetime estimation in this paper. A month-long arc furnace load profile is used as a test profile to estimate temperatures in insulated-gate bipolar transistors (IGBTs) in a STATCOM for reactive compensation of load. In conclusion, the degradation in the life of the IGBT power device is predicted based on time-dependent temperature calculation.
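Once rainflow counting has reduced a temperature history to (cycle amplitude, count) pairs, lifetime estimation typically proceeds by a cycles-to-failure model and Miner's linear damage accumulation. The sketch below illustrates that last step with a Coffin-Manson-style power law; the constants are placeholders, not device data, and the cycle counting itself (and the paper's mean-temperature correction) is elided.

```python
def coffin_manson_nf(delta_t, a=3.0e14, n=5.0):
    """Cycles to failure for a thermal swing delta_t (K) under a
    Coffin-Manson-style power law Nf = a * delta_t**(-n).
    The constants a and n are illustrative placeholders."""
    return a * delta_t ** (-n)

def miners_damage(cycle_counts):
    """Miner's rule: accumulate damage over rainflow-counted
    (delta_t, count) pairs; failure is predicted when damage reaches 1."""
    return sum(count / coffin_manson_nf(dt) for dt, count in cycle_counts)
```

If one month of operation produces damage d, the predicted lifetime is 1/d months of that load profile.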
Yet one more dwell time algorithm
NASA Astrophysics Data System (ADS)
Haberl, Alexander; Rascher, Rolf
2017-06-01
The current demand for ever more powerful and efficient microprocessors, e.g., for deep learning, has led to an ongoing trend of reducing the feature size of integrated circuits. These processors are patterned with EUV lithography, which enables 7 nm chips [1]. Producing mirrors which satisfy the needed requirements is a challenging task. Not only increasing requirements on the imaging properties, but also new lens shapes, such as aspheres or lenses with free-form surfaces, require innovative production processes. These lenses need the new deterministic sub-aperture polishing methods that have been established in the past few years. These polishing methods are characterized by an empirically determined tool influence function (TIF) and local stock removal. One such deterministic polishing method is ion-beam figuring (IBF). The beam profile of an ion beam is adjusted to a nearly ideal Gaussian shape by various parameters. With the known removal function, a dwell time profile can be generated for each measured error profile. Such a profile is always generated pixel-accurately for the predetermined error profile, always with the aim of minimizing the existing surface structures up to the cut-off frequency of the tool used [2]. The processing success of a correction-polishing run depends decisively on the accuracy of the previously computed dwell-time profile, so the algorithm used to calculate the dwell time has to reflect reality accurately. Furthermore, the machine operator should have no influence on the dwell-time calculation; consequently, there must be no free parameters that affect the calculation result. Lastly, it should require a minimum of machining time to reach a minimum of remaining error structures. Unfortunately, current dwell-time calculations are divergent and user-dependent, tend to create high processing times, and need several parameters to be set. This paper describes a realistic, convergent, and user-independent dwell-time algorithm.
Typical processing times are reduced to between about 80% and 50% of those of conventional algorithms (Lucy-Richardson, Van Cittert, …) as used in established machines. To verify its effectiveness, a plane surface was machined on an IBF.
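The underlying computation is a non-negative deconvolution: find a dwell-time profile whose convolution with the TIF reproduces the measured error profile. A minimal clipped fixed-point iteration in the Van Cittert family is sketched below — this is one of the conventional approaches the paper compares against, not the paper's own algorithm, and the relaxation factor is an assumed free parameter of exactly the kind the paper seeks to eliminate.

```python
import numpy as np

def dwell_time(error, tif, iters=3000, relax=0.5):
    """Solve error ≈ convolve(dwell, tif) for a non-negative dwell-time
    profile via a clipped Van Cittert-style fixed-point iteration:
    repeatedly add a fraction of the residual and clip negatives to zero."""
    dwell = np.zeros_like(error)
    for _ in range(iters):
        residual = error - np.convolve(dwell, tif, mode="same")
        dwell = np.maximum(dwell + relax * residual, 0.0)
    return dwell
```

With a normalized, Gaussian-like TIF the iteration converges for the smooth error profiles typical of figuring, but its sensitivity to `relax` and its slow high-frequency convergence illustrate why such schemes can be divergent or user-dependent in practice.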
A Partitioning Algorithm for Block-Diagonal Matrices With Overlap
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guy Antoine Atenekeng Kahou; Laura Grigori; Masha Sosonkina
2008-02-02
We present a graph partitioning algorithm that aims at partitioning a sparse matrix into a block-diagonal form, such that any two consecutive blocks overlap. We denote this form of the matrix as the overlapped block-diagonal matrix. The partitioned matrix is suitable for applying the explicit formulation of the Multiplicative Schwarz preconditioner (EFMS) described in [3]. The graph partitioning algorithm partitions the graph of the input matrix into K partitions, such that every partition {Omega}{sub i} has at most two neighbors {Omega}{sub i-1} and {Omega}{sub i+1}. First, an ordering algorithm, such as the reverse Cuthill-McKee algorithm, that reduces the matrix profile is performed. An initial overlapped block-diagonal partition is obtained from the profile of the matrix. An iterative strategy is then used to further refine the partitioning by allowing nodes to be transferred between neighboring partitions. Experiments are performed on matrices arising from real-world applications to show the feasibility and usefulness of this approach.
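The initial partition described above — cut a profile-reduced ordering into K consecutive blocks where each pair of neighbors shares a fixed overlap — can be sketched as follows. The indices are assumed to be already reordered (e.g., by reverse Cuthill-McKee); the iterative node-transfer refinement is elided.

```python
def overlapped_blocks(n, k, overlap):
    """Split indices 0..n-1 into k consecutive blocks such that each pair
    of neighboring blocks shares `overlap` indices and non-adjacent blocks
    are disjoint (so every partition has at most two neighbors)."""
    step = (n - overlap) // k          # advance of each block's start
    blocks = []
    start = 0
    for i in range(k):
        end = n if i == k - 1 else start + step + overlap
        blocks.append(list(range(start, end)))
        start += step
    return blocks
```

Because block i only intersects blocks i-1 and i+1, the resulting structure is exactly the overlapped block-diagonal form required by the EFMS preconditioner.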
Sequenza: allele-specific copy number and mutation profiles from tumor sequencing data.
Favero, F; Joshi, T; Marquard, A M; Birkbak, N J; Krzystanek, M; Li, Q; Szallasi, Z; Eklund, A C
2015-01-01
Exome or whole-genome deep sequencing of tumor DNA along with paired normal DNA can potentially provide a detailed picture of the somatic mutations that characterize the tumor. However, analysis of such sequence data can be complicated by the presence of normal cells in the tumor specimen, by intratumor heterogeneity, and by the sheer size of the raw data. In particular, determination of copy number variations from exome sequencing data alone has proven difficult; thus, single nucleotide polymorphism (SNP) arrays have often been used for this task. Recently, algorithms to estimate absolute, but not allele-specific, copy number profiles from tumor sequencing data have been described. We developed Sequenza, a software package that uses paired tumor-normal DNA sequencing data to estimate tumor cellularity and ploidy, and to calculate allele-specific copy number profiles and mutation profiles. We applied Sequenza, as well as two previously published algorithms, to exome sequence data from 30 tumors from The Cancer Genome Atlas. We assessed the performance of these algorithms by comparing their results with those generated using matched SNP arrays and processed by the allele-specific copy number analysis of tumors (ASCAT) algorithm. Comparison between Sequenza/exome and SNP/ASCAT revealed strong correlation in cellularity (Pearson's r = 0.90) and ploidy estimates (r = 0.42, or r = 0.94 after manually inspecting alternative solutions). This performance was noticeably superior to previously published algorithms. In addition, in artificial data simulating normal-tumor admixtures, Sequenza detected the correct ploidy in samples with tumor content as low as 30%. The agreement between Sequenza and SNP array-based copy number profiles suggests that exome sequencing alone is sufficient not only for identifying small-scale mutations but also for estimating cellularity and inferring DNA copy number aberrations. © The Author 2014.
Published by Oxford University Press on behalf of the European Society for Medical Oncology.
NASA Astrophysics Data System (ADS)
Petropavlovskikh, I.; Weatherhead, E.; Cede, A.; Oltmans, S. J.; Kireev, S.; Maillard, E.; Bhartia, P. K.; Flynn, L. E.
2005-12-01
The first NPOESS satellite is scheduled to be launched in 2010 and will carry the Ozone Mapping and Profiler Suite (OMPS) instruments for ozone monitoring. Prior to this, the OMPS instruments and algorithms will be tested by flight on the NPOESS/NPP satellite, scheduled for launch in 2008. Pre-launch planning for validation, post-launch data validation, and verification of the nadir and limb profile algorithms are key components for ensuring that the NPOESS will produce a high-quality, reliable ozone profile data set. The heritage of satellite instrument validation (TOMS, SBUV, GOME, SCIAMACHY, SAGE, HALOE, ATMOS, etc.) has always relied upon surface-based observations. While the global coverage of satellite observations is appealing for validating another satellite, there is no substitute for the hard reference point of a ground-based system such as the Dobson or Brewer network, whose instruments are routinely calibrated and intercompared to standard references. The standard solar occultation instruments, SAGE II and HALOE, are well beyond their planned lifetimes and might be inoperative during the OMPS period. The Umkehr network has been one of the key data sets for stratospheric ozone trend calculations and has earned its place as a benchmark network for stratospheric ozone profile observations. The normalization of measurements at different solar zenith angles (SZAs) to the measurement at the smallest SZA cancels out many calibration parameters, including the extra-terrestrial solar flux and the instrumental constant, thus providing a "self-calibrating" technique in the same manner relied upon by the occultation sensors on satellites. Moreover, the ground-based Umkehr measurement is the only technique that provides data with the same altitude resolution and in the same units (DU) as the UV-nadir instruments (SBUV-2, GOME-2, OMPS-nadir), i.e., as ozone amount in pressure layers, whereas occultation instruments measure ozone density with height.
A new Umkehr algorithm will enhance the information content of the retrieved profiles and extend the applicability of the technique. Automated Dobson and Brewer instruments offer the potential for greatly expanded network of Umkehr observations once the new algorithm is applied. We will discuss the new algorithm development and present results of its performance in comparisons of retrievals between co-located Brewer and Dobson ozone profiles measured at Arosa station in Switzerland.
NASA Technical Reports Server (NTRS)
Hoffman, Matthew J.; Eluszkiewicz, Janusz; Weisenstein, Deborah; Uymin, Gennady; Moncet, Jean-Luc
2012-01-01
Motivated by the needs of Mars data assimilation, particularly quantification of measurement errors and generation of averaging kernels, we have evaluated atmospheric temperature retrievals from Mars Global Surveyor (MGS) Thermal Emission Spectrometer (TES) radiances. Multiple sets of retrievals have been considered in this study: (1) retrievals available from the Planetary Data System (PDS), (2) retrievals based on variants of the retrieval algorithm used to generate the PDS retrievals, and (3) retrievals produced using the Mars 1-Dimensional Retrieval (M1R) algorithm based on the Optimal Spectral Sampling (OSS) forward model. The retrieved temperature profiles are compared to the MGS Radio Science (RS) temperature profiles. For the samples tested, the M1R temperature profiles can be made to agree within 2 K with the RS temperature profiles, but only after tuning the prior and error statistics. Use of a global prior that does not take into account the seasonal dependence leads to errors of up to 6 K. In polar samples, errors relative to the RS temperature profiles are even larger. In these samples, the PDS temperature profiles also exhibit a poor fit with RS temperatures. This fit is worse than reported in previous studies, indicating that the lack of fit is due to a bias correction to TES radiances implemented after 2004. To explain the differences between the PDS and M1R temperatures, the algorithms are compared directly, with the OSS forward model inserted into the PDS algorithm. Factors such as the filtering parameter, the use of linear versus nonlinear constrained inversion, and the choice of the forward model are found to contribute heavily to the differences in the temperature profiles retrieved in the polar regions, resulting in uncertainties of up to 6 K. Even outside the poles, changes in the a priori statistics result in different profile shapes which all fit the radiances within the specified error.
The importance of the a priori statistics prevents reliable global retrievals based on a single a priori and strongly implies that a robust science analysis must instead rely on retrievals employing localized a priori information, for example from an ensemble-based data assimilation system such as the Local Ensemble Transform Kalman Filter (LETKF).
Changes in prescribed doses for the Seattle neutron therapy system
NASA Astrophysics Data System (ADS)
Popescu, A.
2008-06-01
From the beginning of the neutron therapy program at the University of Washington Medical Center, the neutron dose distribution in tissue has been calculated using an in-house treatment planning system called PRISM. In order to increase the accuracy of the absorbed dose calculations, two main improvements were made to the PRISM treatment planning system: (a) the algorithm was changed by the addition of an analytical expression, developed at UWMC, for the dependence of the central axis wedge factor on field size and depth. Older versions of the treatment-planning algorithm used a constant central axis wedge factor; (b) a complete newly commissioned set of measured data was introduced in the latest version of PRISM. The new version of the PRISM algorithm allowed for the use of wedge profiles measured at different depths instead of one wedge profile measured at one depth. The comparison of the absorbed dose calculations using the old and the improved algorithm showed discrepancies, mainly due to the missing dependence of the central axis wedge factor on field size and depth and due to the absence of wedge profiles at depths other than 10 cm. This study concludes that the previously reported prescribed doses for neutron therapy should be changed.
An algorithm to estimate PBL heights from wind profiler data
NASA Astrophysics Data System (ADS)
Molod, A.; Salmun, H.
2016-12-01
An algorithm was developed to estimate planetary boundary layer (PBL) heights from hourly archived wind profiler data from the NOAA Profiler Network (NPN) sites located throughout the central United States for the period 1992-2012. The long period of record allows an analysis of climatological mean PBL heights as well as some estimates of year-to-year variability. Under clear conditions, summertime averaged hourly time series of PBL heights compare well with Richardson-number-based estimates at the few NPN stations with hourly temperature measurements. Comparisons with clear-sky MERRA estimates show that the wind profiler (WP) and the Richardson-number-based PBL heights are lower by approximately 250-500 m. The geographical distribution of daily maximum WP PBL heights corresponds well with the expected distribution based on patterns of surface temperature and soil moisture. Wind profiler PBL heights were also estimated under mostly cloudy conditions, but the WP estimates show a smaller clear-cloudy condition difference than either of the other two PBL height estimates. The algorithm presented here is shown to provide a reliable summer, fall and spring climatology of daytime hourly PBL heights throughout the central United States. The reliability of the algorithm has prompted its use to obtain hourly PBL heights from other archived wind profiler data located throughout the world.
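The Richardson-number-based estimates used for comparison are conventionally computed as the lowest level where the bulk Richardson number exceeds a critical value. A sketch under that conventional formulation (the critical value of 0.25 is a common choice, not necessarily the one used in this study):

```python
def bulk_richardson_pbl_height(z, theta_v, u, v, ri_crit=0.25, g=9.81):
    """Estimate PBL height as the lowest level where the bulk Richardson
    number, computed relative to the lowest level, exceeds ri_crit.

    z       : heights above ground (m), ascending
    theta_v : virtual potential temperature (K) at each level
    u, v    : wind components (m/s) at each level
    """
    for k in range(1, len(z)):
        shear2 = (u[k] - u[0]) ** 2 + (v[k] - v[0]) ** 2
        if shear2 == 0:
            continue  # undefined Ri_b; skip this level
        ri_b = g * (theta_v[k] - theta_v[0]) * (z[k] - z[0]) / (theta_v[0] * shear2)
        if ri_b > ri_crit:
            return z[k]
    return z[-1]  # PBL top not found below the highest level
```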
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.
2004-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating/drying profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and non-convective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud resolving model simulations, and from the Bayesian formulation itself. Synthetic rain rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in instantaneous rain rate estimates at 0.5 deg resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. These errors represent about 70-90% of the mean random deviation between collocated passive microwave and spaceborne radar rain rate estimates. The cumulative algorithm error in TMI estimates at monthly, 2.5 deg resolution is relatively small (less than 6% at 5 mm/day) compared to the random error due to infrequent satellite temporal sampling (8-35% at the same rain rate).
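The search-and-composite step described above can be sketched as a posterior-weighted average over database profiles, with each profile weighted by how radiatively consistent it is with the observation. This is a minimal illustration with a Gaussian radiance misfit; the names and simplifications are ours:

```python
import numpy as np

def bayesian_composite(y_obs, db_radiances, db_profiles, S_e_inv):
    """Composite database profiles weighted by radiative consistency.

    y_obs        : observed radiance vector (nchan,)
    db_radiances : simulated radiances for each database entry (n, nchan)
    db_profiles  : cloud/heating profiles for each entry (n, nlev)
    S_e_inv      : inverse of the radiance-error covariance (nchan, nchan)
    """
    d = db_radiances - y_obs                    # (n, nchan) misfits
    chi2 = np.einsum('ij,jk,ik->i', d, S_e_inv, d)
    w = np.exp(-0.5 * (chi2 - chi2.min()))      # unnormalized Bayes weights
    w /= w.sum()
    return w @ db_profiles                      # best-estimate profile
```

Subtracting chi2.min() before exponentiating is a standard numerical guard against underflow when misfits are large.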
Bricault, Ivan; Ferretti, Gilbert
2005-01-01
While multislice spiral computed tomography (CT) scanners are provided by all major manufacturers, their specific interpolation algorithms have been rarely evaluated. Because the results published so far relate to distinct particular cases and differ significantly, there are contradictory recommendations about the choice of pitch in clinical practice. In this paper, we present a new tool for the evaluation of multislice spiral CT z-interpolation algorithms, and apply it to the four-slice case. Our software is based on the computation of a "Weighted Radiation Profile" (WRP), and compares WRP to an expected ideal profile in terms of widening and heterogeneity. It provides a unique scheme for analyzing a large variety of spiral CT acquisition procedures. Freely chosen parameters include: number of detector rows, detector collimation, nominal slice width, helical pitch, and interpolation algorithm with any filter shape and width. Moreover, it is possible to study any longitudinal and off-isocenter positions. Theoretical and experimental results show that WRP, more than Slice Sensitivity Profile (SSP), provides a comprehensive characterization of interpolation algorithms. WRP analysis demonstrates that commonly "preferred helical pitches" are actually nonoptimal regarding the formerly distinguished z-sampling gap reduction criterion. It is also shown that "narrow filter" interpolation algorithms do not enable a general preferred pitch discussion, since they present poor properties with large longitudinal and off-center variations. In the more stable case of "wide filter" interpolation algorithms, SSP width or WRP widening are shown to be almost constant. Therefore, optimal properties should no longer be sought in terms of these criteria. On the contrary, WRP heterogeneity is related to variable artifact phenomena and can pertinently characterize optimal pitches. In particular, the exemplary interpolation properties of pitch = 1 "wide filter" mode are demonstrated.
PDE Nozzle Optimization Using a Genetic Algorithm
NASA Technical Reports Server (NTRS)
Billings, Dana; Turner, James E. (Technical Monitor)
2000-01-01
Genetic algorithms, which simulate evolution in natural systems, have been used to find solutions to optimization problems that seem intractable to standard approaches. In this study, the feasibility of using a genetic algorithm (GA) to find an optimum fixed-profile nozzle for a pulse detonation engine (PDE) is demonstrated. The objective was to maximize impulse during the detonation wave passage and blow-down phases of operation. The impulse of each profile variant was obtained by using the CFD code Mozart/2.0 to simulate the transient flow. After 7 generations, the method identified a nozzle profile that is a strong candidate for the optimum solution. The constraints on the generality of this possible solution remain to be clarified.
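The selection/crossover/mutation loop behind such a study can be sketched generically. Here the fitness function stands in for the expensive CFD impulse evaluation; the operator choices (truncation selection, one-point crossover, Gaussian mutation) and all parameter values are illustrative, not taken from the paper:

```python
import random

def genetic_optimize(fitness, n_genes, pop_size=30, generations=7,
                     mutation_rate=0.1, seed=0):
    """Minimal genetic algorithm maximizing `fitness` over [0,1]^n_genes."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 1.0) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # best individuals first
        parents = pop[:pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_genes)
            child = a[:cut] + b[cut:]              # one-point crossover
            for i in range(n_genes):               # Gaussian mutation
                if rng.random() < mutation_rate:
                    child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children                   # elitist replacement
    return max(pop, key=fitness)
```

In the paper's setting each gene would parameterize the nozzle profile shape and the fitness call would launch a transient-flow simulation.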
NASA Technical Reports Server (NTRS)
Aires, F.; Rossow, W. B.; Scott, N. A.; Chedin, A.; Hansen, James E. (Technical Monitor)
2001-01-01
A fast temperature, water vapor, and ozone atmospheric profile retrieval algorithm is developed for the high-spectral-resolution Infrared Atmospheric Sounding Interferometer (IASI) space-borne instrument. Compression and de-noising of IASI observations are performed using Principal Component Analysis. This preprocessing methodology also allows for a fast pattern recognition in a climatological data set to obtain a first guess. Then, a neural network using first-guess information is developed to retrieve simultaneously temperature, water vapor, and ozone atmospheric profiles. The performance of the resulting fast and accurate inverse model is evaluated with a large, diversified data set of radiosonde atmospheres including rare events.
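The PCA compression/de-noising step amounts to projecting each spectrum onto the leading principal components and reconstructing from them. A minimal sketch (the actual IASI processing is more involved; the truncation level is a free choice):

```python
import numpy as np

def pca_compress_denoise(spectra, n_components):
    """Compress and de-noise spectra via a truncated PCA basis.

    spectra      : (n_samples, n_channels) array of observations
    n_components : number of leading principal components to keep
    Returns (scores, reconstructed) -- the compressed representation
    and the de-noised spectra.
    """
    mean = spectra.mean(axis=0)
    X = spectra - mean
    # SVD of the centered spectra; rows of Vt are principal components
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = Vt[:n_components]
    scores = X @ pcs.T                   # compressed representation
    reconstructed = scores @ pcs + mean  # projection back to channel space
    return scores, reconstructed
```

Noise that is uncorrelated across channels mostly falls outside the leading components, which is why the reconstruction is smoother than the raw spectra.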
Iterative algorithms for a non-linear inverse problem in atmospheric lidar
NASA Astrophysics Data System (ADS)
Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto
2017-08-01
We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative, and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but fail to take the noise statistics into account. In this study we show that proper modelling of the noise distribution can substantially improve the quality of the reconstructed extinction profiles. To achieve this goal, we consider the non-linear inverse problem with the non-negativity constraint, and propose two iterative algorithms derived using the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.
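The paper's algorithms handle the exponential-linear forward model; the flavor of a KKT-derived multiplicative iteration for Poisson data is easier to see in the classic linear case, where it reduces to the MLEM / Richardson-Lucy update. This linear sketch is our simplification, not the authors' algorithm:

```python
import numpy as np

def mlem_poisson(y, A, n_iter=200):
    """Multiplicative (MLEM / Richardson-Lucy-type) iteration for
    y ~ Poisson(A x) with x >= 0 and A having non-negative entries.

    The update x <- x * (A^T (y / Ax)) / (A^T 1) follows from the KKT
    conditions of the Poisson log-likelihood and preserves non-negativity
    automatically -- the structural point the paper exploits for its
    exponential-linear model.
    """
    x = np.ones(A.shape[1])                  # strictly positive start
    ones = np.ones_like(y)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)  # guard against division by 0
        x *= (A.T @ ratio) / (A.T @ ones)
    return x
```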
Predicting drug-target interactions by dual-network integrated logistic matrix factorization
NASA Astrophysics Data System (ADS)
Hao, Ming; Bryant, Stephen H.; Wang, Yanli
2017-01-01
In this work, we propose a dual-network integrated logistic matrix factorization (DNILMF) algorithm to predict potential drug-target interactions (DTI). The prediction procedure consists of four steps: (1) inferring new drug/target profiles and constructing the profile kernel matrix; (2) diffusing the drug profile kernel matrix with the drug structure kernel matrix; (3) diffusing the target profile kernel matrix with the target sequence kernel matrix; and (4) building the DNILMF model and smoothing new drug/target predictions based on their neighbors. We compare our algorithm with the state-of-the-art method on the benchmark dataset. Results indicate that the DNILMF algorithm outperforms the previously reported approaches in terms of AUPR (area under precision-recall curve) and AUC (area under curve of receiver operating characteristic) based on 5 trials of 10-fold cross-validation. We conclude that the performance improvement depends not only on the proposed objective function, but also on the nonlinear diffusion technique used, which is important but understudied in the DTI prediction field. In addition, we also compile a new DTI dataset to increase the diversity of currently available benchmark datasets. The top prediction results for the new dataset are confirmed by experimental studies or supported by other computational research.
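The core of step (4) builds on plain logistic matrix factorization, where the probability of an interaction is a sigmoid of a low-rank inner product. A bare-bones version for a binary drug-target matrix follows; the kernel diffusion and neighbor smoothing that distinguish DNILMF are omitted, and all parameter values are illustrative:

```python
import numpy as np

def logistic_mf(Y, k=4, n_iter=1000, lr=0.05, reg=0.05, seed=0):
    """Plain logistic matrix factorization for a binary matrix Y
    (drugs x targets): P(interaction) = sigmoid(U V^T).

    Gradient ascent on the Bernoulli log-likelihood with L2 penalty.
    """
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    U = 0.1 * rng.standard_normal((n, k))
    V = 0.1 * rng.standard_normal((m, k))
    for _ in range(n_iter):
        P = 1.0 / (1.0 + np.exp(-(U @ V.T)))  # predicted probabilities
        E = Y - P                              # residual = likelihood gradient
        U += lr * (E @ V - reg * U)
        V += lr * (E.T @ U - reg * V)
    return 1.0 / (1.0 + np.exp(-(U @ V.T)))
```

DNILMF replaces the raw factors' similarity structure with diffused kernel matrices before this optimization, which is where the reported performance gain comes from.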
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao, Wei-Kuo; Takayabu, Yukari N.; Lang, Steve
Yanai et al. (1973) utilized the meteorological data collected from a sounding network to present a pioneering work on thermodynamic budgets, which are referred to as the apparent heat source (Q1) and apparent moisture sink (Q2). Latent heating (LH) is one of the most dominant terms in Q1. Yanai’s paper motivated the development of satellite-based LH algorithms and provided a theoretical background for imposing large-scale advective forcing into cloud-resolving models (CRMs). These CRM-simulated LH and Q1 data have been used to generate the look-up tables in Tropical Rainfall Measuring Mission (TRMM) LH algorithms. A set of algorithms developed for retrieving LH profiles from TRMM-based rainfall profiles is described and evaluated, including details concerning their intrinsic space-time resolutions. Included in the paper are results from a variety of validation analyses that define the uncertainty of the LH profile estimates. Also provided are examples of how TRMM-retrieved LH profiles have been used to understand the lifecycle of the MJO and improve the predictions of global weather and climate models, as well as comparisons with large-scale analyses. Areas for further improvement of the TRMM products are discussed.
NASA Astrophysics Data System (ADS)
Zawada, Daniel J.; Rieger, Landon A.; Bourassa, Adam E.; Degenstein, Douglas A.
2018-04-01
Measurements of limb-scattered sunlight from the Ozone Mapping and Profiler Suite Limb Profiler (OMPS-LP) can be used to obtain vertical profiles of ozone in the stratosphere. In this paper we describe a two-dimensional, or tomographic, retrieval algorithm for OMPS-LP where variations are retrieved simultaneously in altitude and the along-orbital-track dimension. The algorithm has been applied to measurements from the center slit for the full OMPS-LP mission to create the publicly available University of Saskatchewan (USask) OMPS-LP 2D v1.0.2 dataset. Tropical ozone anomalies are compared with measurements from the Microwave Limb Sounder (MLS), where differences are less than 5 % of the mean ozone value for the majority of the stratosphere. Examples of near-coincident measurements with MLS are also shown, and agreement at the 5 % level is observed for the majority of the stratosphere. Both simulated retrievals and coincident comparisons with MLS are shown at the edge of the polar vortex, comparing the results to a traditional one-dimensional retrieval. The one-dimensional retrieval is shown to consistently overestimate the amount of ozone in areas of large horizontal gradients relative to both MLS and the two-dimensional retrieval.
NASA Astrophysics Data System (ADS)
Lipton, A.; Moncet, J. L.; Lynch, R.; Payne, V.; Alvarado, M. J.
2016-12-01
We will present results from an algorithm that is being developed to produce climate-quality atmospheric profiling earth system data records (ESDRs) for application to data from hyperspectral sounding instruments, including the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua and the Cross-track Infrared Sounder (CrIS) on Suomi-NPP, along with their companion microwave sounders, AMSU and ATMS, respectively. The ESDR algorithm uses an optimal estimation approach and the implementation has a flexible, modular software structure to support experimentation and collaboration. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. For analysis of satellite profiles over multi-decade periods, a concern is that the algorithm could respond inadequately to climate change if it uses a static background as a retrieval constraint, leading to retrievals that underestimate secular changes over extended periods of time and become biased toward an outdated climatology. We assessed the ability of our algorithm to respond appropriately to changes in temperature and water vapor profiles associated with climate change and, in particular, the impact of using a climatological background in retrievals when the climatology is not static. We simulated a scenario wherein our algorithm processes 30 years of data from CrIS and ATMS (CrIMSS) with a static background based on data from the start of the 30-year period. We performed simulations using products from the Coupled Model Intercomparison Project 5 (CMIP5), in particular the "representative concentration pathways" midrange emissions (RCP4.5) scenario from the GISS-E2-R model. We will present results indicating that regularization using empirical orthogonal functions (EOFs) from a 30-year-outdated covariance had a negligible effect on results.
For temperature, the secular change is represented with high fidelity with the CrIMSS retrievals. For water vapor, an outdated background adds distortion to the secular moistening trend in the troposphere only above 300 mb, where the sensor information content is less than at lower levels. We will also present results illustrating the consistency between retrievals from near-simultaneous AIRS and CrIMSS measurements.
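One concrete step behind EOF-based regularization is extracting the leading EOFs from a background covariance, i.e. its dominant eigenvectors. A minimal sketch (names ours):

```python
import numpy as np

def leading_eofs(cov, n_modes):
    """Return the n_modes leading empirical orthogonal functions (EOFs)
    of a background covariance matrix: the eigenvectors associated with
    the largest eigenvalues, plus those eigenvalues.
    """
    vals, vecs = np.linalg.eigh(cov)            # ascending eigenvalues
    order = np.argsort(vals)[::-1][:n_modes]    # pick the largest modes
    return vecs[:, order], vals[order]
```

Restricting the retrieval increment to the span of these modes is what ties the solution to the (possibly outdated) climatology, which is exactly the sensitivity the abstract examines.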
Uniform, optimal signal processing of mapped deep-sequencing data.
Kumar, Vibhor; Muratani, Masafumi; Rayan, Nirmala Arul; Kraus, Petra; Lufkin, Thomas; Ng, Huck Hui; Prabhakar, Shyam
2013-07-01
Despite their apparent diversity, many problems in the analysis of high-throughput sequencing data are merely special cases of two general problems, signal detection and signal estimation. Here we adapt formally optimal solutions from signal processing theory to analyze signals of DNA sequence reads mapped to a genome. We describe DFilter, a detection algorithm that identifies regulatory features in ChIP-seq, DNase-seq and FAIRE-seq data more accurately than assay-specific algorithms. We also describe EFilter, an estimation algorithm that accurately predicts mRNA levels from as few as 1-2 histone profiles (R ∼0.9). Notably, the presence of regulatory motifs in promoters correlates more with histone modifications than with mRNA levels, suggesting that histone profiles are more predictive of cis-regulatory mechanisms. We show by applying DFilter and EFilter to embryonic forebrain ChIP-seq data that regulatory protein identification and functional annotation are feasible despite tissue heterogeneity. The mathematical formalism underlying our tools facilitates integrative analysis of data from virtually any sequencing-based functional profile.
From the SAIN,LIM system to the SENS algorithm: a review of a French approach of nutrient profiling.
Tharrey, Marion; Maillot, Matthieu; Azaïs-Braesco, Véronique; Darmon, Nicole
2017-08-01
Nutrient profiling aims to classify or rank foods according to their nutritional composition to assist policies aimed at improving the nutritional quality of foods and diets. The present paper reviews a French approach of nutrient profiling by describing the SAIN,LIM system and its evolution from its early draft to the simplified nutrition labelling system (SENS) algorithm. Considered in 2010 by WHO as the 'French model' of nutrient profiling, SAIN,LIM classifies foods into four classes based on two scores: a nutrient density score (NDS) called SAIN and a score of nutrients to limit called LIM, and one threshold on each score. The system was first developed by the French Food Standard Agency in 2008 in response to the European regulation on nutrition and health claims (European Commission (EC) 1924/2006) to determine foods that may be eligible for bearing claims. Recently, the European regulation (EC 1169/2011) on the provision of food information to consumers allowed simplified nutrition labelling to facilitate consumer information and help them make fully informed choices. In that context, the SAIN,LIM was adapted to obtain the SENS algorithm, a system able to rank foods for simplified nutrition labelling. The implementation of the algorithm followed a step-by-step, systematic, transparent and logical process where shortcomings of the SAIN,LIM were addressed by integrating specificities of food categories in the SENS, reducing the number of nutrients, ordering the four classes and introducing European reference intakes. Through the French example, this review shows how an existing nutrient profiling system can be specifically adapted to support public health nutrition policies.
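The four-class logic of SAIN,LIM (two scores, one threshold on each) is simple enough to state directly. The threshold values below are the ones commonly cited for the original system; treat them, and the class numbering, as illustrative:

```python
def sain_lim_class(sain, lim, sain_threshold=5.0, lim_threshold=7.5):
    """Classify a food into one of the four SAIN,LIM classes from its
    nutrient-density score (SAIN) and its score of nutrients to limit (LIM).
    """
    if sain >= sain_threshold and lim <= lim_threshold:
        return 1  # favourable on both scores
    if sain >= sain_threshold:
        return 2  # high nutrient density, but high nutrients-to-limit
    if lim <= lim_threshold:
        return 3  # low nutrient density, but low nutrients-to-limit
    return 4      # unfavourable on both scores
```

The SENS evolution described in the review then orders such classes for labelling and adds food-category specificities, which this sketch does not attempt.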
NASA Astrophysics Data System (ADS)
Kikuchi, N.; Yoshida, Y.; Uchino, O.; Morino, I.; Yokota, T.
2016-11-01
We present an algorithm for retrieving column-averaged dry air mole fraction of carbon dioxide (XCO2) and methane (XCH4) from reflected spectra in the shortwave infrared (SWIR) measured by the TANSO-FTS (Thermal And Near infrared Sensor for carbon Observation Fourier Transform Spectrometer) sensor on board the Greenhouse gases Observing SATellite (GOSAT). The algorithm uses the two linear polarizations observed by TANSO-FTS to improve corrections to the interference effects of atmospheric aerosols, which degrade the accuracy in the retrieved greenhouse gas concentrations. To account for polarization by the land surface reflection in the forward model, we introduced a bidirectional reflection matrix model that has two parameters to be retrieved simultaneously with other state parameters. The accuracy in XCO2 and XCH4 values retrieved with the algorithm was evaluated by using simulated retrievals over both land and ocean, focusing on the capability of the algorithm to correct imperfect prior knowledge of aerosols. To do this, we first generated simulated TANSO-FTS spectra using a global distribution of aerosols computed by the aerosol transport model SPRINTARS. Then the simulated spectra were submitted to the algorithms as measurements both with and without polarization information, adopting a priori profiles of aerosols that differ from the true profiles. We found that the accuracy of XCO2 and XCH4, as well as profiles of aerosols, retrieved with polarization information was considerably improved over values retrieved without polarization information, for simulated observations over land with aerosol optical thickness greater than 0.1 at 1.6 μm.
Finger tracking for hand-held device interface using profile-matching stereo vision
NASA Astrophysics Data System (ADS)
Chang, Yung-Ping; Lee, Dah-Jye; Moore, Jason; Desai, Alok; Tippetts, Beau
2013-01-01
Hundreds of millions of people use hand-held devices frequently and control them by touching the screen with their fingers. When drivers use this method of operation, the probability of accidents and deaths increases substantially. With a non-contact control interface, people do not need to touch the screen; as a result, they need not pay as much attention to their phones and can drive more safely. Such an interface can be achieved with real-time stereo vision. A novel Intensity Profile Shape-Matching Algorithm is able to obtain 3-D information from a pair of stereo images in real time. Although this algorithm involves a trade-off between accuracy and processing speed, its accuracy proves sufficient for the practical tasks of recognizing human poses and tracking finger movement. By choosing an interval of disparity, an object within a certain distance range can be segmented; in other words, we detect the object by its distance to the cameras. The advantage of this profile shape-matching algorithm is that the detection of correspondences relies on the shape of the profile rather than on intensity values, which are subject to lighting variations. Based on the resulting 3-D information, the movement of fingers in space at a specific distance can be determined. Finger location and movement can then be analyzed for non-contact control of hand-held devices.
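The disparity-interval segmentation described above rests on the pinhole stereo relation Z = f·B/d: larger disparity means a closer object. A minimal sketch (parameter names ours):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Depth from stereo disparity: Z = f * B / d (pinhole model).
    focal_px is the focal length in pixels, baseline_m the camera
    separation in meters; disparity is in pixels."""
    return focal_px * baseline_m / np.maximum(disparity, 1e-6)

def segment_by_distance(disparity_map, focal_px, baseline_m, z_min, z_max):
    """Boolean mask of pixels whose depth lies in [z_min, z_max] meters.
    This selects an object (e.g. a finger held near the device) purely
    by its distance to the cameras, as the disparity-interval
    segmentation in the paper does."""
    z = disparity_to_depth(disparity_map, focal_px, baseline_m)
    return (z >= z_min) & (z <= z_max)
```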
Ocean Observations with EOS/MODIS: Algorithm Development and Post Launch Studies
NASA Technical Reports Server (NTRS)
Gordon, Howard R.; Conboy, B. (Technical Monitor)
1999-01-01
Significant accomplishments made during the present reporting period include: 1) Installed spectral optimization algorithm in the SeaDas image processing environment and successfully processed SeaWiFS imagery. The results were superior to the standard SeaWiFS algorithm (the MODIS prototype) in a turbid atmosphere off the US East Coast, but similar in a clear (typical) oceanic atmosphere; 2) Inverted ACE-2 LIDAR measurements coupled with sun photometer-derived aerosol optical thickness to obtain the vertical profile of aerosol optical thickness. The profile was validated with simultaneous aircraft measurements; and 3) Obtained LIDAR and CIMEL measurements of typical maritime and mineral dust-dominated marine atmosphere in the U.S. Virgin Islands. Contemporaneous SeaWiFS imagery were also acquired.
NASA Astrophysics Data System (ADS)
Jimenez, Jose Ramón; González Anera, Rosario; Jiménez del Barco, Luis; Hita, Enrique; Pérez-Ocón, Francisco
2005-01-01
We provide a correction factor to be added in ablation algorithms when a Gaussian beam is used in photorefractive laser surgery. This factor, which quantifies the effect of pulse overlapping, depends on beam radius and spot size. We also deduce the expected post-surgical corneal radius and asphericity when considering this factor. Data on 141 eyes operated on with LASIK (laser in situ keratomileusis) using a Gaussian profile show that the discrepancy between experimental and expected data on corneal power is significantly lower when the correction factor is used. For an effective improvement of post-surgical visual quality, this factor should be applied in ablation algorithms that do not consider the effects of pulse overlapping with a Gaussian beam.
Strategic Control Algorithm Development : Volume 1. Summary.
DOT National Transportation Integrated Search
1974-08-01
Strategic control is an air traffic management concept wherein a central control authority determines, and assigns to each participating airplane, a conflict-free, four-dimensional route-time profile. The route-time profile assignments are long term ...
NASA Astrophysics Data System (ADS)
Lisitsa, Y. V.; Yatskou, M. M.; Apanasovich, V. V.; Apanasovich, T. V.
2015-09-01
We have developed an algorithm for segmentation of cancer cell nuclei in three-channel luminescent images of microbiological specimens. The algorithm is based on using a correlation between fluorescence signals in the detection channels for object segmentation, which permits complete automation of the data analysis procedure. We have carried out a comparative analysis of the proposed method and conventional algorithms implemented in the CellProfiler and ImageJ software packages. Our algorithm has an object localization uncertainty which is 2-3 times smaller than for the conventional algorithms, with comparable segmentation accuracy.
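The key idea above, using the correlation between fluorescence signals across detection channels, can be illustrated with a local (windowed) Pearson correlation map; pixels where channels co-vary strongly mark candidate objects. This is a sketch of the general technique, not the authors' exact procedure:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def channel_correlation_map(img_a, img_b, win=5):
    """Local Pearson correlation between two fluorescence channels,
    computed over a sliding win x win window. Output has shape
    (H - win + 1, W - win + 1)."""
    wa = sliding_window_view(img_a, (win, win))
    wb = sliding_window_view(img_b, (win, win))
    ma = wa.mean(axis=(-1, -2), keepdims=True)
    mb = wb.mean(axis=(-1, -2), keepdims=True)
    cov = ((wa - ma) * (wb - mb)).mean(axis=(-1, -2))
    sa = wa.std(axis=(-1, -2))
    sb = wb.std(axis=(-1, -2))
    return cov / np.maximum(sa * sb, 1e-12)  # guard flat windows
```

Thresholding such a map gives a fully automatic segmentation criterion, which is the property the paper's method exploits.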
An Algorithm For Climate-Quality Atmospheric Profiling Continuity From EOS Aqua To Suomi-NPP
NASA Astrophysics Data System (ADS)
Moncet, J. L.
2015-12-01
We will present results from an algorithm that is being developed to produce climate-quality atmospheric profiling earth system data records (ESDRs) for application to hyperspectral sounding instrument data from Suomi-NPP, EOS Aqua, and other spacecraft. The current focus is on data from the S-NPP Cross-track Infrared Sounder (CrIS) and Advanced Technology Microwave Sounder (ATMS) instruments as well as the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua. The algorithm development at Atmospheric and Environmental Research (AER) has common heritage with the optimal estimation (OE) algorithm operationally processing S-NPP data in the Interface Data Processing Segment (IDPS), but the ESDR algorithm has a flexible, modular software structure to support experimentation and collaboration and has several features adapted to the climate orientation of ESDRs. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. The radiative transfer component uses an enhanced version of optimal spectral sampling (OSS) with updated spectroscopy, treatment of emission that is not in local thermodynamic equilibrium (non-LTE), efficiency gains with "global" optimal sampling over all channels, and support for channel selection. The algorithm is designed for adaptive treatment of clouds, with capability to apply "cloud clearing" or simultaneous cloud parameter retrieval, depending on conditions. We will present retrieval results demonstrating the impact of a new capability to perform the retrievals on sigma or hybrid vertical grid (as opposed to a fixed pressure grid), which particularly affects profile accuracy over land with variable terrain height and with sharp vertical structure near the surface. In addition, we will show impacts of alternative treatments of regularization of the inversion. 
While OE algorithms typically implement regularization by using background estimates from climatological or numerical forecast data, those sources are problematic for climate applications due to the imprint of biases from past climate analyses or from model error.
Han, Wenhua; Shen, Xiaohui; Xu, Jun; Wang, Ping; Tian, Guiyun; Wu, Zhengyang
2014-01-01
Magnetic flux leakage (MFL) inspection is one of the most important and sensitive nondestructive testing approaches. For online MFL inspection of a long-range railway track or oil pipeline, a fast and effective defect profile estimating method based on a multi-power affine projection algorithm (MAPA) is proposed, where the depth of a sampling point is related with not only the MFL signals before it, but also the ones after it, and all of the sampling points related to one point appear as serials or multi-power. Defect profile estimation has two steps: regulating a weight vector in an MAPA filter and estimating a defect profile with the MAPA filter. Both simulation and experimental data are used to test the performance of the proposed method. The results demonstrate that the proposed method exhibits high speed while maintaining the estimated profiles clearly close to the desired ones in a noisy environment, thereby meeting the demand of accurate online inspection. PMID:25192314
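The multi-power variant described above builds on the basic affine projection algorithm (APA) update, which reuses the K most recent regressor vectors per step. A standard APA sketch for FIR system identification follows; this is the textbook update, not the paper's MAPA, and all parameter values are illustrative:

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, eps=1e-6):
    """One affine projection step.
    X : (K, N) matrix of the K most recent input (regressor) vectors
    d : (K,) corresponding desired outputs
    """
    e = d - X @ w                                   # a priori errors
    # project the error back through the K regressors (regularized)
    w = w + mu * X.T @ np.linalg.solve(X @ X.T + eps * np.eye(len(d)), e)
    return w

def apa_identify(x, d, order, K=4, mu=0.5, n_pass=5):
    """Identify an order-tap FIR system from input x and output d."""
    w = np.zeros(order)
    for _ in range(n_pass):
        for n in range(order + K - 1, len(x)):
            # row j is the regressor [x[n-j], ..., x[n-j-order+1]]
            X = np.array([x[n - j - order + 1:n - j + 1][::-1]
                          for j in range(K)])
            w = apa_update(w, X, d[n - K + 1:n + 1][::-1], mu=mu)
    return w
```

Using several regressors per update is what gives APA-family filters their fast convergence on correlated signals, the property the MFL application relies on for online speed.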
NASA Astrophysics Data System (ADS)
Ilhan, Z.; Wehner, W. P.; Schuster, E.; Boyer, M. D.; Gates, D. A.; Gerhardt, S.; Menard, J.
2015-11-01
Active control of the toroidal current density profile is crucial to achieve and maintain high-performance, MHD-stable plasma operation in NSTX-U. A first-principles-driven, control-oriented model describing the temporal evolution of the current profile has been proposed earlier by combining the magnetic diffusion equation with empirical correlations obtained at NSTX-U for the electron density, electron temperature, and non-inductive current drives. A feedforward + feedback control scheme for the regulation of the current profile is constructed by embedding the proposed nonlinear, physics-based model into the control design process. Firstly, nonlinear optimization techniques are used to design feedforward actuator trajectories that steer the plasma to a desired operating state with the objective of supporting the traditional trial-and-error experimental process of advanced scenario planning. Secondly, a feedback control algorithm to track a desired current profile evolution is developed with the goal of adding robustness to the overall control scheme. The effectiveness of the combined feedforward + feedback control algorithm for current profile regulation is tested in predictive simulations carried out in TRANSP. Supported by PPPL.
Retrieval of Atmospheric Water Vapor Profiles from the Special Sensor Microwave TEMPERATURE-2
NASA Astrophysics Data System (ADS)
Al-Khalaf, Abdulrahman Khal
1995-01-01
Radiometric measurements from the Special Sensor Microwave/Temperature-2 (SSM/T-2) instrument are used to retrieve atmospheric water vapor profiles over ocean, land, coast, and ice/snow backgrounds. These measurements are used to retrieve the vertical distribution of integrated water vapor (IWV) and total integrated water vapor (TIWV) using a physical algorithm. The algorithm infers the presence of cloud at a given height from super-saturation of the retrieved humidity at that height; the algorithm then estimates the cloud liquid water content. Retrievals of IWV over five different layers are validated against available ground truth such as global radiosondes and ECMWF analyses. Over ocean, the retrieved total integrated water vapor (TIWV) and IWV close to the surface compare quite well with those from radiosonde observations and the European Center for Medium Range Weather Forecasts (ECMWF) analyses. However, comparisons to radiosonde results are better than those to ECMWF analyses: for the radiosonde comparisons, the TIWV root mean square (RMS) difference was 5.95 mm and the IWV RMS difference for the lowest layer (SFC-850 mb) was 2.8 mm. Water vapor retrieval over land is less accurate than over ocean due to the low contrast between the surface and the atmosphere near the surface; therefore, land retrievals are more reliable for layers above 700 mb. However, TIWV and IWV at all layers compare reasonably with ground truth. Over coastal areas the agreement between retrieved water vapor profiles and ground truth is quite good for both TIWV and IWV for the five layers. The natural variability and large variations in the surface emissivity over ice and snow fields lead to poor results. Clouds degrade retrievals over land and coast, improve the retrievals slightly over ocean, and improve them dramatically over snow/ice. Examples of retrieved relative humidity profiles are shown to illustrate the algorithm performance for the actual profile retrieval.
The overall features of the retrieved profiles compared well with those from radiosonde data and ECMWF analyses. However, due to the limited number of channels, the retrieved profiles generally do not reproduce the fine details when a rapid change in relative humidity versus height was observed.
A 3D Cloud-Construction Algorithm for the EarthCARE Satellite Mission
NASA Technical Reports Server (NTRS)
Barker, H. W.; Jerg, M. P.; Wehr, T.; Kato, S.; Donovan, D. P.; Hogan, R. J.
2011-01-01
This article presents and assesses an algorithm that constructs 3D distributions of cloud from passive satellite imagery and collocated 2D nadir profiles of cloud properties inferred synergistically from lidar, cloud radar and imager data.
NASA Technical Reports Server (NTRS)
Li, Can; Krotkov, Nickolay A.; Carn, Simon; Zhang, Yan; Spurr, Robert J. D.; Joiner, Joanna
2017-01-01
Since the fall of 2004, the Ozone Monitoring Instrument (OMI) has been providing global monitoring of volcanic SO2 emissions, helping to understand their climate impacts and to mitigate aviation hazards. Here we introduce a new-generation OMI volcanic SO2 dataset based on a principal component analysis (PCA) retrieval technique. To reduce the retrieval noise and artifacts seen in the current operational linear fit (LF) algorithm, the new algorithm, OMSO2VOLCANO, uses characteristic features extracted directly from OMI radiances in the spectral fitting, thereby helping to minimize interferences from various geophysical processes (e.g., O3 absorption) and measurement details (e.g., wavelength shift). To solve the problem of low bias for large SO2 total columns in the LF product, the OMSO2VOLCANO algorithm employs a table lookup approach to estimate SO2 Jacobians (i.e., the instrument sensitivity to a perturbation in the SO2 column amount) and iteratively adjusts the spectral fitting window to exclude shorter wavelengths where the SO2 absorption signals are saturated. To first order, the effects of clouds and aerosols are accounted for using a simple Lambertian equivalent reflectivity approach. As with the LF algorithm, OMSO2VOLCANO provides total column retrievals based on a set of predefined SO2 profiles from the lower troposphere to the lower stratosphere, including a new profile peaked at 13 km for plumes in the upper troposphere. Examples given in this study indicate that the new dataset shows significant improvement over the LF product, with at least a 50% reduction in retrieval noise over the remote Pacific. For large eruptions such as Kasatochi in 2008 (approximately 1700 kt total SO2) and Sierra Negra in 2005 (greater than 1100 DU maximum SO2), OMSO2VOLCANO generally agrees well with other algorithms that also utilize the full spectral content of satellite measurements, while the LF algorithm tends to underestimate SO2.
We also demonstrate that, despite the coarser spatial and spectral resolution of the Suomi National Polar-orbiting Partnership (Suomi-NPP) Ozone Mapping and Profiler Suite (OMPS) instrument, application of the new PCA algorithm to OMPS data produces highly consistent retrievals between OMI and OMPS. The new PCA algorithm is therefore capable of continuing the volcanic SO2 data record well into the future using current and future hyperspectral UV satellite instruments.
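The core of a PCA-based spectral fit can be illustrated with a toy example: leading principal components extracted from an ensemble of SO2-free spectra absorb the background variability, and the target column is the regression coefficient on the absorber's cross section. Everything below (the spectra, the component count, the cross-section shape) is synthetic and illustrative, not taken from OMSO2VOLCANO:

```python
import numpy as np

rng = np.random.default_rng(0)
n_wl, n_obs = 60, 300
wl = np.linspace(310.0, 340.0, n_wl)             # nm, illustrative fitting window
t = (wl - wl.mean()) / (wl.max() - wl.min())     # normalized wavelength

# Hypothetical SO2 absorption cross section (arbitrary wiggly shape)
xsec = np.sin(2.0 * np.pi * (wl - 310.0) / 4.0) * np.exp(-(wl - 315.0) ** 2 / 200.0)

# Synthetic SO2-free spectra: smooth background variability plus noise
A = rng.normal(1.0, 0.3, n_obs)
B = rng.normal(0.0, 0.2, n_obs)
spectra = A[:, None] + B[:, None] * t[None, :] + rng.normal(0.0, 1e-3, (n_obs, n_wl))

# PCA of the SO2-free ensemble: the leading components capture everything
# (instrumental and geophysical) except the target absorber
mean = spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
pcs = Vt[:3]                                     # first 3 principal components

# A "measured" spectrum containing a known SO2 signal
s_true = 0.8
y = 1.3 - 0.4 * t + s_true * xsec

# Spectral fit: principal components plus the SO2 cross section
design = np.column_stack([pcs.T, xsec])
coef, *_ = np.linalg.lstsq(design, y - mean, rcond=None)
s_hat = coef[-1]                                 # recovered column amount
```

In the operational algorithm the components are derived from measured radiances and the Jacobians come from a lookup table; here a plain least-squares coefficient stands in for the column amount.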
Experimental validation of thermo-chemical algorithm for a simulation of pultrusion processes
NASA Astrophysics Data System (ADS)
Barkanov, E.; Akishin, P.; Miazza, N. L.; Galvez, S.; Pantelelis, N.
2018-04-01
To provide a better understanding of pultrusion processes with or without temperature control, and to support pultrusion tooling design, an algorithm based on a mixed time integration scheme and the nodal control volumes method has been developed. In the present study, its experimental validation is carried out with newly developed cure sensors that measure electrical resistivity and temperature on the profile surface. Through this verification process, the set of initial data used for simulating the pultrusion of a rod profile has been successfully corrected and finally defined.
Multidither Adaptive Algorithms.
1977-01-01
Table of contents fragments: Mirror Mechanical Properties; Deformable Mirror Design and Construction; Influence Function; influence function profiles of beryllium mirror; RADC mirror faceplate influence function.
Bumps in river profiles: uncertainty assessment and smoothing using quantile regression techniques
NASA Astrophysics Data System (ADS)
Schwanghart, Wolfgang; Scherler, Dirk
2017-12-01
The analysis of longitudinal river profiles is an important tool for studying landscape evolution. However, characterizing river profiles based on digital elevation models (DEMs) suffers from errors and artifacts that particularly prevail along valley bottoms. The aim of this study is to characterize uncertainties that arise from the analysis of river profiles derived from different, near-globally available DEMs. We devised new algorithms - quantile carving and the CRS algorithm - that rely on quantile regression to enable hydrological correction and the uncertainty quantification of river profiles. We find that globally available DEMs commonly overestimate river elevations in steep topography. The distributions of elevation errors become increasingly wider and right skewed if adjacent hillslope gradients are steep. Our analysis indicates that the AW3D DEM has the highest precision and lowest bias for the analysis of river profiles in mountainous topography. The new 12 m resolution TanDEM-X DEM has a very low precision, most likely due to the combined effect of steep valley walls and the presence of water surfaces in valley bottoms. Compared to the conventional approaches of carving and filling, we find that our new approach is able to reduce the elevation bias and errors in longitudinal river profiles.
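As a rough illustration of the quantile-regression idea (not the CRS algorithm itself), a downstream non-increasing profile can be fitted through a low quantile of noisy, right-skewed DEM elevations by projected subgradient descent on the pinball loss; the profile shape, noise model, and step sizes below are invented for the example:

```python
import numpy as np

def quantile_carve(z, tau=0.1, iters=2000, lr=0.05):
    """Fit a non-increasing (downstream) profile through roughly the
    tau-quantile of noisy elevations: projected subgradient descent on
    the pinball loss, with monotonicity enforced by a cumulative min."""
    q = z.astype(float).copy()
    for _ in range(iters):
        r = z - q
        grad = np.where(r > 0, -tau, 1.0 - tau)   # subgradient of pinball(z - q)
        q -= lr * grad
        q = np.minimum.accumulate(q)              # enforce downstream decrease
    return q

# Synthetic river profile: smoothly decreasing, with positive-biased
# (right-skewed) DEM errors as reported for steep valleys
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 400)
true = 100.0 * np.exp(-0.2 * x)
dem = true + np.abs(rng.normal(0.0, 2.0, x.size))
carved = quantile_carve(dem, tau=0.1)
```

Because DEM errors along valley bottoms are biased high, carving toward a low quantile rather than the mean pulls the fitted profile back toward the true channel elevation.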
Nemenman, Ilya; Escola, G Sean; Hlavacek, William S; Unkefer, Pat J; Unkefer, Clifford J; Wall, Michael E
2007-12-01
We investigate the ability of algorithms developed for reverse engineering of transcriptional regulatory networks to reconstruct metabolic networks from high-throughput metabolite profiling data. For benchmarking purposes, we generate synthetic metabolic profiles based on a well-established model for red blood cell metabolism. A variety of data sets are generated, accounting for different properties of real metabolic networks, such as experimental noise, metabolite correlations, and temporal dynamics. These data sets are made available online. We use ARACNE, a mainstream algorithm for reverse engineering of transcriptional regulatory networks from gene expression data, to predict metabolic interactions from these data sets. We find that the performance of ARACNE on metabolic data is comparable to that on gene expression data.
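The ARACNE idea referenced above (a pairwise mutual-information matrix followed by data-processing-inequality pruning of the weakest edge in each triangle) can be sketched as follows; the histogram MI estimator, bin count, and synthetic chain network are illustrative choices, not the reference implementation:

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Histogram estimate of the mutual information between two profiles."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(1), pxy.sum(0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

def aracne(data, eps=0.0, bins=8):
    """ARACNE-style inference: pairwise MI matrix, then DPI pruning of
    the weakest edge in every triangle (a sketch of the published idea)."""
    n = data.shape[0]
    mi = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            mi[i, j] = mi[j, i] = mutual_info(data[i], data[j], bins)
    keep = mi > 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if len({i, j, k}) == 3 and mi[i, j] < min(mi[i, k], mi[j, k]) - eps:
                    keep[i, j] = keep[j, i] = False   # flagged as indirect
    return mi, keep

# Synthetic chain X -> Y -> Z: the X-Z dependence is indirect and should be pruned
rng = np.random.default_rng(2)
x = rng.normal(size=2000)
y = x + 0.3 * rng.normal(size=2000)
z = y + 0.3 * rng.normal(size=2000)
mi, keep = aracne(np.vstack([x, y, z]))
```

The same pruning step is what removes, e.g., correlations between metabolites that only interact through a shared intermediate.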
iPcc: a novel feature extraction method for accurate disease class discovery and prediction
Ren, Xianwen; Wang, Yong; Zhang, Xiang-Sun; Jin, Qi
2013-01-01
Gene expression profiling has gradually become a routine procedure for disease diagnosis and classification. In the past decade, many computational methods have been proposed, resulting in great improvements on various levels, including feature selection and algorithms for classification and clustering. In this study, we present iPcc, a novel method from the feature extraction perspective to further propel gene expression profiling technologies from bench to bedside. We define ‘correlation feature space’ for samples based on the gene expression profiles by iterative employment of Pearson’s correlation coefficient. Numerical experiments on both simulated and real gene expression data sets demonstrate that iPcc can greatly highlight the latent patterns underlying noisy gene expression data and thus greatly improve the robustness and accuracy of the algorithms currently available for disease diagnosis and classification based on gene expression profiles. PMID:23761440
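The iterative correlation transformation at the heart of iPcc is simple to sketch: each sample's feature vector is replaced by its vector of Pearson correlations with all samples, and the map is applied repeatedly. The two-class synthetic data below are invented for illustration:

```python
import numpy as np

def ipcc(X, iterations=3):
    """Iterative Pearson-correlation feature extraction: map each sample
    (row) of an (n_samples, n_genes) matrix into 'correlation feature
    space' and repeat.  A sketch of the iPcc idea."""
    F = X.astype(float)
    for _ in range(iterations):
        F = np.corrcoef(F)          # rows -> correlations with all samples
    return F

# Two noisy classes of expression profiles sharing class-specific bases
rng = np.random.default_rng(3)
base_a, base_b = rng.normal(size=50), rng.normal(size=50)
X = np.vstack([base_a + 0.8 * rng.normal(size=(10, 50)),
               base_b + 0.8 * rng.normal(size=(10, 50))])
F = ipcc(X)
within = F[:10, :10].mean()         # similarity inside class A
between = F[:10, 10:].mean()        # similarity across classes
```

Iterating the correlation map sharpens the block structure of the similarity matrix, which is the claimed benefit for downstream clustering and classification.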
Screening for prenatal substance use: development of the Substance Use Risk Profile-Pregnancy scale.
Yonkers, Kimberly A; Gotman, Nathan; Kershaw, Trace; Forray, Ariadna; Howell, Heather B; Rounsaville, Bruce J
2010-10-01
To report on the development of a questionnaire to screen for hazardous substance use in pregnant women and to compare the performance of the questionnaire with other drug and alcohol measures. Pregnant women were administered a modified TWEAK (Tolerance, Worried, Eye-openers, Amnesia, K[C] Cut Down) questionnaire, the 4Ps Plus questionnaire, items from the Addiction Severity Index, and two questions about domestic violence (N=2,684). The sample was divided into "training" (n=1,610) and "validation" (n=1,074) subsamples. We applied recursive partitioning class analysis to the responses from individuals in the training subsample, which resulted in a three-item Substance Use Risk Profile-Pregnancy scale. We examined sensitivity, specificity, and the fit of logistic regression models in the validation subsample to compare the performance of the Substance Use Risk Profile-Pregnancy scale with the modified TWEAK and various scoring algorithms of the 4Ps. The Substance Use Risk Profile-Pregnancy scale comprises three informative questions that can be scored for high- or low-risk populations. The Substance Use Risk Profile-Pregnancy scale algorithm for low-risk populations was the most highly predictive of substance use in the validation subsample (Akaike's Information Criterion=579.75, Nagelkerke R=0.27), with high sensitivity (91%) and adequate specificity (67%). The high-risk algorithm had lower sensitivity (57%) but higher specificity (88%). The Substance Use Risk Profile-Pregnancy scale is simple and flexible, with good sensitivity and specificity, and can potentially detect a range of substances that may be abused. Clinicians need to further assess women with a positive screen to identify those who require treatment for alcohol or illicit substance use in pregnancy. Level of evidence: III.
NASA Technical Reports Server (NTRS)
Blissit, J. A.
1986-01-01
Using analysis results from the post trajectory optimization program, an adaptive guidance algorithm is developed to compensate for density, aerodynamic and thrust perturbations during an atmospheric orbital plane change maneuver. The maneuver offers increased mission flexibility along with potential fuel savings for future reentry vehicles. Although designed to guide a proposed NASA Entry Research Vehicle, the algorithm is sufficiently generic for a range of future entry vehicles. The plane change analysis provides insight suggesting a straightforward algorithm based on an optimized nominal command profile. Bank angle, angle of attack, and engine thrust level, ignition and cutoff times are modulated to adjust the vehicle's trajectory to achieve the desired end conditions. A performance evaluation of the scheme demonstrates a capability to guide to within 0.05 degrees of the desired plane change and five nautical miles of the desired apogee altitude while maintaining heating constraints. The algorithm is tested under off-nominal conditions of + or -30% density biases, two density profile models, + or -15% aerodynamic uncertainty, and a 33% thrust loss, and for various combinations of these conditions.
A Hierarchical Algorithm for Fast Debye Summation with Applications to Small Angle Scattering
Gumerov, Nail A.; Berlin, Konstantin; Fushman, David; Duraiswami, Ramani
2012-01-01
Debye summation, which involves the summation of sinc functions of the distances between all pairs of atoms in three-dimensional space, arises in computations performed in crystallography, small/wide-angle X-ray scattering (SAXS/WAXS) and small-angle neutron scattering (SANS). Direct evaluation of the Debye summation has quadratic complexity, which results in a computational bottleneck when determining crystal properties, or when running structure refinement protocols that involve SAXS or SANS, even for moderately sized molecules. We present a fast approximation algorithm that efficiently computes the summation to any prescribed accuracy ε in linear time. The algorithm is similar to the fast multipole method (FMM), and is based on a hierarchical spatial decomposition of the molecule coupled with local harmonic expansions and translation of these expansions. An even more efficient implementation is possible when the scattering profile is all that is required, as in small angle scattering (SAS) reconstruction of macromolecules. We examine the relationship of the proposed algorithm to existing approximate methods for profile computations, and show that these methods may result in inaccurate profile computations unless an error bound derived in this paper is used. Our theoretical and computational results show orders of magnitude improvement in computational complexity over existing methods, while maintaining the prescribed accuracy. PMID:22707386
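For reference, the direct quadratic-cost Debye summation that the hierarchical algorithm accelerates is straightforward to write down; the coordinates and unit form factors below are synthetic:

```python
import numpy as np

def debye_sum(coords, f, q):
    """Direct O(N^2) Debye summation: I(q) = sum_ij f_i f_j sinc(q r_ij).
    np.sinc is the normalized sinc sin(pi x)/(pi x), hence the /pi; the
    r_ij = 0 diagonal is handled automatically since sinc(0) = 1."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    ff = np.outer(f, f)
    return np.array([(ff * np.sinc(qi * d / np.pi)).sum() for qi in np.atleast_1d(q)])

# Toy "molecule": 100 random scatterers with unit form factors
rng = np.random.default_rng(4)
coords = 10.0 * rng.normal(size=(100, 3))
f = np.ones(100)
I = debye_sum(coords, f, [0.0, 0.1, 0.5])
```

At q = 0 the sum collapses to (Σf)², a handy sanity check; for N in the tens of thousands, the N² distance matrix is exactly the bottleneck the hierarchical method avoids.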
Comparison of MAX-DOAS profiling algorithms during CINDI-2 - Part 1: aerosols
NASA Astrophysics Data System (ADS)
Friess, Udo; Hendrick, Francois; Tirpitz, Jan-Lukas; Apituley, Arnoud; van Roozendael, Michel; Kreher, Karin; Richter, Andreas; Wagner, Thomas
2017-04-01
The second Cabauw Intercomparison campaign for Nitrogen Dioxide measuring Instruments (CINDI-2) took place at the Cabauw Experimental Site for Atmospheric Research (CESAR; Utrecht area, The Netherlands) from 25 August until 7 October 2016. CINDI-2 aimed to assess the consistency of MAX-DOAS slant column density measurements of tropospheric species (NO2, HCHO, O3, and O4) relevant for the validation of future ESA atmospheric Sentinel missions, through coordinated operation of a large number of DOAS and MAX-DOAS instruments from all over the world. An important objective of the campaign was to study the relationship between remote-sensing column and profile measurements of the above species and collocated reference ancillary observations. For this purpose, the CINDI-2 Profiling Task Team (CPTT) was created, involving 22 groups performing aerosol and trace gas vertical profile inversion using dedicated MAX-DOAS profiling algorithms, as well as the teams responsible for ancillary profile and surface concentration measurements (NO2 analysers, NO2 sondes, NO2 and Raman LIDARs, CAPS, Long-Path DOAS, sun photometer, nephelometer, etc.). The main purpose of the CPTT is to assess the consistency of the different profiling tools for retrieving aerosol extinction and trace gas vertical profiles through comparison exercises using commonly defined settings, and to validate the retrievals with correlative observations. In this presentation, we give an overview of the MAX-DOAS vertical profile comparison results, focusing on the retrieval of aerosol extinction profiles, with the trace gas retrievals being presented in a companion abstract led by F. Hendrick. The performance of the different algorithms is investigated with respect to the variable visibility and cloud conditions encountered during the campaign.
The consistency between optimal-estimation-based and parameterized profiling tools is also evaluated for these different conditions, together with the level of agreement with available ancillary aerosol observations, including sun photometer, nephelometer and LIDAR. This comparison study will be put in the perspective of the development of a centralized MAX-DOAS processing system within the framework of the ESA Fiducial Reference Measurements (FRM) project.
NASA Technical Reports Server (NTRS)
Petty, Grant W.; Stettner, David R.
1994-01-01
This paper discusses certain aspects of a new inversion-based algorithm for the retrieval of rain rate over the open ocean from Special Sensor Microwave/Imager (SSM/I) multichannel imagery. This algorithm takes a more detailed physical approach to the retrieval problem than previously discussed algorithms: it performs explicit forward radiative transfer calculations based on detailed model hydrometeor profiles and attempts to match the observations to the predicted brightness temperatures.
Berenbrock, Charles E.
2015-01-01
The effects of reducing the number of cross-sectional data points on steady-flow profiles were also determined. Thirty-five cross sections from the original steady-flow model of the Kootenai River were used. The two methods, a standard algorithm and a genetic algorithm, were tested for all cross sections, with each cross section reduced to 10, 20, and 30 data points; that is, six tests were completed for each of the thirty-five cross sections. Generally, differences from the original water-surface elevation became smaller as the number of data points in the reduced cross sections increased, but this was not always the case, especially in the braided reach. Differences were smaller for reduced cross sections developed by the genetic algorithm method than for those developed by the standard algorithm method.
Predictive Lateral Logic for Numerical Entry Guidance Algorithms
NASA Technical Reports Server (NTRS)
Smith, Kelly M.
2016-01-01
Recent entry guidance algorithm development has tended to focus on onboard numerical integration of trajectories in order to evaluate candidate bank profiles. Such methods enjoy benefits such as flexibility to varying mission profiles and improved robustness to large dispersions. A common element across many of these modern entry guidance algorithms is a reliance upon the concept of Apollo-heritage lateral error (or azimuth error) deadbands, in which the number of bank reversals to be performed is non-deterministic. This paper presents a closed-loop bank reversal method that operates with a fixed number of bank reversals defined prior to flight. However, this number of bank reversals can be modified at any point, including in flight, based on contingencies such as fuel leaks where propellant usage must be minimized.
Data Assimilation Experiments Using Quality Controlled AIRS Version 5 Temperature Soundings
NASA Technical Reports Server (NTRS)
Susskind, Joel
2009-01-01
The AIRS Science Team Version 5 retrieval algorithm has been finalized and is now operational at the Goddard DAAC in the processing (and reprocessing) of all AIRS data. The AIRS Science Team Version 5 retrieval algorithm contains a number of significant improvements over Version 4. Two very significant improvements are described briefly below. 1) The AIRS Science Team Radiative Transfer Algorithm (RTA) has now been upgraded to accurately account for effects of non-local thermodynamic equilibrium on the AIRS observations. This allows for use of AIRS observations in the entire 4.3 micron CO2 absorption band in the retrieval algorithm during both day and night. Following theoretical considerations, tropospheric temperature profile information is obtained almost exclusively from clear column radiances in the 4.3 micron CO2 band in the AIRS Version 5 temperature profile retrieval step. These clear column radiances are a derived product that is indicative of the radiances AIRS channels would have seen if the field of view were completely clear. Clear column radiances for all channels are determined using tropospheric sounding 15 micron CO2 observations. This approach allows for the generation of accurate values of clear column radiances and T(p) under most cloud conditions. 2) Another very significant improvement in Version 5 is the ability to generate accurate case-by-case, level-by-level error estimates for the atmospheric temperature profile, as well as for channel-by-channel clear column radiances. These error estimates are used for quality control of the retrieved products. Based on error estimate thresholds, each temperature profile is assigned a characteristic pressure, pg, down to which the profile is characterized as good for data assimilation purposes.
We have conducted forecast impact experiments assimilating AIRS quality-controlled temperature profiles using the NASA GEOS-5 data assimilation system, consisting of the NCEP GSI analysis coupled with the NASA FVGCM, at a spatial resolution of 0.5 deg by 0.5 deg. Assimilation of quality-controlled AIRS temperature profiles down to pg resulted in significantly improved forecast skill compared to that obtained from experiments in which all data used operationally by NCEP, except AIRS data, were assimilated. These forecasts were also significantly better than those obtained when AIRS radiances (rather than temperature profiles) were assimilated, which is the way AIRS data are used operationally by NCEP and ECMWF.
Analysis of miRNA expression profile based on SVM algorithm
NASA Astrophysics Data System (ADS)
Ting-ting, Dai; Chang-ji, Shan; Yan-shou, Dong; Yi-duo, Bian
2018-05-01
Based on a miRNA expression profile data set, a new data mining algorithm, tSVM-KNN (t statistic with support vector machine - k nearest neighbor), is proposed. The idea of the algorithm is as follows: first, feature selection is carried out on the data set using a unified measurement, the t statistic; second, SVM-KNN, which combines the support vector machine (SVM) and k-nearest-neighbor (KNN) classifiers, is used as the classifier. Simulation results show that the SVM-KNN algorithm has better classification ability than SVM or KNN alone. The tSVM-KNN algorithm needs only 5 miRNAs to obtain 96.08% classification accuracy; in terms of both the number of miRNA "tags" and recognition accuracy, tSVM-KNN has obvious advantages over similar algorithms.
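Two of the ingredients above are easy to sketch: the t-statistic feature ranking and the KNN half of the hybrid classifier (a full SVM-KNN combination is beyond a short example). The data, shift sizes, and feature counts below are synthetic:

```python
import numpy as np

def t_scores(X, y):
    """Absolute two-sample t statistic per feature (the 't' in tSVM-KNN).
    X: (n_samples, n_features), y: binary labels."""
    a, b = X[y == 0], X[y == 1]
    se = np.sqrt(a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
    return np.abs((a.mean(0) - b.mean(0)) / se)

def knn_predict(Xtr, ytr, Xte, k=3):
    """Plain k-nearest-neighbour majority vote (the KNN half of the hybrid)."""
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=-1)
    nn = np.argsort(d, axis=1)[:, :k]
    return (ytr[nn].mean(axis=1) > 0.5).astype(int)

# Synthetic data: only the first 5 of 200 "miRNAs" separate the classes
rng = np.random.default_rng(5)
y = np.repeat([0, 1], 40)
X = rng.normal(size=(80, 200)); X[y == 1, :5] += 1.5
yte = np.repeat([0, 1], 10)
Xte = rng.normal(size=(20, 200)); Xte[yte == 1, :5] += 1.5

top = np.argsort(t_scores(X, y))[::-1][:5]     # keep the 5 best-ranked miRNAs
acc = (knn_predict(X[:, top], y, Xte[:, top]) == yte).mean()
```

Ranking by t statistic first means the classifier sees only the handful of discriminative miRNAs, which is how a 5-feature "tag" set can carry most of the accuracy.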
Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander
2011-01-01
This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a newly proposed artificial neural network algorithm are compared. Centroid and Gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
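The centroid algorithm mentioned above reduces to an intensity-weighted mean over samples exceeding a fractional threshold, which is why it is precise for symmetric reflection profiles yet systematically biased for asymmetric apodization. A sketch on a simulated FBG spectrum (all parameters invented):

```python
import numpy as np

def centroid_peak(wavelength, intensity, threshold=0.5):
    """Centroid estimate of an FBG reflection-peak wavelength: the
    intensity-weighted mean over samples above a fractional threshold."""
    mask = intensity >= threshold * intensity.max()
    return np.sum(wavelength[mask] * intensity[mask]) / np.sum(intensity[mask])

# Simulated FBG reflection spectrum: Gaussian-apodized profile plus noise
rng = np.random.default_rng(6)
wl = np.linspace(1549.0, 1551.0, 500)                # nm
true_bragg = 1550.12
spec = np.exp(-((wl - true_bragg) / 0.08) ** 2) + 0.01 * rng.normal(size=wl.size)
est = centroid_peak(wl, spec)
```

For an asymmetric grating profile the same estimator lands systematically off the true Bragg wavelength, which is the systematic-error behaviour the benchmark quantifies.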
NASA Technical Reports Server (NTRS)
Barbre, Robert E., Jr.
2012-01-01
This paper presents the process used by the Marshall Space Flight Center Natural Environments Branch (EV44) to quality control (QC) data from the Kennedy Space Center's 50-MHz Doppler Radar Wind Profiler (DRWP) for use in vehicle wind loads and steering commands. The database has been built to mitigate limitations of using the currently archived databases from weather balloons. The DRWP database contains wind measurements from approximately 2.7-18.6 km altitude at roughly five minute intervals for the August 1997 to December 2009 period of record, and the extensive QC process was designed to remove spurious data from various forms of atmospheric and non-atmospheric artifacts. The QC process is largely based on DRWP literature, but two new algorithms have been developed to remove data contaminated by convection and by excessive first-guess propagations from the Median Filter First Guess Algorithm. In addition to describing the automated and manual QC process in detail, this paper describes the extent of the data retained. Roughly 58% of all possible wind observations exist in the database, with approximately 100 times as many complete profile sets existing relative to the EV44 balloon databases. This increased sample of near-continuous wind profile measurements may help increase launch availability by reducing the uncertainty of wind changes during launch countdown.
Solution algorithm of dwell time in slope-based figuring model
NASA Astrophysics Data System (ADS)
Li, Yong; Zhou, Lin
2017-10-01
Surface slope profile is commonly used to evaluate X-ray reflective optics for synchrotron radiation beamlines. Moreover, the measuring instruments for X-ray reflective optics usually report the surface slope profile rather than the surface height profile. To avoid conversion error, a slope-based figuring model is introduced for processing X-ray reflective optics instead of the surface height-based model. However, the pulse iteration method, which can quickly obtain the dwell time solution of the traditional height-based figuring model, does not apply to the slope-based figuring model, because the slope removal function takes both positive and negative values and has a complex asymmetric structure. To overcome this problem, we established an optimization model for the dwell time solution by introducing upper and lower limits on the dwell time and a time gradient constraint. We then used a constrained least squares algorithm to solve for the dwell time in the slope-based figuring model. To validate the proposed algorithm, simulations and experiments were conducted. A flat mirror with an effective aperture of 80 mm was polished on an ion beam machine. After three polishing iterations, the surface slope profile error of the workpiece converged from 5.65 μrad RMS to 1.12 μrad RMS.
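A minimal stand-in for the constrained least-squares dwell-time step can be written with a bounded linear least-squares solver; the influence matrix, slope removal function, and bounds below are toy choices, not the paper's machine parameters:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Toy 1-D figuring problem: the removed slope error equals the slope
# removal function r (sign-changing, as described above) convolved with
# the dwell time d, i.e. A @ d = e.  All shapes and numbers are invented.
n = 120
g = np.exp(-np.arange(-15, 16) ** 2 / 40.0)   # a height removal footprint
r = -np.gradient(g)                           # slope removal function: +/- lobes

A = np.zeros((n, n))                          # influence (convolution) matrix
for j in range(n):
    for k, rv in enumerate(r):
        i = j + k - len(r) // 2
        if 0 <= i < n:
            A[i, j] = rv

x = np.arange(n)
d_true = np.clip(np.sin(x / 12.0), 0.0, None) + 0.2   # a feasible dwell map
e = A @ d_true                                        # target slope removal

# Constrained least squares: dwell time bounded below by 0 and above by
# a machine limit, standing in for the constraints described above
sol = lsq_linear(A, e, bounds=(0.0, 2.0))
residual = np.linalg.norm(A @ sol.x - e) / np.linalg.norm(e)
```

The bounds are what the pulse iteration method cannot honour once the removal function changes sign; a time-gradient (smoothness) penalty could be appended as extra rows of A in the same framework.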
Fully Automated Detection of Cloud and Aerosol Layers in the CALIPSO Lidar Measurements
NASA Technical Reports Server (NTRS)
Vaughan, Mark A.; Powell, Kathleen A.; Kuehn, Ralph E.; Young, Stuart A.; Winker, David M.; Hostetler, Chris A.; Hunt, William H.; Liu, Zhaoyan; McGill, Matthew J.; Getzewich, Brian J.
2009-01-01
Accurate knowledge of the vertical and horizontal extent of clouds and aerosols in the earth's atmosphere is critical in assessing the planet's radiation budget and for advancing human understanding of climate change issues. To retrieve this fundamental information from the elastic backscatter lidar data acquired during the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) mission, a selective, iterated boundary location (SIBYL) algorithm has been developed and deployed. SIBYL accomplishes its goals by integrating an adaptive context-sensitive profile scanner into an iterated multiresolution spatial averaging scheme. This paper provides an in-depth overview of the architecture and performance of the SIBYL algorithm. It begins with a brief review of the theory of target detection in noise-contaminated signals, and an enumeration of the practical constraints levied on the retrieval scheme by the design of the lidar hardware, the geometry of a space-based remote sensing platform, and the spatial variability of the measurement targets. Detailed descriptions are then provided for both the adaptive threshold algorithm used to detect features of interest within individual lidar profiles and the fully automated multiresolution averaging engine within which this profile scanner functions. The resulting fusion of profile scanner and averaging engine is specifically designed to optimize the trade-offs between the widely varying signal-to-noise ratio of the measurements and the disparate spatial resolutions of the detection targets. Throughout the paper, specific algorithm performance details are illustrated using examples drawn from the existing CALIPSO dataset. Overall performance is established by comparisons to existing layer height distributions obtained by other airborne and space-based lidars.
SU-F-T-527: A Novel Dynamic Multileaf Collimator Leaf-Sequencing Algorithm in Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jing, J; Lin, H; Chow, J
Purpose: A novel leaf-sequencing algorithm is developed for generating arbitrary beam intensity profiles in discrete levels using a dynamic multileaf collimator (MLC). The efficiency of this dynamic MLC leaf-sequencing method was evaluated using external beam treatment plans delivered by the intensity modulated radiation therapy technique. Methods: To qualify and validate this algorithm, an integral test of the MLC beam segments generated by the CORVUS treatment planning system was performed with clinical intensity map experiments. The treatment plans were optimized and the fluence maps for all photon beams were determined. The algorithm starts with the algebraic expression for the area under the beam profile; the coefficients in the expression can be transformed into the specifications for the leaf-setting sequence. The leaf optimization procedure was then applied and analyzed for clinically relevant intensity profiles in cancer treatment. Results: The macrophysical effect of this method can be described by volumetric plan evaluation tools such as dose-volume histograms (DVHs). The DVH results are in good agreement with those from the CORVUS treatment planning system. Conclusion: We developed a dynamic MLC method to examine the stability of leaf speed, including the effects of acceleration and deceleration of leaf motion, to ensure that leaf-speed variations did not affect the generated intensity profile. It was found that the mechanical requirements were better satisfied using this method. The project is sponsored by the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry.
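For context, the classic unidirectional "sweep" leaf-sequencing construction for a 1-D intensity profile (a textbook sketch, not the algorithm proposed above) assigns the positive increments of the profile to the trailing leaf's arrival times and the negative increments to the leading leaf's, so the exposure time at each position reproduces the profile exactly:

```python
import numpy as np

def sweep_sequence(f):
    """Unidirectional sweep leaf sequencing for a 1-D intensity profile f:
    the trailing leaf covers position x at t_trail[x] (cumulative positive
    increments of f) and the leading leaf uncovers x at t_lead[x]
    (cumulative negative increments), so t_trail - t_lead = f."""
    inc = np.diff(np.concatenate([[0.0], f]))
    t_trail = np.cumsum(np.maximum(inc, 0.0))
    t_lead = np.cumsum(np.maximum(-inc, 0.0))
    return t_lead, t_trail

# An arbitrary two-hump intensity profile in discrete MU levels
f = np.array([0., 1., 3., 3., 2., 4., 4., 1., 0.])
t_lead, t_trail = sweep_sequence(f)
delivered = t_trail - t_lead
beam_on = t_trail[-1]
```

Both trajectories are monotone, i.e. each leaf moves in one direction only, and the total beam-on time equals the sum of the positive increments of the profile; leaf-speed limits then enter as a bound on how fast these arrival times may change between adjacent positions.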
Overhead longwave infrared hyperspectral material identification using radiometric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zelinski, M. E.
Material detection algorithms used in hyperspectral data processing are computationally efficient but can produce relatively high numbers of false positives. Material identification performed as a secondary processing step on detected pixels can help separate true and false positives. This paper presents a material identification processing chain for longwave infrared hyperspectral data of solid materials collected from airborne platforms. The algorithms utilize unwhitened radiance data and an iterative algorithm that determines the temperature, humidity, and ozone of the atmospheric profile. Pixel unmixing is done using constrained linear regression and the Bayesian Information Criterion for model selection. The resulting product includes an optimal atmospheric profile and a full radiance material model that includes material temperature, abundance values, and several fit statistics. A logistic regression method utilizing all model parameters to improve identification is also presented. This paper details the processing chain and provides justification for the algorithms used. Several examples are provided using modeled data at different noise levels.
Identifying technical aliases in SELDI mass spectra of complex mixtures of proteins
2013-01-01
Background Biomarker discovery datasets created using mass spectrum protein profiling of complex mixtures of proteins contain many peaks that represent the same protein with different charge states. Correlated variables such as these can confound the statistical analyses of proteomic data. Previously we developed an algorithm that clustered mass spectrum peaks that were biologically or technically correlated. Here we demonstrate an algorithm that clusters correlated technical aliases only. Results In this paper, we propose a preprocessing algorithm that can be used for grouping technical aliases in mass spectrometry protein profiling data. The stringency of the variance allowed for clustering is customizable, thereby affecting the number of peaks that are clustered. Subsequent analysis of the clusters, instead of individual peaks, helps reduce difficulties associated with technically-correlated data, and can aid more efficient biomarker identification. Conclusions This software can be used to pre-process and thereby decrease the complexity of protein profiling proteomics data, thus simplifying the subsequent analysis of biomarkers by decreasing the number of tests. The software is also a practical tool for identifying which features to investigate further by purification, identification and confirmation. PMID:24010718
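One simple way to realize such grouping is single-linkage clustering on the Pearson correlation matrix of peak intensities across samples; the correlation threshold and the synthetic charge-state aliases below are illustrative assumptions, not the published algorithm:

```python
import numpy as np

def cluster_aliases(peaks, threshold=0.9):
    """Single-linkage grouping of mass-spectrum peaks whose intensity
    patterns across samples are highly Pearson-correlated, e.g. charge
    state aliases of one protein.  A sketch, not the published method."""
    n = peaks.shape[1]
    corr = np.corrcoef(peaks.T)
    labels = np.arange(n)
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] >= threshold:
                labels[labels == labels[j]] = labels[i]   # merge clusters
    return labels

# Synthetic profiling data (60 samples, 3 peaks): peaks 0 and 1 are
# aliases of one protein at different charge states; peak 2 is unrelated
rng = np.random.default_rng(7)
protein = rng.normal(size=60)
peaks = np.column_stack([protein + 0.05 * rng.normal(size=60),
                         0.5 * protein + 0.05 * rng.normal(size=60),
                         rng.normal(size=60)])
labels = cluster_aliases(peaks)
```

Raising or lowering the threshold plays the role of the customizable variance stringency described above: a stricter threshold yields more, smaller clusters and fewer merged peaks.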
Cumulative area of peaks in a multidimensional high performance liquid chromatogram.
Stevenson, Paul G; Guiochon, Georges
2013-09-20
An algorithm was developed to recognize peaks in a multidimensional separation and calculate their cumulative peak area. To find the retention times of peaks in a one-dimensional chromatogram, the Savitzky-Golay smoothing filter was used to smooth the experimental profiles and to compute their first through third derivatives. Close examination of the shape of these curves indicates the number of peaks that are present and provides starting values for fitting theoretical profiles. Due to the nature of comprehensive multidimensional HPLC, adjacent cut fractions may contain compounds common to more than one cut fraction. The algorithm determines which components were common in adjacent cuts and subsequently calculates the area of a two-dimensional peak profile by interpolating the surface of the 2D peaks between adjacent peaks. This algorithm was tested by calculating the cumulative peak area of a series of 2D-HPLC separations of alkylbenzenes, phenol and caffeine with varied concentrations. A good relationship was found between the concentration and the cumulative peak area.
NASA Astrophysics Data System (ADS)
Konovalov, Igor; Breitenstein, Otwin
2001-01-01
An iterative algorithm for the derivation of depth profiles of the minority carrier collection probability in a semiconductor with or without a coating on the top is presented using energy-resolved electron-beam-induced current measurements in planar geometry. The calculation is based on the depth-dose function of Everhart and Hoff (Everhart T E and Hoff P H 1971 J. Appl. Phys. 42 5837) and on the penetration-range function of Kanaya and Okayama (Kanaya K and Okayama S 1972 J. Phys. D: Appl. Phys. 5 43) or on that of Fitting (Fitting H-J 1974 Phys. Status Solidi/ a 26 525). It can also be performed with any other depth-dose functions. Using this algorithm does not require us to make any assumptions on the shape of the collection profile within the depth of interest. The influence of an absorbing top contact and/or a limited thickness of the semiconductor layer appear in the result, but can also be taken explicitly into account. Examples using silicon and CIS solar cells as well as a GaAs LED are presented.
A novel resource sharing algorithm based on distributed construction for radiant enclosure problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finzell, Peter; Bryden, Kenneth M.
2017-03-06
This study demonstrates a novel approach to solving inverse radiant enclosure problems based on distributed construction. Specifically, the problem of determining the temperature distribution needed on the heater surfaces to achieve a desired design surface temperature profile is recast as a distributed construction problem in which a shared resource, temperature, is distributed by computational agents moving blocks. The sharing of blocks between agents enables them to achieve their desired local state, which in turn achieves the desired global state. Each agent uses the current state of their local environment and a simple set of rules to determine when to exchange blocks, each block representing a discrete unit of temperature change. This algorithm is demonstrated using the established two-dimensional inverse radiation enclosure problem. The temperature profile on the heater surfaces is adjusted to achieve a desired temperature profile on the design surfaces. The resource sharing algorithm was able to determine the needed temperatures on the heater surfaces to obtain the desired temperature distribution on the design surfaces in the nine cases examined.
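A toy version of the block-exchange idea: each agent holds an integer number of temperature "blocks" and passes one block per step toward a neighbour that is further below its local target, until no exchange helps. The paper's agents use local radiant-exchange rules; the rule and names here are purely illustrative.

```python
def share_blocks(blocks, targets, steps=1000):
    """Distribute integer 'blocks' (discrete units of temperature
    change) among agents on a line so each approaches its target.
    An exchange happens whenever adjacent deviations differ by >= 2,
    which strictly decreases the sum of squared deviations."""
    blocks = list(blocks)
    for _ in range(steps):
        moved = False
        for i in range(len(blocks) - 1):
            di = blocks[i] - targets[i]
            dj = blocks[i + 1] - targets[i + 1]
            if di - dj >= 2:          # agent i has a relative surplus
                blocks[i] -= 1
                blocks[i + 1] += 1
                moved = True
            elif dj - di >= 2:        # neighbour has a relative surplus
                blocks[i] += 1
                blocks[i + 1] -= 1
                moved = True
        if not moved:                 # stable: all adjacent deviations within 1
            break
    return blocks
```

Because every exchange lowers the squared-deviation potential, the loop always terminates; blocks are conserved and each agent ends within one block of its target when totals match.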
NASA Astrophysics Data System (ADS)
Costa Surós, Montserrat; Stachlewska, Iwona S.
2016-04-01
A long-term study assessing ground-based Raman lidar against in-situ radiosoundings has been conducted with the aim of improving knowledge of the vertical profile of water content through the atmosphere, and thus of the conditions for cloud formation processes. Water vapor mixing ratio (WVMR) and relative humidity (RH) profiles were retrieved from ADR Lidar (PollyXT-type, EARLINET site in Warsaw). So far, more than 100 nighttime profiles, averaged over 1 h around midnight, from July 2013 to December 2015 have been investigated. Data were evaluated with molecular extinctions calculated using two approximations: the US62 standard atmosphere and the radiosounding launched in Legionowo (station 12374). The calibration factor CH2O for the lidar retrievals was obtained for each profile using the regression method and the profile method, to determine the best calibration factor approximation to be used in the final WVMR and RH calculation. Statistically representative results for comparisons between lidar WVMR median profiles obtained by calibrating with radiosounding profiles and with atmospheric synthetic profiles, all of them with the best calibration factor, will be presented. Finally, in order to constrain the conditions of cloud formation as a function of the RH profile, the COS14 algorithm, capable of deriving cloud bases and tops by applying thresholds to the RH profiles, was applied to find the cloud vertical structure (CVS). The algorithm was formerly applied to radiosounding profiles at the SGP-ARM site and tested against the CVS obtained from the Active Remote Sensing of Clouds (ARSCL) data. Similarly, it was applied to lidar measurements at the Warsaw measurement site.
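The regression-method calibration mentioned above can be illustrated in one line: fit the single scale factor that maps the uncalibrated lidar WVMR profile onto the co-located radiosonde profile by least squares through the origin. This is a generic sketch of the regression method, not the study's processing code.

```python
import numpy as np

def calibration_factor(uncal_wvmr, sonde_wvmr):
    """Least-squares-through-the-origin scale factor C such that
    C * uncal_wvmr best matches the radiosonde WVMR profile."""
    x = np.asarray(uncal_wvmr, dtype=float)
    y = np.asarray(sonde_wvmr, dtype=float)
    return float(np.dot(x, y) / np.dot(x, x))
```

Applying the returned factor to the raw profile yields the calibrated WVMR, which is then converted to RH with a temperature profile.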
Impact of GPM Rainrate Data Assimilation on Simulation of Hurricane Harvey (2017)
NASA Technical Reports Server (NTRS)
Li, Xuanli; Srikishen, Jayanthi; Zavodsky, Bradley; Mecikalski, John
2018-01-01
The Global Precipitation Measurement (GPM) mission builds upon the Tropical Rainfall Measuring Mission (TRMM) legacy for next-generation global observation of rain and snow. GPM was launched in February 2014 with the Dual-frequency Precipitation Radar (DPR) and GPM Microwave Imager (GMI) onboard. GPM has broad global coverage, approximately 70°S to 70°N, with a swath of 245/125 km for the Ka (35.5 GHz)/Ku (13.6 GHz) band radar and 850 km for the 13-channel GMI, and also features better retrievals for heavy, moderate, and light rain and snowfall. The objectives of this study are to develop a methodology to assimilate GPM surface precipitation data with the Grid-point Statistical Interpolation (GSI) data assimilation system and the WRF-ARW model, and to investigate the potential value of utilizing GPM observations in NWP for an operational environment. The GPM rain rate data have been successfully assimilated using the GSI rain data assimilation package. Impacts of rain rate data are found in the temperature and moisture fields of the initial conditions. Assimilation of either the GPM IMERG or GPROF rain product produces significant improvement in precipitation amount and structure for the Hurricane Harvey (2017) forecast. Since IMERG data are available half-hourly, further forecast improvement is expected with continuous assimilation of IMERG data.
NASA Technical Reports Server (NTRS)
Izumi, K. H.; Thompson, J. L.; Groce, J. L.; Schwab, R. W.
1986-01-01
The design requirements for a 4D path definition algorithm are described. These requirements were developed for the NASA ATOPS as an extension of the Local Flow Management/Profile Descent algorithm. They specify the processing flow, functional and data architectures, and system input requirements, and recommended the addition of a broad path revision (reinitialization) function capability. The document also summarizes algorithm design enhancements and the implementation status of the algorithm on an in-house PDP-11/70 computer. Finally, the requirements for the pilot-computer interfaces, the lateral path processor, and guidance and steering function are described.
NASA Astrophysics Data System (ADS)
Gok, Gokhan; Mosna, Zbysek; Arikan, Feza; Arikan, Orhan; Erdem, Esra
2016-07-01
Ionospheric observation is essentially accomplished by specialized radar systems called ionosondes. The time delay between the transmitted and received signals versus frequency is measured by the ionosondes, and the received signals are processed to generate ionogram plots, which show the time delay or reflection height of signals with respect to transmitted frequency. The critical frequencies of ionospheric layers and the virtual heights, which provide useful information about ionospheric structure, can be extracted from ionograms. Ionograms also indicate the amount of variability or disturbance in the ionosphere. With special inversion algorithms and tomographic methods, electron density profiles can also be estimated from ionograms. Although structural pictures of the ionosphere in the vertical direction can be observed from ionosonde measurements, some errors may arise due to inaccuracies in signal propagation, modeling, data processing and tomographic reconstruction algorithms. Recently the IONOLAB group (www.ionolab.org) developed a new algorithm for effective and accurate extraction of ionospheric parameters and reconstruction of the electron density profile from ionograms. The electron density reconstruction algorithm applies advanced optimization techniques to calculate the parameters of any existing analytical function that defines electron density with respect to height, using ionogram measurement data. The process of reconstructing electron density with respect to height is known as ionogram scaling or true height analysis. The IONOLAB-RAY algorithm is a tool to investigate the propagation path and parameters of HF waves in the ionosphere. The algorithm models wave propagation using ray representation under the geometrical optics approximation. In the algorithm, the structural ionospheric characteristics are represented as realistically as possible, including anisotropy, inhomogeneity and time dependence in a 3-D voxel structure.
The algorithm is also used for various purposes, including calculation of actual height and generation of ionograms. In this study, the performance of the electron density reconstruction algorithm of the IONOLAB group and the standard electron density profile algorithms of ionosondes are compared with IONOLAB-RAY wave propagation simulation at near-vertical incidence. The electron density reconstruction and parameter extraction algorithms of ionosondes are validated against the IONOLAB-RAY results for both quiet and disturbed ionospheric states in Central Europe using ionosonde stations such as Pruhonice and Juliusruh. It is observed that the IONOLAB ionosonde parameter extraction and electron density reconstruction algorithm performs significantly better than the standard algorithms, especially for disturbed ionospheric conditions. IONOLAB-RAY provides an efficient and reliable tool to investigate and validate ionosonde electron density reconstruction algorithms, especially in the determination of the reflection height (true height) of signals and the critical parameters of the ionosphere. This study is supported by TUBITAK 114E541, 115E915 and joint TUBITAK 114E092 and AS CR 14/001 projects.
NASA Astrophysics Data System (ADS)
Zhao, G.; Zhao, C.
2016-12-01
Micro-pulse lidar (MPL) measurements have been widely used to profile the ambient aerosol extinction coefficient. The lidar ratio (LR), which depends strongly on the particle number size distribution (PNSD) and aerosol hygroscopicity, is the most important factor in retrieving the extinction profile. A constant, AOD-constrained LR is usually used in current algorithms, which can lead to large bias when the relative humidity (RH) in the mixed layer is high. In this research, the influences of PNSD, aerosol hygroscopicity and RH profiles on the vertical variation of LR were investigated based on datasets from field measurements in the North China Plain (NCP). Results show that the LR can have an enhancement factor of more than 120% when the RH reaches 92%. A new algorithm for retrieving the extinction profile is proposed based on the variation of LR due to aerosol hygroscopicity. The magnitude and vertical structure of the profile retrieved using this method can differ significantly from those of the fixed-LR method; the relative difference can reach up to 40% when the RH in the mixed layer is higher than 90%. Sensitivity studies show that the RH profile and the PNSD have the largest effect on the profile retrieved by the fixed-LR method. In view of this, a scheme of the LR enhancement factor as a function of RH is proposed for the NCP. The relative difference between the profiles calculated using this scheme and the new algorithm with the variable LR can be less than 10%.
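An RH-dependent lidar ratio could be applied along the profile as sketched below. The functional form and the coefficients `a` and `b` are assumed for illustration only; they are not the NCP fit derived in the study.

```python
def lr_enhancement(rh, dry_lr=50.0, a=0.015, b=1.0):
    """Assumed parameterisation of lidar-ratio humidification:
    LR(RH) = dry_lr * (1 + a * (RH / (100 - RH))**b).
    Coefficients are illustrative placeholders, not the NCP scheme."""
    if not 0 <= rh < 100:
        raise ValueError("RH must be in [0, 100)")
    return dry_lr * (1.0 + a * (rh / (100.0 - rh)) ** b)
```

Evaluating this at each altitude bin gives a variable-LR profile for the retrieval in place of a single AOD-constrained constant; the enhancement grows steeply as RH approaches saturation, mirroring the behaviour reported above.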
Point spread function based classification of regions for linear digital tomosynthesis
NASA Astrophysics Data System (ADS)
Israni, Kenny; Avinash, Gopal; Li, Baojun
2007-03-01
In digital tomosynthesis, one of the limitations is the presence of out-of-plane blur due to the limited angle acquisition. The point spread function (PSF) characterizes blur in the imaging volume, and is shift-variant in tomosynthesis. The purpose of this research is to classify the tomosynthesis imaging volume into four different categories based on PSF-driven focus criteria. We considered a linear tomosynthesis geometry and a simple back-projection algorithm for reconstruction. The three-dimensional PSF at every pixel in the imaging volume was determined. Intensity profiles were computed for every pixel by integrating the PSF-weighted intensities contained within the line segment defined by the PSF, at each slice. Classification rules based on these intensity profiles were used to categorize image regions. At background and low-frequency pixels, the derived intensity profiles were flat curves with relatively low and high maximum intensities, respectively. At in-focus pixels, the maximum intensity of the profiles coincided with the PSF-weighted intensity of the pixel. At out-of-focus pixels, the PSF-weighted intensity of the pixel was always less than the maximum intensity of the profile. We validated our method using human observer classified regions as gold standard. Based on the computed and manual classifications, the mean sensitivity and specificity of the algorithm were 77 +/- 8.44% and 91 +/- 4.13%, respectively (t=-0.64, p=0.56, DF=4). Such a classification algorithm may assist in mitigating out-of-focus blur from tomosynthesis image slices.
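The four-way rules stated above can be written down directly as a toy classifier over a pixel's intensity profile across slices. The flatness and background thresholds are invented for illustration; the paper does not publish its exact threshold values.

```python
def classify_pixel(profile, pixel_value, flat_tol=0.05, bg_thresh=0.1):
    """Four-way focus classification sketch:
    - flat profile, low maximum   -> background
    - flat profile, high maximum  -> low-frequency
    - pixel attains profile max   -> in-focus
    - pixel below profile max     -> out-of-focus
    Thresholds are illustrative placeholders."""
    lo, hi = min(profile), max(profile)
    if hi - lo <= flat_tol * max(hi, 1e-12):   # flat curve
        return "background" if hi < bg_thresh else "low-frequency"
    return "in-focus" if pixel_value >= hi - 1e-12 else "out-of-focus"
```

Each rule mirrors one sentence of the abstract: the in-focus case is the pixel whose PSF-weighted intensity coincides with the profile maximum.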
Evaluation of two Vaisala RS92 radiosonde solar radiative dry bias correction algorithms
Dzambo, Andrew M.; Turner, David D.; Mlawer, Eli J.
2016-04-12
Solar heating of the relative humidity (RH) probe on Vaisala RS92 radiosondes results in a large dry bias in the upper troposphere. Two different algorithms (Miloshevich et al., 2009, MILO hereafter; and Wang et al., 2013, WANG hereafter) have been designed to account for this solar radiative dry bias (SRDB). These corrections are markedly different, with MILO adding up to 40 % more moisture to the original radiosonde profile than WANG; however, the impact of the two algorithms varies with height. The accuracy of these two algorithms is evaluated using three different approaches: a comparison of precipitable water vapor (PWV), downwelling radiative closure with a surface-based microwave radiometer at a high-altitude site (5.3 km m.s.l.), and upwelling radiative closure with the space-based Atmospheric Infrared Sounder (AIRS). The PWV computed from the uncorrected and corrected RH data is compared against PWV retrieved from ground-based microwave radiometers at tropical, midlatitude, and arctic sites. Although MILO generally adds more moisture to the original radiosonde profile in the upper troposphere compared to WANG, both corrections yield similar changes to the PWV, and the corrected data agree well with the ground-based retrievals. The two closure activities, done for clear-sky scenes, use the radiative transfer models MonoRTM and LBLRTM to compute radiance from the radiosonde profiles to compare against spectral observations. Both WANG- and MILO-corrected RHs are statistically better than the original RH in all cases except for the driest 30 % of cases in the downwelling experiment, where both algorithms add too much water vapor to the original profile. In the upwelling experiment, the RH correction applied by the WANG vs. MILO algorithm is statistically different above 10 km for the driest 30 % of cases and above 8 km for the moistest 30 % of cases, suggesting that the MILO correction performs better than the WANG in clear-sky scenes.
Lastly, this statistical difference is likely explained by the fact that the WANG correction also accounts for cloud cover, a condition not accounted for in the radiance closure experiments.
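The PWV comparison step can be sketched as a direct integration of the (corrected or uncorrected) RH profile. This uses the Magnus saturation-pressure formula as a convenient stand-in for the study's radiative-closure machinery; function names and the test profile are illustrative.

```python
import numpy as np

def pwv_mm(z_m, t_k, rh_pct):
    """Integrate an RH profile to precipitable water vapour in mm
    (numerically equal to kg m^-2). Saturation pressure from the
    Magnus formula; vapour density via the ideal gas law."""
    t_k = np.asarray(t_k, dtype=float)
    t_c = t_k - 273.15
    es_pa = 610.94 * np.exp(17.625 * t_c / (t_c + 243.04))   # Magnus
    e_pa = np.asarray(rh_pct, dtype=float) / 100.0 * es_pa   # vapour pressure
    rho_v = e_pa / (461.5 * t_k)                             # kg m^-3
    z = np.asarray(z_m, dtype=float)
    # trapezoidal integration over height
    return float(np.sum(0.5 * (rho_v[1:] + rho_v[:-1]) * np.diff(z)))
```

Because PWV is linear in RH at fixed temperature, a multiplicative dry-bias correction (e.g. scaling RH up by 4 %) raises PWV by exactly the same factor, which is why both corrections can yield similar PWV changes despite different vertical structure.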
McMahon, Ryan; Papiez, Lech; Rangaraj, Dharanipathy
2007-08-01
An algorithm is presented that allows for the control of multileaf collimation (MLC) leaves based entirely on real-time calculations of the intensity delivered over the target. The algorithm is capable of efficiently correcting generalized delivery errors without requiring the interruption of delivery (self-correcting trajectories), where a generalized delivery error represents anything that causes a discrepancy between the delivered and intended intensity profiles. The intensity actually delivered over the target is continually compared to its intended value. For each pair of leaves, these comparisons are used to guide the control of the following leaf and keep this discrepancy below a user-specified value. To demonstrate the basic principles of the algorithm, results of corrected delivery are shown for a leading leaf positional error during dynamic-MLC (DMLC) IMRT delivery over a rigid moving target. It is then shown that, with slight modifications, the algorithm can be used to track moving targets in real time. The primary results of this article indicate that the algorithm is capable of accurately delivering DMLC IMRT over a rigid moving target whose motion is (1) completely unknown prior to delivery and (2) not faster than the maximum MLC leaf velocity over extended periods of time. These capabilities are demonstrated for clinically derived intensity profiles and actual tumor motion data, including situations when the target moves in some instances faster than the maximum admissible MLC leaf velocity. The results show that using the algorithm while calculating the delivered intensity every 50 ms will provide a good level of accuracy when delivering IMRT over a rigid moving target translating along the direction of MLC leaf travel. When the maximum velocities of the MLC leaves and target were 4 and 4.2 cm/s, respectively, the resulting error in the two intensity profiles used was 0.1 +/- 3.1% and -0.5 +/- 2.8% relative to the maximum of the intensity profiles. 
For the same target motion, the error was shown to increase rapidly as (1) the maximum MLC leaf velocity was reduced below 75% of the maximum target velocity and (2) the system response time was increased.
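The core feedback rule — compare delivered to intended intensity and let the following leaf advance only when the discrepancy is within tolerance — can be sketched as a single control step. This is a one-dimensional toy of the idea, not the authors' MLC controller; names and the tolerance are illustrative.

```python
def follow_leaf(intended, delivered, pos, tol=0.05, step=1):
    """Self-correcting-trajectory sketch: the following leaf advances
    one step only while the intensity already delivered at its current
    position is within `tol` of the intended value; otherwise it holds
    position and lets delivery catch up."""
    if pos < len(intended) and abs(delivered[pos] - intended[pos]) <= tol:
        return pos + step
    return pos
```

Calling this every control interval (the paper recalculates delivered intensity every 50 ms) keeps the delivered-vs-intended discrepancy bounded without interrupting delivery.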
MPL-Net data products available at co-located AERONET sites and field experiment locations
NASA Astrophysics Data System (ADS)
Welton, E. J.; Campbell, J. R.; Berkoff, T. A.
2002-05-01
Micro-pulse lidar (MPL) systems are small, eye-safe lidars capable of profiling the vertical distribution of aerosol and cloud layers. There are now over 20 MPL systems around the world, and they have been used in numerous field experiments. A new project was started at NASA Goddard Space Flight Center in 2000. The new project, MPL-Net, is a coordinated network of long-term MPL sites. The network also supports a limited number of field experiments each year. Most MPL-Net sites and field locations are co-located with AERONET sunphotometers. At these locations, the AERONET and MPL-Net data are combined to provide both column and vertically resolved aerosol and cloud measurements. The MPL-Net project coordinates the maintenance and repair of all instruments in the network. In addition, data are archived and processed by the project using common, standardized algorithms that have been developed and utilized over the past 10 years. These procedures ensure that stable, calibrated MPL systems are operating at sites and that the data quality remains high. Rigorous uncertainty calculations are performed on all MPL-Net data products. Automated, real-time level 1.0 data processing algorithms have been developed and are operational. Level 1.0 algorithms are used to process the raw MPL data into the form of range-corrected, uncalibrated lidar signals. Automated, real-time level 1.5 algorithms have also been developed and are now operational. Level 1.5 algorithms are used to calibrate the MPL systems, determine cloud and aerosol layer heights, and calculate the optical depth and extinction profile of the aerosol boundary layer. The co-located AERONET sunphotometer provides the aerosol optical depth, which is used as a constraint to solve for the extinction-to-backscatter ratio and the aerosol extinction profile. Browse images and data files are available on the MPL-Net website.
An overview of the processing algorithms and initial results from selected sites and field experiments will be presented. The capability of the MPL-Net project to produce automated real-time (next day) profiles of aerosol extinction will be shown. Finally, early results from Level 2.0 and Level 3.0 algorithms currently under development will be presented. The level 3.0 data provide continuous (day/night) retrievals of multiple aerosol and cloud heights, and optical properties of each layer detected.
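The Level 1.0 step described above (raw signal to range-corrected, uncalibrated signal) can be sketched in a few lines: subtract the background count rate and multiply by range squared, normalised by pulse energy. This is a generic illustration of range correction, not the MPL-Net processing code; names are assumptions.

```python
import numpy as np

def range_corrected(raw, r_m, bg, pulse_energy=1.0):
    """Level-1.0-style processing sketch: background-subtracted,
    range-squared-corrected, energy-normalised lidar signal."""
    raw = np.asarray(raw, dtype=float)
    r = np.asarray(r_m, dtype=float)
    return (raw - bg) * r ** 2 / pulse_energy
```

A signal that falls off exactly as 1/r² above a constant background flattens to a constant after correction, which is the sanity check usually applied to calibrated MPL profiles in aerosol-free air.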
Evaluation of Improvements to the TRMM Microwave Rain Algorithm
NASA Technical Reports Server (NTRS)
Yang, Song; Olson, Williams S.; Smith, Eric A.; Kummerow, Christian
2002-01-01
Improvements made to the Version 5 TRMM passive microwave rain retrieval algorithm (2A-12) are evaluated using independent data. Surface rain rate estimates from the Version 5 TRMM TMI (2A-12), PR (2A-25) and TMI/PR Combined (2B-31) algorithms and ground-based radar estimates for selected coincident subset datasets in 1998 over Melbourne and Kwajalein show varying degrees of agreement. The surface rain rates are then classified into convective and stratiform rain types over ocean, land, and coastal areas for more detailed comparisons to the ground radar measurements. These comparisons lead to a better understanding of the relative performances of the current TRMM rain algorithms. For example, at Melbourne more than 80% of the radar-derived rainfall is classified as convective rain. Convective rain from the TRMM rain algorithms is less than that from ground radar measurements, while TRMM stratiform rain is much greater. Rain area coverage from 2A-12 is also in reasonable agreement with ground radar measurements, with about 25% more over ocean and 25% less over land and coastal areas. Retrieved rain rates from the improved (Version 6) 2A-12 algorithm will be compared to 2A-25, 2B-31, and ground-based radar measurements to evaluate the impact of improvements to 2A-12 in Version 6. An important improvement to the Version 6 2A-12 algorithm is the retrieval of Q1/Q2 (latent heating/drying) profiles in addition to the surface rain rate and hydrometeor profiles. In order to ascertain the credibility of the new products, retrieved Q1/Q2 profiles are compared to independent ground-based estimates. Analyses of dual-Doppler radar data in conjunction with coincident rawinsonde data yield estimates of the vertical distributions of diabatic heating/drying at high horizontal resolution for selected cases over the Kwajalein and LBA field sites. The estimated vertical heating/drying structures appear to be reasonable. 
Comparisons of Q1/Q2 profiles from Version 6 2A-12 and the ground-based estimates are in progress. Retrieved Q1/Q2 structures will also be compared to MM5 hurricane simulations for selected cases. The results of these intercomparisons will be presented at the conference.
Comparison of MAX-DOAS profiling algorithms during CINDI-2 - Part 2: trace gases
NASA Astrophysics Data System (ADS)
Hendrick, Francois; Friess, Udo; Tirpitz, Lukas; Apituley, Arnoud; Van Roozendael, Michel; Kreher, Karin; Richter, Andreas; Wagner, Thomas
2017-04-01
The second Cabauw Intercomparison campaign for Nitrogen Dioxide measuring Instruments (CINDI-2) took place at the Cabauw Experimental Site for Atmospheric Research (CESAR; Utrecht area, The Netherlands) from 25 August until 7 October 2016. CINDI-2 aimed to assess the consistency of MAX-DOAS slant column density measurements of tropospheric species (NO2, HCHO, O3, and O4) relevant for the validation of future ESA atmospheric Sentinel missions, through coordinated operation of a large number of DOAS and MAX-DOAS instruments from all over the world. An important objective of the campaign was to study the relationship between remote-sensing column and profile measurements of the above species and collocated reference ancillary observations. For this purpose, the CINDI-2 Profiling Task Team (CPTT) was created, involving 22 groups performing aerosol and trace gas vertical profile inversion using dedicated MAX-DOAS profiling algorithms, as well as the teams responsible for ancillary profile and surface concentration measurements (NO2 analysers, NO2 sondes, NO2 and Raman LIDARs, CAPS, Long-Path DOAS, sunphotometer, nephelometer, etc.). The main purpose of the CPTT is to assess the consistency of the different profiling tools for retrieving aerosol extinction and trace gas vertical profiles through comparison exercises using commonly defined settings, and to validate the retrievals with correlative observations. In this presentation, we give an overview of the MAX-DOAS vertical profile comparison results, focusing on NO2 and HCHO, the aerosol retrievals being presented in a companion abstract led by U. Frieß. The performance of the different algorithms is investigated with respect to the various sky and weather conditions and aerosol loadings encountered during the campaign.
The consistency between optimal-estimation-based and parameterized profiling tools is also evaluated for these different conditions, together with the level of agreement with available NO2 and HCHO ancillary observations. This comparison study will be put in the perspective of the development of a centralized MAX-DOAS processing system within the framework of the ESA Fiducial Reference Measurements (FRM) project.
Lee, Hun Joo; Han, Eunyoung; Lee, Jaesin; Chung, Heesun; Min, Sung-Gi
2016-11-01
The aim of this study is to improve the resolution of impurity peaks using a newly devised normalization algorithm for multiple internal standards (ISs) and to describe a visual peak selection system (VPSS) for efficient support of impurity profiling. Drug trafficking routes, location of manufacture, or synthetic route can be identified from impurities in seized drugs. In the analysis of impurities, different chromatogram profiles are obtained from gas chromatography and used to examine similarities between drug samples. The data processing method using relative retention time (RRT) calculated by a single internal standard is not preferred when many internal standards are used and many chromatographic peaks are present, because of the risk of overlap between peaks and the difficulty in classifying impurities. In this study, impurities in methamphetamine (MA) were extracted by a liquid-liquid extraction (LLE) method using ethylacetate containing 4 internal standards and analyzed by gas chromatography-flame ionization detection (GC-FID). The newly developed VPSS consists of an input module, a conversion module, and a detection module. The input module imports chromatograms collected from GC and performs preprocessing, which is converted with a normalization algorithm in the conversion module, and finally the detection module detects the impurities in MA samples using a visualized zoning user interface. The normalization algorithm in the conversion module was used to convert the raw data from GC-FID. The VPSS with the built-in normalization algorithm can effectively detect different impurities in samples even in complex matrices and has high resolution, while keeping the time sequence of chromatographic peaks the same as in the RRT method. The system can widen the full range of chromatograms so that impurity peaks are better aligned for easy separation and classification. The resolution, accuracy, and speed of impurity profiling showed remarkable improvement.
A trajectory generation framework for modeling spacecraft entry in MDAO
NASA Astrophysics Data System (ADS)
D'Souza, Sarah N.; Sarigul-Klijn, Nesrin
2016-04-01
In this paper a novel trajectory generation framework was developed that optimizes trajectory event conditions for use in a Generalized Entry Guidance algorithm. The framework was designed to be adaptable via the use of high-fidelity equations of motion and drag-based analytical bank profiles. Within this framework, a novel technique was implemented that resolves the sensitivity of the bank profile to atmospheric non-linearities. The framework's adaptability was established by running two different entry bank conditions. Each case yielded a reference trajectory and a set of transition event conditions that are flight-feasible and implementable in a Generalized Entry Guidance algorithm.
NASA Astrophysics Data System (ADS)
Zarubin, V.; Bychkov, A.; Simonova, V.; Zhigarkov, V.; Karabutov, A.; Cherepetskaya, E.
2018-05-01
In this paper, a technique for reflection mode immersion 2D laser-ultrasound tomography of solid objects with piecewise linear 2D surface profiles is presented. Pulsed laser radiation was used for generation of short ultrasonic probe pulses, providing high spatial resolution. A piezofilm sensor array was used for detection of the waves reflected by the surface and internal inhomogeneities of the object. The original ultrasonic image reconstruction algorithm accounting for refraction of acoustic waves at the liquid-solid interface provided longitudinal resolution better than 100 μm in the polymethyl methacrylate sample object.
An algorithm to diagnose ball bearing faults in servomotors running arbitrary motion profiles
NASA Astrophysics Data System (ADS)
Cocconcelli, Marco; Bassi, Luca; Secchi, Cristian; Fantuzzi, Cesare; Rubini, Riccardo
2012-02-01
This paper describes a procedure that extends the scope of classical ball bearing fault detection methods (based on envelope analysis and fault frequency identification) beyond their usual area of application. The objective is to allow condition-based monitoring of such bearings in servomotor applications, where the motor in its normal mode of operation typically has to follow a non-constant angular velocity profile that may contain motion inversions. After describing and analyzing the algorithm from a theoretical point of view, experimental results obtained on a real industrial application are presented and discussed.
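The classical constant-speed baseline that this procedure generalizes, envelope analysis followed by identification of the dominant modulation (fault) frequency, can be sketched in pure Python. The synthetic signal, window lengths, and frequencies below are arbitrary choices for illustration, not values from the paper:

```python
import math

def envelope(signal, win):
    """Full-wave rectification followed by a moving-average low-pass filter."""
    rect = [abs(x) for x in signal]
    half = win // 2
    out = []
    for i in range(len(rect)):
        seg = rect[max(0, i - half): i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def dominant_freq(x, fs, fmax):
    """Brute-force DFT magnitude scan up to fmax; returns the peak frequency (Hz)."""
    n = len(x)
    mean = sum(x) / n
    best_f, best_m = 0.0, -1.0
    for k in range(1, int(fmax * n / fs)):
        re = sum((x[i] - mean) * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum((x[i] - mean) * math.sin(2 * math.pi * k * i / n) for i in range(n))
        m = re * re + im * im
        if m > best_m:
            best_f, best_m = k * fs / n, m
    return best_f

# Synthetic test: 1 kHz resonance carrier amplitude-modulated at a 40 Hz "fault" rate.
fs, n, f_fault = 8000, 2000, 40          # 0.25 s record -> 4 Hz frequency resolution
sig = [(1 + 0.8 * math.cos(2 * math.pi * f_fault * i / fs)) *
       math.cos(2 * math.pi * 1000 * i / fs) for i in range(n)]
env = envelope(sig, 16)                  # 16 samples = 2 carrier periods of ripple
f_detected = dominant_freq(env, fs, 200)  # should recover the 40 Hz modulation rate
```

The envelope step demodulates the resonance carrier; the spectrum of the envelope then peaks at the fault repetition rate, which is exactly the assumption that breaks down under the varying-speed profiles the paper addresses.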
A ranking method for the concurrent learning of compounds with various activity profiles.
Dörr, Alexander; Rosenbaum, Lars; Zell, Andreas
2015-01-01
In this study, we present an SVM-based ranking algorithm for the concurrent learning of compounds with different activity profiles and their varying prioritization. To this end, a specific labeling of each compound was elaborated in order to infer virtual screening models against multiple targets. We compared the method with several state-of-the-art SVM classification techniques capable of inferring multi-target screening models on three chemical data sets (cytochrome P450s, dehydrogenases, and a trypsin-like protease data set), each containing three different biological targets. The experiments show that ranking-based algorithms achieve increased performance for single- and multi-target virtual screening. Moreover, compared to other multi-target SVM methods, compounds that do not completely fulfill the desired activity profile are still ranked higher than decoys or compounds with an entirely undesired profile. SVM-based ranking methods thus constitute a valuable approach for virtual screening in multi-target drug design. The utilization of such methods is most helpful when dealing with compounds with various activity profiles and when many ligands with an already perfectly matching activity profile are not to be expected.
López-Rodríguez, Patricia; Escot-Bocanegra, David; Fernández-Recio, Raúl; Bravo, Ignacio
2015-01-01
Radar high resolution range profiles are widely used among the target recognition community for the detection and identification of flying targets. In this paper, singular value decomposition is applied to extract the relevant information and to model each aircraft as a subspace. The identification algorithm is based on the angle between subspaces and takes place in a transformed domain. In order to have a wide database of radar signatures and to evaluate the performance, simulated range profiles are used as the recognition database, while the test samples comprise actual range profiles collected in a measurement campaign. Thanks to the modeling of aircraft as subspaces, only the valuable information of each target is used in the recognition process. Thus, one of the main advantages of using singular value decomposition is that it helps to overcome the notable dissimilarities in shape and signal-to-noise ratio between actual and simulated profiles due to their difference in nature. Despite these differences, the recognition rates obtained with the algorithm are quite promising.
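The paper's subspaces come from an SVD over many training profiles; as a deliberately reduced sketch of the angle-based matching step, one can treat each target as a one-dimensional subspace (a single template profile) and classify by the smallest principal angle. All names and numbers below are illustrative, not measured radar data:

```python
import math

def angle(u, v):
    """Principal angle between the 1-D subspaces spanned by u and v.
    Taking |u.v| makes the measure invariant to sign and overall scale."""
    dot = abs(sum(a * b for a, b in zip(u, v)))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(min(1.0, dot / (nu * nv)))

def classify(profile, templates):
    """Return the label of the template subspace nearest in angle."""
    return min(templates, key=lambda name: angle(profile, templates[name]))

templates = {"A": [1.0, 2.0, 3.0, 2.0], "B": [3.0, 1.0, 0.5, 0.2]}
# A scaled, slightly noisy copy of template "A" should still map to "A",
# mimicking the amplitude mismatch between simulated and measured profiles.
test_profile = [2.1, 3.9, 6.2, 4.0]
label = classify(test_profile, templates)
```

The scale invariance of the angle measure is what lets a recognition database built from simulations absorb the signal-to-noise and amplitude differences of measured profiles.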
Effects of daily, high spatial resolution a priori profiles of satellite-derived NOx emissions
NASA Astrophysics Data System (ADS)
Laughner, J.; Zare, A.; Cohen, R. C.
2016-12-01
The current generation of space-borne NO2 column observations provides a powerful means of constraining NOx emissions, owing to the spatial resolution and global coverage afforded by the Ozone Monitoring Instrument (OMI). The greater resolution of next-generation instruments such as TROPOMI and the capabilities of the geosynchronous platforms TEMPO, Sentinel-4, and GEMS will extend these capabilities further, but we must apply lessons learned from the current generation of retrieval algorithms to make the best use of these instruments. Here, we focus on the effect of the resolution of the a priori NO2 profiles used in the retrieval algorithms. We show that for an OMI retrieval, using daily high-resolution a priori profiles results in changes in the retrieved VCDs of up to 40% when compared to a retrieval using monthly average profiles at the same resolution. Further, comparing a retrieval with daily high-spatial-resolution a priori profiles to a more standard one, we show that the derived emissions increase by 100% when using the optimized retrieval.
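Why the a priori profile shape matters can be seen from the standard column retrieval relation VCD = SCD / AMF, where the air mass factor is a scattering-weight-weighted average over the a priori shape. A toy numerical sketch (all numbers are invented for illustration, not from any operational OMI product):

```python
def amf(scattering_weights, apriori_profile):
    """Air mass factor as the scattering-weight average over the a priori shape."""
    num = sum(w * x for w, x in zip(scattering_weights, apriori_profile))
    return num / sum(apriori_profile)

# Illustrative per-layer numbers: sensitivity (scattering weight) rises with
# altitude, so a profile peaked near the surface yields a smaller AMF and
# hence a larger retrieved vertical column from the same slant column.
weights = [0.3, 0.5, 0.8, 1.0, 1.2]   # per-layer sensitivity, surface to top
coarse = [1.0, 1.0, 1.0, 1.0, 1.0]    # flat, monthly-mean-style prior
fine = [3.0, 1.5, 0.5, 0.3, 0.2]      # surface-peaked, high-resolution prior
scd = 2.0e15                           # slant column density (molec/cm^2)
vcd_coarse = scd / amf(weights, coarse)
vcd_fine = scd / amf(weights, fine)
```

With these illustrative values the surface-peaked prior raises the retrieved VCD by tens of percent, the same order as the 40% changes reported above.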
CellProfiler Tracer: exploring and validating high-throughput, time-lapse microscopy image data.
Bray, Mark-Anthony; Carpenter, Anne E
2015-11-04
Time-lapse analysis of cellular images is an important and growing need in biology. Algorithms for cell tracking are widely available; what researchers have been missing is a single open-source software package to visualize standard tracking output (from software like CellProfiler) in a way that allows convenient assessment of track quality, especially for researchers tuning tracking parameters for high-content time-lapse experiments. This makes quality assessment and algorithm adjustment a substantial challenge, particularly when dealing with hundreds of time-lapse movies collected in a high-throughput manner. We present CellProfiler Tracer, a free and open-source tool that complements the object tracking functionality of the CellProfiler biological image analysis package. Tracer allows multi-parametric morphological data to be visualized on object tracks, providing visualizations that have already been validated within the scientific community for time-lapse experiments, and combining them with simple graph-based measures for highlighting possible tracking artifacts. CellProfiler Tracer is a useful, free tool for inspection and quality control of object tracking data, available from http://www.cellprofiler.org/tracer/.
Understanding the HMI Pseudocontinuum in White-light Solar Flares
NASA Astrophysics Data System (ADS)
Švanda, Michal; Jurčák, Jan; Kašparová, Jana; Kleint, Lucia
2018-06-01
We analyze observations of the X9.3 solar flare (SOL2017-09-06T11:53) observed by SDO/HMI and the Hinode/Solar Optical Telescope. Our aim is to learn about the nature of the HMI pseudocontinuum Ic used as a proxy for the white-light continuum. From model atmospheres retrieved by an inversion code applied to the Stokes profiles observed by the Hinode satellite, we synthesize profiles of the Fe I 617.3 nm line and compare them to HMI observations. Based on a pixel-by-pixel comparison, we show that the value of Ic represents the continuum level well in quiet-Sun regions only. In magnetized regions, it suffers from a simplistic algorithm applied to a complex line shape. During this flare, both instruments also registered emission profiles in the flare ribbons. Such emission profiles are poorly represented by the six spectral points of HMI, and the MDI-like algorithm does not account for emission profiles in general; thus, the derived pseudocontinuum intensity does not approximate the continuum value properly.
A study of metaheuristic algorithms for high dimensional feature selection on microarray data
NASA Astrophysics Data System (ADS)
Dankolo, Muhammad Nasiru; Radzi, Nor Haizan Mohamed; Sallehuddin, Roselina; Mustaffa, Noorfa Haszlinna
2017-11-01
Microarray systems enable experts to examine gene profiles at the molecular level using machine learning algorithms, increasing the potential for classification and diagnosis of many diseases at the gene expression level. However, numerous difficulties may affect the efficiency of machine learning algorithms, including the vast number of gene features in the original data, many of which may be unrelated to the intended analysis. Therefore, feature selection must be performed during data pre-processing. Many feature selection algorithms have been developed and applied to microarray data, including metaheuristic optimization algorithms. This paper discusses the application of metaheuristic algorithms for feature selection on microarray datasets. This study reveals that the algorithms yield interesting results with limited resources, thereby saving the computational expense of machine learning algorithms.
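As a concrete toy of the metaheuristic wrapper approach surveyed here, the sketch below evolves binary feature masks with a miniature genetic algorithm against a contrived fitness function. The fitness, population sizes, and "informative feature" setup are invented for illustration; real use would score each mask with a cross-validated classifier on the expression data:

```python
import random

def ga_select(n_features, fitness, pop=20, gens=40, seed=1):
    """Tiny genetic algorithm over binary feature masks (toy sketch)."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_features)
            child = a[:cut] + b[cut:]              # one-point crossover
            i = rng.randrange(n_features)
            child[i] = 1 - child[i]                # point mutation
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

# Toy fitness: features 0-2 are "informative", every other selected feature
# adds a small penalty, mimicking accuracy-minus-size wrapper criteria.
def fitness(mask):
    return sum(mask[:3]) - 0.1 * sum(mask[3:])

best = ga_select(15, fitness)
```

Even this tiny search reliably recovers the three informative features while pruning most of the penalized ones, which is the behavior that makes such wrappers attractive on high-dimensional microarray data.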
Phylo_dCor: distance correlation as a novel metric for phylogenetic profiling.
Sferra, Gabriella; Fratini, Federica; Ponzi, Marta; Pizzi, Elisabetta
2017-09-05
Elaboration of powerful methods to predict functional and/or physical protein-protein interactions from genome sequence is one of the main tasks of the post-genomic era. Phylogenetic profiling allows the prediction of protein-protein interactions at a whole-genome level in both Prokaryotes and Eukaryotes, and for this reason it is considered one of the most promising methods. Here, we propose an improvement of phylogenetic profiling that enables the handling of large genomic datasets and the inference of global protein-protein interactions. This method uses the distance correlation as a new measure of phylogenetic profile similarity. We constructed robust reference sets and developed Phylo-dCor, a parallelized version of the algorithm for calculating the distance correlation that makes it applicable to large genomic data. Using Saccharomyces cerevisiae and Escherichia coli genome datasets, we showed that Phylo-dCor outperforms previously described phylogenetic profiling methods based on mutual information and Pearson's correlation as measures of profile similarity. In this work, we constructed and assessed robust reference sets and propose the distance correlation as a measure for comparing phylogenetic profiles. To make it applicable to large genomic data, we developed Phylo-dCor, a parallelized version of the algorithm for calculating the distance correlation. Two R scripts that can be run on a wide range of machines are available upon request.
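The sample distance correlation underlying Phylo-dCor can be written down compactly. The sketch below is a plain-Python illustration of the measure applied to two equal-length profiles (the authors' implementation is parallelized R, not this code):

```python
def dcor(x, y):
    """Sample distance correlation between two equal-length numeric profiles."""
    n = len(x)

    def centered(v):
        # pairwise distance matrix, then double-centering (rows, columns, grand mean)
        d = [[abs(a - b) for b in v] for a in v]
        row = [sum(r) / n for r in d]
        grand = sum(row) / n
        return [[d[i][j] - row[i] - row[j] + grand for j in range(n)]
                for i in range(n)]

    A, B = centered(x), centered(y)
    dcov2 = sum(A[i][j] * B[i][j] for i in range(n) for j in range(n)) / (n * n)
    dvarx = sum(a * a for r in A for a in r) / (n * n)
    dvary = sum(b * b for r in B for b in r) / (n * n)
    if dvarx * dvary == 0:
        return 0.0          # a constant profile carries no signal
    return (dcov2 / (dvarx * dvary) ** 0.5) ** 0.5
```

Unlike Pearson's correlation, dCor is zero only for (distance-)independent profiles, which is what makes it attractive as a profile-similarity measure; for a presence/absence phylogenetic profile compared with itself it equals 1.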
Barber, Ramon; Zwilling, Valerie; Salichs, Miguel A.
2014-01-01
Nowadays the automobile industry is becoming more and more demanding as far as quality is concerned. Within the wide variety of processes in which this quality must be ensured, those regarding the squeezing of the auto bodywork are especially important due to the fact that the quality of the resulting product is tested manually by experts, leading to inaccuracies of all types. In this paper, an algorithm is proposed for the automated evaluation of the imperfections in the sheets of the bodywork after the squeezing process. The algorithm processes the profile signals from a retroreflective image and characterizes an imperfection. It is based on a convergence criterion that follows the line of the maximum gradient of the imperfection and gives its geometrical characteristics as a result: maximum gradient, length, width, and area.
Evaluating A Priori Ozone Profile Information Used in TEMPO Tropospheric Ozone Retrievals
NASA Technical Reports Server (NTRS)
Johnson, Matthew S.; Sullivan, John T.; Liu, Xiong; Newchurch, Mike; Kuang, Shi; McGee, Thomas J.; Langford, Andrew O'Neil; Senff, Christoph J.; Leblanc, Thierry; Berkoff, Timothy;
2016-01-01
Ozone (O3) is a greenhouse gas and toxic pollutant which plays a major role in air quality. Typically, monitoring of surface air quality and O3 mixing ratios is primarily conducted using in situ measurement networks. This is partially due to high-quality information related to air quality being limited from space-borne platforms due to coarse spatial resolution, limited temporal frequency, and minimal sensitivity to lower tropospheric and surface-level O3. The Tropospheric Emissions: Monitoring of Pollution (TEMPO) satellite is designed to address these limitations of current space-based platforms and to improve our ability to monitor North American air quality. TEMPO will provide hourly data of total column and vertical profiles of O3 with high spatial resolution to be used as a near-real-time air quality product. TEMPO O3 retrievals will apply the Smithsonian Astrophysical Observatory profile algorithm developed based on work from GOME, GOME-2, and OMI. This algorithm uses a priori O3 profile information from a climatological data-base developed from long-term ozone-sonde measurements (tropopause-based (TB) O3 climatology). It has been shown that satellite O3 retrievals are sensitive to a priori O3 profiles and covariance matrices. During this work we investigate the climatological data to be used in TEMPO algorithms (TB O3) and simulated data from the NASA GMAO Goddard Earth Observing System (GEOS-5) Forward Processing (FP) near-real-time (NRT) model products. These two data products will be evaluated with ground-based lidar data from the Tropospheric Ozone Lidar Network (TOLNet) at various locations of the US. This study evaluates the TB climatology, GEOS-5 climatology, and 3-hourly GEOS-5 data compared to lower tropospheric observations to demonstrate the accuracy of a priori information to potentially be used in TEMPO O3 algorithms. Here we present our initial analysis and the theoretical impact on TEMPO retrievals in the lower troposphere.
Evaluating A Priori Ozone Profile Information Used in TEMPO Tropospheric Ozone Retrievals
NASA Astrophysics Data System (ADS)
Johnson, M. S.; Sullivan, J. T.; Liu, X.; Newchurch, M.; Kuang, S.; McGee, T. J.; Langford, A. O.; Senff, C. J.; Leblanc, T.; Berkoff, T.; Gronoff, G.; Chen, G.; Strawbridge, K. B.
2016-12-01
Ozone (O3) is a greenhouse gas and toxic pollutant which plays a major role in air quality. Typically, monitoring of surface air quality and O3 mixing ratios is primarily conducted using in situ measurement networks. This is partially due to high-quality information related to air quality being limited from space-borne platforms due to coarse spatial resolution, limited temporal frequency, and minimal sensitivity to lower tropospheric and surface-level O3. The Tropospheric Emissions: Monitoring of Pollution (TEMPO) satellite is designed to address these limitations of current space-based platforms and to improve our ability to monitor North American air quality. TEMPO will provide hourly data of total column and vertical profiles of O3 with high spatial resolution to be used as a near-real-time air quality product. TEMPO O3 retrievals will apply the Smithsonian Astrophysical Observatory profile algorithm developed based on work from GOME, GOME-2, and OMI. This algorithm uses a priori O3 profile information from a climatological data-base developed from long-term ozone-sonde measurements (tropopause-based (TB) O3 climatology). It has been shown that satellite O3 retrievals are sensitive to a priori O3 profiles and covariance matrices. During this work we investigate the climatological data to be used in TEMPO algorithms (TB O3) and simulated data from the NASA GMAO Goddard Earth Observing System (GEOS-5) Forward Processing (FP) near-real-time (NRT) model products. These two data products will be evaluated with ground-based lidar data from the Tropospheric Ozone Lidar Network (TOLNet) at various locations of the US. This study evaluates the TB climatology, GEOS-5 climatology, and 3-hourly GEOS-5 data compared to lower tropospheric observations to demonstrate the accuracy of a priori information to potentially be used in TEMPO O3 algorithms. Here we present our initial analysis and the theoretical impact on TEMPO retrievals in the lower troposphere.
Evaluation and optimization of lidar temperature analysis algorithms using simulated data
NASA Technical Reports Server (NTRS)
Leblanc, Thierry; McDermid, I. Stuart; Hauchecorne, Alain; Keckhut, Philippe
1998-01-01
The middle atmosphere (20 to 90 km altitude) has received increasing interest from the scientific community during recent decades, especially since such problems as polar ozone depletion and climatic change have become so important. Temperature profiles have been obtained in this region using a variety of satellite-, rocket-, and balloon-borne instruments as well as some ground-based systems. One of the more promising of these instruments, especially for long-term high-resolution measurements, is the lidar. Measurements of laser radiation Rayleigh backscattered, or Raman scattered, by atmospheric air molecules can be used to determine the relative air density profile and subsequently the temperature profile, if it is assumed that the atmosphere is in hydrostatic equilibrium and follows the ideal gas law. The high vertical and spatial resolution makes the lidar a well-adapted instrument for the study of many middle atmospheric processes and phenomena, as well as for the evaluation and validation of temperature measurements from satellites such as the Upper Atmosphere Research Satellite (UARS). In the Network for Detection of Stratospheric Change (NDSC), lidar is the core instrument for measuring middle atmosphere temperature profiles. Using the best lidar analysis algorithm possible is therefore of crucial importance. In this work, the JPL and CNRS/SA lidar analysis software were evaluated. The results of this evaluation allowed the programs to be corrected and optimized, and new production software versions were produced. First, a brief description of the lidar technique and the method used to simulate lidar raw-data profiles from a given temperature profile is presented. Evaluation and optimization of the JPL and CNRS/SA algorithms are then discussed.
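The density-to-temperature step described above (hydrostatic equilibrium plus the ideal gas law) is straightforward to sketch. The version below is a simplified single-pass integration with constant gravity, checked against a synthetic isothermal atmosphere, in the spirit of the simulated raw-data profiles used in this work (the constants and grid are illustrative, not the JPL or CNRS/SA implementation):

```python
import math

# Dry-air molar mass (kg/mol), gas constant (J/mol/K), gravity (m/s^2, held constant).
M, R, G = 0.028964, 8.314462, 9.80665

def temperature_from_density(z, rho, t_top):
    """Hydrostatic temperature retrieval from a relative density profile.
    z: altitudes in m (increasing); rho: relative density; t_top: tie-on T (K)."""
    n = len(z)
    p = [0.0] * n
    # seed the pressure at the top from the ideal gas law and the tie-on temperature
    p[-1] = rho[-1] * R * t_top / M
    # integrate hydrostatic equilibrium downward (trapezoidal rule)
    for i in range(n - 2, -1, -1):
        p[i] = p[i + 1] + G * 0.5 * (rho[i] + rho[i + 1]) * (z[i + 1] - z[i])
    # ideal gas law converts pressure and density back to temperature
    return [M * p[i] / (R * rho[i]) for i in range(n)]

# Check on a synthetic isothermal atmosphere (T0 = 250 K): the retrieval
# should recover ~250 K at every level, independent of the density scaling.
T0 = 250.0
H = R * T0 / (M * G)                            # scale height (~7.3 km)
z = [30000.0 + 300.0 * i for i in range(200)]   # 30-90 km, 300 m bins
rho = [math.exp(-zi / H) for zi in z]
T = temperature_from_density(z, rho, T0)
```

Feeding simulated profiles from a known temperature through the analysis, as done here, exposes integration and tie-on errors directly, which is exactly the evaluation strategy the abstract describes.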
Screening for Prenatal Substance Use
Yonkers, Kimberly A.; Gotman, Nathan; Kershaw, Trace; Forray, Ariadna; Howell, Heather B.; Rounsaville, Bruce J.
2011-01-01
OBJECTIVE To report on the development of a questionnaire to screen for hazardous substance use in pregnant women and to compare the performance of the questionnaire with other drug and alcohol measures. METHODS Pregnant women were administered a modified TWEAK (Tolerance, Worried, Eye-openers, Amnesia, K[C] Cut Down) questionnaire, the 4Ps Plus questionnaire, items from the Addiction Severity Index, and two questions about domestic violence (N=2,684). The sample was divided into “training” (n=1,610) and “validation” (n=1,074) subsamples. We applied recursive partitioning class analysis to the responses from individuals in the training subsample, which resulted in a three-item Substance Use Risk Profile-Pregnancy scale. We examined sensitivity, specificity, and the fit of logistic regression models in the validation subsample to compare the performance of the Substance Use Risk Profile-Pregnancy scale with the modified TWEAK and various scoring algorithms of the 4Ps. RESULTS The Substance Use Risk Profile-Pregnancy scale comprises three informative questions that can be scored for high- or low-risk populations. The Substance Use Risk Profile-Pregnancy scale algorithm for low-risk populations was the most highly predictive of substance use in the validation subsample (Akaike’s Information Criterion=579.75, Nagelkerke R2=0.27), with high sensitivity (91%) and adequate specificity (67%). The high-risk algorithm had lower sensitivity (57%) but higher specificity (88%). CONCLUSION The Substance Use Risk Profile-Pregnancy scale is simple and flexible, with good sensitivity and specificity, and can potentially detect a range of substances that may be abused. Clinicians need to further assess women with a positive screen to identify those who require treatment for alcohol or illicit substance use in pregnancy.
Unified algorithm of cone optics to compute solar flux on central receiver
NASA Astrophysics Data System (ADS)
Grigoriev, Victor; Corsi, Clotilde
2017-06-01
Analytical algorithms to compute the flux distribution on a central receiver are considered a faster alternative to ray tracing. Many modifications of these algorithms exist, with HFLCAL and UNIZAR being the most recognized and verified. In this work, a generalized algorithm is presented which is valid for an arbitrary sun shape of radial symmetry. Heliostat mirrors can have a non-rectangular profile, and the effects of shading and blocking, strong defocusing, and astigmatism can be taken into account. The algorithm is suitable for parallel computing and can benefit from hardware acceleration of polygon texturing.
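The flavor of the HFLCAL-type analytical approach, in which each heliostat's image on the receiver is modeled as a circularly symmetric Gaussian spot whose width absorbs sun shape, slope error, and aberrations, can be sketched as follows. Powers, aim points, and spot sizes are illustrative, not from any plant model:

```python
import math

def flux_map(heliostats, xs, ys):
    """Sum circularly symmetric Gaussian spots (HFLCAL-style) on a receiver grid.
    Each heliostat is (power_W, center_x_m, center_y_m, sigma_m)."""
    grid = [[0.0] * len(xs) for _ in ys]
    for p, cx, cy, s in heliostats:
        for j, y in enumerate(ys):
            for i, x in enumerate(xs):
                r2 = (x - cx) ** 2 + (y - cy) ** 2
                # normalized 2-D Gaussian: integrates to the delivered power p
                grid[j][i] += p / (2 * math.pi * s * s) * math.exp(-r2 / (2 * s * s))
    return grid

# Two illustrative spots on a 2 m x 2 m aim plane, 0.1 m grid spacing.
xs = [i * 0.1 - 1.0 for i in range(21)]
ys = [j * 0.1 - 1.0 for j in range(21)]
helios = [(50e3, 0.0, 0.0, 0.25), (30e3, 0.4, 0.2, 0.30)]
f = flux_map(helios, xs, ys)
```

Because each spot is a closed-form Gaussian, evaluating the map costs one exponential per heliostat per grid point, which is why this family of algorithms is so much faster than ray tracing and parallelizes trivially.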
USDA-ARS?s Scientific Manuscript database
In this research, the inverse algorithm for estimating optical properties of food and biological materials from spatially-resolved diffuse reflectance was optimized in terms of data smoothing, normalization and spatial region of reflectance profile for curve fitting. Monte Carlo simulation was used ...
Glacier Frontal Line Extraction from SENTINEL-1 SAR Imagery in Prydz Area
NASA Astrophysics Data System (ADS)
Li, F.; Wang, Z.; Zhang, S.; Zhang, Y.
2018-04-01
Synthetic Aperture Radar (SAR) provides all-day and all-night observation of the Earth in all weather conditions with high resolution, and it is widely used in polar research on sea ice, ice shelves, and glaciers. For glacier monitoring, the frontal position of a calving glacier at different moments in time is of great importance, as it supports estimation of the calving rate and flux of the glacier. In this abstract, an automatic algorithm for glacier frontal extraction using time-series Sentinel-1 SAR imagery is proposed. The technique transforms the amplitude imagery of Sentinel-1 SAR into a binary map using the SO-CFAR method; frontal points are then extracted using a profile method that reduces the 2D binary map to 1D binary profiles, and the final frontal position of the calving glacier is the optimal profile selected from the different average segmented profiles. The experiment shows that the detection algorithm can automatically extract the frontal position of glaciers from SAR data with high efficiency.
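The profile-reduction step can be illustrated in miniature: after CFAR segmentation yields an ice/water binary map, each image column becomes a 1D profile whose first ice pixel marks a candidate frontal point. This is a toy sketch of that reduction only; the paper additionally selects an optimal profile among averaged segmented profiles:

```python
def frontal_line(binary_map):
    """Reduce a 2-D ice/water binary map to a frontal position per column.
    For each column (treated as a 1-D profile), return the first row index
    flagged as ice, or None if the column contains no ice."""
    rows, cols = len(binary_map), len(binary_map[0])
    front = []
    for c in range(cols):
        pos = next((r for r in range(rows) if binary_map[r][c] == 1), None)
        front.append(pos)
    return front

# Toy binary map (1 = glacier ice, 0 = open water); the front steps down
# to the right, as a calving margin might in a segmented SAR scene.
ice = [
    [0, 0, 0, 0, 0],
    [1, 0, 0, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 1, 1, 1, 0],
    [1, 1, 1, 1, 1],
]
```

Collapsing the 2D segmentation to per-column transitions makes the frontal position a simple, robust 1D detection problem instead of a full contour-tracing one.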
Gueddida, Saber; Yan, Zeyin; Kibalin, Iurii; Voufack, Ariste Bolivard; Claiser, Nicolas; Souhassou, Mohamed; Lecomte, Claude; Gillon, Béatrice; Gillet, Jean-Michel
2018-04-28
In this paper, we propose a simple cluster model with limited basis sets to reproduce the unpaired electron distributions in a YTiO3 ferromagnetic crystal. The spin-resolved one-electron-reduced density matrix is reconstructed simultaneously from theoretical magnetic structure factors and directional magnetic Compton profiles using our joint refinement algorithm. This algorithm is guided by the rescaling of basis functions and the adjustment of the spin population matrix. The resulting spin electron density in both position and momentum spaces from the joint refinement model is in agreement with theoretical and experimental results. Benefits brought from magnetic Compton profiles to the entire spin density matrix are illustrated. We studied the magnetic properties of the YTiO3 crystal along the Ti-O1-Ti bonding. We found that the basis functions are mostly rescaled by means of magnetic Compton profiles, while the molecular occupation numbers are mainly modified by the magnetic structure factors.
NASA Astrophysics Data System (ADS)
Olsen, Kevin S.; Toon, Geoffrey C.; Boone, Chris D.; Strong, Kimberly
2016-03-01
Motivated by the initial selection of a high-resolution solar occultation Fourier transform spectrometer (FTS) to fly to Mars on the ExoMars Trace Gas Orbiter, we have been developing algorithms for retrieving volume mixing ratio vertical profiles of trace gases, the primary component of which is a new algorithm and software for retrieving vertical profiles of temperature and pressure from the spectra. In contrast to Earth-observing instruments, which can rely on accurate meteorological models, a priori information, and spacecraft position, Mars retrievals require a method with minimal reliance on such data. The temperature and pressure retrieval algorithms developed for this work were evaluated using Earth-observing spectra from the Atmospheric Chemistry Experiment (ACE) FTS, a solar occultation instrument in orbit since 2003, and the basis for the instrument selected for a Mars mission. ACE-FTS makes multiple measurements during an occultation, separated in altitude by 1.5-5 km, and we analyse 10 CO2 vibration-rotation bands at each altitude, each with a different usable altitude range. We describe the algorithms and present results of their application and their comparison to the ACE-FTS data products. The Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) provides vertical profiles of temperature up to 40 km with high vertical resolution. Using six satellites and GPS radio occultation, COSMIC's data product has excellent temporal and spatial coverage, allowing us to find coincident measurements with ACE with very tight criteria: less than 1.5 h and 150 km. We present an intercomparison of temperature profiles retrieved from ACE-FTS using our algorithm, that of the ACE Science Team (v3.5), and from COSMIC. 
When our retrievals are compared to ACE-FTS v3.5, we find mean differences between -5 and +2 K and that our retrieved profiles have no seasonal or zonal biases but do have a warm bias in the stratosphere and a cold bias in the mesosphere. When compared to COSMIC, we do not observe a warm/cool bias and mean differences are between -4 and +1 K. COSMIC comparisons are restricted to below 40 km, where our retrievals have the best agreement with ACE-FTS v3.5. When comparing ACE-FTS v3.5 to COSMIC we observe a cold bias in COSMIC of 0.5 K, and mean differences are between -0.9 and +0.6 K.
2013-03-15
…methods as those used for constructing the Generalized Digital Environmental Model (GDEM) version 4 (Carnes, Helber, et al. 2010). … in the EOF analysis, which is described in Sections 4.2.1 and 5.2.3. The primary difference between the ISOP climatology and GDEM is that ISOP only uses paired profiles of T and S, whereas GDEM uses all available T profiles. Paired profiles of T and S are required for ISOP because the T and S co…
An assessment of 'shuffle algorithm' collision mechanics for particle simulations
NASA Technical Reports Server (NTRS)
Feiereisen, William J.; Boyd, Iain D.
1991-01-01
Among the algorithms for collision mechanics in present use, the 'shuffle algorithm' of Baganoff (McDonald and Baganoff, 1988; Baganoff and McDonald, 1990) not only allows efficient vectorization, but also discretizes the possible outcomes of a collision. To assess the applicability of the shuffle algorithm, a simulation of flows in monatomic gases was performed and the calculated characteristics of shock waves were compared with those obtained using a commonly employed isotropic scattering law. It is shown that, in general, the shuffle algorithm adequately represents the collision mechanics in cases where the goal of the calculations is mean profiles of density and temperature.
Profiling atmospheric water vapor by microwave radiometry
NASA Technical Reports Server (NTRS)
Wang, J. R.; Wilheit, T. T.; Szejwach, G.; Gesell, L. H.; Nieman, R. A.; Niver, D. S.; Krupp, B. M.; Gagliano, J. A.; King, J. L.
1983-01-01
High-altitude microwave radiometric observations at frequencies near 92 and 183.3 GHz were used to study the potential of retrieving atmospheric water vapor profiles over both land and water. An algorithm based on an extended Kalman-Bucy filter was implemented and applied for the water vapor retrieval. The results show great promise for atmospheric water vapor profiling by microwave radiometry at a level heretofore not attainable at lower frequencies.
NASA Technical Reports Server (NTRS)
Haddad, Z. S.; Jameson, A. R.; Im, E.; Durden, S. L.
1995-01-01
Several algorithms to calculate a rain-rate profile from a single-frequency air- or spaceborne radar backscatter profile and a given path-integrated attenuation have been proposed. The accuracy of any such algorithm is limited by the ambiguities between the (multiple) exact solutions, which depend on the variability of the parameters in the Z-R and k-R relations used. In this study, coupled Z-R and k-R relations are derived based on the drop size distribution. It is then shown that, because of the coupling, the relative difference between the multiple mutually ambiguous rain-rate profiles solving the problem must remain acceptably low, provided the available path-integrated attenuation value is known to within 0.5 dB.
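The ambiguity discussed above rests on power-law Z-R and k-R relations. A minimal sketch of such relations, with illustrative (Marshall-Palmer-like) placeholder coefficients rather than the coupled relations derived in the study:

```python
import math

def rain_rate_from_z(z_dbz, a=200.0, b=1.6):
    """Invert an illustrative Z-R power law Z = a * R**b.

    z_dbz: radar reflectivity in dBZ (Z in mm^6 m^-3); returns R in mm/h.
    Coefficients a, b are placeholder values, not the coupled,
    drop-size-distribution-based relations of the paper.
    """
    z_linear = 10.0 ** (z_dbz / 10.0)
    return (z_linear / a) ** (1.0 / b)

def specific_attenuation(r_mm_h, alpha=0.01, beta=1.2):
    """Illustrative k-R power law k = alpha * R**beta (dB/km)."""
    return alpha * r_mm_h ** beta
```

Coupling the (a, b) and (alpha, beta) pairs through a common drop size distribution is what constrains the spread among the otherwise ambiguous solutions.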
Real-time MSE measurements for current profile control on KSTAR.
De Bock, M F M; Aussems, D; Huijgen, R; Scheffer, M; Chung, J
2012-10-01
To step up from current day fusion experiments to power producing fusion reactors, it is necessary to control long pulse, burning plasmas. Stability and confinement properties of tokamak fusion reactors are determined by the current or q profile. In order to control the q profile, it is necessary to measure it in real-time. A real-time motional Stark effect diagnostic is being developed at the Korea Superconducting Tokamak Advanced Research (KSTAR) device for this purpose. This paper focuses on three topics important for real-time measurements: minimizing the use of ad hoc parameters, minimizing external influences, and using a robust and fast analysis algorithm. Specifically, we have looked into extracting the retardance of the photo-elastic modulators from the signal itself, minimizing the influence of overlapping beam spectra by optimizing the optical filter design, and developing a multi-channel, multiharmonic phase-locking algorithm.
NASA Astrophysics Data System (ADS)
Chen, Jun; Zhang, Xiangguang; Xing, Xiaogang; Ishizaka, Joji; Yu, Zhifeng
2017-12-01
Quantifying the diffuse attenuation coefficient of photosynthetically available radiation (Kpar) can improve our knowledge of euphotic depth (Zeu) and biomass heating effects in the upper layers of oceans. An algorithm to semianalytically derive Kpar from remote sensing reflectance (Rrs) is developed for the global open oceans. This algorithm includes the following two components: (1) a neural network model for deriving the diffuse attenuation coefficients (Kd) that considers the residual error in satellite Rrs, and (2) a three-band depth-dependent Kpar algorithm (TDKA) for describing the spectrally selective attenuation mechanism of underwater solar radiation in the open oceans. This algorithm is evaluated with both in situ PAR profile data and satellite images, and the results show that it can produce acceptable PAR profile estimations while clearly removing the impacts of satellite residual errors on Kpar estimations. Furthermore, the performance of the TDKA algorithm is evaluated by its applicability in Zeu derivation and in simulating the mean temperature within the mixed layer depth (TML), and the results show that it can significantly decrease the uncertainty in both compared with the classical chlorophyll-a concentration-based Kpar algorithm. Finally, the TDKA algorithm is applied in simulating biomass heating effects in the Sargasso Sea near Bermuda; with the new Kpar data it is found that the biomass heating effects can lead to a 3.4°C maximum positive difference in temperature in the upper layers but could result in a 0.67°C maximum negative difference in temperature in the deep layers.
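For context, the standard link between Kpar and euphotic depth (the depth where PAR falls to 1% of its surface value) can be sketched as follows; the function name and numerical scheme are illustrative, not the TDKA algorithm itself:

```python
import math

def euphotic_depth(kpar_of_z, dz=0.1, max_depth=300.0):
    """Depth (m) where PAR drops to 1% of its surface value, given a
    depth-dependent attenuation coefficient kpar_of_z (m^-1).

    Integrates dPAR/dz = -Kpar(z) * PAR numerically; with a constant
    Kpar this reduces to the familiar Zeu = ln(100) / Kpar.
    """
    optical_depth, z = 0.0, 0.0
    target = math.log(100.0)  # PAR(z)/PAR(0) = 1% at optical depth ln(100)
    while z < max_depth:
        optical_depth += kpar_of_z(z) * dz
        z += dz
        if optical_depth >= target:
            return z
    return None  # 1% level deeper than max_depth
```

A depth-dependent Kpar, as TDKA provides, changes Zeu relative to the constant-Kpar case whenever attenuation varies with depth.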
NASA Technical Reports Server (NTRS)
Ziemke, J. R.; Kramarova, N. A.; Bhartia, P. K.; Degenstein, D. A.; Deland, M. T.
2016-01-01
Since October 2004 the Ozone Monitoring Instrument (OMI) and Microwave Limb Sounder (MLS) onboard the Aura satellite have provided over 11 years of continuous tropospheric ozone measurements. These OMI/MLS measurements have been used in many studies to evaluate dynamical and photochemical effects caused by ENSO, the Madden-Julian Oscillation (MJO) and shorter timescales, as well as long-term trends and the effects of deep convection on tropospheric ozone. Given that the OMI and MLS instruments have now extended well beyond their expected lifetimes, our goal is to continue their long record of tropospheric ozone using recent Ozone Mapping Profiler Suite (OMPS) measurements. The OMPS onboard the Suomi National Polar-orbiting Partnership (NPP) satellite was launched on October 28, 2011 and comprises three instruments: the nadir mapper, the nadir profiler, and the limb profiler. Our study combines total column ozone from the OMPS nadir mapper with stratospheric column ozone from the OMPS limb profiler to measure the tropospheric ozone residual. The time period for the OMPS measurements is March 2012 to the present. For the OMPS limb profiler retrievals, the OMPS v2 algorithm from Goddard is tested against the University of Saskatchewan (USask) algorithm. The retrieved ozone profiles from each of these algorithms are evaluated with ozone profiles from both ozonesondes and the Aura MLS. Effects on derived OMPS tropospheric ozone caused by the 2015-2016 El Nino event are highlighted. This recent El Nino produced anomalies in tropospheric ozone throughout the tropical Pacific, involving increases of approximately 10 DU over Indonesia and decreases of approximately 5-10 DU in the eastern Pacific. These changes in ozone due to El Nino were predominantly dynamically induced, caused by the eastward shift in sea-surface temperature and convection from the western to the eastern Pacific.
NASA Astrophysics Data System (ADS)
Sullivan, J. T.; McGee, T. J.; Leblanc, T.; Sumnicht, G. K.; Twigg, L. W.
2015-10-01
The main purpose of the NASA Goddard Space Flight Center TROPospheric OZone DIfferential Absorption Lidar (GSFC TROPOZ DIAL) is to measure the vertical distribution of tropospheric ozone for science investigations. Because of the important health and climate impacts of tropospheric ozone, it is imperative to quantify background photochemical ozone concentrations and ozone layers aloft, especially during air quality episodes. For these reasons, this paper addresses the necessary procedures to validate the TROPOZ retrieval algorithm and confirm that it is properly representing ozone concentrations. This paper is focused on ensuring the TROPOZ algorithm is properly quantifying ozone concentrations, and a following paper will focus on a systematic uncertainty analysis. This methodology begins by simulating synthetic lidar returns from actual TROPOZ lidar return signals in combination with a known ozone profile. From these synthetic signals, it is possible to explicitly determine retrieval algorithm biases from the known profile. This was then systematically performed to identify any areas that need refinement for a new operational version of the TROPOZ retrieval algorithm. One immediate outcome of this exercise was that a bin registration error in the correction for detector saturation within the original retrieval was discovered and subsequently corrected. Another noticeable outcome was that the vertical smoothing in the retrieval algorithm was upgraded from a constant vertical resolution to a variable vertical resolution to yield a statistical uncertainty of <10 %. This new and optimized vertical-resolution scheme retains the ability to resolve fluctuations in the known ozone profile, but it now allows near-field signals to be more appropriately smoothed. With these revisions to the previous TROPOZ retrieval, the optimized TROPOZ retrieval algorithm (TROPOZopt) has been effective in retrieving nearly 200 m closer to the surface.
Also, as compared to the previous version of the retrieval, the TROPOZopt had an overall mean improvement of 3.5 %, and large improvements (upwards of 10-15 % as compared to the previous algorithm) were apparent between 4.5 and 9 km. Finally, to ensure the TROPOZopt retrieval algorithm is robust enough to handle actual lidar return signals, a comparison is shown against four nearby ozonesonde measurements. The ozonesondes agree mostly within the TROPOZopt retrieval uncertainty bars, which implies that this exercise was quite successful.
Offshore Wind Measurements Using Doppler Aerosol Wind Lidar (DAWN) at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Beyon, Jeffrey Y.; Koch, Grady J.; Kavaya, Michael J.
2014-01-01
The latest flight demonstration of the Doppler Aerosol Wind Lidar (DAWN) at NASA Langley Research Center (LaRC) is presented. The goal of the campaign was to demonstrate the improvement of the DAWN system since the previous flight campaign in 2012 and the capabilities of DAWN and the latest airborne wind profiling algorithm, APOLO (Airborne Wind Profiling Algorithm for Doppler Wind Lidar), developed at LaRC. Comparisons of APOLO and another algorithm, utilizing two and five lines of sight (LOSs) respectively, are discussed. Wind parameters from DAWN were compared with ground-based radar measurements for validation purposes. The campaign period was June-July 2013, and the flight altitude was 8 km, inland toward Charlotte, NC, and offshore of Virginia Beach, VA, and Ocean City, MD. The DAWN system was integrated into a UC12B with two operators onboard during the campaign.
Offshore wind measurements using Doppler aerosol wind lidar (DAWN) at NASA Langley Research Center
NASA Astrophysics Data System (ADS)
Beyon, Jeffrey Y.; Koch, Grady J.; Kavaya, Michael J.
2014-06-01
The latest flight demonstration of the Doppler Aerosol Wind Lidar (DAWN) at NASA Langley Research Center (LaRC) is presented. The goal of the campaign was to demonstrate the improvement of the DAWN system since the previous flight campaign in 2012 and the capabilities of DAWN and the latest airborne wind profiling algorithm, APOLO (Airborne Wind Profiling Algorithm for Doppler Wind Lidar), developed at LaRC. Comparisons of APOLO and another algorithm, utilizing two and five lines of sight (LOSs) respectively, are discussed. Wind parameters from DAWN were compared with ground-based radar measurements for validation purposes. The campaign period was June-July 2013, and the flight altitude was 8 km, inland toward Charlotte, NC, and offshore of Virginia Beach, VA, and Ocean City, MD. The DAWN system was integrated into a UC12B with two operators onboard during the campaign.
NASA Astrophysics Data System (ADS)
Hong, Sanghyun; Erdogan, Gurkan; Hedrick, Karl; Borrelli, Francesco
2013-05-01
The estimation of the tyre-road friction coefficient is fundamental for vehicle control systems. Tyre sensors enable friction coefficient estimation based on signals extracted directly from tyres. This paper presents a tyre-road friction coefficient estimation algorithm based on tyre lateral deflection obtained from lateral acceleration. The lateral acceleration is measured by wireless three-dimensional accelerometers embedded inside the tyres. The proposed algorithm first determines the contact patch using a radial acceleration profile. Then, only the portion of the lateral acceleration profile inside the tyre-road contact patch is used to estimate the friction coefficient through a tyre brush model and a simple tyre model. The proposed strategy accounts for the variation in the orientation of the accelerometer body frame during tyre rotation. The effectiveness and performance of the algorithm are demonstrated through finite element model simulations and experimental tests with small tyre slip angles on different road surface conditions.
A new algorithm to create balanced teams promoting more diversity
NASA Astrophysics Data System (ADS)
Dias, Teresa Galvão; Borges, José
2017-11-01
The problem of assigning students to teams can be described as maximising the diversity of student profiles within teams while minimising the differences among teams. This problem is commonly known as the maximally diverse grouping problem, and it is usually formulated as maximising the sum of the pairwise distances among students within teams. We propose an alternative algorithm in which within-group heterogeneity is measured by the attributes' variance instead of by the sum of distances between group members. The proposed algorithm is evaluated by means of two real data sets, and the results suggest that it induces better solutions according to two independent evaluation criteria, the Davies-Bouldin index and the number of dominated teams. In conclusion, the results show that it is more adequate to use the attributes' variance to measure the heterogeneity of profiles within the teams and the homogeneity among teams.
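A minimal sketch of the variance-based heterogeneity measure described above (function names and data layout are hypothetical, not from the paper):

```python
def within_group_variance(team):
    """Mean per-attribute variance of a team's student profiles.

    team: list of equal-length numeric attribute vectors, one per student.
    A higher value means a more heterogeneous (diverse) team.
    """
    n_attrs = len(team[0])
    total = 0.0
    for j in range(n_attrs):
        col = [student[j] for student in team]
        mean = sum(col) / len(col)
        total += sum((x - mean) ** 2 for x in col) / len(col)
    return total / n_attrs

def balance_score(teams):
    """Spread of heterogeneity across teams; lower means more balanced."""
    scores = [within_group_variance(t) for t in teams]
    mean = sum(scores) / len(scores)
    return max(abs(s - mean) for s in scores)
```

An assignment search would then try to maximise each team's `within_group_variance` while keeping `balance_score` low, in place of summing pairwise distances.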
Latent Heating Retrievals Using the TRMM Precipitation Radar: A Multi-Seasonal Study
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Lang, S.; Meneghini, B.; Halverson, J.; Johnson, R.; Simpson, J.; Einaudi, Franco (Technical Monitor)
2000-01-01
The Goddard Convective-Stratiform Heating (CSH) algorithm is used to retrieve profiles of latent heating over the global tropics for a period of several months using TRMM precipitation radar data. The seasonal variation of heating over the tropics is then examined. The period of interest also coincides with several TRMM field campaigns that recently occurred over the South China Sea in 1998 (SCSMEX), over Brazil in 1999 (TRMM-LBA), and in the central Pacific in 1999 (KWAJEX). Sounding-diagnosed Q1 budgets from these experiments could provide a means of validating the retrieved profiles of latent heating from the CSH algorithm.
Optimal Area Profiles for Ideal Single Nozzle Air-Breathing Pulse Detonation Engines
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.
2003-01-01
The effects of cross-sectional area variation on idealized Pulse Detonation Engine performance are examined numerically. A quasi-one-dimensional, reacting, numerical code is used as the kernel of an algorithm that iteratively determines the correct sequencing of inlet air, inlet fuel, detonation initiation, and cycle time to achieve a limit cycle with specified fuel fraction and volumetric purge fraction. The algorithm is exercised on a tube with a cross-sectional area profile containing two degrees of freedom: overall exit-to-inlet area ratio, and the distance along the tube at which continuous transition from inlet to exit area begins. These two parameters are varied over three flight conditions (defined by inlet total temperature, inlet total pressure and ambient static pressure) and the performance is compared to a straight tube. It is shown that compared to straight tubes, increases of 20 to 35 percent in specific impulse and specific thrust are obtained with tubes of relatively modest area change. The iterative algorithm is described, and its limitations are noted and discussed. Optimized results are presented showing performance measurements, wave diagrams, and area profiles. Suggestions for future investigation are also discussed.
2012-03-22
with performance profiles, Math. Program., 91 (2002), pp. 201-213. [6] P. DRINEAS, R. KANNAN, AND M. W. MAHONEY, Fast Monte Carlo algorithms for matrices...computing invariant subspaces of non-Hermitian matrices, Numer. Math., 25 (1975/76), pp. 123-136. [25] , Matrix algorithms Vol. II: Eigensystems
NASA Technical Reports Server (NTRS)
Lee, Jaehwa; Hsu, N. Christina; Bettenhausen, Corey; Sayer, Andrew M.; Seftor, Colin J.; Jeong, Myeong-Jae
2015-01-01
The Aerosol Single scattering albedo and Height Estimation (ASHE) algorithm was first introduced in Jeong and Hsu (2008) to provide the aerosol layer height as well as the single scattering albedo (SSA) for biomass burning smoke aerosols. One of the advantages of this algorithm is that the aerosol layer height can be retrieved over broad areas, which had not been available from lidar observations alone. The algorithm utilized aerosol properties from three different satellite sensors, i.e., aerosol optical depth (AOD) and Ångström exponent (AE) from the Moderate Resolution Imaging Spectroradiometer (MODIS), UV aerosol index (UVAI) from the Ozone Monitoring Instrument (OMI), and aerosol layer height from the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP). Here, we extend the application of the algorithm to Visible Infrared Imaging Radiometer Suite (VIIRS) and Ozone Mapping and Profiler Suite (OMPS) data. We also now include dust layers as well as smoke. Other updates include improvements in retrieving the AOD of nonspherical dust from VIIRS, better determination of the aerosol layer height from CALIOP, and more realistic input aerosol profiles in the forward model for better accuracy.
NASA Astrophysics Data System (ADS)
Lipton, A.; Moncet, J. L.; Payne, V.; Lynch, R.; Polonsky, I. N.
2017-12-01
We will present recent results from an algorithm for producing climate-quality atmospheric profiling earth system data records (ESDRs) for application to data from hyperspectral sounding instruments, including the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua and the Cross-track Infrared Sounder (CrIS) on Suomi-NPP, along with their companion microwave sounders, AMSU and ATMS, respectively. The ESDR algorithm uses an optimal estimation approach and the implementation has a flexible, modular software structure to support experimentation and collaboration. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. Developments to be presented include the impact of a radiance-based pre-classification method for the atmospheric background. In addition to improving retrieval performance, pre-classification has the potential to reduce the sensitivity of the retrievals to the climatological data from which the background estimate and its error covariance are derived. We will also discuss evaluation of a method for mitigating the effect of clouds on the radiances, and enhancements of the radiative transfer forward model.
STAR Algorithm Integration Team - Facilitating operational algorithm development
NASA Astrophysics Data System (ADS)
Mikles, V. J.
2015-12-01
The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.
Aircraft Route Optimization using the A-Star Algorithm
2014-03-27
Map Cost array allows a search for a route that not only seeks to minimize the distance travelled, but also considers other factors that may impact...Rules (VFR) flight profile requires aviators to plan a 20-minute fuel reserve into the flight, while an Instrument Flight Rules (IFR) flight profile
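The snippet above describes an A* search over a map-cost array. A generic, self-contained sketch (the grid layout and cost semantics are assumptions, not taken from the thesis):

```python
import heapq

def a_star(grid_cost, start, goal):
    """A* over a 2D cost grid (4-connected neighbors).

    grid_cost[r][c] is the non-negative cost of entering cell (r, c),
    which can encode factors beyond pure distance (weather, airspace,
    fuel considerations). Returns the minimum total path cost from
    start to goal (start cell's own cost not counted), or None if
    the goal is unreachable.
    """
    rows, cols = len(grid_cost), len(grid_cost[0])

    def h(cell):
        # Manhattan-distance heuristic; admissible when every cell
        # cost is at least 1.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_set = [(h(start), 0, start)]  # (f = g + h, g, cell)
    best = {start: 0}
    while open_set:
        _, g, cell = heapq.heappop(open_set)
        if cell == goal:
            return g
        if g > best.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                ng = g + grid_cost[nr][nc]
                if ng < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc)))
    return None
```

Raising the cost of individual cells steers the optimal route around them, which is the "other factors" mechanism the fragment alludes to.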
Clustering single cells: a review of approaches on high- and low-depth single-cell RNA-seq data.
Menon, Vilas
2017-12-11
Advances in single-cell RNA-sequencing technology have resulted in a wealth of studies aiming to identify transcriptomic cell types in various biological systems. There are multiple experimental approaches to isolate and profile single cells, which provide different levels of cellular and tissue coverage. In addition, multiple computational strategies have been proposed to identify putative cell types from single-cell data. From a data generation perspective, recent single-cell studies can be classified into two groups: those that distribute reads shallowly over large numbers of cells and those that distribute reads more deeply over a smaller cell population. Although there are advantages to both approaches in terms of cellular and tissue coverage, it is unclear whether different computational cell type identification methods are better suited to one or the other experimental paradigm. This study reviews three cell type clustering algorithms, each representing one of three broad approaches, and finds that PCA-based algorithms appear most suited to low read depth data sets, whereas gene clustering-based and biclustering algorithms perform better on high read depth data sets. In addition, highly related cell classes are better distinguished by higher-depth data, given the same total number of reads; however, simultaneous discovery of distinct and similar types is better served by lower-depth, higher cell number data. Overall, this study suggests that the depth of profiling should be determined by initial assumptions about the diversity of cells in the population, and that subsequently selecting the clustering algorithm(s) based on the depth of profiling will allow for better identification of putative transcriptomic cell types. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.
A study on the application of Fourier series in IMRT treatment planning.
Almeida-Trinidad, R; Garnica-Garza, H M
2007-12-01
In intensity-modulated radiotherapy, a set of x-ray fluence profiles is iteratively adjusted until a desired absorbed dose distribution is obtained. The purpose of this article is to present a method that allows the optimization of fluence profiles based on the Fourier series decomposition of an initial approximation to the profile. The method has the advantage that a new fluence profile can be obtained in a precise and controlled way with the tuning of only two parameters, namely the phase of the sine and cosine terms of one of the Fourier components, in contrast to the point-by-point tuning of the profile. Also, because the method uses analytical functions, the resultant profiles do not exhibit numerical artifacts. A test case consisting of a mathematical phantom with a target wrapped around a critical structure is discussed to illustrate the algorithm. It is shown that the degree of conformality of the absorbed dose distribution can be tailored by varying the number of Fourier terms made available to the optimization algorithm. For the test case discussed here, it is shown that the number of Fourier terms to be modified depends on the number of radiation beams incident on the target, but it is in general on the order of 10 terms.
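A truncated Fourier series fluence profile of the kind described can be sketched as follows (the function name, domain normalization, and coefficient layout are illustrative assumptions):

```python
import math

def fluence_profile(x, a0, terms):
    """Evaluate a truncated Fourier series fluence profile on [0, 1].

    terms: list of (a_k, b_k) cosine/sine coefficients for harmonic k.
    Tuning a single (a_k, b_k) pair reshapes the whole profile smoothly,
    which is the appeal over point-by-point adjustment, and the analytic
    form avoids numerical artifacts in the resulting profile.
    """
    value = a0
    for k, (a_k, b_k) in enumerate(terms, start=1):
        value += (a_k * math.cos(2 * math.pi * k * x)
                  + b_k * math.sin(2 * math.pi * k * x))
    return value
```

The optimizer would expose only the coefficients of selected harmonics (on the order of 10, per the abstract) as free parameters.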
The OMPS Limb Profiler instrument
NASA Astrophysics Data System (ADS)
Rault, D. F.; Xu, P.
2011-12-01
The Ozone Mapping and Profiler Suite (OMPS) will continue the monitoring of the global distribution of the Earth's middle atmosphere ozone and aerosol. OMPS is composed of three instruments, namely the Total Column Mapper (heritage: TOMS, OMI), the Nadir Profiler (heritage: SBUV) and the Limb Profiler (heritage: SOLSE/LORE, OSIRIS, SCIAMACHY, SAGE III). The ultimate goal of the mission is to better understand and quantify the rate of stratospheric ozone recovery. OMPS is scheduled to be launched on the NPOESS Preparatory Project (NPP) platform in October 2011. The focus of the paper will be on the Limb Profiler (LP) instrument. The LP instrument will measure the Earth's limb radiance, from which ozone profiles will be retrieved from the upper troposphere up to 60 km. End-to-end studies of the sensor and retrieval algorithm indicate the following expected performance for ozone: accuracy of 5% or better from the tropopause up to 50 km, precision of about 3-5% from 18 to 50 km, and vertical resolution of 1.5-2 km with vertical sampling of 1 km and along-track horizontal sampling of 1 deg latitude. The paper will describe the mission, discuss the retrieval algorithm, and summarize the expected performance. If available, the paper will also present early on-orbit data.
Dobson-Belaire, Wendy; Goodfield, Jason; Borrelli, Richard; Liu, Fei Fei; Khan, Zeba M
2018-01-01
Using diagnosis code-based algorithms is the primary method of identifying patient cohorts for retrospective studies; nevertheless, many databases lack reliable diagnosis code information. Our objective was to develop precise algorithms based on medication claims/prescriber visits (MCs/PVs) to identify psoriasis (PsO) patients and psoriatic patients with arthritic conditions (PsO-AC), a proxy for psoriatic arthritis, in Canadian databases lacking diagnosis codes. Algorithms were developed using medications with narrow indication profiles in combination with prescriber specialty to define PsO and PsO-AC. For a 3-year study period from July 1, 2009, the algorithms were validated using the PharMetrics Plus database, which contains both adjudicated medication claims and diagnosis codes. Positive predictive value (PPV), negative predictive value (NPV), sensitivity, and specificity of the developed algorithms were assessed using diagnosis codes as the reference standard. Chosen algorithms were then applied to Canadian drug databases to profile the algorithm-identified PsO and PsO-AC cohorts. In the selected database, 183,328 patients were identified for validation. The highest PPVs for PsO (85%) and PsO-AC (65%) occurred when a predictive algorithm of two or more MCs/PVs was compared with the reference standard of one or more diagnosis codes. NPV and specificity were high (99%-100%), whereas sensitivity was low (≤30%). Reducing the number of MCs/PVs or increasing diagnosis claims decreased the algorithms' PPVs. We have developed an MC/PV-based algorithm to identify PsO patients with a high degree of accuracy, but accuracy for PsO-AC requires further investigation. Such methods allow researchers to conduct retrospective studies in databases in which diagnosis codes are absent. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
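The validation metrics reported above follow from the standard 2x2 confusion table; a small sketch (the counts in the usage example are arbitrary, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 validation metrics for a case-finding algorithm
    scored against a reference standard (here, diagnosis codes).

    tp/fp: algorithm-positive patients with/without reference diagnosis;
    tn/fn: algorithm-negative patients without/with reference diagnosis.
    """
    return {
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```

The abstract's pattern (high PPV/NPV/specificity, low sensitivity) is typical when the algorithm's positive criterion is strict: few false positives, but many true cases are missed.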
Inversion method applied to the rotation curves of galaxies
NASA Astrophysics Data System (ADS)
Márquez-Caicedo, L. A.; Lora-Clavijo, F. D.; Sanabria-Gómez, J. D.
2017-07-01
We used simulated annealing, Monte Carlo and genetic algorithm methods for matching both numerical data of density and velocity profiles in some low surface brightness galaxies with theoretical models of Boehmer-Harko, Navarro-Frenk-White and Pseudo Isothermal profiles for galaxies with dark matter halos. We found that the Navarro-Frenk-White model does not fit at all, in contrast with the other two models, which fit very well. Inversion methods have been widely used in various branches of science including astrophysics (Charbonneau 1995, ApJS, 101, 309). In this work we have used three different parametric inversion methods (Monte Carlo, Genetic Algorithm and Simulated Annealing) in order to determine the best fit of the observed data of the density and velocity profiles of a set of low surface brightness galaxies (De Block et al. 2001, ApJ, 122, 2396) with three models of galaxies containing dark matter. The parameters adjusted by the inversion methods were the central density and a characteristic distance in the Boehmer-Harko BH (Boehmer & Harko 2007, JCAP, 6, 25), Navarro-Frenk-White NFW (Navarro et al. 1997, ApJ, 490, 493) and Pseudo Isothermal Profile PI (Robles & Matos 2012, MNRAS, 422, 282) models. The results obtained showed that the BH and PI profile dark matter galaxies fit very well for both the density and the velocity profiles; in contrast, the NFW model did not make good adjustments to the profiles in any analyzed galaxy.
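A minimal simulated-annealing parameter search of the kind used here can be sketched as follows; the generic misfit interface and all parameter values are illustrative assumptions, not the paper's implementation:

```python
import math
import random

def anneal(misfit, p0, step=0.1, t0=1.0, cooling=0.95, iters=2000, seed=1):
    """Minimal simulated-annealing minimization of misfit(p).

    misfit(p) returns the data-model mismatch for parameter vector p;
    in the paper's setting p would hold e.g. a central density and a
    characteristic radius, and misfit would be a chi-square over the
    observed density/velocity profiles.
    """
    rng = random.Random(seed)
    p, e = list(p0), misfit(p0)
    best_p, best_e = list(p), e
    t = t0
    for _ in range(iters):
        cand = [x + rng.gauss(0.0, step) for x in p]  # random neighbor
        ec = misfit(cand)
        # Always accept downhill moves; accept uphill moves with
        # probability exp(-delta / T), which shrinks as T cools.
        if ec < e or rng.random() < math.exp(-(ec - e) / t):
            p, e = cand, ec
            if e < best_e:
                best_p, best_e = list(p), e
        t *= cooling
    return best_p, best_e
```

The uphill-acceptance rule is what lets the search escape local minima early on, before the cooling schedule makes it effectively greedy.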
Automated real-time search and analysis algorithms for a non-contact 3D profiling system
NASA Astrophysics Data System (ADS)
Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.
2013-04-01
The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical in the performance of the overall concrete structure. For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron resolution surface profiling. Optimizations in the control and sensory system allow for data points to be collected at up to approximately 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is through a combination of downhill simplex and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing.
Being able to perform this quality control in real time provides significant opportunities in cost savings in both equipment protection and waste minimization.
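As a toy illustration of the template-matching ingredient (a brute-force stand-in for the paper's downhill-simplex-driven search, with hypothetical names and 1D data):

```python
def best_template_offset(profile, template):
    """Slide a feature template along a 1D surface profile and return
    the (offset, error) pair with the smallest sum-of-squares mismatch.

    A simplex-based search would replace this exhaustive scan with a
    continuous optimization over the template's placement parameters.
    """
    best_k, best_err = 0, float("inf")
    for k in range(len(profile) - len(template) + 1):
        err = sum((profile[k + i] - t) ** 2 for i, t in enumerate(template))
        if err < best_err:
            best_k, best_err = k, err
    return best_k, best_err
```

Once the best placement is found, the residual error against the template is what gets checked against the geometric tolerance.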
Otsuka, Momoka; Uchida, Yuki; Kawaguchi, Takumi; Taniguchi, Eitaro; Kawaguchi, Atsushi; Kitani, Shingo; Itou, Minoru; Oriishi, Tetsuharu; Kakuma, Tatsuyuki; Tanaka, Suiko; Yagi, Minoru; Sata, Michio
2012-10-01
Dietary habits are involved in the development of chronic inflammation; however, the impact of dietary profiles of hepatitis C virus carriers with persistently normal alanine transaminase levels (HCV-PNALT) remains unclear. The decision-tree algorithm is a data-mining statistical technique, which uncovers meaningful profiles of factors from a data collection. We aimed to investigate dietary profiles associated with HCV-PNALT using a decision-tree algorithm. Twenty-seven HCV-PNALT and 41 patients with chronic hepatitis C were enrolled in this study. Dietary habit was assessed using a validated semiquantitative food frequency questionnaire. A decision-tree algorithm was created from dietary variables and was evaluated by area under the receiver operating characteristic curve analysis (AUROC). In multivariate analysis, fish to meat ratio, dairy products and cooking oils were identified as independent variables associated with HCV-PNALT. The decision-tree algorithm was created with two variables: the fish to meat ratio and cooking oils/ideal bodyweight. When subjects showed a fish to meat ratio of 1.24 or more, 68.8% of the subjects were HCV-PNALT. On the other hand, 11.5% of the subjects were HCV-PNALT when subjects showed a fish to meat ratio of less than 1.24 and cooking oil/ideal bodyweight of less than 0.23 g/kg. The difference in the proportion of HCV-PNALT between these groups is significant (odds ratio 16.87, 95% CI 3.40-83.67, P = 0.0005). Fivefold cross-validation of the decision-tree algorithm showed an AUROC of 0.6947 (95% CI 0.5656-0.8238, P = 0.0067). The decision-tree algorithm disclosed that fish to meat ratio and cooking oil/ideal bodyweight were associated with HCV-PNALT. © 2012 The Japan Society of Hepatology.
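The reported two-variable decision tree can be encoded directly from the numbers in the abstract (variable names are illustrative; the return values are group proportions, not individual predictions):

```python
def pnalt_group_proportion(fish_to_meat, oil_g_per_kg):
    """Observed HCV-PNALT proportion for the dietary profile groups
    reported in the abstract; None for the branch it does not report.
    """
    if fish_to_meat >= 1.24:
        return 0.688  # 68.8% of this group were HCV-PNALT
    if oil_g_per_kg < 0.23:
        return 0.115  # 11.5% of this group were HCV-PNALT
    return None       # remaining branch not quantified in the abstract
```

The first split (fish to meat ratio at 1.24) is the tree's root; the cooking-oil split applies only below that threshold.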
NASA Astrophysics Data System (ADS)
Wu, Yerong; de Graaf, Martin; Menenti, Massimo
2017-08-01
Global quantitative aerosol information has been derived from MODerate Resolution Imaging SpectroRadiometer (MODIS) observations since early 2000 and has been widely used for air quality and climate change research. However, the operational MODIS Aerosol Optical Depth (AOD) product Collection 6 (C6) can still be biased, because of uncertainty in the assumed aerosol optical properties and aerosol vertical distribution. This study investigates the impact of the aerosol vertical distribution on the AOD retrieval. We developed a new algorithm that considers dynamic vertical profiles, as an adaptation of the MODIS C6 Dark Target (C6_DT) algorithm over land. The new algorithm makes use of the aerosol vertical profile extracted from Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) measurements to generate an accurate top-of-the-atmosphere (TOA) reflectance for the AOD retrieval, where the profile is assumed to be a single layer and represented as a Gaussian function with the mean height as the single variable. To test the impact, a comparison was made between MODIS DT and Aerosol Robotic Network (AERONET) AOD over dust and smoke regions. The results show that the aerosol vertical distribution has a strong impact on the AOD retrieval. The aerosol layers assumed close to the ground can negatively bias the retrievals in C6_DT. For the evaluated smoke and dust layers, the new algorithm can improve the retrieval by reducing the negative biases by 3-5%.
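Under the single-layer assumption, the profile shape reduces to a height-dependent Gaussian whose column integral equals the AOD. A minimal sketch follows; the function name, units, and the fixed layer width `sigma_km` are assumptions for illustration (the algorithm itself retrieves only the mean height, from CALIPSO):

```python
import math

def gaussian_extinction_profile(aod, mean_height_km, sigma_km, z_km):
    """Extinction coefficient (km^-1) at heights z_km for a single Gaussian layer
    normalized so that its column integral equals the AOD."""
    norm = aod / (sigma_km * math.sqrt(2 * math.pi))
    return [norm * math.exp(-0.5 * ((z - mean_height_km) / sigma_km) ** 2)
            for z in z_km]

z = [0.1 * i for i in range(0, 200)]     # 0-19.9 km at 100 m spacing
prof = gaussian_extinction_profile(0.5, 3.0, 1.0, z)
column = sum(prof) * 0.1                 # discrete column integral check
```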
Rayleigh wave nonlinear inversion based on the Firefly algorithm
NASA Astrophysics Data System (ADS)
Zhou, Teng-Fei; Peng, Geng-Xin; Hu, Tian-Yue; Duan, Wen-Sheng; Yao, Feng-Chang; Liu, Yi-Mou
2014-06-01
Rayleigh waves have high amplitude, low frequency, and low velocity, and are treated as strong noise to be attenuated in reflected seismic surveys. This study addresses how to extract useful shear-wave velocity profiles and stratigraphic information from Rayleigh waves. We choose the Firefly algorithm for the inversion of surface waves. The Firefly algorithm, a new type of particle swarm optimization, is robust, highly effective, and capable of global searching. Tests on both synthetic models and field data show that the algorithm is feasible and advantageous for Rayleigh wave inversion. The results show that the Firefly algorithm, a robust and practical method, can achieve nonlinear inversion of surface waves with high resolution.
Thaden, Joshua T; Mogno, Ilaria; Wierzbowski, Jamey; Cottarel, Guillaume; Kasif, Simon; Collins, James J; Gardner, Timothy S
2007-01-01
Machine learning approaches offer the potential to systematically identify transcriptional regulatory interactions from a compendium of microarray expression profiles. However, experimental validation of the performance of these methods at the genome scale has remained elusive. Here we assess the global performance of four existing classes of inference algorithms using 445 Escherichia coli Affymetrix arrays and 3,216 known E. coli regulatory interactions from RegulonDB. We also developed and applied the context likelihood of relatedness (CLR) algorithm, a novel extension of the relevance networks class of algorithms. CLR demonstrates an average precision gain of 36% relative to the next-best performing algorithm. At a 60% true positive rate, CLR identifies 1,079 regulatory interactions, of which 338 were in the previously known network and 741 were novel predictions. We tested the predicted interactions for three transcription factors with chromatin immunoprecipitation, confirming 21 novel interactions and verifying our RegulonDB-based performance estimates. CLR also identified a regulatory link providing central metabolic control of iron transport, which we confirmed with real-time quantitative PCR. The compendium of expression data compiled in this study, coupled with RegulonDB, provides a valuable model system for further improvement of network inference algorithms using experimental data. PMID:17214507
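The core CLR idea, rescoring each pairwise relevance against the background distributions of both genes, can be sketched as follows. A toy similarity matrix stands in for the mutual-information matrix computed from the 445 arrays, and the max(0, z) clipping with a combined z-score is the commonly described CLR form; details of the published implementation may differ.

```python
import math
from statistics import mean, pstdev

def clr_scores(M):
    """Context likelihood of relatedness sketch: for each pair (i, j), z-score
    M[i][j] against the background of row i and row j, clip negatives, and
    combine as sqrt(zi^2 + zj^2)."""
    n = len(M)
    z = [[0.0] * n for _ in range(n)]
    stats = [(mean(row), pstdev(row)) for row in M]
    for i in range(n):
        for j in range(n):
            zi = max(0.0, (M[i][j] - stats[i][0]) / stats[i][1]) if stats[i][1] else 0.0
            zj = max(0.0, (M[i][j] - stats[j][0]) / stats[j][1]) if stats[j][1] else 0.0
            z[i][j] = math.sqrt(zi * zi + zj * zj)
    return z

# Toy 4-gene similarity matrix with one strong pair (genes 0 and 1).
M = [[1.0, 0.9, 0.1, 0.2],
     [0.9, 1.0, 0.2, 0.1],
     [0.1, 0.2, 1.0, 0.3],
     [0.2, 0.1, 0.3, 1.0]]
Z = clr_scores(M)
```

The strong pair stands out while weak similarities, unexceptional against their backgrounds, score zero.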
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.;
2006-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day⁻¹) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate).
Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day⁻¹, with proportionate reductions in latent heating sampling errors.
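The database-search-and-composite step can be illustrated with a Gaussian likelihood weighting of database profiles. The toy two-channel database, rain values, and `sigma` below are invented for illustration; the operational algorithm uses many channels and an a priori database built from cloud-resolving model simulations.

```python
import math

def bayesian_composite(obs_tb, database, sigma=2.0):
    """Bayesian retrieval sketch: weight each database profile by the Gaussian
    likelihood of its simulated brightness temperatures (Tb, in K) given the
    observed Tb vector, then composite the associated rain rates."""
    weights, rains = [], []
    for entry in database:
        d2 = sum((o - s) ** 2 for o, s in zip(obs_tb, entry["tb"]))
        weights.append(math.exp(-0.5 * d2 / sigma ** 2))
        rains.append(entry["rain"])
    wsum = sum(weights)
    return sum(w * r for w, r in zip(weights, rains)) / wsum

db = [{"tb": [270.0, 250.0], "rain": 0.5},
      {"tb": [260.0, 240.0], "rain": 5.0},
      {"tb": [255.0, 235.0], "rain": 12.0}]
rate = bayesian_composite([259.0, 239.0], db)
```

The estimate lands near the rain rate of the radiatively closest database entry, nudged by the next-closest ones.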
The Re-Analysis of Ozone Profile Data from a 41-Year Series of SBUV Instruments
NASA Technical Reports Server (NTRS)
Kramarova, Natalya; Frith, Stacey; Bhartia, Pawan K.; McPeters, Richard; Labow, Gordon; Taylor, Steven; Fisher, Bradford
2012-01-01
In this study we present the validation of ozone profiles from a number of Solar Backscatter Ultraviolet (SBUV) and SBUV/2 instruments that were recently reprocessed using an updated (Version 8.6) algorithm. The SBUV dataset provides the longest available record of global ozone profiles, spanning a 41-year period from 1970 to 2011 (except a 5-year gap in the 1970s), and includes ozone profile records obtained from the Nimbus-4 BUV and Nimbus-7 SBUV instruments, and a series of SBUV(/2) instruments launched on NOAA operational satellites (NOAA 09, 11, 14, 16, 17, 18, 19). Although modifications in instrument design were made in the evolution from the BUV instrument to the modern SBUV(/2) model, the basic principles of the measurement technique and retrieval algorithm remain the same. The long-term SBUV data record allows us to create a consistent, calibrated dataset of ozone profiles that can be used for climate studies and trend analyses. In particular, we focus on estimating the various sources of error in the SBUV profile ozone retrievals using independent observations and analysis of the algorithm itself. For the first time we include in the metadata a quantitative estimate of the smoothing error, defined as the error due to profile variability that the SBUV observing system cannot inherently measure. The magnitude of the smoothing error varies with altitude, latitude, season and solar zenith angle. Between 10 and 1 hPa the smoothing errors for the SBUV monthly zonal mean retrievals are of the order of 1%, but start to increase above and below this layer. The largest smoothing errors, as large as 15-20%, were detected in the troposphere. The SBUV averaging kernels, provided with the ozone profiles in Version 8.6, help to eliminate the smoothing effect when comparing the SBUV profiles with high vertical resolution measurements, and make it convenient to use the SBUV ozone profiles for data assimilation and model validation purposes.
The smoothing error can also be minimized by combining layers of data, and we will discuss recommendations for this approach as well. The SBUV ozone profiles have been intensively validated against satellite profile measurements obtained from the Microwave Limb Sounders (MLS) (on board the UARS and Aura satellites), the Stratospheric Aerosol and Gas Experiment (SAGE) and the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS). Also, we compare coincident and collocated SBUV ozone retrievals with observations made by ground-based instruments, such as microwave spectrometers, lidars, Umkehr instruments and balloon-borne ozonesondes. Finally, we compare the SBUV ozone profiles with output from the NASA GSFC GEOS-CCM model. In the stratosphere between 25 and 1 hPa the mean biases and standard deviations are within 5% for monthly mean ozone profiles. Above and below this layer the vertical resolution of the SBUV algorithm decreases and the effects of vertical smoothing should be taken into account. Though the SBUV algorithm has a coarser vertical resolution in the lower stratosphere and troposphere, it is capable of precisely estimating the integrated ozone column between the surface and 25 hPa. The time series of the tropospheric-lower stratospheric ozone column derived from SBUV agrees within 5% with the corresponding values observed by an ensemble of ozonesonde stations in the Northern Hemisphere. Drifts of the ozone time series obtained from each SBUV(/2) instrument relative to ground-based and satellite measurements are evaluated and some features of individual SBUV(/2) instruments are discussed. In addition to evaluating individual instruments against independent observations, we also focus on the instrument-to-instrument consistency in the series. Overall, Version 8.6 ozone profiles obtained from two different SBUV(/2) instruments compare within a couple of percent during overlap periods and vary consistently in time, with some exceptions.
Some of the noted discrepancies might be associated with ozone diurnal variations, since the difference in the local time of the observations for a pair of SBUV(/2) instruments could be several hours. Other issues include the potential short-term drift in measurements as the instrument orbit drifts and measurements are obtained at high solar zenith angles (>85°). Based on the results of the validation, a consistent, calibrated dataset of SBUV ozone profiles has been created based on internal calibration only.
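Applying an averaging kernel to a high-resolution profile, as recommended for such comparisons, follows the standard relation x_s = x_a + A(x_hr - x_a). A toy three-layer sketch is shown below; the kernel values, a priori, and units are invented for illustration (the real v8.6 kernels are provided per retrieval layer with the profiles).

```python
def apply_averaging_kernel(A, x_highres, x_apriori):
    """Smooth a high-vertical-resolution profile to the observing system's
    resolution: x_s = x_a + A (x_hr - x_a)."""
    n = len(x_apriori)
    dx = [x_highres[i] - x_apriori[i] for i in range(n)]
    return [x_apriori[i] + sum(A[i][j] * dx[j] for j in range(n)) for i in range(n)]

A = [[0.7, 0.2, 0.1],          # rows sum to ~1: broad, overlapping layers
     [0.2, 0.6, 0.2],
     [0.1, 0.2, 0.7]]
x_a  = [10.0, 20.0, 15.0]      # a priori ozone (arbitrary units)
x_hr = [12.0, 26.0, 13.0]      # sonde/lidar profile on the same layers
x_s  = apply_averaging_kernel(A, x_hr, x_a)
```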
NASA Technical Reports Server (NTRS)
Kato, S.; Smith, G. L.; Barker, H. W.
2001-01-01
An algorithm is developed for the gamma-weighted discrete ordinate two-stream approximation that computes profiles of domain-averaged shortwave irradiances for horizontally inhomogeneous cloudy atmospheres. The algorithm assumes that frequency distributions of cloud optical depth at unresolved scales can be represented by a gamma distribution though it neglects net horizontal transport of radiation. This algorithm is an alternative to the one used in earlier studies that adopted the adding method. At present, only overcast cloudy layers are permitted.
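The gamma weighting can be checked on the direct beam, where the average of exp(-τ/μ0) over a gamma distribution of optical depth has the closed form (1 + τ̄/(ν μ0))^(-ν). The sketch below verifies this numerically; the full algorithm applies the same weighting to the two-stream solution, not just the direct transmittance, and the parameter values are illustrative.

```python
import math

def gamma_pdf(tau, mean, nu):
    """Gamma distribution of cloud optical depth with mean `mean`, shape `nu`."""
    scale = mean / nu
    return tau ** (nu - 1) * math.exp(-tau / scale) / (math.gamma(nu) * scale ** nu)

def mean_direct_transmittance(mean_tau, nu, mu0, dtau=0.01, tau_max=200.0):
    """Numerically average the direct-beam transmittance exp(-tau/mu0) over the
    gamma distribution of optical depth (midpoint rule)."""
    steps = int(tau_max / dtau)
    return sum(gamma_pdf((i + 0.5) * dtau, mean_tau, nu) *
               math.exp(-(i + 0.5) * dtau / mu0) * dtau for i in range(steps))

numeric  = mean_direct_transmittance(5.0, 2.0, 0.8)
analytic = (1.0 + 5.0 / (2.0 * 0.8)) ** -2.0   # closed form for the gamma weighting
```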
Planning, Execution, and Assessment of Effects-Based Operations (EBO)
2006-05-01
time of execution that would maximize the likelihood of achieving a desired effect. GMU has developed a methodology, named ECAD-EA (Effective...Algorithm EBO Effects Based Operations ECAD-EA Effective Course of Action-Evolutionary Algorithm GMU George Mason University GUI Graphical...Probability Profile Generation ... A.2.11 Running ECAD-EA (Effective Courses of Action Determination
2012-08-15
Environmental Model (GDEM) 72 levels) was conserved in the interpolated profiles and small variations in the vertical field may have led to large...Planner ETKF Ensemble Transform Kalman Filter G8NCOM 1/8° Global NCOM GA Genetic Algorithm GDEM Generalized Digital Environmental Model GOST
Design Criteria for Low Profile Flange Calculations
NASA Technical Reports Server (NTRS)
Leimbach, K. R.
1973-01-01
An analytical method and a design procedure to develop flanged separable pipe connectors are discussed. A previously established algorithm is the basis for calculating low profile flanges. The characteristics and advantages of the low profile flange are analyzed. The use of aluminum, titanium, and plastics for flange materials is described. Mathematical models are developed to show the mechanical properties of various flange configurations. A computer program for determining the structural stability of the flanges is described.
NASA Astrophysics Data System (ADS)
Han, Xu; Xie, Guangping; Laflen, Brandon; Jia, Ming; Song, Guiju; Harding, Kevin G.
2015-05-01
In the real application environment of field engineering, a large variety of metrology tools are required by the technician to inspect part profile features. However, some of these tools are burdensome and only address a sole application or measurement. In other cases, standard tools lack the capability of accessing irregular profile features. Customers of field engineering want the next generation of metrology devices to have the ability to replace the many current tools with one single device. This paper describes a method based on the ring optical gage concept for the measurement of numerous kinds of profile features useful for the field technician. The ring optical system is composed of a collimated laser, a conical mirror and a CCD camera. To be useful for a wide range of applications, the ring optical system requires profile feature extraction algorithms and data manipulation directed toward real world applications in field operation. The paper discusses such practical applications as measuring a non-ideal round hole with both off-centered and oblique axes. The algorithms needed to analyze other features, such as measuring the width of gaps, the radius of transition fillets, the fall of step surfaces, and surface parallelism, are also discussed. With the assistance of image processing and geometric algorithms, these features can be extracted with reasonable performance. Tailoring the feature extraction analysis to this specific gage offers the potential for a wider application base beyond simple inner diameter measurements. The paper presents experimental results that are compared with standard gages to prove the performance and feasibility of the analysis in real world field engineering. Potential accuracy improvement methods, a new dual ring design and future work are discussed at the end of this paper.
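One of the named tasks, reducing a measured hole profile to a center and radius, can be sketched with a least-squares (Kasa) circle fit. The point set, sampling, and function name below are invented for illustration; the gage's actual algorithms are not described in detail in the abstract.

```python
import math

def fit_circle(points):
    """Least-squares (Kasa) circle fit: solve x^2 + y^2 = a*x + b*y + c for
    (a, b, c); the center is (a/2, b/2), the radius sqrt(c + a^2/4 + b^2/4)."""
    # Build normal equations S u = t for u = (a, b, c).
    S = [[0.0] * 3 for _ in range(3)]
    t = [0.0] * 3
    for x, y in points:
        row, z = (x, y, 1.0), x * x + y * y
        for i in range(3):
            t[i] += row[i] * z
            for j in range(3):
                S[i][j] += row[i] * row[j]
    # Tiny Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(S[r][col]))
        S[col], S[piv] = S[piv], S[col]
        t[col], t[piv] = t[piv], t[col]
        for r in range(col + 1, 3):
            f = S[r][col] / S[col][col]
            for j in range(col, 3):
                S[r][j] -= f * S[col][j]
            t[r] -= f * t[col]
    u = [0.0] * 3
    for r in (2, 1, 0):
        u[r] = (t[r] - sum(S[r][j] * u[j] for j in range(r + 1, 3))) / S[r][r]
    a, b, c = u
    return (a / 2, b / 2, math.sqrt(c + a * a / 4 + b * b / 4))

# Points on a circle of radius 2 centered at (1, -0.5), over a 300-degree arc
# (an incomplete arc mimics a partially occluded hole profile).
pts = [(1 + 2 * math.cos(math.radians(d)), -0.5 + 2 * math.sin(math.radians(d)))
       for d in range(0, 300, 10)]
cx, cy, r = fit_circle(pts)
```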
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dahlin, J.-E.; Scheffel, J.
2005-06-15
In the advanced reversed-field pinch (RFP), the current density profile is externally controlled to diminish tearing instabilities. Thus the scaling of energy confinement time with plasma current and density is improved substantially as compared to the conventional RFP. This may be numerically simulated by introducing an ad hoc electric field, adjusted to generate a tearing mode stable parallel current density profile. In the present work a current profile control algorithm, based on feedback of the fluctuating electric field in Ohm's law, is introduced into the resistive magnetohydrodynamic code DEBSP [D. D. Schnack and D. C. Baxter, J. Comput. Phys. 55, 485 (1984); D. D. Schnack, D. C. Barnes, Z. Mikic, D. S. Marneal, E. J. Caramana, and R. A. Nebel, Comput. Phys. Commun. 43, 17 (1986)]. The resulting radial magnetic field is decreased considerably, causing an increase in energy confinement time and poloidal β. It is found that the parallel current density profile spontaneously becomes hollow, and that a formation, being related to persisting resistive g modes, appears close to the reversal surface.
Airline Passenger Profiling Based on Fuzzy Deep Machine Learning.
Zheng, Yu-Jun; Sheng, Wei-Guo; Sun, Xing-Ming; Chen, Sheng-Yong
2017-12-01
Passenger profiling plays a vital part in commercial aviation security, but classical methods become very inefficient in handling the rapidly increasing amounts of electronic records. This paper proposes a deep learning approach to passenger profiling. The center of our approach is a Pythagorean fuzzy deep Boltzmann machine (PFDBM), whose parameters are expressed by Pythagorean fuzzy numbers such that each neuron can learn how a feature affects the production of the correct output from both the positive and negative sides. We propose a hybrid algorithm combining a gradient-based method and an evolutionary algorithm for training the PFDBM. Based on the novel learning model, we develop a deep neural network (DNN) for classifying normal passengers and potential attackers, and further develop an integrated DNN for identifying group attackers whose individual features are insufficient to reveal the abnormality. Experiments on data sets from Air China show that our approach provides much higher learning ability and classification accuracy than existing profilers. It is expected that the fuzzy deep learning approach can be adapted for a variety of complex pattern analysis tasks.
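The Pythagorean fuzzy representation underlying the PFDBM can be sketched as follows. The score and hesitancy functions shown are the common textbook choices, not necessarily the exact ones used in the paper.

```python
import math

class PFN:
    """Pythagorean fuzzy number: membership mu and non-membership nu with
    mu^2 + nu^2 <= 1 (a weaker constraint than the intuitionistic mu + nu <= 1),
    letting a parameter carry both positive and negative evidence."""
    def __init__(self, mu, nu):
        assert mu ** 2 + nu ** 2 <= 1.0 + 1e-12, "not a valid PFN"
        self.mu, self.nu = mu, nu
    def score(self):
        return self.mu ** 2 - self.nu ** 2          # common score function
    def hesitancy(self):
        return math.sqrt(1.0 - self.mu ** 2 - self.nu ** 2)

p = PFN(0.8, 0.5)      # valid: 0.64 + 0.25 <= 1, even though 0.8 + 0.5 > 1
s = p.score()
```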
Data descriptions are provided at the following URLs:
GADEP continuous PM2.5 mass concentration data - https://aqs.epa.gov/aqsweb/documents/data_mart_welcome.html and https://www3.epa.gov/ttn/amtic/files/ambient/pm25/qa/QA-Handbook-Vol-II.pdf
VIIRS Day Night Band SDR (SVDNB) - http://www.class.ngdc.noaa.gov/saa/products/search?datatype_family=VIIRS_SDR
MODIS Terra Level 2 water vapor profiles (infrared algorithm for atmospheric profiles, day and night; MOD07_L2) - http://modis-atmos.gsfc.nasa.gov/MOD07_L2/index.html
NWS surface meteorological data - https://www.ncdc.noaa.gov/isd
This dataset is associated with the following publication: Wang, J., C. Aegerter, and J. Szykman. Potential Application of VIIRS Day/Night Band for Monitoring Nighttime Surface PM2.5 Air Quality From Space. ATMOSPHERIC ENVIRONMENT. Elsevier Science Ltd, New York, NY, USA, 124(0): 55-63, (2016).
Movements of Diadromous Fish in Large Unregulated Tropical Rivers Inferred from Geochemical Tracers
Walther, Benjamin D.; Dempster, Tim; Letnic, Mike; McCulloch, Malcolm T.
2011-01-01
Patterns of migration and habitat use in diadromous fishes can be highly variable among individuals. Most investigations into diadromous movement patterns have been restricted to populations in regulated rivers, and little information exists for those in unregulated catchments. We quantified movements of migratory barramundi Lates calcarifer (Bloch) in two large unregulated rivers in northern Australia using both elemental (Sr/Ba) and isotope (87Sr/86Sr) ratios in aragonitic ear stones, or otoliths. Chemical life history profiles indicated significant individual variation in habitat use, particularly among chemically distinct freshwater habitats within a catchment. A global zoning algorithm was used to quantify distinct changes in chemical signatures across profiles. This algorithm identified between 2 and 6 distinct chemical habitats in individual profiles, indicating variable movement among habitats. Profiles of 87Sr/86Sr ratios were notably distinct among individuals, with highly radiogenic values recorded in some otoliths. This variation suggested that fish made full use of habitats across the entire catchment basin. Our results show that unrestricted movement among freshwater habitats is an important component of diadromous life histories for populations in unregulated systems. PMID:21494693
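A zoning-style segmentation step can be illustrated as a single change-point search minimizing within-segment variance. Both the toy Sr/Ba-like trace and this simple split criterion are stand-ins; the paper's global zoning algorithm is not specified in the abstract.

```python
def split_profile(values, min_seg=3):
    """Find the index that best splits a chemical life-history profile into two
    internally homogeneous segments (the split minimizing total within-segment
    sum of squares). Applied recursively, this mimics detecting multiple
    distinct chemical habitats along an otolith transect."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    best_i, best_cost = None, float("inf")
    for i in range(min_seg, len(values) - min_seg + 1):
        cost = sse(values[:i]) + sse(values[i:])
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

# Toy trace: low "freshwater" values, then a shift to high "marine" values.
trace = [0.2, 0.25, 0.22, 0.21, 0.24, 1.1, 1.05, 1.2, 1.15, 1.1]
cut = split_profile(trace)
```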
NASA Technical Reports Server (NTRS)
Stauffer, Ryan M.; Thompson, Anne M.; Young, George S.
2016-01-01
Sonde-based climatologies of tropospheric ozone (O3) are vital for developing satellite retrieval algorithms and evaluating chemical transport model output. Typical O3 climatologies average measurements by latitude or region, and season. A recent analysis using self-organizing maps (SOM) to cluster ozonesondes from two tropical sites found that clusters of O3 mixing ratio profiles are an excellent way to capture O3 variability and link meteorological influences to O3 profiles. Clusters correspond to distinct meteorological conditions, e.g., convection, subsidence, cloud cover, and transported pollution. Here the SOM technique is extended to four long-term U.S. sites (Boulder, CO; Huntsville, AL; Trinidad Head, CA; and Wallops Island, VA) with 4530 total profiles. Sensitivity tests on the k-means algorithm and SOM justify use of a 3×3 SOM (nine clusters). At each site, SOM clusters together O3 profiles with similar tropopause height, 500 hPa height/temperature, and amount of tropospheric and total column O3. Cluster means are compared to monthly O3 climatologies. For all four sites, near-tropopause O3 is double (over +100 parts per billion by volume; ppbv) the monthly climatological O3 mixing ratio in three clusters that contain 13-16% of profiles, mostly in winter and spring. Large midtropospheric deviations from monthly means (-6 ppbv, +7-10 ppbv O3 at 6 km) are found in two of the most populated clusters (combined 36-39% of profiles). These two clusters contain distinctly polluted (summer) and clean O3 (fall-winter, high tropopause) profiles, respectively. As for tropical profiles previously analyzed with SOM, O3 averages are often poor representations of U.S. O3 profile statistics.
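A minimal SOM sketch on toy "profiles" may look like the following: a 1-D map of three nodes rather than the study's 3×3 map, with illustrative learning-rate and neighborhood schedules, and short toy vectors standing in for O3 mixing-ratio profiles.

```python
import random

def train_som(profiles, n_nodes=3, iters=300, seed=0):
    """Minimal 1-D self-organizing map: repeatedly pick a profile, find its
    best-matching node (BMU), and pull the BMU and its map neighbors toward it,
    with decaying learning rate and shrinking neighborhood."""
    rng = random.Random(seed)
    dim = len(profiles[0])
    nodes = [[rng.random() for _ in range(dim)] for _ in range(n_nodes)]
    for t in range(iters):
        lr = 0.5 * (1 - t / iters)
        radius = max(1e-9, n_nodes / 2 * (1 - t / iters))
        x = profiles[rng.randrange(len(profiles))]
        bmu = min(range(n_nodes),
                  key=lambda k: sum((a - b) ** 2 for a, b in zip(nodes[k], x)))
        for k in range(n_nodes):
            if abs(k - bmu) <= radius:       # update BMU and its neighbors
                nodes[k] = [w + lr * (a - w) for w, a in zip(nodes[k], x)]
    return nodes

def assign(nodes, x):
    return min(range(len(nodes)),
               key=lambda k: sum((a - b) ** 2 for a, b in zip(nodes[k], x)))

# Two synthetic regimes: "high near-tropopause O3" vs "clean" toy profiles.
high  = [[1.0, 0.8, 0.2], [0.9, 0.9, 0.1], [1.1, 0.7, 0.2]]
clean = [[0.1, 0.2, 0.1], [0.2, 0.1, 0.2], [0.1, 0.1, 0.1]]
nodes = train_som(high + clean)
```

After training, profiles from the two regimes land on different map nodes, which is the clustering behavior the study exploits.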
A Predictive Algorithm to Detect Opioid Use Disorder
Lee, Chee; Sharma, Maneesh; Kantorovich, Svetlana
2018-01-01
Purpose: The purpose of this study was to determine the clinical utility of an algorithm-based decision tool designed to assess risk associated with opioid use in the primary care setting. Methods: A prospective, longitudinal study was conducted to assess the utility of precision medicine testing in 1822 patients across 18 family medicine/primary care clinics in the United States. Using the profile, patients were categorized into low, moderate, and high risk for opioid use. Physicians who ordered testing were asked to complete patient evaluations and document their actions, decisions, and perceptions regarding the utility of the precision medicine tests. Results: Approximately 47% of primary care physicians surveyed used the profile to guide clinical decision-making. These physicians rated the benefit of the profile on patient care an average of 3.6 on a 5-point scale (1 indicating no benefit and 5 indicating significant benefit). Eighty-eight percent of all clinicians surveyed felt the test exhibited some benefit to their patient care. The most frequent utilization for the profile was to guide a change in opioid prescribed. Physicians reported greater benefit of profile utilization for minority patients. Patients whose treatment was guided by the profile had pain levels that were reduced, on average, 2.7 levels on the numeric rating scale. Conclusions: The profile provided primary care physicians with a useful tool to stratify the risk of opioid use disorder and was rated as beneficial for decision-making and patient improvement by the majority of physicians surveyed. Physicians reported the profile resulted in greater clinical improvement for minorities, highlighting the objective use of this profile to guide judicious use of opioids in high-risk patients. Significantly, when physicians used the profile to guide treatment decisions, patient-reported pain was greatly reduced. PMID:29383324
Luo, Y.; Xu, Y.; Liu, Q.; Xia, J.
2008-01-01
In recent years, multichannel analysis of surface waves (MASW) has been increasingly used for obtaining vertical shear-wave velocity profiles within near-surface materials. MASW uses a multichannel recording approach to capture the time-variant, full-seismic wavefield where dispersive surface waves can be used to estimate near-surface S-wave velocity. The technique consists of (1) acquisition of broadband, high-frequency ground roll using a multichannel recording system; (2) efficient and accurate algorithms that allow the extraction and analysis of 1D Rayleigh-wave dispersion curves; (3) stable and efficient inversion algorithms for estimating S-wave velocity profiles; and (4) construction of the 2D S-wave velocity field map.
Rómoli, Santiago; Serrano, Mario Emanuel; Ortiz, Oscar Alberto; Vega, Jorge Rubén; Eduardo Scaglia, Gustavo Juan
2015-07-01
Based on a linear algebra approach, this paper aims at developing a novel control law able to track reference profiles that were previously determined in the literature. A main advantage of the proposed strategy is that the control actions are obtained by solving a system of linear equations. The optimal controller parameters are selected through a Monte Carlo Randomized Algorithm in order to minimize a proposed cost index. The controller performance is evaluated through several tests, and compared with other controllers reported in the literature. Finally, a Monte Carlo Randomized Algorithm is conducted to assess the performance of the proposed controller. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
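The Monte Carlo randomized selection of controller parameters can be sketched as uniform sampling of the parameter box, keeping the sample that minimizes the cost index. The quadratic cost and bounds below are invented stand-ins for the paper's tracking-error index.

```python
import random

def monte_carlo_tune(cost, bounds, n_samples=2000, seed=0):
    """Monte Carlo randomized algorithm sketch: sample controller parameters
    uniformly within `bounds` and keep the set minimizing the cost index."""
    rng = random.Random(seed)
    best_params, best_cost = None, float("inf")
    for _ in range(n_samples):
        params = [rng.uniform(lo, hi) for lo, hi in bounds]
        c = cost(params)
        if c < best_cost:
            best_params, best_cost = params, c
    return best_params, best_cost

# Hypothetical cost index with its optimum at (2, -1).
cost = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
params, c = monte_carlo_tune(cost, [(-5.0, 5.0), (-5.0, 5.0)])
```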
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kang, S; Suh, T; Chung, J
2015-06-15
Purpose: To verify the dose accuracy of the Acuros XB (AXB) dose calculation algorithm at the air-tissue interface using an inhomogeneous phantom for 6-MV flattening filter-free (FFF) beams. Methods: An inhomogeneous phantom including an air cavity was manufactured for verifying dose accuracy at the air-tissue interface. The phantom was composed with 1 and 3 cm thicknesses of air cavity. To evaluate the central axis doses (CAD) and dose profiles at the interface, dose calculations were performed for 3 × 3 and 4 × 4 cm² fields of 6 MV FFF beams with AAA and AXB in the Eclipse treatment planning system. Measurements in this region were performed with Gafchromic film. The root mean square errors (RMSE) were analyzed with calculated and measured dose profiles. Dose profiles were divided into inner-profile (>80%) and penumbra (20% to 80%) regions for evaluating RMSE. To quantify the distribution difference, gamma evaluation was used with a 3%/3 mm agreement criterion. Results: For the percentage differences (%Diffs) between measured and calculated CAD at the interface, AXB showed better agreement than AAA. The %Diffs increased with increasing air cavity thickness, similarly for both algorithms. In RMSEs of the inner-profile, AXB was more accurate than AAA; the difference was up to 6 times, due to overestimation by AAA. RMSEs in the penumbra showed larger differences with increasing measurement depth. Gamma agreement also showed that the passing rates decreased in the penumbra. Conclusion: This study demonstrated that dose calculation with AXB is more accurate than with AAA at the air-tissue interface. The 2D dose distributions with AXB for both the inner-profile and penumbra showed better agreement than with AAA across the measurement depths and air cavity sizes.
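The 3%/3 mm gamma evaluation used above can be sketched in one dimension. Global normalization to the reference maximum is assumed here, and the exponential dose profile is invented for illustration; clinical gamma tools offer local normalization and interpolation options as well.

```python
import math

def gamma_index_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.03, dta=3.0):
    """1-D gamma evaluation (3%/3 mm by default): for each reference point, the
    minimum over evaluated points of sqrt((Δdose/dd)^2 + (Δpos/dta)^2);
    a point passes when gamma <= 1. Dose differences are normalized to the
    reference maximum (global normalization)."""
    dmax = max(ref_dose)
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        g = min(math.sqrt(((ed - rd) / (dd * dmax)) ** 2 + ((ep - rp) / dta) ** 2)
                for ep, ed in zip(eval_pos, eval_dose))
        gammas.append(g)
    return gammas

pos = [float(i) for i in range(11)]             # positions in mm
ref = [100 * math.exp(-0.05 * x) for x in pos]  # reference depth-dose profile
ev  = [d * 1.01 for d in ref]                   # measured: uniform +1% offset
pass_rate = sum(g <= 1 for g in gamma_index_1d(pos, ref, pos, ev)) / len(pos)
```

A uniform 1% dose offset stays well inside the 3% criterion, so every point passes.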
On Study of Air/Space-borne Dual-Wavelength Radar for Estimates of Rain Profiles
NASA Technical Reports Server (NTRS)
Liao, Liang; Meneghini, Robert
2004-01-01
In this study, a framework is discussed for applying air/space-borne dual-wavelength radar to the estimation of characteristic parameters of hydrometeors. The focus of our study is the Global Precipitation Measurement (GPM) precipitation radar, a dual-wavelength radar that operates at Ku (13.8 GHz) and Ka (35 GHz) bands. With the droplet size distribution (DSD) of rain expressed as a Gamma function, a procedure is described to derive the median volume diameter (D(sub 0)) and particle number concentration (N(sub T)) of rain. The correspondence of an important dual-wavelength radar quantity, the differential frequency ratio (DFR), to D(sub 0) in the melting region is given as a function of the distance from the 0 C isotherm. A self-consistent iterative algorithm, which shows promise in accounting for rain attenuation and inferring the DSD without use of the surface reference technique (SRT), is examined by applying it to apparent radar reflectivity profiles simulated from the DSD model and then comparing the estimates with the model (true) results. For light to moderate rain, the self-consistent rain profiling approach converges to unique and correct solutions only if the same shape factor of the Gamma function is used both to generate and to retrieve the rain profiles; it does not converge to the true solutions if the DSD form is not chosen correctly. To further examine the dual-wavelength techniques, the self-consistent algorithm, along with forward and backward rain profiling algorithms, is then applied to measurements taken by the 2nd generation Precipitation Radar (PR-2) built by the Jet Propulsion Laboratory. It is found that rain profiles estimated from the forward and backward approaches are not sensitive to the shape factor of the Gamma DSD, but the self-consistent method is.
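The DFR at the heart of the dual-wavelength technique is simply the difference of the two reflectivities expressed in dB. A minimal sketch (the function name and toy reflectivity values below are illustrative, not from the paper):

```python
import math

def dfr_profile(z_ku, z_ka):
    """Differential frequency ratio per range bin, in dB:
    DFR = 10 log10(Z_Ku) - 10 log10(Z_Ka), reflectivities in linear units."""
    return [10.0 * math.log10(ku) - 10.0 * math.log10(ka)
            for ku, ka in zip(z_ku, z_ka)]

# Toy 3-bin profile (mm^6 m^-3): Ka-band reflectivity is attenuated
# relative to Ku, so DFR grows with path length through the rain.
print([round(v, 2) for v in dfr_profile([100.0, 200.0, 400.0],
                                        [50.0, 80.0, 100.0])])  # -> [3.01, 3.98, 6.02]
```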
Brenton, Ashley; Lee, Chee; Lewis, Katrina; Sharma, Maneesh; Kantorovich, Svetlana; Smith, Gregory A; Meshkin, Brian
2018-01-01
The purpose of this study was to determine the clinical utility of an algorithm-based decision tool designed to assess risk associated with opioid use. Specifically, we sought to assess how physicians were using the profile in patient care and how its use affected patient outcomes. A prospective, longitudinal study was conducted to assess the utility of precision medicine testing in 5,397 patients across 100 clinics in the USA. Using a patent-protected, validated algorithm combining specific genetic risk factors with phenotypic traits, patients were categorized into low-, moderate-, and high-risk patients for opioid abuse. Physicians who ordered precision medicine testing were asked to complete patient evaluations and document their actions, decisions, and perceptions regarding the utility of the precision medicine tests. The patient outcomes associated with each treatment action were carefully documented. Physicians used the profile to guide treatment decisions for over half of the patients. Of those, guided treatment decisions for 24.5% of the patients were opioid related, including changing the opioid prescribed, starting an opioid, or titrating a patient off the opioid. Treatment guidance was strongly influenced by profile-predicted opioid use disorder (OUD) risk. Most importantly, patients whose physicians used the profile to guide opioid-related treatment decisions had improved clinical outcomes, including better pain management by medication adjustments, with an average pain decrease of 3.4 points on a scale of 1-10. Patients whose physicians used the profile to guide opioid-related treatment decisions had improved clinical outcomes, as measured by decreased pain levels resulting from better pain management with prescribed medications. The clinical utility of the profile is twofold. 
It provides clinically actionable recommendations that can be used to 1) prevent OUD through limiting initial opioid prescriptions and 2) reduce pain in patients at low risk of developing OUD.
Lukashin, A V; Fuchs, R
2001-05-01
Cluster analysis of genome-wide expression data from DNA microarray hybridization studies has proved to be a useful tool for identifying biologically relevant groupings of genes and samples. In the present paper, we focus on several important issues related to clustering algorithms that have not yet been fully studied. We describe a simple and robust algorithm for the clustering of temporal gene expression profiles that is based on the simulated annealing procedure. In general, this algorithm is guaranteed to eventually find the globally optimal distribution of genes over clusters. We introduce an iterative scheme that serves to evaluate quantitatively the optimal number of clusters for each specific data set. The scheme is based on standard approaches used in regular statistical tests. The basic idea is to organize the search for the optimal number of clusters simultaneously with the optimization of the distribution of genes over clusters. The efficiency of the proposed algorithm has been evaluated by means of a reverse engineering experiment, that is, a situation in which the correct distribution of genes over clusters is known a priori. The employment of this statistically rigorous test has shown that our algorithm places greater than 90% of genes into correct clusters. Finally, the algorithm has been tested on real gene expression data (expression changes during the yeast cell cycle) for which the fundamental patterns of gene expression and the assignment of genes to clusters are well understood from numerous previous studies.
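The simulated-annealing clustering described above can be sketched as a Metropolis search over gene-to-cluster assignments. The toy implementation below (all parameter values and the four 3-point "expression profiles" are hypothetical, not from the paper) minimizes the within-cluster sum of squared distances:

```python
import math, random

def profile_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def cluster_cost(profiles, labels, k):
    # Sum of squared distances of each profile to its cluster centroid.
    cost = 0.0
    for c in range(k):
        members = [p for p, l in zip(profiles, labels) if l == c]
        if not members:
            continue
        centroid = [sum(col) / len(members) for col in zip(*members)]
        cost += sum(profile_distance(p, centroid) for p in members)
    return cost

def anneal_clusters(profiles, k, steps=20000, t0=1.0, cooling=0.9995, seed=0):
    rng = random.Random(seed)
    labels = [rng.randrange(k) for _ in profiles]
    cost = cluster_cost(profiles, labels, k)
    t = t0
    for _ in range(steps):
        i = rng.randrange(len(profiles))
        old = labels[i]
        labels[i] = rng.randrange(k)        # propose moving one gene
        new_cost = cluster_cost(profiles, labels, k)
        # Metropolis rule: always accept improvements; accept uphill
        # moves with probability exp(-delta/T), which shrinks as T cools.
        if new_cost <= cost or rng.random() < math.exp(-(new_cost - cost) / t):
            cost = new_cost
        else:
            labels[i] = old
        t *= cooling
    return labels, cost

# Two obvious groups of temporal profiles: rising vs falling.
data = [[0, 1, 2], [0, 1.1, 2.2], [2, 1, 0], [2.2, 1.1, 0]]
labels, cost = anneal_clusters(data, k=2)
print(labels, round(cost, 3))
```

The slow cooling schedule is what gives the (asymptotic) global-optimality property the abstract mentions; a greedy variant would simply never accept uphill moves.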
Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang
2016-01-01
Using the refractivity-from-clutter (RFC) technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a dynamic adaptive parameter operation and a crossover operation were integrated into the standard CS (DACS-CO). Rechenberg's 1/5 criterion, combined with a learning factor, was used to control the dynamic adaptive parameter adjustment process. The crossover operation of the genetic algorithm was utilized to guarantee population diversity. The new hybrid algorithm has better local search ability and contributes to superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulated data and real radar clutter data are used. The numerical experiments demonstrate that the DACS-CO algorithm can provide an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter. PMID:27212938
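A bare-bones cuckoo search of the kind DACS-CO extends can be sketched as follows. This sketch omits the paper's dynamic adaptive operation and crossover, and substitutes a toy two-parameter misfit function for the refractivity inversion; all names and constants are illustrative assumptions:

```python
import math, random

def levy_step(rng, beta=1.5):
    # Mantegna's algorithm for a heavy-tailed Levy-distributed step length.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.gauss(0, sigma) / abs(rng.gauss(0, 1)) ** (1 / beta)

def cuckoo_search(objective, bounds, n_nests=15, pa=0.25, steps=300, seed=1):
    rng = random.Random(seed)
    clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    nests = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_nests)]
    fit = [objective(n) for n in nests]
    for _ in range(steps):
        best = nests[fit.index(min(fit))]
        # New candidate via a Levy flight around a random nest, with the
        # step size scaled by the distance to the current best nest.
        i = rng.randrange(n_nests)
        cand = clip([v + 0.01 * levy_step(rng) * (v - b)
                     for v, b in zip(nests[i], best)])
        j = rng.randrange(n_nests)
        fc = objective(cand)
        if fc < fit[j]:
            nests[j], fit[j] = cand, fc
        # Abandon a fraction pa of the worst nests and rebuild them randomly.
        for w in sorted(range(n_nests), key=fit.__getitem__, reverse=True)[:int(pa * n_nests)]:
            nests[w] = [rng.uniform(lo, hi) for lo, hi in bounds]
            fit[w] = objective(nests[w])
    b = fit.index(min(fit))
    return nests[b], fit[b]

# Toy stand-in for the refractivity inversion: recover two parameters
# minimizing the misfit to "observed" values (0.3, -0.7).
target = [0.3, -0.7]
sol, err = cuckoo_search(lambda x: sum((v - t) ** 2 for v, t in zip(x, target)),
                         [(-1.0, 1.0), (-1.0, 1.0)])
print([round(v, 2) for v in sol], round(err, 5))
```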
NASA Technical Reports Server (NTRS)
Petot, D.; Loiseau, H.
1982-01-01
Unsteady aerodynamic methods adopted for the study of aeroelasticity in helicopters are considered with focus on the development of a semiempirical model of unsteady aerodynamic forces acting on an oscillating profile at high incidence. The successive smoothing algorithm described leads to the model's coefficients in a very satisfactory manner.
NASA Technical Reports Server (NTRS)
Wang, Zhien; Heymsfield, Gerald M.; Li, Lihua; Heymsfield, Andrew J.
2005-01-01
An algorithm to retrieve optically thick ice cloud microphysical property profiles is developed using the GSFC 9.6 GHz ER-2 Doppler Radar (EDOP) and the 94 GHz Cloud Radar System (CRS) measurements aboard the high-altitude ER-2 aircraft. In situ size distribution and total water content data from the CRYSTAL-FACE field campaign are used for the algorithm development. To reduce uncertainty in calculated radar reflectivity factors (Ze) at these wavelengths, coincident radar measurements and size distribution data are used to guide the selection of mass-length relationships and to deal with the density and non-spherical effects of ice crystals on the Ze calculations. The algorithm is able to retrieve microphysical property profiles of optically thick ice clouds, such as deep convective and anvil clouds, which are very challenging for single-frequency radar and lidar. Examples of retrieved microphysical properties for a deep convective cloud are presented, which show that EDOP and CRS measurements provide rich information to study cloud structure and evolution. Good agreement between IWPs derived from an independent submillimeter-wave radiometer, CoSSIR, and the dual-wavelength radar measurements indicates the accuracy of the IWC retrieved from the two-frequency radar algorithm.
Improved retrieval of cloud base heights from ceilometer using a non-standard instrument method
NASA Astrophysics Data System (ADS)
Wang, Yang; Zhao, Chuanfeng; Dong, Zipeng; Li, Zhanqing; Hu, Shuzhen; Chen, Tianmeng; Tao, Fa; Wang, Yuzhao
2018-04-01
Cloud-base height (CBH) is a basic cloud parameter but has not been measured accurately, especially under polluted conditions, due to the interference of aerosols. Taking advantage of a comprehensive field experiment in northern China in which a variety of advanced cloud-probing instruments were operated, different methods of detecting CBH are assessed. The Micro-Pulse Lidar (MPL) and the Vaisala ceilometer (CL51) provided two types of backscatter profiles. The latter has been employed widely as a standard means of measuring CBH, using the manufacturer's operational algorithm to generate standard CBH products (CL51 MAN), whose quality is rigorously assessed here in comparison with a research algorithm that we developed, named the value distribution equalization (VDE) algorithm. The VDE algorithm was applied to the lidar backscatter profiles from both instruments. It is found to produce more accurate estimates of CBH for both instruments and can cope well with heavy aerosol loading. By contrast, CL51 MAN overestimates CBH by 400 m and misses many low-level clouds under such conditions. These findings are important given that the CL51 has been adopted operationally by many meteorological stations in China.
An algorithm to extract more accurate stream longitudinal profiles from unfilled DEMs
NASA Astrophysics Data System (ADS)
Byun, Jongmin; Seong, Yeong Bae
2015-08-01
Morphometric features observed from a stream longitudinal profile (SLP) reflect channel responses to lithological variation and to changes in uplift or climate; therefore, they constitute essential indicators in studies of the dynamics between tectonics, climate, and surface processes. The widespread availability of digital elevation models (DEMs) and their processing enable semi-automatic extraction of SLPs as well as additional stream profile parameters, thus reducing the time spent extracting them and simultaneously allowing regional-scale studies of SLPs. However, careful consideration is required to extract SLPs directly from a DEM, because the DEM must be altered by a depression-filling process to ensure the continuity of flows across it. Such alteration inevitably introduces distortions into the SLP, such as stair steps, biased elevation values, and inaccurate stream paths. This paper proposes a new algorithm, called the maximum depth tracing algorithm (MDTA), to extract more accurate SLPs from depression-unfilled DEMs. The MDTA supposes that depressions in DEMs are not necessarily artifacts to be removed, and that the elevation values within them are useful for representing the real landscape more accurately. To ensure the continuity of flows even across the unfilled DEM, the MDTA first determines the outlet of each depression and then reverses the flow directions of the cells on the line of maximum depth within each depression, proceeding from the outlet toward the sink. It also calculates flow accumulation without disruption across the unfilled DEM. Comparative analysis with the profiles extracted by the hydrologic functions implemented in ArcGIS™ was performed to illustrate the benefits of the MDTA.
This analysis shows that the MDTA provides more accurate stream paths in depression areas, and consequently reduces distortions of the SLPs derived from those paths, such as the exaggerated elevation values and negatively biased slopes commonly observed in SLPs built using ArcGIS™. The algorithm proposed here could therefore aid all studies requiring more reliable stream paths and SLPs from DEMs.
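The elevation bias that depression filling introduces, and that the MDTA avoids by working on the unfilled DEM, can be illustrated with a one-dimensional toy profile (flow left to right; elevation values invented). Filling raises the pit cell to its outlet level, producing the flat stair step described above:

```python
def filled_profile(elev):
    """Lowest surface >= elev that is non-increasing downstream (flow is
    left to right), i.e. the result of depression filling on a 1-D
    elevation sequence. Computed by a right-to-left running maximum."""
    filled = list(elev)
    for i in range(len(elev) - 2, -1, -1):
        filled[i] = max(filled[i], filled[i + 1])
    return filled

elev = [10, 8, 6, 3, 5, 4, 2, 1]          # a depression: the 3 sits below its outlet (5)
print("unfilled:", elev)
print("filled:  ", filled_profile(elev))  # pit raised to 5 -> flat step, biased elevation
```

An MDTA-style profile would keep the unfilled elevation (3) and instead route flow through the depression to its outlet, so the SLP retains the true valley-floor geometry.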
SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Tian, Z; Song, T
Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to the dose calculation accuracy and hence the plan quality. Yet due to the large number of beamlets, FSPB commissioning can be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MATLAB to facilitate the commissioning. Methods: A FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and 2D scaling factors accounting for the longitudinal and off-axis corrections. The former were fitted using the penumbra of a reference broad beam's dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernels of the beamlets, calculated with the fitted profile parameters and scaled using the scaling factors, these factors can be determined by solving an optimization problem that minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned a FSPB algorithm for three linac photon beams (6 MV, 15 MV and 6 MV FFF). Doses for four field sizes (6*6 cm2, 10*10 cm2, 15*15 cm2 and 20*20 cm2) were calculated and compared with the reference dose exported from the Eclipse TPS. For depth dose curves, the differences are less than 1% of maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of central dose in inner-beam regions. The differences in the output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms to facilitate the commissioning, providing sufficient accuracy of beamlet dose calculation for IMRT optimization.
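Because a broad-beam dose is a linear superposition of scaled beamlet kernels, the scaling factors can be recovered by linear least squares against the reference dose. A two-beamlet sketch using the normal equations (the kernels and reference values are synthetic, not commissioning data):

```python
def fit_scaling(kernel_a, kernel_b, reference):
    """Least-squares scaling factors (s_a, s_b) minimizing
    || s_a*kernel_a + s_b*kernel_b - reference ||^2, via the 2x2
    normal equations."""
    a11 = sum(x * x for x in kernel_a)
    a22 = sum(x * x for x in kernel_b)
    a12 = sum(x * y for x, y in zip(kernel_a, kernel_b))
    b1 = sum(x * r for x, r in zip(kernel_a, reference))
    b2 = sum(x * r for x, r in zip(kernel_b, reference))
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

# Synthetic check: a "broad beam" built from two beamlet kernels with
# known scaling factors 1.2 and 0.8 should return exactly those factors.
ka = [0.0, 1.0, 2.0, 1.0, 0.0]
kb = [1.0, 2.0, 1.0, 0.0, 0.0]
ref = [1.2 * a + 0.8 * b for a, b in zip(ka, kb)]
print(fit_scaling(ka, kb, ref))
```

With many beamlets this becomes an ordinary overdetermined least-squares system; the two-unknown case just makes the closed-form solution readable.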
3D Cloud Field Prediction using A-Train Data and Machine Learning Techniques
NASA Astrophysics Data System (ADS)
Johnson, C. L.
2017-12-01
Validation of cloud process parameterizations used in global climate models (GCMs) would greatly benefit from observed 3D cloud fields at sizes comparable to that of a GCM grid cell. For the highest resolution simulations, surface grid cells are on the order of 100 km by 100 km. CloudSat/CALIPSO data provide a 1-km-wide curtain of detailed vertical cloud fraction profile (CFP) and liquid and ice water content (LWC/IWC). This work utilizes four machine learning algorithms to create nonlinear regressions of the CFP, LWC, and IWC data, using radiances, surface type, and location of measurement as predictors, and applies the regression equations to off-track locations, generating 3D cloud fields for 100 km by 100 km domains. The CERES-CloudSat-CALIPSO-MODIS (C3M) merged data set for February 2007 is used. Support vector machines, artificial neural networks, Gaussian processes, and decision trees are trained on 1000 km of continuous C3M data. Accuracy is computed using existing vertical profiles that are excluded from the training data and occur within 100 km of the training data, and the accuracy of the four algorithms is compared. Average accuracy for one day of predicted data is 86% for the most successful algorithm. The methodology for training the algorithms, determining valid prediction regions, and applying the equations off-track is discussed. Predicted 3D cloud fields are provided as inputs to the Ed4 NASA LaRC Fu-Liou radiative transfer code, and the resulting TOA radiances are compared to observed CERES/MODIS radiances.
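The regression setup, predictors at a pixel mapped to a vertical cloud profile, can be illustrated with a deliberately simple nearest-neighbor stand-in for the four algorithms used in the study (the radiance and profile values below are invented):

```python
def nearest_profile(train_x, train_y, query):
    """Predict a vertical cloud-fraction profile for an off-track pixel by
    copying the profile of the most similar on-track training pixel
    (1-nearest neighbor in predictor space)."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    best = min(range(len(train_x)), key=lambda i: dist(train_x[i], query))
    return train_y[best]

# Predictors: (radiance_1, radiance_2, surface_type); targets: 3-layer CFP.
train_x = [(220.0, 0.8, 0), (280.0, 0.1, 1), (240.0, 0.5, 0)]
train_y = [[0.9, 0.7, 0.2], [0.0, 0.0, 0.0], [0.4, 0.3, 0.1]]
print(nearest_profile(train_x, train_y, (225.0, 0.75, 0)))  # closest to the cloudy pixel
```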
NASA Technical Reports Server (NTRS)
Hlavka, Dennis L.; Palm, S. P.; Welton, E. J.; Hart, W. D.; Spinhirne, J. D.; McGill, M.; Mahesh, A.; Starr, David OC. (Technical Monitor)
2001-01-01
The Geoscience Laser Altimeter System (GLAS) is scheduled for launch on the ICESat satellite as part of the NASA EOS mission in 2002. GLAS will be used to perform high resolution surface altimetry and will also provide a continuously operating atmospheric lidar to profile clouds, aerosols, and the planetary boundary layer with horizontal and vertical resolution of 175 and 76.8 m, respectively. GLAS is the first active satellite atmospheric profiler to provide global coverage. Data products include direct measurements of the heights of aerosol and cloud layers, and the optical depth of transmissive layers. In this poster we provide an overview of the GLAS atmospheric data products, present a simulated GLAS data set, and show results from the simulated data set using the GLAS data processing algorithm. Optical results from the ER-2 Cloud Physics Lidar (CPL), which uses many of the same processing algorithms as GLAS, show algorithm performance with real atmospheric conditions during the Southern African Regional Science Initiative (SAFARI 2000).
Daytime O/N2 Retrieval Algorithm for the Ionospheric Connection Explorer (ICON)
NASA Astrophysics Data System (ADS)
Stephan, Andrew W.; Meier, R. R.; England, Scott L.; Mende, Stephen B.; Frey, Harald U.; Immel, Thomas J.
2018-02-01
The NASA Ionospheric Connection Explorer Far-Ultraviolet spectrometer, ICON FUV, will measure altitude profiles of the daytime far-ultraviolet (FUV) OI 135.6 nm and N2 Lyman-Birge-Hopfield (LBH) band emissions that are used to determine thermospheric density profiles and state parameters related to thermospheric composition; specifically the thermospheric column O/N2 ratio (symbolized as ΣO/N2). This paper describes the algorithm concept that has been adapted and updated from one previously applied with success to limb data from the Global Ultraviolet Imager (GUVI) on the NASA Thermosphere Ionosphere Mesosphere Energetics and Dynamics (TIMED) mission. We also describe the requirements that are imposed on the ICON FUV to measure ΣO/N2 over any 500-km sample in daytime with a precision of better than 8.7%. We present results from orbit-simulation testing that demonstrates that the ICON FUV and our thermospheric composition retrieval algorithm can meet these requirements and provide the measurements necessary to address ICON science objectives.
Online Learning Flight Control for Intelligent Flight Control Systems (IFCS)
NASA Technical Reports Server (NTRS)
Niewoehner, Kevin R.; Carter, John (Technical Monitor)
2001-01-01
The research accomplishments for the cooperative agreement 'Online Learning Flight Control for Intelligent Flight Control Systems (IFCS)' include the following: (1) previous IFC program data collection and analysis; (2) IFC program support site (configured IFC systems support network, configured Tornado/VxWorks OS development system, made Configuration and Documentation Management Systems Internet accessible); (3) Airborne Research Test Systems (ARTS) II Hardware (developed hardware requirements specification, developing environmental testing requirements, hardware design, and hardware design development); (4) ARTS II software development laboratory unit (procurement of lab style hardware, configured lab style hardware, and designed interface module equivalent to ARTS II faceplate); (5) program support documentation (developed software development plan, configuration management plan, and software verification and validation plan); (6) LWR algorithm analysis (performed timing and profiling on algorithm); (7) pre-trained neural network analysis; (8) Dynamic Cell Structures (DCS) Neural Network Analysis (performing timing and profiling on algorithm); and (9) conducted technical interchange and quarterly meetings to define IFC research goals.
Ayaz, Shirazi Muhammad; Kim, Min Young
2018-01-01
In this article, a multi-view registration approach for a 3D handheld profiling system based on the multiple-shot structured light technique is proposed. The multi-view registration approach comprises coarse registration and point cloud refinement using the iterative closest point (ICP) algorithm. Coarse registration of multiple point clouds was performed using relative orientation and translation parameters estimated via homography-based visual navigation. The proposed system was evaluated using an artificial human skull and a paper box object. For the quantitative evaluation of the accuracy of a single 3D scan, a paper box was reconstructed, and the mean errors in its height and breadth were found to be 9.4 μm and 23 μm, respectively. A comprehensive quantitative evaluation and comparison of the proposed algorithm was performed against other variants of ICP. The root mean square error for the ICP algorithm in registering a pair of point clouds of the skull object was also found to be less than 1 mm. PMID:29642552
Reducing Surface Clutter in Cloud Profiling Radar Data
NASA Technical Reports Server (NTRS)
Tanelli, Simone; Pak, Kyung; Durden, Stephen; Im, Eastwood
2008-01-01
An algorithm has been devised to reduce ground clutter in the data products of the CloudSat Cloud Profiling Radar (CPR), a nadir-looking radar instrument, in orbit around the Earth, that measures power backscattered by clouds as a function of distance from the instrument. Ground clutter contaminates the CPR data in the lowest 1 km of the atmospheric profile, heretofore making it impossible to use CPR data to satisfy the scientific interest in studying clouds and light rainfall at low altitude. The algorithm is based partly on the fact that the CloudSat orbit is such that the geodetic altitude of the CPR varies continuously over a range of approximately 25 km. As the geodetic altitude changes, the radar timing parameters are changed at intervals defined by flight software in order to keep the troposphere inside a data-collection time window. However, within each interval, the surface of the Earth continuously "scans through" (that is, moves across) a few range bins of the data time window. For each radar profile, only a few samples [one for every range-bin increment ((Delta)r = 240 m)] of the surface-clutter signature are available around the range bin in which the peak of the surface return is observed, but samples in consecutive radar profiles are offset slightly (by amounts much less than (Delta)r) with respect to each other according to the relative change in geodetic altitude. As a consequence, in a case in which the surface area under examination is homogeneous (e.g., an ocean surface), a sequence of consecutive radar profiles of the surface in that area contains samples of the surface response with a range resolution (Delta)p much finer than the range-bin increment ((Delta)p << (Delta)r).
Once the high-resolution surface response has thus become available, the profile of surface clutter can be accurately estimated by use of a conventional maximum-correlation scheme: a translated and scaled version of the high-resolution surface response is fitted to the observed low-resolution profile. The translation and scaling factors that optimize the fit in a maximum-correlation sense represent (1) the true position of the surface relative to the sampled surface peak and (2) the magnitude of the surface backscatter. The performance of this algorithm has been tested on CloudSat data acquired over an ocean surface. A preliminary analysis of the test data showed a surface-clutter-rejection ratio over flat surfaces of >10 dB and a reduction of the contaminated altitude over the ocean from about 1 km to about 0.5 km. The algorithm has been embedded in CloudSat L1B processing as of Release 04 (July 2007), and the estimated flat-surface clutter is removed from the observed profile of reflectivity in the L2B-GEOPROF product (see the CloudSat product documentation for details and performance at http://www.cloudsat.cira.colostate.edu/dataSpecs.php?prodid=1).
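The maximum-correlation fit can be sketched as a brute-force search over sub-bin translations, with the scale factor obtained in closed form by least squares at each candidate offset (the template, stride, and values below are illustrative, not CPR data):

```python
def fit_clutter(template, observed, stride):
    """Fit a translated, scaled copy of the fine-resolution surface
    response `template` to the coarse `observed` profile; returns the
    (offset, scale) minimizing the squared misfit, with the scale
    obtained in closed form by least squares at each candidate offset."""
    n = len(observed)
    best = None
    for off in range(len(template) - (n - 1) * stride):
        t = [template[off + k * stride] for k in range(n)]
        tt = sum(x * x for x in t)
        if tt == 0.0:
            continue
        s = sum(x * y for x, y in zip(t, observed)) / tt   # least-squares scale
        err = sum((s * x - y) ** 2 for x, y in zip(t, observed))
        if best is None or err < best[0]:
            best = (err, off, s)
    return best[1], best[2]

# Fine-resolution surface response; the "observation" samples every 4th
# fine bin, starting at fine offset 3, scaled by 2.5.
template = [0, 1, 3, 8, 15, 9, 4, 2, 1, 0, 0, 0, 0, 0, 0, 0]
observed = [2.5 * template[3 + 4 * k] for k in range(3)]
print(fit_clutter(template, observed, stride=4))
```

The recovered offset locates the true surface relative to the sampled peak, and the scale gives the surface backscatter magnitude, matching items (1) and (2) above.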
Profile and Instrumentation Driven Methods for Embedded Signal Processing
2015-01-01
Validating the AIRS Version 5 CO Retrieval with DACOM In Situ Measurements During INTEX-A and -B
NASA Technical Reports Server (NTRS)
McMillan, Wallace W.; Evans, Keith D.; Barnet, Christopher D.; Maddy, Eric; Sachse, Glen W.; Diskin, Glenn S.
2011-01-01
Herein we provide a description of the Atmospheric Infrared Sounder (AIRS) version 5 (v5) carbon monoxide (CO) retrieval algorithm and its validation with DACOM in situ measurements during the INTEX-A and -B campaigns. All standard and support products in the AIRS v5 CO retrieval algorithm are documented. Building on prior publications, we describe the convolution of in situ measurements with the AIRS v5 CO averaging kernel and first-guess CO profile as required for proper validation. Validation is accomplished through comparison of AIRS CO retrievals with convolved in situ CO profiles acquired during the NASA Intercontinental Chemical Transport Experiments (INTEX) in 2004 and 2006. From 143 profiles in the northern mid-latitudes during these two experiments, we find AIRS v5 CO retrievals are biased high by 6%-10% between 900 and 300 hPa, with a root-mean-square error of 8%-12%. No significant differences were found between validation using spiral profiles coincident with AIRS overpasses and in-transit profiles under the satellite track but up to 13 h off in time. Similarly, no significant differences in validation results were found for ocean versus land, day versus night, or with respect to retrieved cloud-top pressure or cloud fraction.
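The averaging-kernel convolution used for validation follows the standard form x_hat = x_a + A (x_true - x_a). A small sketch with an invented 3-level profile and kernel (the real AIRS kernel and a priori differ):

```python
def convolve_with_kernel(x_apriori, kernel, x_insitu):
    """x_hat = x_a + A (x_true - x_a): the profile the retrieval would
    report for the in situ profile, given its a priori x_a and
    averaging-kernel matrix A."""
    diff = [xt - xa for xt, xa in zip(x_insitu, x_apriori)]
    return [xa + sum(a * d for a, d in zip(row, diff))
            for xa, row in zip(x_apriori, kernel)]

# Invented 3-level CO profile (ppbv) and kernel; rows sum to < 1,
# mimicking the retrieval's limited vertical sensitivity.
xa = [100.0, 90.0, 80.0]
A = [[0.5, 0.2, 0.0],
     [0.2, 0.5, 0.1],
     [0.0, 0.1, 0.4]]
print(convolve_with_kernel(xa, A, [140.0, 110.0, 90.0]))
```

Comparing retrievals against this smoothed profile, rather than the raw in situ data, removes the part of the disagreement caused purely by the retrieval's vertical smoothing.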
Barton, Justin E.; Boyer, Mark D.; Shi, Wenyu; ...
2015-07-30
DIII-D experimental results are reported to demonstrate the potential of physics-model-based safety factor profile control for robust and reproducible sustainment of advanced scenarios. In the absence of feedback control, variability in wall conditions and plasma impurities, as well as drifts due to external disturbances, can limit the reproducibility of discharges with simple pre-programmed scenario trajectories. The control architecture utilized is a feedforward + feedback scheme, where the feedforward commands are computed off-line and the feedback commands are computed on-line. In this work, firstly, a first-principles-driven (FPD), physics-based model of the q profile and normalized beta (βN) dynamics is embedded into a numerical optimization algorithm to design feedforward actuator trajectories that steer the plasma through the tokamak operating space to reach a desired stationary target state characterized by the achieved q profile and βN. Good agreement between experimental results and simulations demonstrates the accuracy of the models employed for physics-model-based control design. Secondly, a feedback algorithm for q profile control is designed following a FPD approach, and the ability of the controller to achieve and maintain a target q profile evolution is tested in DIII-D high confinement (H-mode) experiments. The controller is shown to be able to effectively control the q profile when βN is relatively close to the target, indicating the need for integrated q profile and βN control to further enhance the ability to achieve robust scenario execution. Furthermore, the ability of an integrated q profile + βN feedback controller to track a desired target is demonstrated through simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grzetic, S; Weldon, M; Noa, K
Purpose: This study compares the newly released MaxFOV Revision 1 EFOV reconstruction algorithm for the GE RT590 to the older WideView EFOV algorithm. Two radiotherapy overlays, from Q-fix and Diacor, are included in our analysis. Hounsfield units (HU) generated with the WideView algorithm varied in the extended field (beyond 50 cm), and the scanned object's border varied from slice to slice. A validation of HU consistency between the two reconstruction algorithms is performed. Methods: A CatPhan 504 and a CIRS062 Electron Density Phantom were scanned on a GE RT590 CT simulator. The phantoms were positioned in multiple locations within the scan field of view so that some of the density plugs were outside the 50 cm reconstruction circle. Images were reconstructed using both the WideView and MaxFOV algorithms. The HU for each scan were characterized both as averages over a volume and as profiles. Results: HU values are consistent between the two algorithms. Low-density material shows a slight increase in HU value, and high-density material a slight decrease, as the distance from the sweet spot increases. Border inconsistencies and shading artifacts are still present with the MaxFOV reconstruction on the Q-fix overlay but not the Diacor overlay (it should be noted that the Q-fix overlay is not currently GE-certified). HU values for water outside the 50 cm FOV are within 40 HU of reconstructions at the sweet spot of the scanner. CatPhan HU profiles show improvement with the MaxFOV algorithm as the scanner edge is approached. Conclusion: The new MaxFOV algorithm improves the contour border for objects outside of the standard FOV when using a GE-approved tabletop. Air cavities outside of the standard FOV still create inconsistent object borders. HU consistency is within GE specifications, and the accuracy of the phantom edge improves. Further adjustments to the algorithm are being investigated by GE.
Characterizing isolated attosecond pulses with angular streaking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Siqi; Guo, Zhaoheng; Coffee, Ryan N.
Here, we present a reconstruction algorithm for isolated attosecond pulses, which exploits the phase dependent energy modulation of a photoelectron ionized in the presence of a strong laser field. The energy modulation due to a circularly polarized laser field is manifest strongly in the angle-resolved photoelectron momentum distribution, allowing for complete reconstruction of the temporal and spectral profile of an attosecond burst. We show that this type of reconstruction algorithm is robust against counting noise and suitable for single-shot experiments. This algorithm holds potential for a variety of applications for attosecond pulse sources.
Charge scheduling of an energy storage system under time-of-use pricing and a demand charge.
Yoon, Yourim; Kim, Yong-Hyuk
2014-01-01
A real-coded genetic algorithm is used to schedule the charging of an energy storage system (ESS), operated in tandem with renewable power by an electricity consumer who is subject to time-of-use pricing and a demand charge. Simulations based on load and generation profiles of typical residential customers show that an ESS scheduled by our algorithm can reduce electricity costs by approximately 17%, compared to a system without an ESS and by 8% compared to a scheduling algorithm based on net power.
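The cost structure such a scheduler optimizes can be made concrete with a small sketch. The hourly prices, demand-charge rate, and profiles below are illustrative assumptions, not values from the paper, and the genetic-algorithm search itself is omitted; this only shows the objective an ESS schedule would be scored against.

```python
# Hedged sketch: electricity cost under time-of-use (TOU) pricing plus a
# demand charge, i.e. the objective an ESS charge scheduler would minimize.
# All numbers are illustrative assumptions, not data from the paper.

def electricity_cost(load, generation, ess_power, tou_price, demand_rate):
    """Cost for one billing period.

    load, generation, ess_power: per-hour kW (ess_power > 0 means charging,
    i.e. extra draw; < 0 means discharging). tou_price: $/kWh for each hour.
    demand_rate: $/kW applied to the peak net draw from the grid.
    """
    net = [max(0.0, l - g + e) for l, g, e in zip(load, generation, ess_power)]
    energy_cost = sum(n * p for n, p in zip(net, tou_price))
    demand_cost = demand_rate * max(net)  # demand charge on the peak hour
    return energy_cost + demand_cost

# Shifting charging into the cheap, low-load hour lowers both components.
load = [2.0, 2.0, 5.0, 5.0]          # kW per hour
gen = [0.0, 1.0, 1.0, 0.0]           # renewable output, kW per hour
price = [0.05, 0.05, 0.20, 0.20]     # TOU $/kWh
base = electricity_cost(load, gen, [0.0, 0.0, 0.0, 0.0], price, 10.0)
shifted = electricity_cost(load, gen, [1.0, 0.0, -0.5, -0.5], price, 10.0)
print(base, shifted)  # the shifted schedule is cheaper
```

A real-coded GA would search over the `ess_power` vector (subject to capacity and power limits) using a cost like this as its fitness function.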
NASA Astrophysics Data System (ADS)
Suleiman, R. M.; Chance, K.; Liu, X.; Kurosu, T. P.; Gonzalez Abad, G.
2014-12-01
We present and discuss a detailed description of the retrieval algorithms for the OMI BrO product. The BrO algorithm is based on direct fitting of radiances from 319.0-347.5 nm. Radiances are modeled from the solar irradiance, attenuated and adjusted by contributions from the target gas and interfering gases, rotational Raman scattering, undersampling, additive and multiplicative closure polynomials, and a common mode spectrum. The version of the algorithm used for BrO includes relevant changes with respect to the operational code: the fitting of the O2-O2 collisional complex, updates to the high-resolution solar reference spectrum and to spectroscopy, an updated air mass factor (AMF) calculation scheme, and the inclusion of scattering weights and vertical profiles in the level 2 products. We include retrieval parameter and window optimization to reduce the interference from O3, HCHO, O2-O2, and SO2, improve fitting accuracy and uncertainty, reduce striping, and improve long-term stability. We validate OMI BrO with ground-based measurements from Harestua and with chemical transport model simulations. We analyze the global distribution and seasonal variation of BrO and investigate BrO emissions from volcanoes and salt lakes.
A comparison of hydrographically and optically derived mixed layer depths
Zawada, D.G.; Zaneveld, J.R.V.; Boss, E.; Gardner, W.D.; Richardson, M.J.; Mishonov, A.V.
2005-01-01
Efforts to understand and model the dynamics of the upper ocean would be significantly advanced given the ability to rapidly determine mixed layer depths (MLDs) over large regions. Remote sensing technologies are an ideal choice for achieving this goal. This study addresses the feasibility of estimating MLDs from optical properties. These properties are strongly influenced by suspended particle concentrations, which generally reach a maximum at pycnoclines. The premise therefore is to use a gradient in beam attenuation at 660 nm (c660) as a proxy for the depth of a particle-scattering layer. Using a global data set collected during World Ocean Circulation Experiment cruises from 1988 to 1997, six algorithms were employed to compute MLDs from either density or temperature profiles. Given the absence of published optically based MLD algorithms, two new methods were developed that use c660 profiles to estimate the MLD. Intercomparison of the six hydrographically based algorithms revealed some significant disparities among the resulting MLD values. Comparisons between the hydrographical and optical approaches indicated a first-order agreement between the MLDs based on the depths of gradient maxima for density and c660. When comparing various hydrographically based algorithms, other investigators reported that inherent fluctuations of the mixed layer depth limit the accuracy of its determination to 20 m. Using this benchmark, we found a ~70% agreement between the best hydrographical-optical algorithm pairings. Copyright 2005 by the American Geophysical Union.
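The two families of MLD criteria being compared can be sketched minimally. The 0.125 kg/m³ density threshold is one common convention, not necessarily the study's, and the profiles below are synthetic illustrations.

```python
# Hedged sketch of two MLD ideas from the study: a hydrographic mixed layer
# depth from a density-threshold criterion, and an optical proxy taking the
# depth of the maximum vertical gradient in beam attenuation c660.
# Threshold value and profiles are illustrative assumptions.

def mld_density_threshold(depth, sigma, dsigma=0.125):
    """First depth where density exceeds the surface value by dsigma (kg/m^3)."""
    for z, s in zip(depth, sigma):
        if s - sigma[0] >= dsigma:
            return z
    return depth[-1]

def mld_c660_gradient(depth, c660):
    """Depth of the maximum |d(c660)/dz|: a particle-layer proxy for the MLD."""
    grads = [abs((c660[i + 1] - c660[i]) / (depth[i + 1] - depth[i]))
             for i in range(len(depth) - 1)]
    i_max = grads.index(max(grads))
    return 0.5 * (depth[i_max] + depth[i_max + 1])  # midpoint of the step

depth = [0, 10, 20, 30, 40, 50]                     # m
sigma = [25.0, 25.01, 25.02, 25.30, 25.60, 25.90]   # pycnocline near 25-30 m
c660 = [0.40, 0.40, 0.41, 0.55, 0.45, 0.42]         # scattering max at pycnocline
print(mld_density_threshold(depth, sigma), mld_c660_gradient(depth, c660))
```

On these synthetic profiles the two criteria land within one depth bin of each other, mirroring the first-order agreement the study reports between density and c660 gradient maxima.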
NASA Astrophysics Data System (ADS)
Sullivan, J. T.; McGee, T. J.; Leblanc, T.; Sumnicht, G. K.; Twigg, L. W.
2015-04-01
The main purpose of the NASA Goddard Space Flight Center TROPospheric OZone DIfferential Absorption Lidar (GSFC TROPOZ DIAL) is to measure the vertical distribution of tropospheric ozone for science investigations. Because of the important health and climate impacts of tropospheric ozone, it is imperative to quantify background photochemical and aloft ozone concentrations, especially during air quality episodes. To better characterize tropospheric ozone, the Tropospheric Ozone Lidar Network (TOLNet) has recently been developed, which currently consists of five different ozone DIAL instruments, including the TROPOZ. This paper addresses the necessary procedures to validate the TROPOZ retrieval algorithm and develops a primary standard for retrieval consistency and optimization within TOLNet. This paper is focused on ensuring the TROPOZ and future TOLNet algorithms are properly quantifying ozone concentrations and the following paper will focus on defining a systematic uncertainty analysis standard for all TOLNet instruments. Although this paper is used to optimize the TROPOZ retrieval, the methodology presented may be extended and applied to most other DIAL instruments, even if the atmospheric product of interest is not tropospheric ozone (e.g. temperature or water vapor). The analysis begins by computing synthetic lidar returns from actual TROPOZ lidar return signals in combination with a known ozone profile. From these synthetic signals, it is possible to explicitly determine retrieval algorithm biases from the known profile, thereby identifying any areas that may need refinement for a new operational version of the TROPOZ retrieval algorithm. A new vertical resolution scheme is presented, which was upgraded from a constant vertical resolution to a variable vertical resolution, in order to yield a statistical uncertainty of <10%. 
The optimized vertical resolution scheme retains the ability to resolve fluctuations in the known ozone profile and now allows near-field signals to be more appropriately smoothed. With these revisions, the optimized TROPOZ retrieval algorithm (TROPOZopt) retrieves ozone nearly 200 m closer to the surface. Also, as compared to the previous version of the retrieval, the TROPOZopt has reduced the mean profile bias by 3.5%, with large reductions in bias (nearly 15%) above 4.5 km. Finally, to ensure the TROPOZopt retrieval algorithm is robust enough to handle actual lidar return signals, a comparison is shown against four nearby ozonesonde measurements. The ozonesondes agree well with the retrieval and fall mostly within the TROPOZopt retrieval uncertainty bars. A final mean percent difference plot between the TROPOZopt and the ozonesondes indicates that the new operational retrieval is mostly within 10% of the ozonesonde measurement and that no systematic biases are present. The authors believe that this analysis has significantly added to the confidence in the TROPOZ instrument and provides a standard for current and future TOLNet algorithms.
Evaluation of six TPS algorithms in computing entrance and exit doses.
Tan, Yun I; Metwaly, Mohamed; Glegg, Martin; Baggarley, Shaun; Elliott, Alex
2014-05-08
Entrance and exit doses are commonly measured in in vivo dosimetry for comparison with expected values, usually generated by the treatment planning system (TPS), to verify accuracy of treatment delivery. This report aims to evaluate the accuracy of six TPS algorithms in computing entrance and exit doses for a 6 MV beam. The algorithms tested were: pencil beam convolution (Eclipse PBC), analytical anisotropic algorithm (Eclipse AAA), AcurosXB (Eclipse AXB), FFT convolution (XiO Convolution), multigrid superposition (XiO Superposition), and Monte Carlo photon (Monaco MC). Measurements with ionization chamber (IC) and diode detector in water phantoms were used as a reference. Comparisons were done in terms of central axis point dose, 1D relative profiles, and 2D absolute gamma analysis. Entrance doses computed by all TPS algorithms agreed to within 2% of the measured values. Exit doses computed by XiO Convolution, XiO Superposition, Eclipse AXB, and Monaco MC agreed with the IC measured doses to within 2%-3%. Meanwhile, Eclipse PBC and Eclipse AAA computed exit doses were higher than the IC measured doses by up to 5.3% and 4.8%, respectively. Both algorithms assume that full backscatter exists even at the exit level, leading to an overestimation of exit doses. Despite good agreements at the central axis for Eclipse AXB and Monaco MC, 1D relative comparisons showed profiles mismatched at depths beyond 11.5 cm. Overall, the 2D absolute gamma (3%/3 mm) pass rates were better for Monaco MC, while Eclipse AXB failed mostly at the outer 20% of the field area. The findings of this study serve as a useful baseline for the implementation of entrance and exit in vivo dosimetry in clinical departments utilizing any of these six common TPS algorithms for reference comparison.
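The gamma analysis used above to compare computed and measured dose profiles can be illustrated with a minimal 1-D version. This brute-force sketch with a 3%/3 mm criterion is a simplification for illustration, not the clinical software's implementation.

```python
# Hedged sketch of 1-D gamma analysis (3% dose difference / 3 mm
# distance-to-agreement) for comparing a computed dose profile against a
# measured reference; profiles below are synthetic illustrations.

def gamma_index(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta=3.0):
    """Per-reference-point gamma: minimum over all evaluated points of the
    combined dose-difference / distance-to-agreement metric. The dose
    difference is normalized to the reference maximum (global gamma)."""
    d_max = max(d_ref)
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        g = min(((xe - xr) ** 2 / dta ** 2 +
                 (de - dr) ** 2 / (dd * d_max) ** 2) ** 0.5
                for xe, de in zip(x_eval, d_eval))
        gammas.append(g)
    return gammas

def pass_rate(gammas):
    """Fraction of points with gamma <= 1 (the usual pass criterion)."""
    return sum(g <= 1.0 for g in gammas) / len(gammas)

x = [0.0, 1.0, 2.0, 3.0, 4.0]                # positions, mm
measured = [1.00, 0.98, 0.90, 0.60, 0.20]    # relative dose (reference)
computed = [1.00, 0.99, 0.91, 0.62, 0.21]    # within 3%/3 mm everywhere
print(pass_rate(gamma_index(x, measured, x, computed)))
```

A real 2-D analysis runs the same minimization over a dose plane; the pass rate is then reported per field, as in the comparison above.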
NASA Astrophysics Data System (ADS)
Kurugol, Sila; Dy, Jennifer G.; Rajadhyaksha, Milind; Gossage, Kirk W.; Weissmann, Jesse; Brooks, Dana H.
2011-03-01
The examination of the dermis/epidermis junction (DEJ) is clinically important for skin cancer diagnosis. Reflectance confocal microscopy (RCM) is an emerging tool for detection of skin cancers in vivo. However, visual localization of the DEJ in RCM images, with high accuracy and repeatability, is challenging, especially in fair skin, due to low contrast, heterogeneous structure and high inter- and intra-subject variability. We recently proposed a semi-automated algorithm to localize the DEJ in z-stacks of RCM images of fair skin, based on feature segmentation and classification. Here we extend the algorithm to dark skin. The extended algorithm first decides the skin type and then applies the appropriate DEJ localization method. In dark skin, strong backscatter from the pigment melanin causes the basal cells above the DEJ to appear with high contrast. To locate those high contrast regions, the algorithm operates on small tiles (regions) and finds the peaks of the smoothed average intensity depth profile of each tile. However, for some tiles, due to heterogeneity, multiple peaks in the depth profile exist and the strongest peak might not be the basal layer peak. To select the correct peak, basal cells are represented with a vector of texture features. The peak with most similar features to this feature vector is selected. The results show that the algorithm detected the skin types correctly for all 17 stacks tested (8 fair, 9 dark). The DEJ detection algorithm achieved an average distance from the ground truth DEJ surface of around 4.7μm for dark skin and around 7-14μm for fair skin.
An Evaluation of a Flight Deck Interval Management Algorithm Including Delayed Target Trajectories
NASA Technical Reports Server (NTRS)
Swieringa, Kurt A.; Underwood, Matthew C.; Barmore, Bryan; Leonard, Robert D.
2014-01-01
NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature air traffic management technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools enabling precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise in-trail spacing. During high demand operations, TMA-TM may produce a schedule and corresponding aircraft trajectories that include delay to ensure that a particular aircraft will be properly spaced from other aircraft at each schedule waypoint. These delayed trajectories are not communicated to the automation onboard the aircraft, forcing the IM aircraft to use the published speeds to estimate the target aircraft's time of arrival. As a result, the aircraft performing IM operations may follow an aircraft whose TMA-TM generated trajectories have substantial speed deviations from the speeds expected by the spacing algorithm. Previous spacing algorithms were not designed to handle this magnitude of uncertainty. A simulation was conducted to examine a modified spacing algorithm with the ability to follow aircraft flying delayed trajectories. The simulation investigated the use of the new spacing algorithm with various delayed speed profiles and wind conditions, as well as several other variables designed to simulate real-life variability. The results and conclusions of this study indicate that the new spacing algorithm generally exhibits good performance; however, some types of target aircraft speed profiles can cause the spacing algorithm to command less than optimal speed control behavior.
Accurate 3-D Profile Extraction of Skull Bone Using an Ultrasound Matrix Array.
Hajian, Mehdi; Gaspar, Robert; Maev, Roman Gr
2017-12-01
The present study investigates the feasibility, accuracy, and precision of 3-D profile extraction of the human skull bone using a custom-designed ultrasound matrix transducer in pulse-echo mode. Due to the attenuative scattering properties of the skull, the backscattered echoes from the inner surface of the skull are severely degraded, attenuated, and at some points overlapped. Furthermore, the speed of sound (SOS) in the skull varies significantly in different zones and also from case to case; if considered constant, it introduces significant error to the profile measurement. A new method for simultaneous estimation of the skull profile and the sound speed value is presented. The proposed method is a twofold procedure: first, the arrival times of the backscattered echoes from the skull bone are estimated using multi-lag phase delay (MLPD) and modified space alternating generalized expectation maximization (SAGE) algorithms. Next, these arrival times are fed into an adaptive sound speed estimation algorithm to compute the optimal SOS value and, subsequently, the skull bone thickness. For quantitative evaluation, the estimated bone phantom thicknesses were compared with mechanical measurements. The accuracies of the bone thickness measurements using the MLPD and modified SAGE algorithms combined with adaptive SOS estimation were 7.93% and 4.21%, respectively. These values were 14.44% and 10.75% for the autocorrelation and cross-correlation methods. Additionally, Bland-Altman plots showed that the modified SAGE outperformed the other methods, with -0.35 and 0.44 mm limits of agreement. No systematic error that could be related to the skull bone thickness was observed for this method.
Doppler Radar Profiler for Launch Winds at the Kennedy Space Center (Phase 1a)
NASA Technical Reports Server (NTRS)
Murri, Daniel G.
2011-01-01
The NASA Engineering and Safety Center (NESC) received a request from the NASA Technical Fellow for Flight Mechanics at Langley Research Center (LaRC) to develop a database from multiple Doppler radar wind profiler (DRWP) sources and to develop data processing algorithms to construct high temporal resolution DRWP wind profiles for day-of-launch (DOL) vehicle assessment. This document contains the outcome of Phase 1a of the assessment, including Findings, Observations, NESC Recommendations, and Lessons Learned.
NASA Technical Reports Server (NTRS)
Stephens, J. B.
1976-01-01
The National Aeronautics and Space Administration/Marshall Space Flight Center multilayer diffusion algorithms have been specialized for the prediction of the surface impact for the dispersive transport of the exhaust effluents from the launch of a Delta-Thor vehicle. This specialization permits these transport predictions to be made at the launch range in real time so that the effluent monitoring teams can optimize their monitoring grids. Basically, the data reduction routine requires only the meteorology profiles for the thermodynamics and kinematics of the atmosphere as an input. These profiles are graphed along with the resulting exhaust cloud rise history, the centerline concentrations and dosages, and the hydrogen chloride isopleths.
Uchida, Y.; Takada, E.; Fujisaki, A.; Isobe, M.; Shinohara, K.; Tomita, H.; Kawarabayashi, J.; Iguchi, T.
2014-01-01
Neutron and γ-ray (n-γ) discrimination with a digital signal processing system has been used to measure the neutron emission profile in magnetic confinement fusion devices. However, the sampling rate must be set low to extend the measurement time because memory storage is limited, and the resulting time jitter degrades discrimination quality. As described in this paper, a new charge comparison method was developed. Furthermore, an automatic n-γ discrimination method was examined using a probabilistic approach. Analysis results were evaluated using the figure of merit. Results show that the discrimination quality was improved. Automatic discrimination was applied using the EM algorithm and the k-means algorithm. PMID:25430297
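A minimal sketch of charge-comparison discrimination followed by automatic two-cluster k-means can make the pipeline concrete. The pulse model, gate positions, and PSD values are synthetic illustrations, and the paper's EM-algorithm variant is not reproduced here.

```python
# Hedged sketch: charge-comparison pulse shape discrimination (PSD) plus
# automatic threshold selection with 1-D two-cluster k-means. All pulses
# and PSD values are synthetic assumptions, not data from the paper.

def psd_ratio(pulse, short_gate, long_gate):
    """Charge-comparison parameter: tail charge over total charge."""
    total = sum(pulse[:long_gate])
    tail = sum(pulse[short_gate:long_gate])
    return tail / total

def kmeans_1d(values, iters=20):
    """Two-cluster 1-D k-means; returns the midpoint decision threshold."""
    c0, c1 = min(values), max(values)
    for _ in range(iters):
        g0 = [v for v in values if abs(v - c0) <= abs(v - c1)]
        g1 = [v for v in values if abs(v - c0) > abs(v - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return 0.5 * (c0 + c1)

# A fast-decaying (gamma-like) synthetic pulse: small tail fraction.
r = psd_ratio([10, 5, 2, 1, 1, 1], short_gate=2, long_gate=6)

# Synthetic PSD populations: gammas (low tail) vs. neutrons (slow decay).
gammas = [0.10, 0.12, 0.11, 0.09, 0.13]
neutrons = [0.30, 0.28, 0.32, 0.31, 0.29]
thr = kmeans_1d(gammas + neutrons)   # automatic n-gamma threshold
print(r, thr)
```

Events with a PSD ratio above the learned threshold would be classified as neutrons; the figure of merit then quantifies how well the two clusters separate.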
Ho, Derek; Kim, Sanghoon; Drake, Tyler K.; Eldridge, Will J.; Wax, Adam
2014-01-01
We present a fast approach for size determination of spherical scatterers using the continuous wavelet transform of the angular light scattering profile to address the computational limitations of previously developed sizing techniques. The potential accuracy, speed, and robustness of the algorithm were determined in simulated models of scattering by polystyrene beads and cells. The algorithm was tested experimentally on angular light scattering data from polystyrene bead phantoms and MCF-7 breast cancer cells using a 2D a/LCI system. Theoretical sizing of simulated profiles of beads and cells produced strong fits between calculated and actual size (r2 = 0.9969 and r2 = 0.9979 respectively), and experimental size determinations were accurate to within one micron. PMID:25360350
Digital Oblique Remote Ionospheric Sensing (DORIS) Program Development
1992-04-01
waveforms. A new autoscaling technique for oblique ionograms with the ARTIST software (Reinisch and Huang, 1983; Gamache et al., 1985), which is... The development and performance of a complete oblique ionogram autoscaling and inversion algorithm is presented. The inversion algorithm uses a three... OTH radar. Subject terms: Oblique Propagation; Oblique Ionogram Autoscaling; Electron Density Profile Inversion; Simulated
Social Circles Detection from Ego Network and Profile Information
2014-12-19
The algorithm used to infer k-clique communities is exponential, which makes this technique unfeasible when treating egonets with a large number of users...atic when considering RBMs. This inconvenience was solved by implementing a sparsity treatment with the RBM algorithm. (ii) The ground truth was
A simple algorithm for large-scale mapping of evergreen forests in tropical America, Africa and Asia
Xiangming Xiao; Chandrashekhar M. Biradar; Christina Czarnecki; Tunrayo Alabi; Michael Keller
2009-01-01
The areal extent and spatial distribution of evergreen forests in the tropical zones are important for the study of climate, carbon cycle and biodiversity. However, frequent cloud cover in the tropical regions makes mapping evergreen forests a challenging task. In this study we developed a simple and novel mapping algorithm that is based on the temporal profile...
An improved algorithm for the modeling of vapor flow in heat pipes
NASA Technical Reports Server (NTRS)
Tower, Leonard K.; Hainley, Donald C.
1989-01-01
A heat pipe vapor flow algorithm suitable for use in codes on microcomputers is presented. The incompressible heat pipe vapor flow studies of Busse are extended to incorporate compressibility effects. The Busse velocity profile factor is treated as a function of temperature and pressure. The assumption of a uniform saturated vapor temperature determined by the local pressure at each cross section of the pipe is not made. Instead, a mean vapor temperature, defined by an energy integral, is determined in the course of the solution in addition to the pressure, saturation temperature at the wall, and the Busse velocity profile factor. For alkali metal working fluids, local species equilibrium is assumed. Temperature and pressure profiles are presented for several cases involving sodium heat pipes. An example for a heat pipe with an adiabatic section and two evaporators in sequence illustrates the ability to handle axially varying heat input. A sonic limit plot for a short evaporator falls between curves for the Busse and Levy inviscid sonic limits.
Gilles, L; Ellerbroek, B L
2010-11-01
Real-time turbulence profiling is necessary to tune tomographic wavefront reconstruction algorithms for wide-field adaptive optics (AO) systems on large to extremely large telescopes, and to perform a variety of image post-processing tasks involving point-spread function reconstruction. This paper describes a computationally efficient and accurate numerical technique inspired by the slope detection and ranging (SLODAR) method to perform this task in real time from properly selected Shack-Hartmann wavefront sensor measurements accumulated over a few hundred frames from a pair of laser guide stars, thus eliminating the need for an additional instrument. The algorithm is introduced, followed by a theoretical influence function analysis illustrating its impulse response to high-resolution turbulence profiles. Finally, its performance is assessed in the context of the Thirty Meter Telescope multi-conjugate adaptive optics system via end-to-end wave optics Monte Carlo simulations.
NASA Technical Reports Server (NTRS)
Phinney, D. E. (Principal Investigator)
1980-01-01
An algorithm for estimating spectral crop calendar shifts of spring small grains was applied to 1978 spring wheat fields. The algorithm provides estimates of the date of peak spectral response by maximizing the cross correlation between a reference profile and the observed multitemporal pattern of Kauth-Thomas greenness for a field. A methodology was developed for estimation of crop development stage from the date of peak spectral response. Evaluation studies showed that the algorithm provided stable estimates with no geographical bias. Crop development stage estimates had a root mean square error near 10 days. The algorithm was recommended for comparative testing against other models which are candidates for use in AgRISTARS experiments.
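The peak-cross-correlation idea behind such a crop calendar shift estimate can be sketched as follows; the greenness profiles are synthetic and the discrete-shift search is a simplification of the algorithm's fitting against a reference profile.

```python
# Hedged sketch: estimate a crop calendar shift by sliding a reference
# greenness profile over the observed multitemporal profile and taking the
# shift that maximizes their cross correlation; the peak of the shifted
# reference then dates the peak spectral response. Profiles are synthetic.

def best_shift(reference, observed, max_shift):
    """Shift (in time steps) of the reference that maximizes correlation."""
    def corr_at(shift):
        pairs = [(reference[t - shift], observed[t])
                 for t in range(len(observed))
                 if 0 <= t - shift < len(reference)]
        return sum(r * o for r, o in pairs)
    return max(range(-max_shift, max_shift + 1), key=corr_at)

reference = [0, 1, 3, 6, 3, 1, 0]     # greenness profile, peak at step 3
observed = [0, 0, 0, 1, 3, 6, 3]      # same season, delayed by 2 steps
shift = best_shift(reference, observed, 3)
peak_date = reference.index(max(reference)) + shift
print(shift, peak_date)
```

With an acquisition cadence attached to the time steps, `peak_date` converts directly into a calendar date of peak spectral response, from which a crop development stage can be estimated.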
Retrieval of ozone profiles from OMPS limb scattering observations
NASA Astrophysics Data System (ADS)
Arosio, Carlo; Rozanov, Alexei; Malinina, Elizaveta; Eichmann, Kai-Uwe; von Clarmann, Thomas; Burrows, John P.
2018-04-01
This study describes a retrieval algorithm developed at the University of Bremen to obtain vertical profiles of ozone from limb observations performed by the Ozone Mapper and Profiler Suite (OMPS). This algorithm is based on the technique originally developed for use with data from the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY) instrument. As both instruments make limb measurements of the scattered solar radiation in the ultraviolet (UV) and visible (Vis) spectral ranges, an underlying objective of the study is to obtain consolidated and consistent ozone profiles from the two satellites and to produce a combined data set. The retrieval algorithm uses radiances in the UV and Vis wavelength ranges normalized to the radiance at an upper tangent height to obtain ozone concentrations in the altitude range of 12-60 km. Measurements at altitudes contaminated by clouds in the instrument field of view are identified and filtered out. An independent aerosol retrieval is performed beforehand and its results are used to account for the stratospheric aerosol load in the ozone inversion. The typical vertical resolution of the retrieved profiles varies from ˜ 2.5 km at lower altitudes ( < 30 km) to ˜ 1.5 km (about 45 km) and becomes coarser at upper altitudes. The retrieval errors resulting from the measurement noise are estimated to be 1-4 % above 25 km, increasing to 10-30 % in the upper troposphere. OMPS data are processed for the whole of 2016. The results are compared with the NASA product and validated against profiles derived from passive satellite observations or measured in situ by balloon-borne sondes. Between 20 and 60 km, OMPS ozone profiles typically agree with data from the Microwave Limb Sounder (MLS) v4.2 within 5-10 %, whereas in the lower altitude range the bias becomes larger, especially in the tropics. 
The comparison of OMPS profiles with ozonesonde measurements shows differences within ±5 % between 13 and 30 km at northern middle and high latitudes. At southern middle and high latitudes, an agreement within 5-7 % is also achieved in the same altitude range. An unexpected bias of approximately 10-20 % is detected in the lower tropical stratosphere. The processing of the 2013 data set using the same retrieval settings and its validation against ozonesondes reveals a much smaller bias; a possible reason for this behaviour is discussed.
Runway Scheduling for Charlotte Douglas International Airport
NASA Technical Reports Server (NTRS)
Malik, Waqar A.; Lee, Hanbong; Jung, Yoon C.
2016-01-01
This paper describes the runway scheduler that was used in the 2014 SARDA human-in-the-loop simulations for CLT. The algorithm considers multiple runways and computes optimal runway times for departures and arrivals. In this paper, we plan to run additional simulations of the standalone MRS algorithm and compare its performance against an FCFS heuristic in which aircraft are assigned runway slots in order of their positions in the FCFS sequence. Several traffic scenarios corresponding to current-day traffic levels and demand profiles will be generated. We also plan to examine the effect of increases in traffic level (1.2x and 1.5x) and observe trends in algorithm performance.
NASA Astrophysics Data System (ADS)
Banakh, V. A.; Marakasov, D. A.
2008-04-01
An algorithm for the wind profile recovery from spatiotemporal spectra of a laser beam reflected in a turbulent atmosphere is presented. The cases of a spherical wave incident on a diffuse reflector of finite size and a spatially limited beam reflected from an infinite random surface are considered.
Reconstructing surface wave profiles from reflected acoustic pulses using multiple receivers.
Walstead, Sean P; Deane, Grant B
2014-08-01
Surface wave shapes are determined by analyzing underwater reflected acoustic signals collected at multiple receivers. The transmitted signals are of nominal frequency 300 kHz and are reflected off surface gravity waves that are paddle-generated in a wave tank. An inverse processing algorithm reconstructs 50 surface wave shapes over a length span of 2.10 m. The inverse scheme uses a broadband forward scattering model based on Kirchhoff's diffraction formula to determine wave shapes. The surface reconstruction algorithm is self-starting in that source and receiver geometry and initial estimates of wave shape are determined from the same acoustic signals used in the inverse processing. A high speed camera provides ground-truth measurements of the surface wave field for comparison with the acoustically derived surface waves. Within Fresnel zone regions the statistical confidence of the inversely optimized surface profile exceeds that of the camera profile. Reconstructed surfaces are accurate to a resolution of about a quarter-wavelength of the acoustic pulse only within Fresnel zones associated with each source and receiver pair. Multiple isolated Fresnel zones from multiple receivers extend the spatial extent of accurate surface reconstruction while overlapping Fresnel zones increase confidence in the optimized profiles there.
Zhang, Ao; Tian, Suyan
2018-05-01
Pathway-based feature selection algorithms, which utilize the biological information contained in pathways to guide which features/genes should be selected, have evolved quickly and become widespread in the field of bioinformatics. Based on how the pathway information is incorporated, we classify pathway-based feature selection algorithms into three major categories: penalty, stepwise forward, and weighting. Compared to the first two categories, the weighting methods have been underutilized even though they are usually the simplest ones. In this article, we constructed three different gene-connectivity-based weights for each gene and then conducted feature selection upon the resulting weighted gene expression profiles. Using both simulations and a real-world application, we have demonstrated that when data-driven connectivity information constructed from the data of the specific disease under study is considered, the resulting weighted gene expression profiles slightly outperform the original expression profiles. In summary, a big challenge faced by the weighting method is how to estimate pathway knowledge-based weights more accurately and precisely. Only when this issue is successfully resolved will wide utilization of the weighting methods become possible. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
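The weighting idea above can be sketched in a few lines. This is a minimal illustration, not the authors' code; the connectivity weights here are simply normalized node degrees, an assumed choice:

```python
import numpy as np

def weight_expression(expr, connectivity):
    """Scale each gene's expression profile by a connectivity-derived weight.

    expr: (n_samples, n_genes) expression matrix
    connectivity: (n_genes,) e.g. degree of each gene in a pathway graph
    """
    # Normalize connectivity into weights in (0, 1]; genes with more
    # pathway connections receive larger weights (one simple choice).
    w = connectivity / connectivity.max()
    return expr * w  # broadcasts across samples

expr = np.array([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])
degree = np.array([1.0, 2.0, 4.0])   # hypothetical pathway-graph degrees
weighted = weight_expression(expr, degree)
```

Feature selection would then be run on `weighted` instead of `expr`, leaving the selection algorithm itself unchanged.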
Accurate reconstruction of the thermal conductivity depth profile in case hardened steel
NASA Astrophysics Data System (ADS)
Celorrio, Ricardo; Apiñaniz, Estibaliz; Mendioroz, Arantza; Salazar, Agustín; Mandelis, Andreas
2010-04-01
The problem of retrieving a nonhomogeneous thermal conductivity profile from photothermal radiometry data is addressed from the perspective of a stabilized least square fitting algorithm. We have implemented an inversion method with several improvements: (a) a renormalization of the experimental data that removes not only the instrumental factor but also the constants affecting the amplitude and the phase, (b) the introduction of a frequency weighting factor to balance the contribution of high and low frequencies in the inversion algorithm, (c) the simultaneous fitting of amplitude and phase data, balanced according to their experimental noises, (d) a modified Tikhonov regularization procedure to stabilize the inversion, and (e) automatic stopping of the iterative process by the Morozov discrepancy principle, according to the experimental noise, to avoid "overfitting" of the experimental data. We have tested this improved method by fitting theoretical data generated from a known conductivity profile. Finally, we have applied our method to real data obtained on a hardened stainless steel plate. The reconstructed in-depth thermal conductivity profile exhibits low dispersion, even at the deepest locations, and is in good anticorrelation with the hardness indentation test.
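Improvements (d) and (e) can be illustrated with a generic example. A minimal sketch, assuming a linear forward model A and a known noise level delta; the paper's photothermal forward model is considerably more involved:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min ||Ax - b||^2 + lam*||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def morozov_lambda(A, b, delta, lams):
    """Discrepancy principle: pick the largest lambda whose residual stays
    within the noise level delta (residual grows monotonically with lambda)."""
    best = lams[0]
    for lam in sorted(lams):
        x = tikhonov(A, b, lam)
        if np.linalg.norm(A @ x - b) <= delta:
            best = lam  # strongest admissible regularization so far
    return best

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))        # stand-in linear forward model
x_true = rng.standard_normal(10)
noise = 0.05 * rng.standard_normal(50)
b = A @ x_true + noise
delta = np.linalg.norm(noise)            # assumed-known noise level
lam = morozov_lambda(A, b, delta, np.logspace(-6, 2, 50))
x_est = tikhonov(A, b, lam)
```

Choosing the strongest regularization compatible with the noise level is what prevents the fit from chasing the noise, which is the "overfitting" the abstract refers to.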
NASA Technical Reports Server (NTRS)
Decker, Ryan; Barbre, Robert E., Jr.
2011-01-01
The impact of winds on space launch vehicles spans design, certification, and day-of-launch (DOL) steering commands, including (1) developing "knockdowns" of load indicators and (2) accounting for the temporal uncertainty of flight winds. Current practice uses databases from weather balloons, comprising discrete profiles and profile-pair datasets. Issues with this approach: (1) larger vehicles operate near design limits during ascent; (2) sample sizes are limited to about 150 discrete profiles per month and 110-217 seasonal 2.0- and 3.5-hour pairs; (3) balloon rise time (one hour) and drift (up to 100 n mi) limit representativeness. Advantages of the alternative approach using the Doppler Radar Wind Profiler (DRWP) are: (1) a larger sample size, (2) flexibility for assessing trajectory changes due to winds, and (3) better representation of flight winds.
Jothi, R; Mohanty, Sraban Kumar; Ojha, Aparajita
2016-04-01
Gene expression data clustering is an important biological process in DNA microarray analysis. Although there have been many clustering algorithms for gene expression analysis, finding a suitable and effective clustering algorithm is always a challenging problem due to the heterogeneous nature of gene profiles. Minimum Spanning Tree (MST) based clustering algorithms have been successfully employed to detect clusters of varying shapes and sizes. This paper proposes a novel clustering algorithm using eigenanalysis on a Minimum Spanning Tree based neighborhood graph (E-MST). As the MST of a set of points reflects the similarity of the points with their neighborhood, the proposed algorithm employs a similarity graph obtained from k′ rounds of MST (the k′-MST neighborhood graph). By studying the spectral properties of the similarity matrix obtained from the k′-MST graph, the proposed algorithm achieves improved clustering results. We demonstrate the efficacy of the proposed algorithm on 12 gene expression datasets. Experimental results show that the proposed algorithm performs better than standard clustering algorithms. Copyright © 2016 Elsevier Ltd. All rights reserved.
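The MST-clustering idea the paper builds on can be sketched as follows. This is the classic cut-the-longest-edges variant, not the spectral E-MST algorithm itself; SciPy's csgraph routines are assumed available:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components
from scipy.spatial.distance import cdist

def mst_clusters(points, k):
    """Classic MST clustering: build the MST of the pairwise-distance graph,
    remove the k-1 longest edges, and return the connected components."""
    d = cdist(points, points)
    mst = minimum_spanning_tree(d).toarray()
    edges = np.argwhere(mst > 0)
    weights = mst[mst > 0]          # same row-major order as argwhere
    # Drop the k-1 heaviest edges to split the tree into k subtrees.
    keep = np.argsort(weights)[: len(weights) - (k - 1)]
    adj = np.zeros_like(mst)
    for i in keep:
        r, c = edges[i]
        adj[r, c] = adj[c, r] = 1
    n_comp, labels = connected_components(adj, directed=False)
    return labels

pts = np.array([[0, 0], [0, 1], [1, 0],        # cluster A
                [10, 10], [10, 11], [11, 10]])  # cluster B
labels = mst_clusters(pts, 2)
```

Because only tree edges are considered, clusters of arbitrary shape are separated as long as the between-cluster edges are longer than the within-cluster ones, which is the property the E-MST similarity graph exploits.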
Rogasch, Julian Mm; Hofheinz, Frank; Lougovski, Alexandr; Furth, Christian; Ruf, Juri; Großer, Oliver S; Mohnike, Konrad; Hass, Peter; Walke, Mathias; Amthauer, Holger; Steffen, Ingo G
2014-12-01
F18-fluorodeoxyglucose positron-emission tomography (FDG-PET) reconstruction algorithms can have substantial influence on quantitative image data used, e.g., for therapy planning or monitoring in oncology. We analyzed radial activity concentration profiles of differently reconstructed FDG-PET images to determine the influence of varying signal-to-background ratios (SBRs) on the respective spatial resolution, activity concentration distribution, and quantification (standardized uptake value [SUV], metabolic tumor volume [MTV]). Measurements were performed on a Siemens Biograph mCT 64 using a cylindrical phantom containing four spheres (diameter, 30 to 70 mm) filled with F18-FDG applying three SBRs (SBR1, 16:1; SBR2, 6:1; SBR3, 2:1). Images were reconstructed employing six algorithms (filtered backprojection [FBP], FBP + time-of-flight analysis [FBP + TOF], 3D-ordered subset expectation maximization [3D-OSEM], 3D-OSEM + TOF, point spread function [PSF], PSF + TOF). Spatial resolution was determined by fitting the convolution of the object geometry with a Gaussian point spread function to radial activity concentration profiles. MTV delineation was performed using fixed thresholds and semiautomatic background-adapted thresholding (ROVER, ABX, Radeberg, Germany). The pairwise Wilcoxon test revealed significantly higher spatial resolutions for PSF + TOF (up to 4.0 mm) compared to PSF, FBP, FBP + TOF, 3D-OSEM, and 3D-OSEM + TOF at all SBRs (each P < 0.05) with the highest differences for SBR1 decreasing to the lowest for SBR3. Edge elevations in radial activity profiles (Gibbs artifacts) were highest for PSF and PSF + TOF declining with decreasing SBR (PSF + TOF largest sphere; SBR1, 6.3%; SBR3, 2.7%). These artifacts induce substantial SUVmax overestimation compared to the reference SUV for PSF algorithms at SBR1 and SBR2 leading to substantial MTV underestimation in threshold-based segmentation. 
In contrast, both PSF algorithms provided the lowest deviation of SUVmean from reference SUV at SBR1 and SBR2. At high contrast, the PSF algorithms provided the highest spatial resolution and lowest SUVmean deviation from the reference SUV. In contrast, both algorithms showed the highest deviations in SUVmax and threshold-based MTV definition. At low contrast, all investigated reconstruction algorithms performed approximately equally. The use of PSF algorithms for quantitative PET data, e.g., for target volume definition or in serial PET studies, should be performed with caution - especially if comparing SUV of lesions with high and low contrasts.
Estimating the D-Region Ionospheric Electron Density Profile Using VLF Narrowband Transmitters
NASA Astrophysics Data System (ADS)
Gross, N. C.; Cohen, M.
2016-12-01
The D-region ionospheric electron density profile plays an important role in many applications, including long-range and transionospheric communications, coupling between the lower atmosphere and the upper ionosphere, and estimation of very low frequency (VLF) wave propagation within the Earth-ionosphere waveguide. However, measuring the D-region electron density profile has long been a challenge. The D-region, at about 60 to 90 km in altitude, is higher than aircraft and balloons can fly but lower than satellites can orbit. Researchers have previously used VLF remote sensing techniques, based on either narrowband transmitters or sferics, to estimate the density profile, but these estimates typically cover a short time frame and a single propagation path. We report on an effort to construct estimates of the D-region electron density profile over multiple narrowband transmission paths for long periods of time. Measurements from multiple transmitters at multiple receivers are analyzed concurrently to minimize false solutions and improve accuracy. Likewise, time averaging is used to remove short transient noise at the receivers. The cornerstone of the algorithm is an artificial neural network (ANN) whose inputs are the received amplitude and phase of the narrowband transmitters and whose outputs are h' and beta, the two parameters of the commonly used exponential electron density profile. Training data for the ANN are generated using the Navy's Long-Wavelength Propagation Capability (LWPC) model. Results show the algorithm performs well under smooth ionospheric conditions and when proper geometries for the transmitters and receivers are used.
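The ANN regression step can be sketched with a toy stand-in. Here synthetic data replace the LWPC-generated training set, and a small hand-rolled network maps amplitude/phase features to two outputs standing in for h′ and β; all names, dimensions, and the target function are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for LWPC-generated training data: inputs are received
# amplitude/phase features over several paths, outputs stand in for (h', beta).
X = rng.standard_normal((200, 6))
W_true = rng.standard_normal((6, 2))
Y = np.tanh(X @ W_true)                  # arbitrary smooth target mapping

# One-hidden-layer network trained with plain gradient descent.
W1 = 0.1 * rng.standard_normal((6, 16))
W2 = 0.1 * rng.standard_normal((16, 2))
lr = 0.05

def forward(X):
    H = np.tanh(X @ W1)
    return H, H @ W2

_, pred0 = forward(X)
loss0 = np.mean((pred0 - Y) ** 2)
for _ in range(500):
    H, pred = forward(X)
    grad_out = 2 * (pred - Y) / len(X)   # dLoss/dpred
    gW2 = H.T @ grad_out
    gH = grad_out @ W2.T
    gW1 = X.T @ (gH * (1 - H ** 2))      # tanh' = 1 - tanh^2
    W1 -= lr * gW1
    W2 -= lr * gW2
_, pred1 = forward(X)
loss1 = np.mean((pred1 - Y) ** 2)
```

In the actual system, analyzing several transmitter-receiver paths at once enlarges the input vector, which is what helps reject false (h′, β) solutions.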
Regression Analysis of Long-term Profile Ozone Data Set from BUV Instruments
NASA Technical Reports Server (NTRS)
Frith, Stacey; Taylor, Steve; DeLand, Matt; Ahn, Chang-Woo; Stolarski, Richard S.
2005-01-01
We have produced a profile merged ozone data set (MOD) based on the SBUV/SBUV2 series of nadir-viewing satellite backscatter instruments, covering the period from November 1978 - December 2003. In 2004, data from the Nimbus 7 SBUV and NOAA 9, 11, and 16 SBUV/2 instruments were reprocessed using the Version 8 (V8) algorithm and most recent calibrations. More recently, data from the Nimbus 4 BUV instrument, which operated from 1970 - 1977, were also reprocessed using the V8 algorithm. As part of the V8 profile calibration, the Nimbus 7 and NOAA 9 (1993-1997 only) instrument calibrations have been adjusted to match the NOAA 11 calibration, which was established from comparisons with SSBUV shuttle flight data. Given the level of agreement between the data sets, we simply average the ozone values during periods of instrument overlap to produce the MOD profile data set. We use statistical time-series analysis of the MOD profile data set (1978-2003) to estimate the change in profile ozone due to changing stratospheric chlorine levels. The Nimbus 4 BUV data offer an opportunity to test the physical properties of our statistical model. We extrapolate our statistical model fit backwards in time and compare to the Nimbus 4 data. We compare the statistics of the residuals from the fit for the Nimbus 4 period to those obtained from the 1978-2003 period over which the statistical model coefficients were estimated.
Inferring Gene Regulatory Networks by Singular Value Decomposition and Gravitation Field Algorithm
Zheng, Ming; Wu, Jia-nan; Huang, Yan-xin; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang
2012-01-01
Reconstruction of gene regulatory networks (GRNs) is of utmost interest and has become a challenging computational problem in systems biology. Every existing inference algorithm based on gene expression profiles has its own advantages and disadvantages; in particular, none of the previous algorithms is sufficiently effective and efficient. In this work, we propose a novel inference algorithm for gene expression data based on a differential equation model. The algorithm combines two methods for inferring GRNs. First, singular value decomposition is used to decompose the gene expression data, determine the algorithm's solution space, and obtain the full family of candidate GRN solutions. Within this generated family of candidate solutions, a modified gravitation field algorithm then infers the GRN by optimizing the criteria of the differential equation model and searching for the best network structure. The proposed algorithm is validated on both a simulated scale-free network and a real benchmark gene regulatory network from a network database. Both the Bayesian method and the traditional differential equation model were also used to infer GRNs, and their results were compared with those of the proposed algorithm; a genetic algorithm and simulated annealing were likewise used to evaluate the gravitation field algorithm. The cross-validation results confirm the effectiveness of our algorithm, which significantly outperforms previous algorithms. PMID:23226565
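The role of the SVD in defining the candidate-solution family can be illustrated for a linear model dx/dt = Ax, an assumed simplification of the paper's differential equation model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear ODE model dx/dt = A x: given expression snapshots X (genes x times)
# and estimated derivatives dX, every row a_i of A must satisfy a_i X = dX_i.
n_genes, n_times = 5, 4          # fewer time points than genes: underdetermined
A_true = rng.standard_normal((n_genes, n_genes))
X = rng.standard_normal((n_genes, n_times))
dX = A_true @ X

# SVD-based minimum-norm solution; the right null space of X^T parameterizes
# the family of candidate networks over which a metaheuristic (here, the
# gravitation field algorithm) could then search.
A_min = (np.linalg.pinv(X.T) @ dX.T).T
U, s, Vt = np.linalg.svd(X.T)
null_basis = Vt[n_times:]        # directions that leave a_i X unchanged

# Any member of the candidate family still reproduces the data exactly.
A_other = A_min + np.outer(np.ones(n_genes), null_basis[0])
```

Because the data constrain A only up to this null space, a search procedure is needed to pick the biologically best candidate, which is where the gravitation field algorithm enters in the paper.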
An optimized resistor pattern for temperature gradient control in microfluidics
NASA Astrophysics Data System (ADS)
Selva, Bertrand; Marchalot, Julien; Jullien, Marie-Caroline
2009-06-01
In this paper, we demonstrate the possibility of generating high temperature gradients with a linear temperature profile when heating is provided in situ. Thanks to improved optimization algorithms, the shape of the resistors constituting the heating source is optimized by applying the genetic algorithm NSGA-II (non-dominated sorting genetic algorithm II) (Deb et al 2002 IEEE Trans. Evol. Comput. 6 2). Experimental validation of the linear temperature profile within the cavity is carried out using a thermally sensitive fluorophore, Rhodamine B (Ross et al 2001 Anal. Chem. 73 4117-23, Erickson et al 2003 Lab Chip 3 141-9). The high level of agreement between experimental and numerical results validates the accuracy of this method for generating highly controlled temperature profiles. In the field of actuation, such a device is of potential interest since it allows bubbles or droplets to be driven by means of thermocapillary effects (Baroud et al 2007 Phys. Rev. E 75 046302). Digital microfluidics is a critical area in the field of microfluidics (Dreyfus et al 2003 Phys. Rev. Lett. 90 14) as well as in so-called lab-on-a-chip technology. Through an example, we demonstrate the large application potential of this technique by handling a single bubble driven along a cavity using simple and tunable embedded resistors.
Mandal, Abhijit; Ram, Chhape; Mourya, Ankur; Singh, Navin
2017-01-01
The aim was to establish trends in the estimation error of dose calculated by the anisotropic analytical algorithm (AAA) with respect to dose measured by thermoluminescent dosimeters (TLDs) in air-water heterogeneity for small-field-size photon beams. TLDs were irradiated along the central axis of the photon beam in four different solid water phantom geometries using three small-field-size single beams. The depth dose profiles were estimated using the AAA calculation model for each field size, and the estimated and measured depth dose profiles were compared. The overestimation (OE) within the air cavity was dependent on field size (f) and distance (x) from the solid water-air interface and was formulated as OE = -(0.63 f + 9.40) x^2 + (-2.73 f + 58.11) x + (0.06 f^2 - 1.42 f + 15.67). In the post-cavity region, the point adjacent to the interface and points distal to it also depend on field size, with OE = 0.42 f^2 - 8.17 f + 71.63 and OE = 0.84 f^2 - 1.56 f + 17.57, respectively. The trend of the estimation error of the AAA dose calculation algorithm with respect to the measured values has thus been formulated throughout the radiation path length along the central axis of a 6 MV photon beam in an air-water heterogeneity combination for small-field-size beams generated by a 6 MV linear accelerator.
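The fitted equations above can be written directly as functions; the units of f and x follow the paper's convention, which the abstract does not state:

```python
def oe_in_cavity(f, x):
    """Overestimation (%) inside the air cavity as a function of field size f
    and distance x from the solid water-air interface (fitted equation from
    the abstract)."""
    return (-(0.63 * f + 9.40) * x**2
            + (-2.73 * f + 58.11) * x
            + (0.06 * f**2 - 1.42 * f + 15.67))

def oe_postcavity_adjacent(f):
    """OE (%) at the point adjacent to the interface, post-cavity."""
    return 0.42 * f**2 - 8.17 * f + 71.63

def oe_postcavity_distal(f):
    """OE (%) at points distal to the interface, post-cavity."""
    return 0.84 * f**2 - 1.56 * f + 17.57
```

Such closed-form trend functions allow the AAA error to be estimated at any point along the beam axis without repeating the TLD measurements.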
Halo mass and weak galaxy-galaxy lensing profiles in rescaled cosmological N-body simulations
NASA Astrophysics Data System (ADS)
Renneby, Malin; Hilbert, Stefan; Angulo, Raúl E.
2018-05-01
We investigate 3D density and weak lensing profiles of dark matter haloes predicted by a cosmology-rescaling algorithm for N-body simulations. We extend the rescaling method of Angulo & White (2010) and Angulo & Hilbert (2015) to improve its performance on intra-halo scales by using models for the concentration-mass-redshift relation based on excursion set theory. The accuracy of the method is tested with numerical simulations carried out with different cosmological parameters. We find that predictions for median density profiles are more accurate than ˜5 % for haloes with masses of 1012.0 - 1014.5h-1 M⊙ for radii 0.05 < r/r200m < 0.5, and for cosmologies with Ωm ∈ [0.15, 0.40] and σ8 ∈ [0.6, 1.0]. For larger radii, 0.5 < r/r200m < 5, the accuracy degrades to ˜20 %, due to inaccurate modelling of the cosmological and redshift dependence of the splashback radius. For changes in cosmology allowed by current data, the residuals decrease to ≲ 2 % up to scales twice the virial radius. We illustrate the usefulness of the method by estimating the mean halo mass of a mock galaxy group sample. We find that the algorithm's accuracy is sufficient for current data. Improvements in the algorithm, particularly in the modelling of baryons, are likely required for interpreting future (dark energy task force stage IV) experiments.
A system for automatic aorta sections measurements on chest CT
NASA Astrophysics Data System (ADS)
Pfeffer, Yitzchak; Mayer, Arnaldo; Zholkover, Adi; Konen, Eli
2016-03-01
A new method is proposed for caliber measurement of the ascending aorta (AA) and descending aorta (DA). A key component of the method is the automatic detection of the carina, as an anatomical landmark around which an axial volume of interest (VOI) can be defined to observe the aortic caliber. For each slice in the VOI, a linear profile line connecting the AA with the DA is found by pattern matching on the underlying intensity profile. Next, the aortic center position is found using a Hough transform on the best linear segment candidate. Finally, region growing around the center provides an accurate segmentation and caliber measurement. We evaluated the algorithm on 113 sequential chest CT scans, with slice thickness of 0.75-3.75 mm, 90 of them with contrast agent injected. The algorithm success rate was computed as the percentage of scans in which the center of the AA was found. Automated measurements of AA caliber were compared with independent measurements by two experienced chest radiologists, comparing the absolute difference between the two radiologists with the absolute difference between the algorithm and each of the radiologists. Measurement stability was demonstrated by computing the STD of the absolute difference between the radiologists, and between the algorithm and the radiologists. Results: Success rates of 93% and 74% were achieved for contrast-injected and non-contrast cases, respectively. These results indicate that the algorithm can be robust to the large variability in image quality found in a real-world clinical setting. The average absolute difference between the algorithm and the radiologists was 1.85 mm, lower than the average absolute difference between the radiologists, which was 2.1 mm. The STD of the absolute difference between the algorithm and the radiologists was 1.5 mm vs 1.6 mm between the two radiologists. These results demonstrate the clinical relevance of the algorithm measurements.
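The final region-growing step can be sketched generically. A minimal 4-connected, intensity-threshold version on a synthetic bright disk; the paper's actual implementation details are not specified in the abstract:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """4-connected region growing: collect pixels whose intensity is within
    tol of the seed intensity, starting from seed (row, col)."""
    h, w = img.shape
    ref = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    q = deque([seed])
    mask[seed] = True
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(img[nr, nc]) - ref) <= tol):
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask

# Synthetic stand-in for an aorta cross-section: bright disk on dark background.
yy, xx = np.mgrid[:64, :64]
img = ((yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2).astype(float) * 100.0
mask = region_grow(img, (32, 32), tol=50.0)
```

Given such a mask, the caliber follows from the segmented area (e.g., the diameter of the equivalent circle), which is presumably what the measurement step computes.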
Murphy, Malia S Q; Hawken, Steven; Atkinson, Katherine M; Milburn, Jennifer; Pervin, Jesmin; Gravett, Courtney; Stringer, Jeffrey S A; Rahman, Anisur; Lackritz, Eve; Chakraborty, Pranesh; Wilson, Kumanan
2017-01-01
Background: Knowledge of gestational age (GA) is critical for guiding neonatal care and quantifying regional burdens of preterm birth. In settings where access to ultrasound dating is limited, postnatal estimates are frequently used despite the issues of accuracy associated with postnatal approaches. Newborn metabolic profiles are known to vary by severity of preterm birth. Recent work by our group and others has highlighted the accuracy of postnatal GA estimation algorithms derived from routinely collected newborn screening profiles. This protocol outlines the validation of a GA model originally developed in a North American cohort among international newborn cohorts. Methods: Our primary objective is to use blood spot samples collected from infants born in Zambia and Bangladesh to evaluate our algorithm's capacity to correctly classify GA within 1, 2, 3 and 4 weeks. Secondary objectives are to 1) determine the algorithm's accuracy in small-for-gestational-age and large-for-gestational-age infants, 2) determine its ability to correctly discriminate GA of newborns across dichotomous thresholds of preterm birth (≤34 weeks, <37 weeks GA) and 3) compare the relative performance of algorithms derived from newborn screening panels including all available analytes and those restricted to analyte subsets. The study population will consist of infants born to mothers already enrolled in one of two preterm birth cohorts in Lusaka, Zambia, and Matlab, Bangladesh. Dried blood spot samples will be collected and sent for analysis in Ontario, Canada, for model validation. Discussion: This study will determine the validity of a GA estimation algorithm across ethnically diverse infant populations and assess population specific variations in newborn metabolic profiles. PMID:29104765
NASA Technical Reports Server (NTRS)
Barbre, Robert E., Jr.
2015-01-01
This paper describes in detail the QC and splicing methodology for KSC's 50- and 915-MHz DRWP measurements that generates an extensive archive of vertically complete profiles from 0.20-18.45 km. The concurrent POR from each archive extends from April 2000 to December 2009. MSFC NE applies separate but similar QC processes to each of the 50- and 915-MHz DRWP archives. DRWP literature and data examination provide the basis for developing and applying the automated and manual QC processes on both archives. Depending on the month, the QC'ed 50- and 915-MHz DRWP archives retain 52-65% and 16-30% of the possible data, respectively. The 50- and 915-MHz DRWP QC archives retain 84-91% and 85-95%, respectively, of all the available data provided that data exist in the non- QC'ed archives. Next, MSFC NE applies an algorithm to splice concurrent measurements from both DRWP sources. Last, MSFC NE generates a composite profile from the (up to) five available spliced profiles to effectively characterize boundary layer winds and to utilize all possible 915-MHz DRWP measurements at each timestamp. During a given month, roughly 23,000-32,000 complete profiles exist from 0.25-18.45 km from the composite profiles' archive, and approximately 5,000- 27,000 complete profiles exist from an archive utilizing an individual 915-MHz DRWP. One can extract a variety of profile combinations (pairs, triplets, etc.) from this sample for a given application. The sample of vertically complete DRWP wind measurements not only gives launch vehicle customers greater confidence in loads and trajectory assessments versus using balloon output, but also provides flexibility to simulate different DOL situations across applicable altitudes. 
In addition to increasing sample size and providing more flexibility for DOL simulations in the vehicle design phase, the spliced DRWP database provides any upcoming launch vehicle program with the capability to utilize DRWP profiles on DOL to compute vehicle steering commands, provided the program applies the procedures that this report describes to new DRWP data on DOL. Decker et al. (2015) details how SLS is proposing to use DRWP data and splicing techniques on DOL. Although automation could enhance the current DOL 50-MHz DRWP QC process and could streamline any future DOL 915-MHz DRWP QC and splicing process, the DOL community would still require manual intervention to ensure that the vehicle only uses valid profiles. If a program desires to use high spatial resolution profiles, then the algorithm could randomly add high-frequency components to the DRWP profiles. The spliced DRWP database provides considerable flexibility in how one performs DOL simulations, and the algorithms that this report provides will assist the aerospace and atmospheric communities interested in utilizing the DRWP.
Use of treatment algorithms for depression.
Trivedi, Madhukar H; Fava, Maurizio; Marangell, Lauren B; Osser, David N; Shelton, Richard C
2006-01-01
Depression continues to be a treatment challenge for many physicians-psychiatrists and primary care physicians alike-in part because of the nature of the disorder, but also because of the wide variety of medications and other treatments available, each with a distinct efficacy and safety profile. One way of negotiating treatment decisions is to use treatment guidelines and algorithms. This Commentary, which appears in the September 2006 issue of The Journal of Clinical Psychiatry (2006;67:1458-1465), provides the primary care clinician with insight into the pros and cons of using treatment algorithms to guide the treatment of depression. -Larry Culpepper, M.D.
Blind color isolation for color-channel-based fringe pattern profilometry using digital projection
NASA Astrophysics Data System (ADS)
Hu, Yingsong; Xi, Jiangtao; Chicharo, Joe; Yang, Zongkai
2007-08-01
We present an algorithm for estimating the color demixing matrix based on the color fringe patterns captured from the reference plane or the surface of the object. The advantage of this algorithm is that it is a blind approach to calculating the demixing matrix, in the sense that no extra images are required for color calibration before performing profile measurement. Simulation and experimental results demonstrate that the proposed algorithm can significantly reduce the influence of color cross talk and at the same time improve the measurement accuracy of color-channel-based phase-shifting profilometry.
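The cross-talk model underlying the method can be sketched as a linear mixing of the ideal color channels. The blind estimation of the matrix is the paper's contribution; shown here is only the demixing step, with an illustrative mixing matrix:

```python
import numpy as np

# Cross-talk model: each captured RGB pixel is M @ ideal_channels, where the
# off-diagonal entries of M describe leakage between projector/camera
# channels. Given an estimate of M (the paper estimates it blindly from the
# fringe patterns themselves), the ideal channels are recovered by applying
# the demixing matrix M^-1.
M = np.array([[0.90, 0.08, 0.02],     # illustrative mixing matrix
              [0.10, 0.85, 0.05],
              [0.03, 0.07, 0.90]])

ideal = np.array([0.8, 0.2, 0.5])     # true per-channel fringe intensities
captured = M @ ideal                  # what the camera records
recovered = np.linalg.inv(M) @ captured
```

Applying the same demixing per pixel restores channel separation before the phase-shifting computation, which is why accurate estimation of M directly improves the measured profile.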
NASA Technical Reports Server (NTRS)
Remsberg, E. E.; Marshall, B. T.; Garcia-Comas, M.; Krueger, D.; Lingenfelser, G. S.; Martin-Torres, J.; Mlynczak, M. G.; Russell, J. M., III; Smith, A. K.; Zhao, Y.
2008-01-01
The quality of the retrieved temperature-versus-pressure (or T(p)) profiles is described for the middle atmosphere for the publicly available Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) Version 1.07 (V1.07) data set. The primary sources of systematic error for the SABER results below about 70 km are (1) errors in the measured radiances, (2) biases in the forward model, and (3) uncertainties in the corrections for ozone and in the determination of the reference pressure for the retrieved profiles. Comparisons with other correlative data sets indicate that SABER T(p) is too high by 1-3 K in the lower stratosphere but then too low by 1 K near the stratopause and by 2 K in the middle mesosphere. There is little difference between the local thermodynamic equilibrium (LTE) algorithm results below about 70 km from V1.07 and V1.06, but there are substantial improvements/differences for the non-LTE results of V1.07 for the upper mesosphere and lower thermosphere (UMLT) region. In particular, the V1.07 algorithm uses monthly, diurnally averaged CO2 profiles versus latitude from the Whole Atmosphere Community Climate Model. This change has improved the consistency of the character of the tides in its kinetic temperature (T(sub k)). The T(sub k) profiles agree with UMLT values obtained from ground-based measurements of column-averaged OH and O2 emissions and of the Na lidar returns, at least within their mutual uncertainties. SABER T(sub k) values obtained near the mesopause with its daytime algorithm also agree well with the falling sphere climatology at high northern latitudes in summer. It is concluded that the SABER data set can be the basis for improved, diurnal-to-interannual-scale temperatures for the middle atmosphere and especially for its UMLT region.
Grötzinger, Stefan W.; Alam, Intikhab; Ba Alawi, Wail; Bajic, Vladimir B.; Stingl, Ulrich; Eppinger, Jörg
2014-01-01
Reliable functional annotation of genomic data is the key step in the discovery of novel enzymes. Intrinsic sequencing data quality problems of single amplified genomes (SAGs) and poor homology of novel extremophiles' genomes pose significant challenges for the attribution of functions to the coding sequences identified. The anoxic deep-sea brine pools of the Red Sea are a promising source of novel enzymes with unique evolutionary adaptation. Sequencing data from Red Sea brine pool cultures and SAGs are annotated and stored in the Integrated Data Warehouse of Microbial Genomes (INDIGO) data warehouse. Low sequence homology of annotated genes (no similarity for 35% of these genes) may translate into false positives when searching for specific functions. The Profile and Pattern Matching (PPM) strategy described here was developed to eliminate false positive annotations of enzyme function before progressing to labor-intensive hyper-saline gene expression and characterization. It utilizes InterPro-derived Gene Ontology (GO) terms (which represent enzyme function profiles) and annotated relevant PROSITE IDs (which are linked to an amino acid consensus pattern). The PPM algorithm was tested on 15 protein families, which were selected based on scientific and commercial potential. An initial list of 2577 Enzyme Commission (E.C.) numbers was translated into 171 GO terms and 49 consensus patterns. A subset of INDIGO sequences consisting of 58 SAGs from six different taxa of bacteria and archaea was selected from six different brine pool environments. Those SAGs code for 74,516 genes, which were independently scanned for the GO terms (profile filter) and PROSITE IDs (pattern filter). Following stringent reliability filtering, the non-redundant hits (106 profile hits and 147 pattern hits) are classified as reliable if at least two relevant descriptors (GO terms and/or consensus patterns) are present.
Scripts for annotation, as well as for the PPM algorithm, are available through the INDIGO website. PMID:24778629
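The two PPM filters and the reliability rule can be sketched with standard-library tools. The GO term, the simplified PROSITE-to-regex conversion, and the example sequence are all illustrative assumptions, not taken from the INDIGO scripts:

```python
import re

def prosite_to_regex(pattern):
    """Convert a simplified PROSITE-style pattern (e.g. 'G-x-S-x-G') to a
    regex. Handles only '-' separators and 'x' wildcards; real PROSITE
    syntax has many more features (repeats, exclusions, anchors)."""
    return "".join("." if p == "x" else p for p in pattern.split("-"))

def is_reliable(go_hits, pattern_hits):
    """PPM-style reliability rule: keep an annotation only if at least two
    relevant descriptors (GO terms and/or consensus patterns) support it."""
    return len(go_hits) + len(pattern_hits) >= 2

seq = "MKGLSAGTTLIPN"                      # toy protein sequence
relevant_go = {"GO:0016787"}               # hypothetical relevant GO term
annotated_go = {"GO:0016787", "GO:0005575"}  # terms attached to the gene
go_hits = relevant_go & annotated_go       # profile filter
pattern = prosite_to_regex("G-x-S-x-G")    # serine-hydrolase-like motif
pattern_hits = re.findall(pattern, seq)    # pattern filter
reliable = is_reliable(go_hits, pattern_hits)
```

With one GO-term hit and one consensus-pattern hit, the annotation passes the two-descriptor threshold; a gene supported by only one descriptor would be discarded as a potential false positive.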
International Roughness Index (IRI) measurement using Hilbert-Huang transform
NASA Astrophysics Data System (ADS)
Zhang, Wenjin; Wang, Ming L.
2018-03-01
International Roughness Index (IRI) is an important metric for measuring the condition of roadways. This index is usually used to justify maintenance priority and scheduling for roadways. Various inspection methods and algorithms are used to assess this index through the use of road profiles. This study proposes to calculate IRI values using the Hilbert-Huang Transform (HHT) algorithm. In particular, road profile data are provided by surface radar attached to a vehicle driving at highway speed. HHT is used in this study because of its superior properties for nonstationary and nonlinear data. Empirical mode decomposition (EMD) processes the raw data into a set of intrinsic mode functions (IMFs), representing various dominating frequencies. These frequencies represent noise from the body of the vehicle, the sensor location, the excitation induced by the natural frequency of the vehicle, etc. IRI calculation can be achieved by eliminating noise that is not associated with the road profile, including the vehicle inertia effect. The resulting IRI values compare favorably to the field IRI values: the filtered IMFs capture most of the characteristics of the road profile while eliminating noise from the vehicle and the vehicle inertia effect. Therefore, HHT is an effective method for road profile analysis and IRI measurement. Furthermore, the application of the HHT method has the potential to eliminate the use of accelerometers attached to the vehicle as part of the displacement measurement used to offset the inertia effect.
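The filter-then-accumulate idea can be illustrated without a full EMD implementation. Here a moving-average detrend stands in for discarding the low-frequency IMFs, and the accumulated absolute slope per unit length is a crude roughness proxy, not the standard quarter-car IRI:

```python
import numpy as np

def roughness_proxy(profile, dx, window):
    """Crude roughness indicator: remove long-wavelength content with a
    moving-average detrend (standing in for dropping low-frequency IMFs),
    then accumulate absolute slope per unit length."""
    kernel = np.ones(window) / window
    trend = np.convolve(profile, kernel, mode="same")
    detrended = profile - trend
    return np.sum(np.abs(np.diff(detrended))) / (dx * (len(profile) - 1))

dx = 0.25                                      # sample spacing in metres
x = np.arange(0, 100, dx)
smooth = 0.01 * np.sin(2 * np.pi * x / 50)     # long-wavelength undulation only
rough = smooth + 0.005 * np.sin(2 * np.pi * x / 0.9)  # add short-wave roughness
r_smooth = roughness_proxy(smooth, dx, window=41)
r_rough = roughness_proxy(rough, dx, window=41)
```

In the HHT approach the separation is adaptive rather than fixed-window: EMD isolates the IMFs carrying vehicle-body and inertia noise so they can be excluded before the IRI computation.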
Yu, Bin; Xu, Jia-Meng; Li, Shan; Chen, Cheng; Chen, Rui-Xin; Wang, Lei; Zhang, Yan; Wang, Ming-Hui
2017-01-01
Gene regulatory network (GRN) research reveals complex life phenomena from the perspective of gene interaction and is an important field in systems biology. Traditional Bayesian networks have a high computational complexity, and the network structure scoring model has a single feature. Information-based approaches cannot identify the direction of regulation. To make up for the shortcomings of the above methods, this paper presents a novel hybrid learning method (DBNCS) based on dynamic Bayesian networks (DBN) to construct multiple time-delayed GRNs for the first time, combining the comprehensive score (CS) with the DBN model. The DBNCS algorithm first uses the CMI2NI (conditional mutual inclusive information-based network inference) algorithm for network structure profile learning, namely the construction of the search space. Redundant regulations are then removed by a recursive optimization (RO) algorithm, thereby reducing the false positive rate. Secondly, the network structure profiles are decomposed into a set of cliques without loss, which can significantly reduce the computational complexity. Finally, the DBN model is used to identify the direction of gene regulation within the cliques and to search for the optimal network structure. The performance of the DBNCS algorithm is evaluated on benchmark GRN datasets from the DREAM challenge as well as the SOS DNA repair network in Escherichia coli, and compared with other state-of-the-art methods. The experimental results show the rationality of the algorithm design and its outstanding performance in reconstructing GRNs. PMID:29113310
Lifetime Prediction of IGBT in a STATCOM Using Modified-Graphical Rainflow Counting Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopi Reddy, Lakshmi Reddy; Tolbert, Leon M; Ozpineci, Burak
Rainflow algorithms are among the most widely used counting methods in fatigue and failure analysis [17]. There have been many approaches to the rainflow algorithm, some proposing modifications. The Graphical Rainflow Method (GRM) was proposed recently with a claim of faster execution times [10]. However, the steps of the graphical rainflow method, when implemented, do not generate the same output as the four-point or ASTM standard algorithm. A modified graphical method is presented and discussed in this paper to overcome the shortcomings of the graphical rainflow algorithm. A fast rainflow algorithm based on the four-point algorithm, but using point comparison rather than range comparison, is also presented. A comparison between the performances of the common rainflow algorithms [6-10], including the proposed methods, in terms of execution time, memory use, efficiency, complexity, and load sequences is presented. Finally, the rainflow algorithm is applied to temperature data of an IGBT in assessing the lifetime of a STATCOM operating for power factor correction of the load. From the available 5-minute load profile data, the lifetime is estimated to be 3.4 years.
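For reference, the four-point rainflow rule that the graphical method is checked against can be sketched as follows (a generic textbook implementation, not the authors' code): a full cycle is extracted whenever the middle range of four consecutive turning points is no larger than its two neighbors, and the unmatched residual is returned separately:

```python
def rainflow_four_point(turning_points):
    """Four-point rainflow: return (full-cycle ranges, residual sequence)."""
    stack, cycles = [], []
    for point in turning_points:
        stack.append(point)
        while len(stack) >= 4:
            p1, p2, p3, p4 = stack[-4:]
            r1, r2, r3 = abs(p2 - p1), abs(p3 - p2), abs(p4 - p3)
            if r2 <= r1 and r2 <= r3:
                cycles.append(r2)    # inner pair forms one full cycle
                del stack[-3:-1]     # drop p2 and p3, keep p1 and p4
            else:
                break
    return cycles, stack

tp = [-2, 1, -3, 5, -1, 3, -4, 4, -2]   # turning points of a load history
cycles, residual = rainflow_four_point(tp)
```

For this history the rule extracts one full cycle of range 4 (the -1/3 pair); in practice the residual is then counted separately, e.g. as half cycles.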
Resolution of Transverse Electron Beam Measurements using Optical Transition Radiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ischebeck, Rasmus; Decker, Franz-Josef; Hogan, Mark
2005-06-22
In the plasma wakefield acceleration experiment E-167, optical transition radiation is used to measure the transverse profile of the electron bunches before and after the plasma acceleration. The distribution of the electric field from a single electron does not give a point-like distribution on the detector, but has a certain extension. Additionally, the resolution of the imaging system is affected by aberrations. The transverse profile of the bunch is thus convolved with a point spread function (PSF). Algorithms that deconvolve the image can help to improve the resolution. Imaged test patterns are used to determine the modulation transfer function of the lens. From this, the PSF can be reconstructed. The Lucy-Richardson algorithm is used to deconvolve this PSF from test images.
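The Lucy-Richardson iteration mentioned above can be sketched in one dimension (a generic, noise-free illustration, not the E-167 analysis code): each iteration blurs the current estimate with the PSF and applies a multiplicative correction from the back-projected ratio of observed to blurred data:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=100):
    """1-D Richardson-Lucy deconvolution (noise-free sketch)."""
    estimate = np.full(observed.shape, observed.mean())  # flat initial guess
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)    # avoid divide-by-zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

truth = np.zeros(21)
truth[10] = 1.0                            # a point source
psf = np.array([0.25, 0.5, 0.25])          # symmetric blur kernel
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
```

For this noiseless point source the iteration re-concentrates the blurred flux back toward the original pixel; with real, noisy images the iteration count must be limited to avoid noise amplification.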
Vectorization of a penalty function algorithm for well scheduling
NASA Technical Reports Server (NTRS)
Absar, I.
1984-01-01
In petroleum engineering, the oil production profile of a reservoir can be simulated by using a finite gridded model. This profile is affected by the number and choice of wells, which in turn are a result of various production limits and constraints including, for example, the economic minimum well spacing, the number of drilling rigs available, and the time required to drill and complete a well. After a well is available, it may be shut in because of excessive water or gas production. In order to optimize field performance, a penalty function algorithm was developed for scheduling wells. For an example with some 343 wells and 15 different constraints, the scheduling routine vectorized for the CYBER 205 averaged 560 times faster performance than the scalar version.
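A toy illustration of the penalty-function idea (the well names, constraint checks, and weights below are invented for illustration and are not the paper's formulation): each candidate well is scored by its production rate minus weighted penalties for violated constraints, and the best-scoring wells are scheduled first:

```python
def schedule_wells(wells, max_rigs):
    """Greedily pick wells by penalized score until the rigs run out."""
    def penalized_score(w):
        penalty = 0.0
        if w["water_cut"] > 0.9:        # excessive water production
            penalty += 100.0
        if not w["spacing_ok"]:         # violates minimum well spacing
            penalty += 50.0
        return w["rate"] - penalty

    ranked = sorted(wells, key=penalized_score, reverse=True)
    return [w["name"] for w in ranked[:max_rigs]]

wells = [
    {"name": "W1", "rate": 120.0, "water_cut": 0.95, "spacing_ok": True},
    {"name": "W2", "rate": 80.0,  "water_cut": 0.10, "spacing_ok": True},
    {"name": "W3", "rate": 60.0,  "water_cut": 0.20, "spacing_ok": True},
]
selected = schedule_wells(wells, max_rigs=2)
```

The high-rate well W1 is penalized below W2 and W3 by its water cut, so it is deferred; soft penalties like these let one objective trade off many constraints without hard infeasibility.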
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tian, Y. X.; Jin, X. L., E-mail: jinxiaolin@uestc.edu.cn; Yan, W. Z.
The model of photon and pair production in strong-field quantum electrodynamics is implemented into our 1D3V particle-in-cell code with a Monte Carlo algorithm. Using this code, the evolution of the particles in ultrahigh-intensity laser (∼10^23 W/cm^2) interaction with an aluminum foil target is observed. Four different initial plasma profiles are considered in the simulations. The effects of the initial plasma profiles on photon and pair production, energy spectra, and energy evolution are analyzed. The results imply that one can set an optimal initial plasma profile to obtain the desired photon distributions.
Computer Recognition of Facial Profiles
1974-08-01
[Recovered fragments of the report documentation page and table of contents:] A system for the recognition of human faces from... 2.6 Classification Algorithms... III. Facial Recognition and Automatic Training... 3.1 Facial Profile Recognition... provide a fair test of the classification system. The work of Goldstein, Harmon, and Lesk [8] indicates, however, that for facial recognition, a ten class...
Dupľáková, Nikoleta; Renák, David; Hovanec, Patrik; Honysová, Barbora; Twell, David; Honys, David
2007-07-23
Microarray technologies now belong to the standard functional genomics toolbox and have undergone massive development leading to increased genome coverage, accuracy and reliability. The number of experiments exploiting microarray technology has markedly increased in recent years. In parallel with the rapid accumulation of transcriptomic data, on-line analysis tools are being introduced to simplify their use. Global statistical data analysis methods contribute to the development of overall concepts about gene expression patterns and help to query data and compose working hypotheses. More recently, these applications are being supplemented with more specialized products offering visualization and specific data mining tools. We present a curated gene family-oriented gene expression database, Arabidopsis Gene Family Profiler (aGFP; http://agfp.ueb.cas.cz), which gives the user access to a large collection of normalised Affymetrix ATH1 microarray datasets. The database currently contains NASC Array and AtGenExpress transcriptomic datasets for various tissues at different developmental stages of wild type plants gathered from nearly 350 gene chips. The Arabidopsis GFP database has been designed as an easy-to-use tool for users needing an easily accessible resource for expression data of single genes, pre-defined gene families or custom gene sets, with the further possibility of keyword search. Arabidopsis Gene Family Profiler presents a user-friendly web interface using both graphic and text output. Data are stored at the MySQL server and individual queries are created in PHP script.
The most distinctive features of the Arabidopsis Gene Family Profiler database are: 1) the presentation of normalized datasets (Affymetrix MAS algorithm and calculation of model-based gene-expression values based on the Perfect Match-only model); 2) the choice between two different normalization algorithms (Affymetrix MAS4 or MAS5 algorithms); 3) an intuitive interface; 4) an interactive "virtual plant" visualizing the spatial and developmental expression profiles of both gene families and individual genes. Arabidopsis GFP gives users the possibility to analyze current Arabidopsis developmental transcriptomic data starting with simple global queries that can be expanded and further refined to visualize comparative and highly selective gene expression profiles.
NEUTRON STAR MASS–RADIUS CONSTRAINTS USING EVOLUTIONARY OPTIMIZATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, A. L.; Morsink, S. M.; Fiege, J. D.
The equation of state of cold supra-nuclear-density matter, such as in neutron stars, is an open question in astrophysics. A promising method for constraining the neutron star equation of state is modeling pulse profiles of thermonuclear X-ray burst oscillations from hot spots on accreting neutron stars. The pulse profiles, constructed using spherical and oblate neutron star models, are comparable to what would be observed by a next-generation X-ray timing instrument like ASTROSAT, NICER, or a mission similar to LOFT. In this paper, we showcase the use of an evolutionary optimization algorithm to fit pulse profiles to determine the best-fit masses and radii. By fitting synthetic data, we assess how well the optimization algorithm can recover the input parameters. Multiple Poisson realizations of the synthetic pulse profiles, constructed with 1.6 million counts and no background, were fitted with the Ferret algorithm to analyze both statistical and degeneracy-related uncertainty and to explore how the goodness of fit depends on the input parameters. For the regions of parameter space sampled by our tests, the best-determined parameter is the projected velocity of the spot along the observer's line of sight, with an accuracy of ≤3% compared to the true value and with ≤5% statistical uncertainty. The next best determined are the mass and radius; for a neutron star with a spin frequency of 600 Hz, the best-fit mass and radius are accurate to ≤5%, with respective uncertainties of ≤7% and ≤10%. The accuracy and precision depend on the observer inclination and spot colatitude, with values of ∼1% achievable in mass and radius if both the inclination and colatitude are ≳60°.
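The evolutionary-fitting idea can be sketched with a toy elitist population search; this stands in for the Ferret algorithm, which is far more sophisticated, and a simple linear model replaces the pulse-profile model. All parameter values here are illustrative assumptions:

```python
import random

def chi_square(params, xs, ys):
    """Misfit between data and the toy model y = a*x + b."""
    a, b = params
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))

def evolve(xs, ys, generations=300, pop_size=20, sigma=0.2, seed=1):
    """Elitist truncation selection with Gaussian mutation."""
    rng = random.Random(seed)
    pop = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: chi_square(p, xs, ys))
        parents = pop[: pop_size // 2]             # keep the best half
        children = [(a + rng.gauss(0, sigma), b + rng.gauss(0, sigma))
                    for a, b in parents]           # mutate each parent
        pop = parents + children
    return min(pop, key=lambda p: chi_square(p, xs, ys))

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]     # generated by y = 2x + 1
a_fit, b_fit = evolve(xs, ys)
```

The population converges near the generating parameters (a, b) = (2, 1); in the paper's setting the fitted parameters are instead mass, radius, and geometry, and the misfit is evaluated against Poisson-realized pulse profiles.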
NASA Astrophysics Data System (ADS)
Diakogiannis, Foivos I.; Lewis, Geraint F.; Ibata, Rodrigo A.; Guglielmo, Magda; Kafle, Prajwal R.; Wilkinson, Mark I.; Power, Chris
2017-09-01
Dwarf galaxies, among the most dark matter dominated structures of our Universe, are excellent test-beds for dark matter theories. Unfortunately, mass modelling of these systems suffers from the well-documented mass-velocity anisotropy degeneracy. For the case of spherically symmetric systems, we describe a method for non-parametric modelling of the radial and tangential velocity moments. The method is a numerical velocity anisotropy 'inversion', with parametric mass models, where the radial velocity dispersion profile, σ_rr^2, is modelled as a B-spline, and the optimization is a three-step process that consists of (I) an evolutionary modelling to determine the mass model form and the best B-spline basis to represent σ_rr^2; (II) an optimization of the smoothing parameters and (III) a Markov chain Monte Carlo analysis to determine the physical parameters. The mass-anisotropy degeneracy is reduced into mass model inference, irrespective of kinematics. We test our method using synthetic data. Our algorithm constructs the best kinematic profile and discriminates between competing dark matter models. We apply our method to the Fornax dwarf spheroidal galaxy. Using a King brightness profile and testing various dark matter mass models, our model inference favours a simple mass-follows-light system. We find that the anisotropy profile of Fornax is tangential (β(r) < 0) and we estimate a total mass of M_{tot} = 1.613^{+0.050}_{-0.075} × 10^8 M_{⊙}, and a mass-to-light ratio of Υ_V = 8.93 ^{+0.32}_{-0.47} (M_{⊙}/L_{⊙}). The algorithm we present is a robust and computationally inexpensive method for non-parametric modelling of spherical clusters independent of the mass-anisotropy degeneracy.
The beam stop array method to measure object scatter in digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lee, Haeng-hwa; Kim, Ye-seul; Park, Hye-Suk; Kim, Hee-Joung; Choi, Jae-Gu; Choi, Young-Wook
2014-03-01
Scattered radiation is inevitably generated in the object. The distribution of the scattered radiation is influenced by object thickness, field size, object-to-detector distance, and primary energy. One way to measure scatter intensities involves measuring the signal detected under the shadow of the lead discs of a beam-stop array (BSA). The scatter measured by the BSA includes not only the scattered radiation within the object (object scatter), but also external scatter sources, which include the X-ray tube, detector, collimator, X-ray filter, and the BSA itself. The exclusion of this background scattered radiation can be adapted to different scanner geometries by simple parameter adjustments, without prior knowledge of the scanned object. In this study, a method using the BSA to differentiate scatter in the phantom (object scatter) from the external background was used. Furthermore, this method was applied to the BSA algorithm to correct the object scatter. In order to confirm the background scattered radiation, we obtained the scatter profiles and scatter fraction (SF) profiles in the directions perpendicular to the chest wall edge (CWE) with and without scattering material. The scatter profiles with and without the scattering material were similar in the region between 127 mm and 228 mm from the chest wall. This result indicated that the scatter measured by the BSA included background scatter. Moreover, the BSA algorithm with the proposed method could correct the object scatter, because the total radiation profiles after object scatter correction corresponded to the original image in the region between 127 mm and 228 mm from the chest wall. As a result, the BSA method to measure object scatter can be used to remove background scatter, and can be applied to different scanner geometries after background scatter correction. In conclusion, the BSA algorithm with the proposed method is effective for correcting object scatter.
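The background-subtraction step can be sketched arithmetically: the under-disc signal measured with the phantom contains object scatter plus external background, so subtracting the no-phantom measurement isolates the object scatter. The numbers below are illustrative, not measured values from the study:

```python
def object_scatter(scatter_with_phantom, scatter_without_phantom):
    """Remove the external (background) scatter measured without the object."""
    return scatter_with_phantom - scatter_without_phantom

def scatter_fraction(scatter, primary):
    """SF = S / (S + P), the scatter share of the total signal."""
    return scatter / (scatter + primary)

s_total = 40.0   # under-disc signal with phantom: object + background scatter
s_bg = 15.0      # under-disc signal without phantom: background only
s_obj = object_scatter(s_total, s_bg)         # 25.0
sf = scatter_fraction(s_obj, primary=100.0)   # 25 / 125 = 0.2
```

Using the raw under-disc signal instead of `s_obj` would overestimate the object SF, which is the bias the proposed correction removes.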
NASA Technical Reports Server (NTRS)
Olson, William S.; Tian, Lin; Grecu, Mircea; Kuo, Kwo-Sen; Johnson, Benjamin; Heymsfield, Andrew J.; Bansemer, Aaron; Heymsfield, Gerald M.; Wang, James R.; Meneghini, Robert
2016-01-01
In this study, two different particle models describing the structure and electromagnetic properties of snow are developed and evaluated for potential use in satellite combined radar-radiometer precipitation estimation algorithms. In the first model, snow particles are assumed to be homogeneous ice-air spheres with single-scattering properties derived from Mie theory. In the second model, snow particles are created by simulating the self-collection of pristine ice crystals into aggregate particles of different sizes, using different numbers and habits of the collected component crystals. Single-scattering properties of the resulting nonspherical snow particles are determined using the discrete dipole approximation. The size-distribution-integrated scattering properties of the spherical and nonspherical snow particles are incorporated into a dual-wavelength radar profiling algorithm that is applied to 14- and 34-GHz observations of stratiform precipitation from the ER-2 aircraft-borne High-Altitude Imaging Wind and Rain Airborne Profiler (HIWRAP) radar. The retrieved ice precipitation profiles are then input to a forward radiative transfer calculation in an attempt to simulate coincident radiance observations from the Conical Scanning Millimeter-Wave Imaging Radiometer (CoSMIR). Much greater consistency between the simulated and observed CoSMIR radiances is obtained using estimated profiles that are based upon the nonspherical crystal/aggregate snow particle model. Despite this greater consistency, there remain some discrepancies between the higher moments of the HIWRAP-retrieved precipitation size distributions and in situ distributions derived from microphysics probe observations obtained from Citation aircraft underflights of the ER-2. These discrepancies can only be eliminated if a subset of lower-density crystal/aggregate snow particles is assumed in the radar algorithm and in the interpretation of the in situ data.
EARLINET Single Calculus Chain - technical - Part 2: Calculation of optical products
NASA Astrophysics Data System (ADS)
Mattis, Ina; D'Amico, Giuseppe; Baars, Holger; Amodeo, Aldo; Madonna, Fabio; Iarlori, Marco
2016-07-01
In this paper we present the automated software tool ELDA (EARLINET Lidar Data Analyzer) for the retrieval of profiles of optical particle properties from lidar signals. This tool is one of the calculus modules of the EARLINET Single Calculus Chain (SCC) which allows for the analysis of the data of many different lidar systems of EARLINET in an automated, unsupervised way. ELDA delivers profiles of particle extinction coefficients from Raman signals as well as profiles of particle backscatter coefficients from combinations of Raman and elastic signals or from elastic signals only. Those analyses start from pre-processed signals which have already been corrected for background, range dependency and hardware specific effects. An expert group reviewed all algorithms and solutions for critical calculus subsystems which are used within EARLINET with respect to their applicability for automated retrievals. Those methods have been implemented in ELDA. Since the software was designed in a modular way, it is possible to add new or alternative methods in the future. Most of the implemented algorithms are well known and well documented, but some methods have been developed especially for ELDA, e.g., automated vertical smoothing and temporal averaging, the handling of effective vertical resolution in the case of lidar ratio retrievals, or the merging of near-range and far-range products. The accuracy of the retrieved profiles was tested following the procedure of the EARLINET-ASOS algorithm inter-comparison exercise which is based on the analysis of synthetic signals. Mean deviations, mean relative deviations, and normalized root-mean-square deviations were calculated for all possible products and three height layers. In all cases, the deviations were clearly below the maximum allowed values according to the EARLINET quality requirements.
'Extremotaxis': computing with a bacterial-inspired algorithm.
Nicolau, Dan V; Burrage, Kevin; Nicolau, Dan V; Maini, Philip K
2008-01-01
We present a general-purpose optimization algorithm inspired by "run-and-tumble", the biased random-walk chemotactic swimming strategy used by the bacterium Escherichia coli to locate regions of high nutrient concentration. The method uses particles (corresponding to bacteria) that swim through the variable space (corresponding to the attractant concentration profile). By constantly performing temporal comparisons, the particles drift towards the minimum or maximum of the function of interest. We illustrate the use of our method with four examples. We also present a discrete version of the algorithm. The new algorithm is expected to be useful in combinatorial optimization problems involving many variables, where the functional landscape is apparently stochastic and has local minima, but preserves some derivative structure at intermediate scales.
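A minimal sketch of the run-and-tumble strategy as described, with all parameter values as illustrative assumptions: each particle keeps its heading while the objective keeps improving (a "run") and switches to a random heading when it stops improving (a "tumble"), here minimizing a simple quadratic:

```python
import math
import random

def run_and_tumble(f, dim=2, n_particles=10, steps=500, step=0.05, seed=0):
    rng = random.Random(seed)

    def random_dir():
        v = [rng.gauss(0, 1) for _ in range(dim)]
        norm = math.sqrt(sum(c * c for c in v))
        return [c / norm for c in v]

    best_x, best_f = None, float("inf")
    for _ in range(n_particles):
        x = [rng.uniform(-2, 2) for _ in range(dim)]
        d = random_dir()
        fx = f(x)
        for _ in range(steps):
            x_new = [xi + step * di for xi, di in zip(x, d)]
            f_new = f(x_new)
            if f_new < fx:        # run: keep heading while improving
                x, fx = x_new, f_new
            else:                 # tumble: pick a fresh random heading
                d = random_dir()
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

def sphere(x):
    return sum(c * c for c in x)

x_best, f_best = run_and_tumble(sphere)
```

Only temporal comparisons of the objective are used, no gradients, which is what makes the strategy attractive for noisy landscapes with residual derivative structure.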
Real time algorithms for sharp wave ripple detection.
Sethi, Ankit; Kemere, Caleb
2014-01-01
Neural activity during sharp wave ripples (SWRs), short bursts of coordinated oscillatory activity in the CA1 region of the rodent hippocampus, is implicated in a variety of memory functions from consolidation to recall. Detection of these events in an algorithmic framework has thus far relied on simple thresholding techniques with heuristically derived parameters. This study is an investigation into testing and improving the current methods for detection of SWR events in neural recordings. We propose and profile methods to reduce latency in ripple detection. The proposed algorithms are tested on simulated ripple data. The findings show that simple real-time algorithms can improve upon existing power-thresholding methods and can detect ripple activity with latencies in the range of 10-20 ms.
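A streaming power-threshold detector of the kind such studies take as their baseline can be sketched as follows; the smoothing constant, threshold rule, and synthetic "ripple" below are illustrative assumptions, not the study's parameters. The ripple-band signal is squared, smoothed with a low-latency exponential moving average, and flagged wherever the smoothed power exceeds a baseline-derived threshold:

```python
import math

def detect_ripples(band_signal, baseline_n=200, alpha=0.2, n_sd=3.0):
    """Flag sample indices whose smoothed ripple-band power crosses threshold."""
    power = [x * x for x in band_signal]
    base = power[:baseline_n]                         # quiet baseline segment
    mean = sum(base) / len(base)
    sd = (sum((p - mean) ** 2 for p in base) / len(base)) ** 0.5
    threshold = mean + n_sd * sd
    smoothed, events = mean, []
    for i, p in enumerate(power):
        smoothed = alpha * p + (1 - alpha) * smoothed  # low-latency EMA
        if smoothed > threshold:
            events.append(i)
    return events

quiet = [0.1 * math.sin(0.3 * i) for i in range(400)]
burst = [2.0 * math.sin(1.2 * i) for i in range(50)]   # high-power "ripple"
events = detect_ripples(quiet + burst + quiet)
```

With this synthetic trace, detection begins within a few samples of burst onset at index 400; the detection latency is governed by the EMA constant, which is the kind of trade-off the study profiles.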
Pernice, W H; Payne, F P; Gallagher, D F
2007-09-03
We present a novel numerical scheme for the simulation of the field enhancement by metal nano-particles in the time domain. The algorithm is based on a combination of the finite-difference time-domain method and the pseudo-spectral time-domain method for dispersive materials. The hybrid solver leads to an efficient subgridding algorithm that does not suffer from spurious field spikes as do FDTD schemes. Simulation of the field enhancement by gold particles shows the expected exponential field profile. The enhancement factors are computed for single particles and particle arrays. Due to the geometry conforming mesh the algorithm is stable for long integration times and thus suitable for the simulation of resonance phenomena in coupled nano-particle structures.
An Analysis of Gamma-ray Burst Time Profiles from the Burst and Transient Source Experiment
NASA Technical Reports Server (NTRS)
Lestrade, John Patrick
1996-01-01
This proposal requested funding to measure the durations of gamma-ray bursts (GRB) in the 4B catalog as well as to study the structure of GRB time profiles returned by the Burst And Transient Source Experiment (BATSE) on board the Compton Gamma-Ray Observatory. The duration (T90) was to be measured using the same techniques and algorithms developed by the principal investigator for the 3B data. The profile structure studies fall into the two categories of variability and fractal analyses.
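The T90 measure mentioned above is conventionally the interval over which the cumulative background-subtracted counts rise from 5% to 95% of the burst total; a minimal sketch on synthetic bins (the bin values are invented for clarity, not BATSE data):

```python
def t90(times, counts, background=0.0):
    """Interval over which cumulative net counts rise from 5% to 95% of total."""
    net = [c - background for c in counts]
    total = sum(net)
    cum, t5, t95 = 0.0, None, None
    for t, c in zip(times, net):
        cum += c
        if t5 is None and cum >= 0.05 * total:
            t5 = t
        if t95 is None and cum >= 0.95 * total:
            t95 = t
    return t95 - t5

times = [0.1 * i for i in range(100)]   # 0.1 s bins
counts = [10.0] * 100                   # flat burst, for clarity
duration = t90(times, counts)           # 5% at t = 0.4 s, 95% at t = 9.4 s
```

Real measurements additionally require a fitted background model and careful choice of the burst interval, which is where most of the methodological work lies.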
Comparison of trend analyses for Umkehr data using new and previous inversion algorithms
NASA Technical Reports Server (NTRS)
Reinsel, Gregory C.; Tam, Wing-Kuen; Ying, Lisa H.
1994-01-01
Ozone vertical profile Umkehr data for layers 3-9 obtained from 12 stations, using both previous and new inversion algorithms, were analyzed for trends. The trends estimated for the Umkehr data from the two algorithms were compared using two data periods, 1968-1991 and 1977-1991. Both nonseasonal and seasonal trend models were fitted. The overall annual trends are found to be significantly negative, of the order of -5% per decade, for layers 7 and 8 using both inversion algorithms. The largest negative trends occur in these layers under the new algorithm, whereas in the previous algorithm the most negative trend occurs in layer 9. The trend estimates, both annual and seasonal, are substantially different between the two algorithms mainly for layers 3, 4, and 9, where trends from the new algorithm data are about 2% per decade less negative, with less appreciable differences in layers 7 and 8. The trend results from the two data periods are similar, except for layer 3 where trends become more negative, by about -2% per decade, for 1977-1991.
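A trend expressed in percent per decade, as quoted above, can be obtained from a least-squares slope normalized by the mean level; a minimal sketch on synthetic annual values (a full analysis such as the one above would also model seasonality and serial correlation):

```python
def trend_percent_per_decade(years, values):
    """Least-squares slope expressed as % change per decade of the mean level."""
    n = len(years)
    mx = sum(years) / n
    my = sum(values) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(years, values))
             / sum((x - mx) ** 2 for x in years))   # units per year
    return 100.0 * slope * 10.0 / my                # percent per decade

years = [1968 + i for i in range(24)]
values = [100.0 - 0.5 * (y - 1968) for y in years]  # declining 0.5 units/yr
trend = trend_percent_per_decade(years, values)     # about -5.3% per decade
```

A decline of 0.5 units per year on a mean level near 94 units corresponds to roughly -5.3% per decade, comparable in magnitude to the layer 7-8 trends reported above.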
NASA Astrophysics Data System (ADS)
Sheng, Zheng
2013-02-01
The estimation of lower atmospheric refractivity from radar sea clutter (RFC) is a complicated nonlinear optimization problem. This paper deals with the RFC problem in a Bayesian framework. It uses the unbiased Markov chain Monte Carlo (MCMC) sampling technique, which can provide accurate posterior probability distributions of the estimated refractivity parameters, by using an electromagnetic split-step fast Fourier transform terrain parabolic equation propagation model within a Bayesian inversion framework. In contrast to global optimization algorithms, the Bayesian-MCMC approach can obtain not only approximate solutions, but also the probability distributions of the solutions, that is, uncertainty analyses of the solutions. The Bayesian-MCMC algorithm is applied to both simulated and real radar sea-clutter data; for the simulated data the reference refractivity profiles are assumed, while for the real data reference profiles were obtained from helicopter soundings. The inversion algorithm is assessed (i) by comparing the estimated refractivity profiles with the assumed simulation profiles and the helicopter sounding data, and (ii) by examining the one-dimensional (1D) and two-dimensional (2D) posterior probability distributions of the solutions.
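The MCMC sampling step can be sketched with a generic random-walk Metropolis sampler; the Gaussian log-posterior below is purely illustrative and stands in for the parabolic-equation forward model and clutter likelihood:

```python
import math
import random

def metropolis(log_post, x0, steps=5000, scale=0.5, seed=0):
    """Random-walk Metropolis: returns the chain of parameter samples."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(steps):
        x_prop = x + rng.gauss(0, scale)                  # propose a move
        lp_prop = log_post(x_prop)
        if math.log(rng.random() + 1e-300) < lp_prop - lp:  # accept/reject
            x, lp = x_prop, lp_prop
        samples.append(x)
    return samples

def log_post(x):
    return -0.5 * (x - 3.0) ** 2   # unit-variance Gaussian centred on 3

samples = metropolis(log_post, x0=0.0)
posterior_mean = sum(samples) / len(samples)
```

The histogram of `samples` approximates the posterior, which is exactly what enables the 1D and 2D uncertainty analyses described above, rather than the single point estimate a global optimizer returns.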
PANDA: Protein function prediction using domain architecture and affinity propagation.
Wang, Zheng; Zhao, Chenguang; Wang, Yiheng; Sun, Zheng; Wang, Nan
2018-02-22
We developed PANDA (Propagation of Affinity and Domain Architecture) to predict protein functions in the format of Gene Ontology (GO) terms. PANDA first executes a profile-profile alignment algorithm to search against the PfamA, KOG, COG, and SwissProt databases, and then launches PSI-BLAST against UniProt for homologue search. PANDA integrates a domain architecture inference algorithm based on Bayesian statistics that calculates the probability of having a GO term. All the candidate GO terms are pooled and filtered based on Z-score. After that, the remaining GO terms are clustered using an affinity propagation algorithm based on the GO directed acyclic graph, followed by a second round of filtering on the clusters of GO terms. We benchmarked the performance of all the baseline predictors PANDA integrates and of every pooling and filtering step of PANDA. PANDA achieves better performance in terms of area under the precision-recall curve compared to the baseline predictors. PANDA can be accessed from http://dna.cs.miami.edu/PANDA/.
NASA Astrophysics Data System (ADS)
Son, Min; Ko, Sangho; Koo, Jaye
2014-06-01
A genetic algorithm was used to develop optimal design methods for the regenerative cooled combustor and fuel-rich gas generator of a liquid rocket engine. For the combustor design, a chemical equilibrium analysis was applied, and the profile was calculated using Rao's method. One-dimensional heat transfer was assumed along the profile, and cooling channels were designed. For the gas-generator design, non-equilibrium properties were derived from a counterflow analysis, and a vaporization model for the fuel droplet was adopted to calculate residence time. Finally, a genetic algorithm was adopted to optimize the designs. The combustor and gas generator were optimally designed for 30-tonf, 75-tonf, and 150-tonf engines. The optimized combustors demonstrated superior design characteristics when compared with previous non-optimized results. Wall temperatures at the nozzle throat were optimized to satisfy the requirement of 800 K, and specific impulses were maximized. In addition, the target turbine power and a burned-gas temperature of 1000 K were obtained from the optimized gas-generator design.
A new method to identify the foot of continental slope based on an integrated profile analysis
NASA Astrophysics Data System (ADS)
Wu, Ziyin; Li, Jiabiao; Li, Shoujun; Shang, Jihong; Jin, Xiaobin
2017-06-01
A new method is proposed to automatically identify the foot of the continental slope (FOS) based on the integrated analysis of topographic profiles. Based on the extremum points of the second derivative and the Douglas-Peucker algorithm, it simplifies the topographic profiles, then calculates the second derivative of the original profiles and of the D-P profiles. Seven steps are proposed to simplify the original profiles. Meanwhile, multiple identification methods are proposed to determine the FOS points, using the gradient, water depth, and second-derivative values of data points, as well as the concavity and convexity, continuity, and segmentation of the topographic profiles. This method can comprehensively and automatically analyze the topographic profiles and their derived slopes, second derivatives, and D-P profiles, and on this basis it can analyze the essential properties of every single data point in the profile. Furthermore, concave points of the curve are removed, and six FOS judgment criteria are implemented.
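The Douglas-Peucker simplification used to thin the profiles can be sketched in its textbook recursive form (a generic implementation; the depth profile below is synthetic): keep the point farthest from the chord between the endpoints whenever its distance exceeds a tolerance, and recurse on the two halves:

```python
def douglas_peucker(points, tol):
    """Simplify a polyline of (x, y) points with the Douglas-Peucker rule."""
    def perp_dist(p, a, b):
        (ax, ay), (bx, by), (px, py) = a, b, p
        dx, dy = bx - ax, by - ay
        norm = (dx * dx + dy * dy) ** 0.5
        if norm == 0.0:
            return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
        return abs(dx * (ay - py) - dy * (ax - px)) / norm

    if len(points) < 3:
        return list(points)
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= tol:
        return [points[0], points[-1]]        # everything within tolerance
    left = douglas_peucker(points[: idx + 1], tol)
    right = douglas_peucker(points[idx:], tol)
    return left[:-1] + right                   # avoid duplicating the split point

profile = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 5.1), (5, 5)]
simplified = douglas_peucker(profile, tol=0.5)
```

Small undulations are discarded while the sharp break at x = 3 survives, which is why the D-P profile is a useful pre-filter before second-derivative FOS picking.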
NASA Astrophysics Data System (ADS)
Yarovyi, Andrii A.; Timchenko, Leonid I.; Kozhemiako, Volodymyr P.; Kokriatskaia, Nataliya I.; Hamdi, Rami R.; Savchuk, Tamara O.; Kulyk, Oleksandr O.; Surtel, Wojciech; Amirgaliyev, Yedilkhan; Kashaganova, Gulzhan
2017-08-01
The paper deals with the insufficient performance of existing computing tools for large-image processing, which does not meet the modern requirements posed by the resource-intensive computing tasks of laser beam profiling. The research concentrated on one of the profiling problems, namely, real-time processing of spot images of the laser beam profile. The development of a theory of parallel-hierarchical transformation made it possible to produce models of high-performance parallel-hierarchical processes, as well as algorithms and software for their implementation based on a GPU-oriented architecture using GPGPU technologies. The analyzed performance of the suggested computerized tools for processing and classification of laser beam profile images shows that they allow real-time processing of dynamic images of various sizes.
Improved Soundings and Error Estimates using AIRS/AMSU Data
NASA Technical Reports Server (NTRS)
Susskind, Joel
2006-01-01
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next-generation polar-orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice-daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud-related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, and a post-launch algorithm, which differed only in minor details from the at-launch algorithm, have been described previously. The post-launch algorithm, referred to as AIRS Version 4.0, has been used by the Goddard DAAC to analyze and distribute AIRS retrieval products. In this paper we show progress made toward the AIRS Version 5.0 algorithm, which will be used by the Goddard DAAC starting late in 2006. A new methodology has been developed to provide accurate case-by-case error estimates for retrieved geophysical parameters and for the channel-by-channel cloud-cleared radiances used to derive the geophysical parameters from the AIRS/AMSU observations. These error estimates are in turn used for quality control of the derived geophysical parameters and clear-column radiances. Improvements made to the retrieval algorithm since Version 4.0 are described, as are results comparing Version 5.0 retrieval accuracy and spatial coverage with those obtained using Version 4.0.
Moteghaed, Niloofar Yousefi; Maghooli, Keivan; Garshasbi, Masoud
2018-01-01
Background: Gene expression data are characteristically high dimensional, with a small sample size relative to the number of features, and the variability inherent in biological processes contributes to difficulties in analysis. Selection of highly discriminative features decreases the computational cost and complexity of the classifier and improves its reliability for prediction of a new class of samples. Methods: The present study used hybrid particle swarm optimization and genetic algorithms for gene selection and a fuzzy support vector machine (SVM) as the classifier. Fuzzy logic is used to infer the importance of each sample in the training phase and to decrease the outlier sensitivity of the system, improving the classifier's ability to generalize. A decision-tree algorithm was applied to the most frequent genes to develop a set of rules for each type of cancer. This improved the algorithm by finding the best parameters for the classifier during the training phase without the need for trial and error by the user. The proposed approach was tested on four benchmark gene expression profiles. Results: Good results were demonstrated for the proposed algorithm. The classification accuracy for the leukemia data is 100%, for colon cancer 96.67%, and for breast cancer 98%. The results show that the best kernel for training the SVM classifier is the radial basis function. Conclusions: The experimental results show that the proposed algorithm can decrease the dimensionality of the dataset, determine the most informative gene subset, and improve classification accuracy using the optimal parameters of the classifier with no user interaction. PMID:29535919
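The gene-selection loop described above can be sketched in miniature. The sketch below is a plain genetic algorithm over binary feature masks; the toy fitness function (reward for hitting a preset "informative" gene set, minus a subset-size penalty) merely stands in for the fuzzy-SVM classification accuracy, and the feature count, informative set, and all GA settings are illustrative assumptions, not values from the paper.

```python
import random

random.seed(0)

N_FEATURES = 30
INFORMATIVE = {2, 7, 11, 19}   # hypothetical "discriminative genes"

def fitness(mask):
    # Toy stand-in for classifier accuracy: reward informative features,
    # penalize subset size (mirrors the goal of a small gene subset).
    hits = sum(1 for i in INFORMATIVE if mask[i])
    return hits - 0.05 * sum(mask)

def crossover(a, b):
    cut = random.randrange(1, N_FEATURES)   # single-point crossover
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.02):
    return [bit ^ (random.random() < rate) for bit in mask]

def ga_select(pop_size=40, generations=60):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:4]                                   # elitism
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(pop[:pop_size // 2], 2)  # truncation selection
            children.append(mutate(crossover(a, b)))
        pop = elite + children
    return max(pop, key=fitness)

best = ga_select()
selected = {i for i, bit in enumerate(best) if bit}
```

A real run would replace `fitness` with cross-validated classifier accuracy, which is where the bulk of the computational cost lies.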
Derivative component analysis for mass spectral serum proteomic profiles.
Han, Henry
2014-01-01
As a promising way to transform medicine, mass spectrometry based proteomics technologies have made great progress in identifying disease biomarkers for clinical diagnosis and prognosis. However, there is a lack of effective feature selection methods able to capture the essential data behaviors needed to achieve clinical-level disease diagnosis. Moreover, the field faces a challenge of data reproducibility: no two independent studies have been found to produce the same proteomic patterns. This reproducibility issue causes identified biomarker patterns to lose repeatability and prevents them from real clinical usage. In this work, we propose a novel machine-learning algorithm, derivative component analysis (DCA), for high-dimensional mass spectral proteomic profiles. As an implicit feature selection algorithm, DCA examines input proteomics data in a multi-resolution manner, seeking its derivatives to capture latent data characteristics and conduct de-noising. We further demonstrate DCA's advantages in disease diagnosis by viewing input proteomics data as a profile biomarker, integrating it with support vector machines to tackle the reproducibility issue, and comparing it with state-of-the-art peers. Our results show that high-dimensional proteomics data are actually linearly separable under the proposed DCA. As a novel multi-resolution feature selection algorithm, DCA not only overcomes the weakness of traditional methods in discovering subtle data behaviors, but also suggests an effective route to overcoming the reproducibility problem of proteomics data and provides new techniques and insights in translational bioinformatics and machine learning. The DCA-based profile biomarker diagnosis makes clinical-level diagnostic performance reproducible across different proteomic data sets, which is more robust and systematic than existing biomarker-discovery-based diagnosis.
Our findings demonstrate the feasibility and power of the proposed DCA-based profile biomarker diagnosis in achieving high sensitivity and conquering the data reproducibility issue in serum proteomics. Furthermore, our proposed derivative component analysis suggests that gleaning subtle data characteristics and de-noising are essential in separating true signals from red herrings in high-dimensional proteomic profiles, and can be more important than conventional feature selection or dimension reduction. In particular, our profile biomarker diagnosis can be generalized to other omics data, owing to DCA's nature as a generic data analysis method.
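As a rough illustration of the multi-resolution idea behind DCA (not the authors' exact formulation), the sketch below takes successive finite-difference derivatives of a profile, suppresses small-magnitude derivative entries as noise, and integrates back; the derivative order and the fraction of coefficients kept are illustrative parameters.

```python
import numpy as np

def dca_denoise(signal, level=2, keep=0.2):
    # Sketch of the DCA idea: examine the profile through its
    # finite-difference derivatives, keep only the largest-magnitude
    # derivative entries, and integrate back to the original resolution.
    x = np.asarray(signal, dtype=float)
    starts = []
    for _ in range(level):
        starts.append(x[0])          # remember integration constants
        x = np.diff(x)               # next-order derivative component
    thr = np.quantile(np.abs(x), 1.0 - keep)
    x = np.where(np.abs(x) >= thr, x, 0.0)   # suppress small (noisy) entries
    for s in reversed(starts):
        x = np.concatenate(([s], s + np.cumsum(x)))   # integrate back
    return x

# Demo: a noisy peak profile, de-noised by keeping 20% of derivative entries
rng = np.random.default_rng(0)
clean = np.exp(-((np.arange(80) - 40.0) / 6.0) ** 2)
noisy = clean + rng.normal(scale=0.02, size=80)
smoothed = dca_denoise(noisy)
```

With `keep=1.0` the transform is a lossless round trip, which makes the derivative/integration pair easy to verify before any thresholding is applied.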
How Can TOLNet Help to Better Understand Tropospheric Ozone? A Satellite Perspective
NASA Technical Reports Server (NTRS)
Johnson, Matthew S.
2018-01-01
Potential sources of a priori ozone (O3) profiles for use in Tropospheric Emissions: Monitoring of Pollution (TEMPO) satellite tropospheric O3 retrievals are evaluated with observations from multiple Tropospheric Ozone Lidar Network (TOLNet) systems in North America. An O3 profile climatology (tropopause-based O3 climatology (TB-Clim), currently proposed for use in the TEMPO O3 retrieval algorithm) derived from ozonesonde observations and O3 profiles from three separate models (operational Goddard Earth Observing System (GEOS-5) Forward Processing (FP) product, reanalysis product from Modern-Era Retrospective analysis for Research and Applications version 2 (MERRA2), and the GEOS-Chem chemical transport model (CTM)) were: 1) evaluated with TOLNet measurements on various temporal scales (seasonally, daily, hourly) and 2) implemented as a priori information in theoretical TEMPO tropospheric O3 retrievals in order to determine how each a priori impacts the accuracy of retrieved tropospheric (0-10 km) and lowermost tropospheric (LMT, 0-2 km) O3 columns. We found that all sources of a priori O3 profiles evaluated in this study generally reproduced the vertical structure of summer-averaged observations. However, larger differences between the a priori profiles and lidar observations were observed when evaluating inter-daily and diurnal variability of tropospheric O3. The TB-Clim O3 profile climatology was unable to replicate observed inter-daily and diurnal variability of O3 while model products, in particular GEOS-Chem simulations, displayed more skill in reproducing these features. Due to the ability of models, primarily the CTM used in this study, on average to capture the inter-daily and diurnal variability of tropospheric and LMT O3 columns, using a priori profiles from CTM simulations resulted in TEMPO retrievals with the best statistical comparison with lidar observations. 
Furthermore, important from an air quality perspective, when high LMT O3 values were observed, using CTM a priori profiles resulted in TEMPO LMT O3 retrievals with the least bias. The application of time-specific (non-climatological) hourly/daily model predictions as the a priori profile in TEMPO O3 retrievals will be best suited when applying this data to study air quality or event-based processes as the standard retrieval algorithm will still need to use a climatology product. Follow-on studies to this work are currently being conducted to investigate the application of different CTM-predicted O3 climatology products in the standard TEMPO retrieval algorithm. Finally, similar methods to those used in this study can be easily applied by TEMPO data users to recalculate tropospheric O3 profiles provided from the standard retrieval using a different source of a priori.
GLACiAR: GaLAxy survey Completeness AlgoRithm
NASA Astrophysics Data System (ADS)
Carrasco, Daniela; Trenti, Michele; Mutch, Simon; Oesch, Pascal
2018-05-01
GLACiAR (GaLAxy survey Completeness AlgoRithm) estimates the completeness and selection functions in galaxy surveys. Tailored for multiband imaging surveys aimed at searching for high-redshift galaxies through the Lyman Break technique, the code can nevertheless be applied broadly. GLACiAR generates artificial galaxies that follow Sérsic profiles with different indexes and with customizable size, redshift and spectral energy distribution properties, adds them to input images, and measures the recovery rate.
Evaluating cloud retrieval algorithms with the ARM BBHRP framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mlawer, E.; Dunn, M.
2008-03-10
Climate and weather prediction models require accurate calculations of vertical profiles of radiative heating. Although heating rate calculations cannot be directly validated due to the lack of corresponding observations, surface and top-of-atmosphere measurements can indirectly establish the quality of computed heating rates through validation of the calculated irradiances at the atmospheric boundaries. The ARM Broadband Heating Rate Profile (BBHRP) project, a collaboration of all the working groups in the program, was designed with these heating rate validations as a key objective. Given the large dependence of radiative heating rates on cloud properties, a critical component of BBHRP radiative closure analyses has been the evaluation of cloud microphysical retrieval algorithms. This evaluation is an important step in establishing the necessary confidence in the continuous profiles of computed radiative heating rates produced by BBHRP at the ARM Climate Research Facility (ACRF) sites that are needed for modeling studies. This poster details the continued effort to evaluate cloud property retrieval algorithms within the BBHRP framework, a key focus of the project this year. A requirement for the computation of accurate heating rate profiles is a robust cloud microphysical product that captures the occurrence, height, and phase of clouds above each ACRF site. Various approaches to retrieving the microphysical properties of liquid, ice, and mixed-phase clouds have been processed in BBHRP for the ACRF Southern Great Plains (SGP) and North Slope of Alaska (NSA) sites. These retrieval methods span a range of assumptions concerning the parameterization of cloud location, particle density, size, and shape, and involve different measurement sources. We will present the radiative closure results from several different retrieval approaches for the SGP site, including those from Microbase, the current 'reference' retrieval approach in BBHRP.
At the NSA, mixed-phase clouds and clouds with low optical depth are prevalent, and the radiative closure studies using Microbase demonstrated significant residuals. As an alternative to Microbase at the NSA, the Shupe-Turner cloud property retrieval algorithm, which aims to improve the partitioning of cloud phase and to incorporate more constrained, conditional microphysics retrievals, has also been evaluated using the BBHRP data set.
A simple biota removal algorithm for 35 GHz cloud radar measurements
NASA Astrophysics Data System (ADS)
Kalapureddy, Madhu Chandra R.; Sukanya, Patra; Das, Subrata K.; Deshpande, Sachin M.; Pandithurai, Govindan; Pazamany, Andrew L.; Jha, Ambuj K.; Chakravarty, Kaustav; Kalekar, Prasad; Krishna Devisetty, Hari; Annam, Sreenivas
2018-03-01
Cloud radar reflectivity profiles can be an important measurement for the investigation of cloud vertical structure (CVS). However, extracting the intended meteorological cloud content from the measurement often demands an effective technique or algorithm that can reduce error and observational uncertainties in the recorded data. In this work, a technique is proposed to identify and separate cloud and non-hydrometeor echoes using profile measurements of the radar Doppler spectral moments. Point- and volume-target-based theoretical radar sensitivity curves are used to remove the receiver noise floor, and the identified radar echoes are scrutinized according to the signal decorrelation period. Here, it is hypothesized that cloud echoes are temporally more coherent and homogeneous, with a longer correlation period than biota; this can be checked statistically using the ~4 s sliding mean and standard deviation of the reflectivity profiles. This step helps screen out clouds by filtering out the biota. The final important step is the retrieval of cloud height. The proposed algorithm identifies cloud height through the systematic characterization of Z variability using knowledge of the local atmospheric vertical structure, in addition to theoretical, statistical and echo-tracing tools. Thus, characterization of high-resolution cloud radar reflectivity profile measurements has been done with the theoretical echo sensitivity curves and observed echo statistics for true cloud height tracking (TEST). TEST showed superior performance in screening out clouds and filtering out isolated insects. TEST constrained with polarimetric measurements was found to be more promising under high-density biota, whereas TEST combined with the linear depolarization ratio and spectral width performs well in filtering out biota within the highly turbulent shallow cumulus clouds in the convective boundary layer (CBL).
This TEST technique is simple to implement but powerful in performance, owing to its flexibility in constraining, identifying and filtering out the biota and screening out the true cloud content, especially for CBL clouds. The TEST algorithm is therefore well suited for screening out the low-level clouds that are strongly linked to the rain-making mechanism of the Indian summer monsoon region's CVS.
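The decorrelation-period hypothesis at the heart of TEST can be illustrated with a toy sliding-window check on a reflectivity time series: echoes whose reflectivity stays steady over a short window are treated as cloud, while rapidly fluctuating echoes are flagged as biota. The window length and standard-deviation threshold below are illustrative stand-ins, not the values used by the authors.

```python
import numpy as np

def coherence_mask(z, window=4, max_std=2.0):
    # Keep samples belonging to at least one sliding window whose
    # reflectivity standard deviation stays below the threshold,
    # i.e. temporally coherent (cloud-like) echoes.
    n = len(z)
    keep = np.zeros(n, dtype=bool)
    for i in range(n - window + 1):
        seg = z[i:i + window]
        if np.isfinite(seg).all() and seg.std() < max_std:
            keep[i:i + window] = True
    return keep

# Coherent "cloud" echo vs. erratic "biota" spikes (synthetic dBZ values)
cloud = np.array([-20.0, -19.5, -20.2, -19.8, -20.1, -19.9])
biota = np.array([-35.0, -10.0, -40.0, -5.0, -38.0, -12.0])
```

Applied gate by gate over time, such a mask retains the steady cloud signal and rejects the spiky biota returns.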
NASA Astrophysics Data System (ADS)
Al-Ansary, Mariam Luay Y.
Ultrasound imaging has been favored by clinicians for its safety, affordability, accessibility, and speed compared to other imaging modalities. However, the trade-offs for these benefits are relatively lower image quality and interpretability, which can be addressed by, for example, post-processing methods. One particularly difficult imaging case is associated with the presence of a barrier, such as a human skull, with significantly different acoustical properties from the target medium of brain tissue. Some methods have been proposed in the literature to account for this structure if the skull's geometry is known. Measuring the skull's geometry is therefore an important task that requires attention. In this work, a new edge detection method for accurate human skull profile extraction via post-processing of ultrasonic A-scans is introduced. This method, referred to as the Selective Echo Extraction (SEE) algorithm, processes each A-scan separately and determines the outermost and innermost boundaries of the skull by means of adaptive filtering. The method can also be used to determine the average attenuation coefficient of the skull. When applied to simulated B-mode images of the skull profile, promising results were obtained. The profiles obtained from the proposed process in simulations were found to be within 0.15λ ± 0.11λ (0.09 ± 0.07 mm) of the actual profiles. Experiments were also performed to test SEE on skull-mimicking phantoms whose major acoustical properties are similar to those of the actual human skull. With experimental data, the profiles obtained with the proposed process were within 0.32λ ± 0.25λ (0.19 ± 0.15 mm) of the actual profile.
NASA Astrophysics Data System (ADS)
Huang, Yin; Chen, Jianhua; Xiong, Shaojun
2009-07-01
Mobile learning (M-learning) gives many learners the advantages of both traditional learning and e-learning. Currently, web-based mobile-learning systems have created many new ways of learning and defined new relationships between educators and learners. Association rule mining is one of the most important fields in data mining and knowledge discovery in databases. Rule explosion is a serious problem that causes great concern, as conventional mining algorithms often produce too many rules for decision makers to digest. Since a web-based mobile-learning system collects vast amounts of student profile data, data mining and knowledge discovery techniques can be applied to find interesting relationships between attributes of learners, assessments, the solution strategies adopted by learners, and so on. This paper therefore focuses on a new data-mining algorithm for mining association rules, called ARGSA (Association Rules based on an improved Genetic Simulated Annealing Algorithm), which combines the advantages of the genetic algorithm and simulated annealing. The paper first takes advantage of a parallel genetic algorithm and simulated annealing algorithm designed specifically for discovering association rules. Moreover, analysis and experiments show that the proposed method is superior to the Apriori algorithm in this mobile-learning system.
Automatic genetic optimization approach to two-dimensional blade profile design for steam turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trigg, M.A.; Tubby, G.R.; Sheard, A.G.
1999-01-01
In this paper a systematic approach to the optimization of two-dimensional blade profiles is presented. A genetic optimizer has been developed that modifies the blade profile and calculates its profile loss. This process is automatic, producing profile designs significantly faster, and with significantly lower loss, than has previously been possible. The optimizer uses a genetic algorithm to optimize a two-dimensional profile, defined by 17 parameters, for minimum loss at a given flow condition. It works with a population of two-dimensional profiles with varied parameters. A CFD mesh is generated for each profile, and the result is analyzed using a two-dimensional blade-to-blade solver, written for steady viscous compressible flow, to determine profile loss. The loss is used as the measure of a profile's fitness. The optimizer uses this information to select the members of the next population, applying crossovers, mutations, and elitism in the process. Using this method, the optimizer tends toward the best values for the parameters defining the profile with minimum loss.
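The optimizer loop described (selection, crossover, mutation, and elitism over a 17-parameter profile) can be sketched generically. The quadratic "loss" below is a cheap stand-in for the CFD blade-to-blade solver, and the population size, rates, and parameter bounds are illustrative assumptions.

```python
import random

random.seed(1)
N_PARAMS = 17
TARGET = [0.5] * N_PARAMS   # stand-in optimum; a real run calls a CFD solver

def profile_loss(p):
    # Placeholder for the blade-to-blade profile-loss evaluation
    return sum((x - t) ** 2 for x, t in zip(p, TARGET))

def evolve(pop_size=30, generations=80):
    pop = [[random.uniform(0, 1) for _ in range(N_PARAMS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=profile_loss)          # lower loss = fitter profile
        next_pop = pop[:2]                  # elitism: keep the two best
        while len(next_pop) < pop_size:
            a, b = random.sample(pop[:10], 2)                   # selection
            child = [random.choice(pair) for pair in zip(a, b)] # uniform crossover
            if random.random() < 0.3:                           # mutation
                i = random.randrange(N_PARAMS)
                child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=profile_loss)

best = evolve()
```

In the paper's setting each fitness evaluation is a mesh generation plus flow solution, so the population size and generation count trade accuracy against solver cost.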
NASA Astrophysics Data System (ADS)
Massaro, G.; Stiperski, I.; Pospichal, B.; Rotach, M. W.
2015-03-01
Within the Innsbruck Box project, a ground-based microwave radiometer (RPG-HATPRO) was operated in the Inn Valley (Austria), in very complex terrain, between September 2012 and May 2013 to obtain temperature and humidity vertical profiles of the full troposphere, with a specific focus on the valley boundary layer. The profiles obtained by the radiometer with different retrieval algorithms, based on different climatologies, are compared to local radiosonde data. A retrieval improved with respect to the one provided by the manufacturer, based on better-resolved data, shows a significantly smaller root mean square error (RMSE) for both the temperature and humidity profiles. The improvement is particularly substantial at heights close to the mountaintop level and in the upper troposphere. Lower-level inversions, common in an alpine valley, are resolved to a satisfactory degree. On the other hand, upper-level inversions (above 1200 m) still pose a significant challenge for the retrieval. For this purpose, specialized retrieval algorithms were developed by classifying the radiosonde climatologies into specialized categories according to different criteria (season, daytime, nighttime) and using additional regressors (e.g., measurements from mountain stations). The training and testing on the radiosonde data for these specialized categories suggest that a classification of profiles that reproduces meaningful physical characteristics can yield improved, targeted specialized retrievals. A new and very promising method of improving the profile retrieval in a mountain region is to add further information to the retrieval, such as the surface temperature at fixed levels along a topographic slope or from nearby mountaintops.
NASA Technical Reports Server (NTRS)
Welton, Ellsworth J.; Campbell, James R.; Spinhirne, James D.; Berkoff, Timothy A.; Holben, Brent; Tsay, Si-Chee
2004-01-01
We present the formation of a new global ground-based eye-safe lidar network, the NASA Micro-Pulse Lidar Network (MPLNET). The aim of MPLNET is to acquire long-term observations of aerosol and cloud vertical profiles at unique geographic sites within the NASA Aerosol Robotic Network (AERONET). MPLNET utilizes standard instrumentation and data processing algorithms for efficient network operations and direct comparison of data between sites. The micro-pulse lidar is eye-safe, compact, and commercially available, and thus readily allows growth of the network without sacrificing standardized instrumentation goals. Network growth follows a federated approach, pioneered by AERONET, wherein independent research groups may join MPLNET with their own instrument and site. MPLNET sites produce not only vertical profile data, but also the column-averaged products already available from AERONET (aerosol optical depth, sky radiance, size distributions). Algorithms are presented for each MPLNET data product. Real-time Level 1 data products (next-day) include daily lidar signal images from the surface to ~20 km, and Level 1.5 aerosol extinction profiles at times coincident with AERONET observations. Quality-assured Level 2 aerosol extinction profiles are generated after screening the Level 1.5 results and removing bad data. Level 3 products include continuous day/night aerosol extinction profiles, and are produced using Level 2 calibration data. Rigorous uncertainty calculations are presented for all data products. Analysis of MPLNET data shows that the MPL and our analysis routines are capable of successfully retrieving aerosol profiles, with the rigorous accounting of uncertainty necessary for accurate interpretation of the results.
A comparative study of surface waves inversion techniques at strong motion recording sites in Greece
Panagiotis C. Pelekis,; Savvaidis, Alexandros; Kayen, Robert E.; Vlachakis, Vasileios S.; Athanasopoulos, George A.
2015-01-01
The surface wave method was used to estimate Vs-versus-depth profiles at 10 strong motion stations in Greece. The dispersion data were obtained by the SASW method, utilizing a pair of electromechanical harmonic-wave sources (shakers) or a random source (drop weight). In this study, three inversion techniques were used: a) a recently proposed Simplified Inversion Method (SIM), b) an inversion technique based on a neighborhood algorithm (NA), which allows the incorporation of a priori information regarding the subsurface structure parameters, and c) Occam's inversion algorithm. For each site a constant value of Poisson's ratio was assumed (ν = 0.4), since the objective of the current study is the comparison of the three inversion schemes regardless of the uncertainties resulting from the lack of geotechnical data. A penalty function was introduced to quantify the deviations of the derived Vs profiles. The Vs models are compared in terms of Vs(z), Vs30 and EC8 soil category, in order to show the insignificance of the existing variations. The comparison showed that the average variation of the SIM profiles is 9% and 4.9% relative to the NA and Occam's profiles, respectively, whilst the average difference of the Vs30 values obtained from SIM is 7.4% and 5.0% compared with NA and Occam's, respectively.
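The Vs30 values being compared are the time-averaged shear-wave velocity over the top 30 m of a layered profile, Vs30 = 30 / Σ(h_i / Vs_i). A minimal computation, using a hypothetical three-layer profile and the EC8 ground-type bands:

```python
def vs30(thicknesses_m, vs_values_ms):
    # Time-averaged shear-wave velocity of the top 30 m:
    # Vs30 = 30 / sum(h_i / Vs_i), truncating the profile at 30 m depth.
    depth, travel_time = 0.0, 0.0
    for h, vs in zip(thicknesses_m, vs_values_ms):
        use = min(h, 30.0 - depth)
        travel_time += use / vs
        depth += use
        if depth >= 30.0:
            break
    return 30.0 / travel_time

def ec8_class(v):
    # EC8 ground types by Vs30: A > 800, B 360-800, C 180-360, D < 180 (m/s)
    if v > 800.0:
        return "A"
    if v >= 360.0:
        return "B"
    if v >= 180.0:
        return "C"
    return "D"

# Hypothetical three-layer profile: (thickness in m, Vs in m/s)
layers = [(5.0, 200.0), (10.0, 350.0), (20.0, 600.0)]
v = vs30([h for h, _ in layers], [s for _, s in layers])
```

Because Vs30 is a harmonic average, modest differences between the SIM, NA, and Occam's Vs(z) profiles can still map to the same EC8 category, which is the insignificance the paper highlights.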
Chen, Peng; Li, Jinyan
2010-05-17
Prediction of long-range inter-residue contacts is an important topic in bioinformatics research. It is helpful for determining protein structures, understanding protein folding, and therefore advancing the annotation of protein functions. In this paper, we propose a novel ensemble of genetic algorithm classifiers (GaCs) to address the long-range contact prediction problem. Our method is based on a key idea called sequence profile centers (SPCs). Each SPC is the average of the sequence profiles of residue pairs belonging to the same contact or non-contact class. The GaCs train on multiple, different pairings of long-range contact data (positive data) and long-range non-contact data (negative data). The negative data sets, having roughly the same sizes as the positive ones, are constructed by random sampling from the original imbalanced negative data. As a result, about 21.5% of long-range contacts are correctly predicted. We also found that the ensemble of GaCs improves accuracy by around 5.6% over a single GaC. Classifiers using sequence profile centers may advance long-range contact prediction, and in line with this approach, key structural features in proteins could be determined with high efficiency and accuracy.
Retrieval of NO2 stratospheric profiles from ground-based zenith-sky uv-visible measurements at 60°N
NASA Astrophysics Data System (ADS)
Hendrick, F.; van Roozendael, M.; Lambert, J.-C.; Fayt, C.; Hermans, C.; de Mazière, M.
2003-04-01
Nitrogen dioxide (NO2) plays an important role in controlling ozone abundances in the stratosphere, either directly through the NOx (NO + NO2) catalytic cycle or indirectly by reaction with the ClO radical to form the reservoir species ClONO2. In this presentation, NO2 stratospheric profiles are retrieved from ground-based UV-visible NO2 slant column abundances measured since 1998 at the complementary NDSC station of Harestua (Norway, 60°N). The retrieval algorithm is based on the Rodgers optimal estimation inversion method and a forward model consisting of the IASB-BIRA stacked-box photochemical model PSCBOX coupled to the radiative transfer package UVspec/DISORT. This algorithm has been applied to a set of about 50 sunrises and sunsets for which spatially and temporally coincident NO2 measurements made by the HALOE (Halogen Occultation Experiment) instrument on board the Upper Atmosphere Research Satellite (UARS) are available. The consistency between retrieved and HALOE profiles is discussed in terms of the different seasonal conditions investigated: spring with and without chlorine activation, summer, and fall.
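For a linearized forward model, the Rodgers optimal estimation update has a compact closed form. Everything below (dimensions, covariances, the random stand-in for the forward-model Jacobian K) is a synthetic illustration of the update formula, not the IASB-BIRA setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Linearized forward model y = K x (a stand-in for the photochemical +
# radiative-transfer model; dimensions are illustrative)
n_state, n_meas = 6, 4
K = rng.normal(size=(n_meas, n_state))
x_true = rng.normal(size=n_state)
Sa = np.eye(n_state)               # a priori covariance
Se = 0.01 * np.eye(n_meas)         # measurement-noise covariance
x_a = np.zeros(n_state)            # a priori profile
y = K @ x_true + rng.normal(scale=0.1, size=n_meas)

# Rodgers optimal estimation (linear case):
#   x_hat = x_a + Sa K^T (K Sa K^T + Se)^(-1) (y - K x_a)
G = Sa @ K.T @ np.linalg.inv(K @ Sa @ K.T + Se)   # gain matrix
x_hat = x_a + G @ (y - K @ x_a)
A = G @ K                                          # averaging kernel
```

The trace of the averaging kernel A gives the degrees of freedom for signal, a standard diagnostic of how much of the retrieved profile comes from the measurement rather than the a priori.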
NASA Astrophysics Data System (ADS)
Khamatnurova, M. Yu.; Gribanov, K. G.; Zakharov, V. I.; Rokotyan, N. V.; Imasu, R.
2017-11-01
An algorithm for retrieving the atmospheric methane distribution from IASI spectra has been developed. This paper studies the feasibility of the Levenberg-Marquardt method for retrieving the atmospheric methane total column amount from spectra measured by IASI/METOP, modified for the case where a priori covariance matrices for methane vertical profiles are lacking. The method and algorithm were implemented in a software package together with iterative estimation of a posteriori covariance matrices and averaging kernels for each individual retrieval, which allows retrieval quality selection using the properties of both types of matrices. Methane (XCH4) retrieval by the Levenberg-Marquardt method from IASI/METOP spectra is presented. NCEP/NCAR reanalysis data provided by ESRL (NOAA, Boulder, USA) were taken as the initial guess. Surface temperature and vertical profiles of air temperature and humidity are retrieved before the methane vertical profile retrieval. Data retrieved from ground-based measurements at the Ural Atmospheric Station and the L2/IASI standard product were used to verify the method and the results of methane retrieval from IASI/METOP spectra.
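The Levenberg-Marquardt iteration itself can be sketched in a few lines: damped Gauss-Newton steps, with the damping raised when a step fails and lowered when it succeeds. The toy exponential "retrieval" below stands in for an IASI radiance model; the damping schedule and model are illustrative.

```python
import numpy as np

def levenberg_marquardt(f, jac, x0, y, lam=1e-2, iters=50):
    # Minimize ||y - f(x)||^2 with damped Gauss-Newton steps.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = y - f(x)
        J = jac(x)                            # Jacobian of the forward model
        H = J.T @ J + lam * np.eye(len(x))    # damped normal equations
        step = np.linalg.solve(H, J.T @ r)
        if np.sum((y - f(x + step)) ** 2) < np.sum(r ** 2):
            x, lam = x + step, lam * 0.5      # accept: toward Gauss-Newton
        else:
            lam *= 2.0                        # reject: toward gradient descent
    return x

# Toy "retrieval": recover (a, b) in a*exp(-b*t) from sampled data
t = np.linspace(0, 4, 40)
y = 2.0 * np.exp(-0.7 * t)

f = lambda p: p[0] * np.exp(-p[1] * t)
jac = lambda p: np.column_stack([np.exp(-p[1] * t),
                                 -p[0] * t * np.exp(-p[1] * t)])
p = levenberg_marquardt(f, jac, [1.0, 1.0], y)
```

In the paper's setting, the lack of a priori covariance matrices is what motivates this damped least-squares form rather than a full optimal estimation update.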
NASA Astrophysics Data System (ADS)
Xu, J.; Heue, K.-P.; Coldewey-Egbers, M.; Romahn, F.; Doicu, A.; Loyola, D.
2018-04-01
Characterizing vertical distributions of ozone from nadir-viewing satellite measurements is known to be challenging, particularly for ozone information in the troposphere. A novel retrieval algorithm called the Full-Physics Inverse Learning Machine (FP-ILM) has been developed at DLR to estimate ozone profile shapes based on machine learning techniques. In contrast to traditional inversion methods, the FP-ILM algorithm formulates the profile shape retrieval as a classification problem. Its implementation comprises a training phase to derive an inverse function from synthetic measurements, and an operational phase in which the inverse function is applied to real measurements. This paper extends the FP-ILM retrieval to derive tropospheric ozone columns from GOME-2 measurements. Results for total and tropical tropospheric ozone columns are compared with those from the official GOME Data Processing (GDP) product and the convective-cloud-differential (CCD) method, respectively. Furthermore, the FP-ILM framework will be used for the near-real-time processing of the new European Sentinel sensors, with their unprecedented spectral and spatial resolution and the corresponding large increase in data volume.
NASA Astrophysics Data System (ADS)
Venkateswara Rao, B.; Kumar, G. V. Nagesh; Chowdary, D. Deepak; Bharathi, M. Aruna; Patra, Stutee
2017-07-01
This paper presents a new metaheuristic algorithm, the Cuckoo Search Algorithm (CSA), for solving the optimal power flow (OPF) problem with minimization of real power generation cost. The CSA is found to be highly efficient for solving single-objective optimal power flow problems. Its performance is tested on the IEEE 57-bus test system with real power generation cost minimization as the objective function. The Static VAR Compensator (SVC) is one of the best shunt-connected devices in the Flexible Alternating Current Transmission System (FACTS) family; it is capable of controlling bus voltage magnitudes by injecting reactive power into the system. In this paper, an SVC is integrated into the CSA-based optimal power flow to optimize the real power generation cost and to improve the voltage profile of the system. The CSA gives better results than a genetic algorithm (GA) both with and without the SVC.
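The core of cuckoo search (Lévy-flight moves around the current best nest plus abandonment of the worst nests) can be sketched generically. The sphere function below merely stands in for the power-flow generation-cost objective, and the step scale, abandonment fraction, and bounds are illustrative assumptions.

```python
import math, random

random.seed(3)

def levy_step(beta=1.5):
    # Mantegna's algorithm for Levy-distributed step lengths
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(cost, dim, n_nests=15, iters=200, pa=0.25, lb=-5.0, ub=5.0):
    clip = lambda x: [min(ub, max(lb, v)) for v in x]
    nests = [clip([random.uniform(lb, ub) for _ in range(dim)])
             for _ in range(n_nests)]
    best = min(nests, key=cost)
    for _ in range(iters):
        for i, nest in enumerate(nests):
            # Levy flight biased toward the current best nest
            cand = clip([x + 0.01 * levy_step() * (x - b)
                         for x, b in zip(nest, best)])
            if cost(cand) < cost(nest):
                nests[i] = cand
        # Abandon a fraction pa of the worst nests (host discovers the egg)
        nests.sort(key=cost)
        for i in range(int(pa * n_nests)):
            nests[-(i + 1)] = clip([random.uniform(lb, ub) for _ in range(dim)])
        best = min(nests + [best], key=cost)
    return best

# Stand-in for the fuel-cost objective: a smooth convex bowl
sphere = lambda x: sum(v * v for v in x)
best = cuckoo_search(sphere, dim=4)
```

In the OPF application, `cost` would run a load-flow solution and return the generation cost, with the decision vector encoding generator setpoints (and the SVC reactive-power injection when it is included).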
Automatic macroscopic characterization of diesel sprays by means of a new image processing algorithm
NASA Astrophysics Data System (ADS)
Rubio-Gómez, Guillermo; Martínez-Martínez, S.; Rua-Mojica, Luis F.; Gómez-Gordo, Pablo; de la Garza, Oscar A.
2018-05-01
A novel algorithm is proposed for the automatic segmentation of diesel spray images and the calculation of their macroscopic parameters. The algorithm automatically detects each spray present in an image, and therefore it is able to work with diesel injectors with a different number of nozzle holes without any modification. The main characteristic of the algorithm is that it splits each spray into three different regions and then segments each one with an individually calculated binarization threshold. Each threshold level is calculated from the analysis of a representative luminosity profile of each region. This approach makes it robust to irregular light distribution along a single spray and between different sprays of an image. Once the sprays are segmented, the macroscopic parameters of each one are calculated. The algorithm is tested with two sets of diesel spray images taken under normal and irregular illumination setups.
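The region-wise thresholding idea can be sketched as follows. Splitting the spray into three axial regions follows the paper, but the threshold rule (midpoint of each region's mean-luminosity profile range) and the synthetic test image are illustrative simplifications of the authors' profile analysis.

```python
import numpy as np

def segment_spray(img, n_regions=3, k=0.5):
    # Split the image along the spray axis, compute a representative
    # luminosity profile per region, and binarize each region with its
    # own threshold, making the result robust to axial light falloff.
    masks = []
    for region in np.array_split(img, n_regions, axis=1):
        profile = region.mean(axis=0)     # mean luminosity per column
        thr = profile.min() + k * (profile.max() - profile.min())
        masks.append(region >= thr)
    return np.concatenate(masks, axis=1)

# Synthetic spray: bright core on a dark background, dimming downstream
img = np.zeros((20, 30))
img[8:12, :] = np.linspace(1.0, 0.4, 30)
mask = segment_spray(img)
```

Because each region gets its own threshold, the dim downstream tip is segmented as reliably as the bright near-nozzle core, which a single global threshold would miss.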
Assessment of Cardiovascular Disease Risk in South Asian Populations
Hussain, S. Monira; Oldenburg, Brian; Zoungas, Sophia; Tonkin, Andrew M.
2013-01-01
Although South Asian populations bear one of the highest cardiovascular disease (CVD) burdens in the world, their patterns of individual CVD risk factors have not been fully studied. None of the available algorithms/scores for assessing CVD risk originated in these populations. To explore the relevance of CVD risk scores for these populations, a literature search and qualitative synthesis of the available evidence were performed. South Asians usually have higher levels of both “classical” and nontraditional CVD risk factors and experience them at a younger age. There are marked variations in risk profiles between South Asian populations. More than 100 risk algorithms are currently available, with varying risk factors. However, no available algorithm includes all of the important risk factors underlying CVD in these populations. The future challenge is either to appropriately calibrate current risk algorithms or, ideally, to develop new risk algorithms that include variables providing an accurate estimate of CVD risk. PMID:24163770
Mueller, David S.
2016-06-21
The software program QRev applies common and consistent computational algorithms, combined with automated filtering and quality assessment of the data, to improve the quality and efficiency of streamflow measurements and to help ensure that U.S. Geological Survey streamflow measurements are consistent, accurate, and independent of the manufacturer of the instrument used to make the measurement. Software from different manufacturers uses different algorithms for various aspects of data processing and discharge computation. The algorithms QRev uses to filter data, interpolate data, and compute discharge are documented and compared with those used in the manufacturers’ software. QRev applies consistent algorithms and creates a data structure that is independent of the data source, and it saves an extensible markup language (XML) file that can be imported into databases or electronic field-notes software. This report is the technical manual for version 2.8 of QRev.
NASA Technical Reports Server (NTRS)
Nyangweso, Emmanuel; Bole, Brian
2014-01-01
Successful prediction and management of battery life using prognostic algorithms through ground and flight tests is important for performance evaluation of electrical systems. This paper details the design of test beds suitable for replicating loading profiles that would be encountered in deployed electrical systems. The test bed data will be used to develop and validate prognostic algorithms for predicting battery discharge time and battery failure time. Online battery prognostic algorithms will enable health management strategies. The platform used for algorithm demonstration is the EDGE 540T electric unmanned aerial vehicle (UAV). The fully designed test beds developed and detailed in this paper can be used to conduct battery life tests by controlling current and recording voltage and temperature to develop a model that makes a prediction of end-of-charge and end-of-life of the system based on rapid state of health (SOH) assessment.
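As a toy illustration of the kind of end-of-discharge prediction such test-bed data supports, one can fit a trend to recent voltage samples under load and extrapolate to a cutoff voltage. This sketch is not the prognostic algorithm used on the EDGE 540T; the cutoff value and the linear-decay assumption are purely illustrative.

```python
import numpy as np

CUTOFF_V = 3.0  # hypothetical end-of-discharge voltage threshold

def predict_end_of_discharge(t, v, cutoff=CUTOFF_V):
    """Fit a line to recent voltage samples and extrapolate the time
    at which the pack reaches the cutoff voltage."""
    slope, intercept = np.polyfit(t, v, 1)
    if slope >= 0:
        return np.inf  # no discharge trend detected
    return (cutoff - intercept) / slope

# synthetic constant-current discharge: 4.1 V dropping 0.01 V per minute
t = np.arange(0.0, 30.0, 1.0)          # minutes
v = 4.1 - 0.01 * t
eod = predict_end_of_discharge(t, v)   # linear model predicts 110 minutes
```

A fielded prognostic algorithm would replace the linear fit with a battery model updated from rapid state-of-health assessment, but the interface (voltage/temperature history in, remaining-time estimate out) is the same.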
Sharma, Maneesh; Lee, Chee; Kantorovich, Svetlana; Tedtaotao, Maria; Smith, Gregory A; Brenton, Ashley
2017-01-01
Opioid abuse in chronic pain patients is a major public health issue. Primary care providers are frequently the first to prescribe opioids to patients suffering from pain, yet they do not always have the time or resources to adequately evaluate the risk of opioid use disorder (OUD). This study seeks to determine how well aberrant opioid-related behavior can be predicted using a comprehensive scoring algorithm ("profile") that incorporates phenotypic and, more uniquely, genotypic risk factors. In a validation study with 452 participants diagnosed with OUD and 1237 controls, the algorithm successfully categorized patients at high and moderate risk of OUD with 91.8% sensitivity. Regardless of changes in the prevalence of OUD, the sensitivity of the algorithm remained >90%. The algorithm correctly stratifies primary care patients into low-, moderate-, and high-risk categories, appropriately identifying patients who need additional guidance, monitoring, or treatment changes.
An acceleration framework for synthetic aperture radar algorithms
NASA Astrophysics Data System (ADS)
Kim, Youngsoo; Gloster, Clay S.; Alexander, Winser E.
2017-04-01
Algorithms for radar signal processing, such as Synthetic Aperture Radar (SAR) are computationally intensive and require considerable execution time on a general purpose processor. Reconfigurable logic can be used to off-load the primary computational kernel onto a custom computing machine in order to reduce execution time by an order of magnitude as compared to kernel execution on a general purpose processor. Specifically, Field Programmable Gate Arrays (FPGAs) can be used to accelerate these kernels using hardware-based custom logic implementations. In this paper, we demonstrate a framework for algorithm acceleration. We used SAR as a case study to illustrate the potential for algorithm acceleration offered by FPGAs. Initially, we profiled the SAR algorithm and implemented a homomorphic filter using a hardware implementation of the natural logarithm. Experimental results show a linear speedup by adding reasonably small processing elements in Field Programmable Gate Array (FPGA) as opposed to using a software implementation running on a typical general purpose processor.
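The homomorphic filter mentioned above combines a logarithm, a frequency-domain filter, and an exponential; the paper's contribution was implementing the natural-logarithm stage in FPGA logic. What follows is only a minimal NumPy sketch of the log/filter/exp structure on a synthetic image, with assumed `sigma` and `gain` values, not the hardware implementation.

```python
import numpy as np

def homomorphic_filter(img, sigma=10.0, gain=1.5):
    """Suppress multiplicative illumination: take the log, damp low
    spatial frequencies in the Fourier domain, boost the rest, and
    exponentiate back."""
    log_img = np.log1p(img.astype(float))
    F = np.fft.fft2(log_img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    lowpass = np.exp(-(fx ** 2 + fy ** 2) * (2 * np.pi * sigma) ** 2 / 2)
    H = (1 - lowpass) * gain + lowpass * 0.5   # boost high, damp low freqs
    out = np.fft.ifft2(F * H).real
    return np.expm1(out)

# slowly varying illumination gradient, as a stand-in for SAR shading
img = np.outer(np.linspace(1, 4, 64), np.ones(64)) * 50
flat = homomorphic_filter(img)
```

The log stage is what turns the multiplicative illumination term into an additive one that linear filtering can separate, which is why it sits on the critical path worth accelerating.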
NASA Technical Reports Server (NTRS)
Xiang, Xuwu; Smith, Eric A.; Tripoli, Gregory J.
1992-01-01
A hybrid statistical-physical retrieval scheme is explored which combines a statistical approach with an approach based on the development of cloud-radiation models designed to simulate precipitating atmospheres. The algorithm employs the detailed microphysical information from a cloud model as input to a radiative transfer model which generates a cloud-radiation model database. Statistical procedures are then invoked to objectively generate an initial guess composite profile data set from the database. The retrieval algorithm has been tested for a tropical typhoon case using Special Sensor Microwave/Imager (SSM/I) data and has shown satisfactory results.
pyNBS: A Python implementation for network-based stratification of tumor mutations.
Huang, Justin K; Jia, Tongqiu; Carlin, Daniel E; Ideker, Trey
2018-03-28
We present pyNBS: a modularized Python 2.7 implementation of the network-based stratification (NBS) algorithm for stratifying tumor somatic mutation profiles into molecularly and clinically relevant subtypes. In addition to release of the software, we benchmark its key parameters and provide a compact cancer reference network that increases the significance of tumor stratification using the NBS algorithm. The structure of the code exposes key steps of the algorithm to foster further collaborative development. The package, along with examples and data, can be downloaded and installed from the URL http://www.github.com/huangger/pyNBS/. jkh013@ucsd.edu.
Mukunthan, B; Nagaveni, N
2014-01-01
In genetic engineering, the conventional techniques and algorithms that forensic scientists use to identify individuals from their DNA profiles involve complex computational steps and mathematical formulae, and locating a mutation within a genomic sequence remains an exacting laboratory task. The novel approach presented here can address problems that lack an algorithmic solution, or for which the available solutions are too complex to apply. Blending bioinformatics with neural network techniques yields an efficient DNA pattern analysis algorithm with high prediction accuracy.
NASA Astrophysics Data System (ADS)
Seo, Seong-Heon; Lee, K. D.
2012-10-01
A frequency modulation reflectometer has been developed to measure the density profile of the KSTAR tokamak. It has two channels operating in X-mode in the frequency ranges of the Q band (33-50 GHz) and the V band (50-75 GHz). The full band is swept in 20 μs, and the mixer output is directly digitized at a sampling rate of 100 MSamples/s. A new phase detection algorithm has been developed to analyze signals that are both amplitude and frequency modulated. The algorithm is benchmarked against a synthesized amplitude-modulation-frequency-modulation signal and then applied to the data analysis of the KSTAR reflectometer.
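The abstract does not spell out the phase detection algorithm itself; a common baseline for separating the amplitude and frequency modulation of such a signal is the FFT-based analytic signal (discrete Hilbert transform), sketched here on an assumed synthetic AM-FM test signal.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (discrete Hilbert transform)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:(N + 1) // 2] = 2.0      # double positive frequencies
    if N % 2 == 0:
        h[N // 2] = 1.0          # keep the Nyquist bin unscaled
    return np.fft.ifft(X * h)

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
# AM (0.3 depth, 2 Hz) on a linear chirp from 50 to 110 Hz
sig = (1 + 0.3 * np.cos(2 * np.pi * 2 * t)) * np.cos(2 * np.pi * (50 * t + 30 * t ** 2))
z = analytic_signal(sig)
envelope = np.abs(z)                            # amplitude modulation
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # frequency modulation
```

The envelope and instantaneous frequency are recovered simultaneously, which is the property a combined AM-FM analysis needs; edge samples are unreliable because the FFT assumes periodicity.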
Dynamic association rules for gene expression data analysis.
Chen, Shu-Chuan; Tsai, Tsung-Hsien; Chung, Cheng-Han; Li, Wen-Hsiung
2015-10-14
The purpose of gene expression analysis is to look for associations between the regulation of gene expression levels and phenotypic variations. Such associations, based on gene expression profiles, have been used to determine whether the induction/repression of genes corresponds to phenotypic variations, including cell regulation, clinical diagnoses, and drug development. Statistical analyses of microarray data have been developed to resolve the gene selection issue; however, these methods do not inform us of causality between genes and phenotypes. In this paper, we propose the dynamic association rule algorithm (DAR algorithm), which helps users efficiently select a subset of significant genes for subsequent analysis. The DAR algorithm is based on association rules from market basket analysis in marketing. We first propose a statistical way, based on constructing a one-sided confidence interval and hypothesis testing, to determine whether an association rule is meaningful; based on this statistical method, we then developed the DAR algorithm for gene expression data analysis. The method was applied to four microarray datasets and one Next Generation Sequencing (NGS) dataset: the Mice Apo A1 dataset, the whole-genome expression dataset of mouse embryonic stem cells, expression profiling of the bone marrow of leukemia patients, the Microarray Quality Control (MAQC) dataset, and the RNA-seq dataset of a mouse genomic imprinting study. A comparison of the proposed method with the t-test was conducted on the expression profiling of the bone marrow of leukemia patients. The statistical approach, based on the concept of a confidence interval, determines the minimum support and minimum confidence for mining association relationships among items; with these thresholds, one can find significant rules in a single step.
Four gene expression datasets showed that the proposed DAR algorithm not only was able to identify a set of differentially expressed genes that largely agreed with that of other methods, but also provided an efficient and accurate way to find influential genes of a disease. In the paper, the well-established association rule mining technique from marketing has been successfully modified to determine the minimum support and minimum confidence based on the concept of confidence interval and hypothesis testing. It can be applied to gene expression data to mine significant association rules between gene regulation and phenotype. The proposed DAR algorithm provides an efficient way to find influential genes that underlie the phenotypic variance.
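The core quantities in this style of rule mining are support, confidence, and a one-sided lower confidence bound on the confidence estimate. The sketch below uses a plain normal-approximation bound as a simple stand-in for the paper's interval; the transactions and the "geneA_up implies disease" rule are invented for illustration.

```python
import math

def rule_stats(rows, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent over
    boolean transaction rows (dicts of item -> present)."""
    n = len(rows)
    n_a = sum(1 for r in rows if r[antecedent])
    n_ab = sum(1 for r in rows if r[antecedent] and r[consequent])
    support = n_ab / n
    confidence = n_ab / n_a if n_a else 0.0
    return support, confidence, n_a

def lower_bound(p_hat, n, z=1.645):
    """One-sided lower normal-approximation confidence bound for a
    proportion (90% one-sided level), standing in for the paper's interval."""
    return p_hat - z * math.sqrt(p_hat * (1 - p_hat) / n)

rows = ([{"geneA_up": True, "disease": True}] * 40
        + [{"geneA_up": True, "disease": False}] * 10
        + [{"geneA_up": False, "disease": False}] * 50)
support, confidence, n_a = rule_stats(rows, "geneA_up", "disease")
lb = lower_bound(confidence, n_a)   # keep the rule if lb clears a chosen minimum
```

Testing the lower bound against the minimum-confidence threshold is what replaces ad hoc threshold picking with a hypothesis test: a rule survives only when its confidence is significantly above the minimum, not merely above it by chance.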
Evaluation of six TPS algorithms in computing entrance and exit doses
Metwaly, Mohamed; Glegg, Martin; Baggarley, Shaun P.; Elliott, Alex
2014-01-01
Entrance and exit doses are commonly measured in in vivo dosimetry for comparison with expected values, usually generated by the treatment planning system (TPS), to verify accuracy of treatment delivery. This report aims to evaluate the accuracy of six TPS algorithms in computing entrance and exit doses for a 6 MV beam. The algorithms tested were: pencil beam convolution (Eclipse PBC), analytical anisotropic algorithm (Eclipse AAA), AcurosXB (Eclipse AXB), FFT convolution (XiO Convolution), multigrid superposition (XiO Superposition), and Monte Carlo photon (Monaco MC). Measurements with ionization chamber (IC) and diode detector in water phantoms were used as a reference. Comparisons were done in terms of central axis point dose, 1D relative profiles, and 2D absolute gamma analysis. Entrance doses computed by all TPS algorithms agreed to within 2% of the measured values. Exit doses computed by XiO Convolution, XiO Superposition, Eclipse AXB, and Monaco MC agreed with the IC measured doses to within 2%‐3%. Meanwhile, Eclipse PBC and Eclipse AAA computed exit doses were higher than the IC measured doses by up to 5.3% and 4.8%, respectively. Both algorithms assume that full backscatter exists even at the exit level, leading to an overestimation of exit doses. Despite good agreements at the central axis for Eclipse AXB and Monaco MC, 1D relative comparisons showed profiles mismatched at depths beyond 11.5 cm. Overall, the 2D absolute gamma (3%/3 mm) pass rates were better for Monaco MC, while Eclipse AXB failed mostly at the outer 20% of the field area. The findings of this study serve as a useful baseline for the implementation of entrance and exit in vivo dosimetry in clinical departments utilizing any of these six common TPS algorithms for reference comparison. PACS numbers: 87.55.‐x, 87.55.D‐, 87.55.N‐, 87.53.Bn PMID:24892349
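The 2D absolute gamma comparison used above combines a dose-difference criterion with a distance-to-agreement criterion. A minimal 1D version of the global gamma index (3%/3 mm by default; the dose profiles are synthetic, not measured data) can be written as:

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, x, dose_tol=0.03, dist_tol=3.0):
    """Global 1D gamma index: for each reference point, the minimum over
    evaluated points of sqrt((dDose/dose_tol)^2 + (dx/dist_tol)^2)."""
    d_norm = dose_tol * ref_dose.max()          # global criterion: 3% of max dose
    gamma = np.empty_like(ref_dose)
    for i, (xi, di) in enumerate(zip(x, ref_dose)):
        dd = (eval_dose - di) / d_norm
        dx = (x - xi) / dist_tol
        gamma[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return gamma

x = np.arange(0.0, 100.0, 1.0)                  # position, mm
ref = np.exp(-((x - 50) ** 2) / 400)            # reference profile
ev = 1.02 * np.exp(-((x - 51) ** 2) / 400)      # 2% scaled, 1 mm shifted
pass_rate = (gamma_1d(ref, ev, x) <= 1.0).mean()
```

A point passes when gamma is at most 1; a 2% dose error combined with a 1 mm shift sits comfortably inside the 3%/3 mm ellipse, so this synthetic pair passes everywhere.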
Inference from clustering with application to gene-expression microarrays.
Dougherty, Edward R; Barrera, Junior; Brun, Marcel; Kim, Seungchan; Cesar, Roberto M; Chen, Yidong; Bittner, Michael; Trent, Jeffrey M
2002-01-01
There are many algorithms to cluster sample data points based on nearness or a similarity measure. Often the implication is that points in different clusters come from different underlying classes, whereas those in the same cluster come from the same class. Stochastically, the underlying classes represent different random processes. The inference is that clusters represent a partition of the sample points according to which process they belong. This paper discusses a model-based clustering toolbox that evaluates cluster accuracy. Each random process is modeled as its mean plus independent noise, sample points are generated, the points are clustered, and the clustering error is the number of points clustered incorrectly according to the generating random processes. Various clustering algorithms are evaluated based on process variance and the key issue of the rate at which algorithmic performance improves with increasing numbers of experimental replications. The model means can be selected by hand to test the separability of expected types of biological expression patterns. Alternatively, the model can be seeded by real data to test the expected precision of that output or the extent of improvement in precision that replication could provide. In the latter case, a clustering algorithm is used to form clusters, and the model is seeded with the means and variances of these clusters. Other algorithms are then tested relative to the seeding algorithm. Results are averaged over various seeds. Output includes error tables and graphs, confusion matrices, principal-component plots, and validation measures. Five algorithms are studied in detail: K-means, fuzzy C-means, self-organizing maps, hierarchical Euclidean-distance-based and correlation-based clustering. The toolbox is applied to gene-expression clustering based on cDNA microarrays using real data. Expression profile graphics are generated and error analysis is displayed within the context of these profile graphics. 
A large amount of generated output is available over the web.
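The toolbox's evaluation loop (generate points from mean-plus-noise processes, cluster them, count misassignments against the generating process) can be mimicked with a minimal Lloyd's k-means. The means, noise level, and farthest-point initialization below are assumptions for the sketch, not the toolbox's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans2(pts, iters=50):
    """Minimal Lloyd's algorithm with k=2 and farthest-point initialization."""
    c0 = pts[0]
    c1 = pts[np.argmax(((pts - c0) ** 2).sum(axis=1))]
    centers = np.vstack([c0, c1]).astype(float)
    for _ in range(iters):
        labels = ((pts[:, None] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        for k in (0, 1):
            if (labels == k).any():
                centers[k] = pts[labels == k].mean(axis=0)
    return labels

# two "expression" processes: a mean profile plus independent noise
n, d = 100, 10
pts = np.vstack([np.zeros(d) + rng.normal(0, 1, (n, d)),
                 np.full(d, 3.0) + rng.normal(0, 1, (n, d))])
truth = np.repeat([0, 1], n)
labels = kmeans2(pts)
# clustering error: misassigned points under the best label matching
err = int(min((labels != truth).sum(), (labels != 1 - truth).sum()))
```

Sweeping the process variance upward and repeating this loop traces out exactly the error-versus-separability curves the toolbox reports for each algorithm.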
NASA Astrophysics Data System (ADS)
Ekinci, Yunus Levent; Özyalın, Şenol; Sındırgı, Petek; Balkaya, Çağlayan; Göktürkler, Gökhan
2017-12-01
In this work, analytic signal amplitude (ASA) inversion of total field magnetic anomalies has been achieved by differential evolution (DE), a population-based evolutionary metaheuristic algorithm. Using an elitist strategy, the applicability and effectiveness of the proposed inversion algorithm have been evaluated on anomalies due to both hypothetical model bodies and real isolated geological structures. Parameter tuning studies, relying mainly on choosing the optimum control parameters of the algorithm, have also been performed to enhance the performance of the proposed metaheuristic. Since ASAs of magnetic anomalies are independent of both the ambient field direction and the direction of magnetization of the causative sources in the two-dimensional (2D) case, inversions of synthetic noise-free and noisy single-model anomalies have produced satisfactory solutions, showing the practical applicability of the algorithm. Moreover, hypothetical studies using multiple model bodies have clearly shown that the DE algorithm is able to cope with complicated anomalies and with interference from neighbouring sources. The proposed algorithm was then used to invert small-scale (120 m) and large-scale (40 km) magnetic profile anomalies of an iron deposit (Kesikköprü-Bala, Turkey) and a deep-seated magnetized structure (Sea of Marmara, Turkey), respectively, to determine the depths, geometries and exact origins of the source bodies. The inversion studies yielded geologically reasonable solutions that are also in good accordance with the results of normalized full gradient and Euler deconvolution techniques. Thus, we propose the use of DE not only for the amplitude inversion of 2D analytic signals of magnetic profile anomalies having induced or remanent magnetization effects, but also for low-dimensional data inversions in geophysics.
A part of this paper was presented as an abstract at the 2nd International Conference on Civil and Environmental Engineering, 8-10 May 2017, Cappadocia-Nevşehir (Turkey).
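The shape of such an inversion can be sketched with a bare-bones DE/rand/1/bin loop fitting a simple ASA forward model. Both pieces are assumptions for illustration: the contact-style amplitude curve K/sqrt((x-x0)^2+z^2) is a textbook 2D form, not the paper's forward model, and the minimal DE below is not the paper's elitist variant (though greedy replacement never discards the best member).

```python
import numpy as np

rng = np.random.default_rng(1)

def asa_model(x, K, x0, z):
    # illustrative analytic signal amplitude over a simple 2D contact-like source
    return K / np.sqrt((x - x0) ** 2 + z ** 2)

def de_minimize(obj, bounds, npop=30, F=0.7, CR=0.9, gens=200):
    """Minimal DE/rand/1/bin with greedy (elitist) replacement."""
    lo, hi = np.array(bounds).T
    pop = lo + rng.random((npop, len(bounds))) * (hi - lo)
    cost = np.array([obj(p) for p in pop])
    for _ in range(gens):
        for i in range(npop):
            a, b, c = pop[rng.choice(npop, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)       # differential mutation
            cross = rng.random(len(bounds)) < CR            # binomial crossover
            trial = np.where(cross, mutant, pop[i])
            tc = obj(trial)
            if tc <= cost[i]:                               # keep only improvements
                pop[i], cost[i] = trial, tc
    return pop[cost.argmin()], cost.min()

x = np.linspace(-100, 100, 201)                 # profile coordinate, m
true = asa_model(x, 500.0, 10.0, 20.0)          # synthetic anomaly: x0=10 m, depth 20 m
obj = lambda p: np.sum((asa_model(x, *p) - true) ** 2)
best, best_cost = de_minimize(obj, [(1, 2000), (-50, 50), (1, 50)])
```

The same loop applies unchanged to noisy data or multiple sources; only the forward model and the bounds change, which is what makes DE attractive for low-dimensional geophysical inversions.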
Report of the International Ozone Trends Panel 1988, volume 1
NASA Technical Reports Server (NTRS)
1989-01-01
Chapters on the following topics are presented: spacecraft instrument calibration and stability; information content of ozone retrieval algorithms; trends in total column ozone measurements; and trends in ozone profile measurement.
Big data integration for regional hydrostratigraphic mapping
NASA Astrophysics Data System (ADS)
Friedel, M. J.
2013-12-01
Numerical models provide a way to evaluate groundwater systems, but determining the hydrostratigraphic units (HSUs) used in devising these models remains subjective, nonunique, and uncertain. A novel geophysical-hydrogeologic data integration scheme is proposed to constrain the estimation of continuous HSUs. First, machine-learning and multivariate statistical techniques are used to simultaneously integrate borehole hydrogeologic (lithology, hydraulic conductivity, aqueous field parameters, dissolved constituents) and geophysical (gamma, spontaneous potential, and resistivity) measurements. Second, airborne electromagnetic measurements are numerically inverted to obtain subsurface resistivity structure at randomly selected locations. Third, the machine-learning algorithm is trained using the borehole hydrostratigraphic units and inverted airborne resistivity profiles. The trained machine-learning algorithm is then used to estimate HSUs at independent resistivity profile locations. We demonstrate efficacy of the proposed approach to map the hydrostratigraphy of a heterogeneous surficial aquifer in northwestern Nebraska.
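The third step, predicting HSUs at new resistivity-profile locations from borehole-labeled training data, is a standard supervised-classification task. The sketch below uses a k-nearest-neighbour vote in log-resistivity space as a simple stand-in for the paper's trained machine-learning algorithm; the resistivity values and two-class labeling are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic stand-ins: each "borehole" contributes a resistivity profile
# (10 depth samples) labeled with a hydrostratigraphic unit
# (0 = clay-rich, 1 = sand-rich); values are illustrative only
def make_profile(hsu):
    base = 30.0 if hsu == 0 else 120.0          # ohm-m
    return base * np.exp(rng.normal(0, 0.15, 10))

train_y = np.repeat([0, 1], 40)
train_X = np.array([make_profile(h) for h in train_y])

def predict_hsu(profile, k=5):
    """k-NN vote in log-resistivity space: nearest labeled boreholes
    decide the HSU at an unlabeled airborne-profile location."""
    d = np.abs(np.log(train_X) - np.log(profile)).sum(axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()

test_y = np.repeat([0, 1], 10)
test_X = np.array([make_profile(h) for h in test_y])
accuracy = np.mean([predict_hsu(p) == h for p, h in zip(test_X, test_y)])
```

Working in log space reflects the multiplicative character of resistivity variation; the paper's self-organizing-map and multivariate statistical machinery plays the role of this classifier at much larger scale.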
CloudSat 2C-ICE product update with a new Ze parameterization in lidar-only region.
Deng, Min; Mace, Gerald G; Wang, Zhien; Berry, Elizabeth
2015-12-16
The CloudSat 2C-ICE data product is derived from a synergistic ice cloud retrieval algorithm that takes as input a combination of CloudSat radar reflectivity (Ze) and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation lidar attenuated backscatter profiles. The algorithm uses a variational method for retrieving profiles of visible extinction coefficient, ice water content, and ice particle effective radius in ice or mixed-phase clouds. Because of the nature of the measurements, and to maintain consistency in the algorithm numerics, we choose to parameterize (with an appropriately large specification of uncertainty) Ze in the regions of a cirrus layer where only the lidar provides data, and lidar attenuated backscatter where only the radar provides data. To improve the Ze parameterization in the lidar-only region, the relations among Ze, extinction, and temperature have been investigated more thoroughly using long-term Atmospheric Radiation Measurement millimeter cloud radar and Raman lidar measurements. This Ze parameterization provides a first-order estimate of Ze as a function of extinction and temperature in the lidar-only regions of cirrus layers. The effects of the new parameterization have been evaluated for consistency using radiation closure methods, in which the radiative fluxes derived from retrieved cirrus profiles compare favorably with Clouds and the Earth's Radiant Energy System measurements. Results will be made publicly available for the entire CloudSat record (since 2006) in the most recent product release, known as R05.
Ablation algorithms and corneal asphericity in myopic correction with excimer lasers
NASA Astrophysics Data System (ADS)
Iroshnikov, Nikita G.; Larichev, Andrey V.; Yablokov, Michail G.
2007-06-01
The purpose of this work is to study the change in corneal asphericity after myopic refractive correction by means of excimer lasers. Since the ablation profile shape plays a key role in post-op corneal asphericity, the ablation profiles of recent lasers should be studied. Another task of this research was to analyze the operation (LASIK) outcomes of one of the lasers with a generic spherical ablation profile and to compare the asphericity change with theoretical predictions. Several correction methods, such as custom-generated aspherical profiles, may be utilized to mitigate the unwanted effects of the asphericity change. We also present preliminary results of such a correction for one of the excimer lasers.
Model-based testing with UML applied to a roaming algorithm for bluetooth devices.
Dai, Zhen Ru; Grabowski, Jens; Neukirchen, Helmut; Pals, Holger
2004-11-01
In late 2001, the Object Management Group issued a Request for Proposal to develop a testing profile for UML 2.0. In June 2003, the work on the UML 2.0 Testing Profile was finally adopted by the OMG. Since March 2004, it has become an official standard of the OMG. The UML 2.0 Testing Profile provides support for UML based model-driven testing. This paper introduces a methodology on how to use the testing profile in order to modify and extend an existing UML design model for test issues. The application of the methodology will be explained by applying it to an existing UML Model for a Bluetooth device.
Driver behavior profiling: An investigation with different smartphone sensors and machine learning
Ferreira, Jair; Carvalho, Eduardo; Ferreira, Bruno V.; de Souza, Cleidson; Suhara, Yoshihiko; Pentland, Alex
2017-01-01
Driver behavior impacts traffic safety, fuel/energy consumption, and gas emissions. Driver behavior profiling tries to understand and positively impact driver behavior. Profiling tasks usually involve the automated collection of driving data and the application of computer models to generate a classification that characterizes the driver's aggressiveness profile. Different sensors and classification methods have been employed for this task; however, low-cost, high-performance solutions are still research targets. This paper presents an investigation of different Android smartphone sensors and classification algorithms in order to assess which sensor/method assembly enables classification with the highest performance. The results show that specific combinations of sensors and intelligent methods allow classification performance improvement. PMID:28394925
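A minimal version of the profiling pipeline (window the sensor stream, extract statistical features, classify) can be sketched as follows. The synthetic "aggressive" windows, the feature set, and the nearest-centroid classifier are all assumptions standing in for the paper's smartphone sensors and machine-learning methods.

```python
import numpy as np

rng = np.random.default_rng(2)

def window_features(acc):
    """Simple statistical features from one accelerometer window."""
    jerk = np.diff(acc)
    return np.array([acc.mean(), acc.std(), np.abs(jerk).mean(), np.abs(acc).max()])

def make_window(aggressive):
    base = rng.normal(0, 0.2, 200)                 # smooth driving noise floor
    if aggressive:
        # occasional harsh braking/acceleration events
        base += rng.normal(0, 1.5, 200) * (rng.random(200) < 0.1)
    return base

X = np.array([window_features(make_window(a)) for a in [0] * 50 + [1] * 50])
y = np.array([0] * 50 + [1] * 50)
# nearest-centroid classifier, a stand-in for the paper's learning methods
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

Swapping the classifier or the sensor channel while keeping this pipeline fixed is exactly the sensor/method comparison the paper performs.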
Real Time Optima Tracking Using Harvesting Models of the Genetic Algorithm
NASA Technical Reports Server (NTRS)
Baskaran, Subbiah; Noever, D.
1999-01-01
Tracking optima in real-time propulsion control, particularly for non-stationary optimization problems, is a challenging task. Several approaches have been put forward for such a study, including the numerical method called the genetic algorithm. In brief, this approach is built upon Darwinian-style competition between numerical alternatives encoded as binary strings, or, by analogy, 'pseudogenes'. Breeding of improved solutions is an often-cited parallel to natural selection in evolutionary or soft computing. In this report we present our results of applying a novel harvesting model of the genetic algorithm to tracking optima in propulsion engineering and in real-time control. We specialize the algorithm to mission profiling and planning optimizations, both to reduce propulsion needs through trajectory planning and to explore time- or fuel-conservation strategies.
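A toy version of tracking a non-stationary optimum with a genetic algorithm: binary strings are decoded to candidate values, fitness is re-evaluated every generation against a drifting target, and selection, crossover, and mutation keep the population near it. The population size, mutation rate, and sinusoidal target are illustrative assumptions, not the harvesting model itself.

```python
import numpy as np

rng = np.random.default_rng(3)
BITS, POP = 12, 40

def decode(bits):
    # interpret a 12-bit string ("pseudogene") as a value in [0, 1)
    return bits.dot(2 ** np.arange(BITS)) / 2.0 ** BITS

def step(pop, target, mut=0.02):
    """One generation: binary tournaments, uniform crossover, bit-flip mutation."""
    vals = np.array([decode(p) for p in pop])
    fit = -np.abs(vals - target)                 # re-evaluated every generation
    a, b = rng.integers(0, POP, (2, POP))
    parents = pop[np.where(fit[a] > fit[b], a, b)]
    mates = parents[rng.permutation(POP)]
    mask = rng.random((POP, BITS)) < 0.5
    children = np.where(mask, parents, mates)
    children ^= rng.random((POP, BITS)) < mut    # mutation sustains diversity
    return children

pop = rng.integers(0, 2, (POP, BITS))
errors = []
for t in range(200):
    target = 0.5 + 0.3 * np.sin(2 * np.pi * t / 200)   # drifting optimum
    pop = step(pop, target)
    errors.append(min(abs(decode(p) - target) for p in pop))
tracking_error = float(np.mean(errors[50:]))
```

The key non-stationary ingredient is that fitness is recomputed against the current target each generation; mutation keeps enough diversity in the population for it to follow the drift rather than converge and stall.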
Linear feasibility algorithms for treatment planning in interstitial photodynamic therapy
NASA Astrophysics Data System (ADS)
Rendon, A.; Beck, J. C.; Lilge, Lothar
2008-02-01
Interstitial photodynamic therapy (IPDT) has been under intense investigation in recent years, with multiple clinical trials underway. This effort has demanded the development of optimization strategies that determine the best locations and output powers for light sources (cylindrical or point diffusers) to achieve optimal light delivery. Furthermore, we have recently introduced cylindrical diffusers with customizable emission profiles, placing additional requirements on the optimization algorithms, particularly in terms of the stability of the inverse problem. Here, we present a general class of linear feasibility algorithms and their properties, and we compare two particular instances of these algorithms that have been used in the context of IPDT: the Cimmino algorithm and a weighted gradient descent (WGD) algorithm. The algorithms were compared in terms of their convergence properties, the cost function they minimize in the infeasible case, their ability to regularize the inverse problem, and the resulting optimal light dose distributions. Our results show that the WGD algorithm overall performs slightly better than the Cimmino algorithm and that it converges to a minimizer of a clinically relevant cost function in the infeasible case. Interestingly, however, treatment plans resulting from either algorithm were very similar in terms of the resulting fluence maps and dose volume histograms, once the diffuser powers were adjusted to achieve equal prostate coverage.
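Cimmino's method handles linear feasibility by projecting the current iterate onto every violated half-space simultaneously and taking a weighted average of the projections. The sketch below applies it to a tiny invented two-source, two-voxel "dose" problem with upper and lower dose bounds; the matrix, bounds, and equal weights are illustrative, not a treatment-planning model.

```python
import numpy as np

def cimmino(A, b, x0, lam=1.0, iters=500):
    """Cimmino's simultaneous-projection method for the feasibility
    problem A x <= b (each row is one half-space constraint)."""
    x = x0.astype(float)
    norms2 = (A ** 2).sum(axis=1)
    w = np.full(len(A), 1.0 / len(A))           # equal projection weights
    for _ in range(iters):
        resid = A @ x - b
        viol = np.maximum(resid, 0.0)            # only violated constraints project
        x = x - lam * (w * viol / norms2) @ A
    return x

# toy constraints: dose in each of 2 voxels between 1 and 2, powers nonnegative
A = np.array([[-0.8, -0.3],    # -dose_1 <= -1  (minimum dose, voxel 1)
              [-0.2, -0.9],    # -dose_2 <= -1  (minimum dose, voxel 2)
              [0.8, 0.3],      #  dose_1 <=  2  (maximum dose, voxel 1)
              [0.2, 0.9],      #  dose_2 <=  2  (maximum dose, voxel 2)
              [-1.0, 0.0],     #  power_1 >= 0
              [0.0, -1.0]])    #  power_2 >= 0
b = np.array([-1.0, -1.0, 2.0, 2.0, 0.0, 0.0])
x = cimmino(A, b, np.zeros(2))
feasible = np.all(A @ x <= b + 1e-3)
```

Because every step averages over all constraints, the iteration is robust when the system is infeasible (it settles at a weighted least-violation point), which is the property that makes this family attractive for over-constrained dose prescriptions.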
Computation of repetitions and regularities of biologically weighted sequences.
Christodoulakis, M; Iliopoulos, C; Mouchard, L; Perdikuri, K; Tsakalidis, A; Tsichlas, K
2006-01-01
Biologically weighted sequences are used extensively in molecular biology as profiles for protein families, in the representation of binding sites, and often for the representation of sequences produced by a shotgun sequencing strategy. In this paper, we address three fundamental problems in the area of biologically weighted sequences: (i) computation of repetitions, (ii) pattern matching, and (iii) computation of regularities. Our algorithms can be used as basic building blocks for more sophisticated algorithms applied to weighted sequences.
A concept for a fuel efficient flight planning aid for general aviation
NASA Technical Reports Server (NTRS)
Collins, B. P.; Haines, A. L.; Wales, C. J.
1982-01-01
A core equation for estimation of fuel burn from path profile data was developed. This equation was used as a necessary ingredient in a dynamic program to define a fuel efficient flight path. The resultant algorithm is oriented toward use by general aviation. The pilot provides a description of the desired ground track, standard aircraft parameters, and weather at selected waypoints. The algorithm then derives the fuel efficient altitudes and velocities at the waypoints.
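The waypoint formulation lends itself to dynamic programming: enumerate candidate altitudes at each waypoint, score each leg with a fuel-burn model, and keep the cheapest way to reach every (waypoint, altitude) state. The fuel model, altitudes, and leg lengths below are invented placeholders for the paper's core equation and pilot inputs.

```python
# hypothetical fuel-burn model: cruise burn falls with altitude, and
# climbing between waypoints costs extra fuel proportional to the climb
ALTS = [2000, 4000, 6000, 8000]          # candidate altitudes, ft
LEG_NM = [40, 55, 35, 60]                # leg lengths between waypoints, nm

def leg_fuel(alt_from, alt_to, dist_nm):
    cruise = dist_nm * (12.0 - alt_to / 1000.0) / 10.0   # gal, illustrative
    climb = max(alt_to - alt_from, 0) * 0.002            # gal per ft climbed
    return cruise + climb

def best_profile(start_alt=2000):
    """Dynamic program: cost[a] = minimum fuel to reach the current
    waypoint at altitude a; back[] records the choices for backtracking."""
    cost = {start_alt: 0.0}
    back = []
    for dist in LEG_NM:
        new_cost, choice = {}, {}
        for a_to in ALTS:
            options = {a_fr: c + leg_fuel(a_fr, a_to, dist)
                       for a_fr, c in cost.items()}
            a_best = min(options, key=options.get)
            new_cost[a_to] = options[a_best]
            choice[a_to] = a_best
        cost = new_cost
        back.append(choice)
    a = min(cost, key=cost.get)          # cheapest final altitude
    profile = [a]
    for choice in reversed(back):        # backtrack the optimal altitude sequence
        a = choice[a]
        profile.insert(0, a)
    return profile, min(cost.values())

profile, fuel = best_profile()
```

With this toy model the program climbs immediately and cruises high, since the one-time climb cost is quickly repaid by the lower cruise burn; a realistic core equation would add winds and speed as additional state, but the recursion is unchanged.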
We and others have shown that transition and maintenance of biological states is controlled by master regulator proteins, which can be inferred by interrogating tissue-specific regulatory models (interactomes) with transcriptional signatures, using the VIPER algorithm. Yet, some tissues may lack molecular profiles necessary for interactome inference (orphan tissues), or, as for single cells isolated from heterogeneous samples, their tissue context may be undetermined.
Plaisier, Christopher L; Bare, J Christopher; Baliga, Nitin S
2011-07-01
Transcriptome profiling studies have produced staggering numbers of gene co-expression signatures for a variety of biological systems. A significant fraction of these signatures will be partially or fully explained by miRNA-mediated targeted transcript degradation. miRvestigator takes as input lists of co-expressed genes from Caenorhabditis elegans, Drosophila melanogaster, Gallus gallus, Homo sapiens, Mus musculus or Rattus norvegicus and identifies the specific miRNAs that are likely to bind to 3' untranslated region (UTR) sequences to mediate the observed co-regulation. The novelty of our approach is the miRvestigator hidden Markov model (HMM) algorithm, which systematically computes a similarity P-value for each unique miRNA seed sequence from the miRNA database miRBase against an overrepresented sequence motif identified within the 3'-UTRs of the query genes. We have made this miRNA discovery tool accessible to the community by integrating our HMM algorithm with a proven algorithm for de novo discovery of miRNA seed sequences and wrapping these algorithms into a user-friendly interface. Additionally, the miRvestigator web server produces a list of putative miRNA binding sites within the 3'-UTRs of the query transcripts to facilitate the design of validation experiments. miRvestigator is freely available at http://mirvestigator.systemsbiology.net.
Modeling of light distribution in the brain for topographical imaging
NASA Astrophysics Data System (ADS)
Okada, Eiji; Hayashi, Toshiyuki; Kawaguchi, Hiroshi
2004-07-01
A multi-channel optical imaging system can obtain a topographical distribution of the activated regions in the brain cortex with a simple mapping algorithm. Near-infrared light is strongly scattered in the head, and the volume of tissue that contributes to the change in the optical signal detected by a source-detector pair on the head surface is broadly distributed in the brain. This scattering effect results in poor resolution and contrast in topographic images of brain activity. We report theoretical investigations of the spatial resolution of topographic imaging of brain activity. The head model for the theoretical study consists of five layers that imitate the scalp, skull, subarachnoid space, gray matter and white matter. Light propagation in the head model is predicted by Monte Carlo simulation to obtain the spatial sensitivity profile for a source-detector pair. The source-detector pairs are arranged one-dimensionally on the surface of the model, and the distance between adjoining source-detector pairs is varied from 4 mm to 32 mm. The change in detected intensity caused by an absorption change is obtained by Monte Carlo simulation, and the position of the absorption change is reconstructed both by the conventional mapping algorithm and by a reconstruction algorithm that uses the spatial sensitivity profiles. We discuss the effective interval between source-detector pairs and the choice of reconstruction algorithm to improve topographic images of brain activity.
Gilles, Luc; Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Ellerbroek, Brent
2013-05-01
This paper discusses the performance and cost of two computationally efficient Fourier-based tomographic wavefront reconstruction algorithms for wide-field laser guide star (LGS) adaptive optics (AO). The first algorithm is the iterative Fourier domain preconditioned conjugate gradient (FDPCG) algorithm developed by Yang et al. [Appl. Opt.45, 5281 (2006)], combined with pseudo-open-loop control (POLC). FDPCG's computational cost is proportional to N log(N), where N denotes the dimensionality of the tomography problem. The second algorithm is the distributed Kalman filter (DKF) developed by Massioni et al. [J. Opt. Soc. Am. A28, 2298 (2011)], which is a noniterative spatially invariant controller. When implemented in the Fourier domain, DKF's cost is also proportional to N log(N). Both algorithms are capable of estimating spatial frequency components of the residual phase beyond the wavefront sensor (WFS) cutoff frequency thanks to regularization, thereby reducing WFS spatial aliasing at the expense of more computations. We present performance and cost analyses for the LGS multiconjugate AO system under design for the Thirty Meter Telescope, as well as DKF's sensitivity to uncertainties in wind profile prior information. We found that, provided the wind profile is known to better than 10% wind speed accuracy and 20 deg wind direction accuracy, DKF, despite its spatial invariance assumptions, delivers a significantly reduced wavefront error compared to the static FDPCG minimum variance estimator combined with POLC. Due to its nonsequential nature and high degree of parallelism, DKF is particularly well suited for real-time implementation on inexpensive off-the-shelf graphics processing units.
Accuracy of Geophysical Parameters Derived from AIRS/AMSU as a Function of Fractional Cloud Cover
NASA Technical Reports Server (NTRS)
Susskind, Joel; Barnet, Chris; Blaisdell, John; Iredell, Lena; Keita, Fricky; Kouvaris, Lou; Molnar, Gyula; Chahine, Moustafa
2005-01-01
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud-related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1K, and layer precipitable water with an rms error of 20%, in cases with up to 80% effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, was described previously. Pre-launch simulation studies using this algorithm indicated that these results should be achievable. Some modifications have been made to the at-launch retrieval algorithm as described in this paper. Sample fields of parameters retrieved from AIRS/AMSU/HSB data are presented and validated as a function of retrieved fractional cloud cover. As in simulation, the degradation of retrieval accuracy with increasing cloud cover is small. HSB failed in February 2005, and consequently HSB channel radiances are not used in the results shown in this paper. The AIRS/AMSU retrieval algorithm described in this paper, called Version 4, became operational at the Goddard DAAC in April 2005 and is being used to analyze near-real time AIRS/AMSU data. Historical AIRS/AMSU data, going backwards from March 2005 through September 2002, are also being analyzed by the DAAC using the Version 4 algorithm.
CrossLink: a novel method for cross-condition classification of cancer subtypes.
Ma, Chifeng; Sastry, Konduru S; Flore, Mario; Gehani, Salah; Al-Bozom, Issam; Feng, Yusheng; Serpedin, Erchin; Chouchane, Lotfi; Chen, Yidong; Huang, Yufei
2016-08-22
We considered the prediction of cancer classes (e.g. subtypes) using patient gene expression profiles that contain both systematic and condition-specific biases when compared with the training reference dataset. The conventional normalization-based approaches cannot guarantee that the gene signatures in the reference and prediction datasets always have the same distribution for all different conditions, as the class-specific gene signatures change with the condition. Therefore, the trained classifier would work well under one condition but not under another. To address the problem of current normalization approaches, we propose a novel algorithm called CrossLink (CL). CL recognizes that there is no universal, condition-independent normalization mapping of signatures. In contrast, it exploits the fact that the signature is unique to its associated class under any condition and thus employs an unsupervised clustering algorithm to discover this unique signature. We assessed the performance of CL for cross-condition predictions of PAM50 subtypes of breast cancer by using a simulated dataset modeled after TCGA BRCA tumor samples with a cross-validation scheme, and datasets with known and unknown PAM50 classification. CL achieved prediction accuracy >73%, the highest among the methods we evaluated. We also applied the algorithm to a set of breast cancer tumors derived from an Arabic population to assign a PAM50 classification to each tumor based on their gene expression profiles. A novel algorithm, CrossLink, for cross-condition prediction of cancer classes was proposed. In all test datasets, CL showed robust and consistent improvement in prediction performance over other state-of-the-art normalization and classification algorithms.
Powered Descent Guidance with General Thrust-Pointing Constraints
NASA Technical Reports Server (NTRS)
Carson, John M., III; Acikmese, Behcet; Blackmore, Lars
2013-01-01
The Powered Descent Guidance (PDG) algorithm and software for generating Mars pinpoint or precision landing guidance profiles has been enhanced to incorporate thrust-pointing constraints. Pointing constraints would typically be needed for onboard sensor and navigation systems that have specific field-of-view requirements to generate valid ground proximity and terrain-relative state measurements. The original PDG algorithm was designed to enforce both control and state constraints, including maximum and minimum thrust bounds, avoidance of the ground or descent within a glide slope cone, and maximum speed limits. The thrust-bound and thrust-pointing constraints within PDG are non-convex, which in general requires nonlinear optimization methods to generate solutions. The short duration of Mars powered descent requires guaranteed PDG convergence to a solution within a finite time; however, nonlinear optimization methods have no guarantees of convergence to the global optimal or convergence within finite computation time. A lossless convexification developed for the original PDG algorithm relaxed the non-convex thrust bound constraints. This relaxation was theoretically proven to provide valid and optimal solutions for the original, non-convex problem within a convex framework. As with the thrust bound constraint, a relaxation of the thrust-pointing constraint also provides a lossless convexification that ensures the enhanced relaxed PDG algorithm remains convex and retains validity for the original nonconvex problem. The enhanced PDG algorithm provides guidance profiles for pinpoint and precision landing that minimize fuel usage, minimize landing error to the target, and ensure satisfaction of all position and control constraints, including thrust bounds and now thrust-pointing constraints.
Data Imputation in Epistatic MAPs by Network-Guided Matrix Completion
Žitnik, Marinka; Zupan, Blaž
2015-01-01
Abstract Epistatic miniarray profile (E-MAP) is a popular large-scale genetic interaction discovery platform. E-MAPs benefit from quantitative output, which makes it possible to detect subtle interactions with greater precision. However, due to the limits of biotechnology, E-MAP studies fail to measure genetic interactions for up to 40% of gene pairs in an assay. Missing measurements can be recovered by computational techniques for data imputation, in this way completing the interaction profiles and enabling downstream analysis algorithms that could otherwise be sensitive to missing data values. We introduce a new interaction data imputation method called network-guided matrix completion (NG-MC). The core part of NG-MC is low-rank probabilistic matrix completion that incorporates prior knowledge presented as a collection of gene networks. NG-MC assumes that interactions are transitive, such that latent gene interaction profiles inferred by NG-MC depend on the profiles of their direct neighbors in gene networks. As the NG-MC inference algorithm progresses, it propagates latent interaction profiles through each of the networks and updates gene network weights toward improved prediction. In a study with four different E-MAP data assays and considered protein–protein interaction and gene ontology similarity networks, NG-MC significantly surpassed existing alternative techniques. Inclusion of information from gene networks also allowed NG-MC to predict interactions for genes that were not included in original E-MAP assays, a task that could not be considered by current imputation approaches. PMID:25658751
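Setting the network-guided priors aside, the core completion step underlying methods like NG-MC can be illustrated with a bare low-rank factorization fitted only to the observed entries. The function name, shapes, learning rate, and optimizer below are illustrative placeholders, not the NG-MC implementation; in the actual method the gene-network information would enter as additional smoothing terms coupling each gene's latent profile to its network neighbours.

```python
import numpy as np

def lowrank_complete(M, mask, rank=2, n_iter=200, lr=0.01, reg=0.1):
    """Bare low-rank completion of the observed entries of M (mask==True)
    by gradient descent on U @ V.T. Network-guided priors, as in NG-MC,
    would add terms pulling each row of U toward its network neighbours."""
    rng = np.random.default_rng(0)
    n, m = M.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    for _ in range(n_iter):
        E = np.where(mask, U @ V.T - M, 0.0)   # residual on observed cells only
        U, V = U - lr * (E @ V + reg * U), V - lr * (E.T @ U + reg * V)
    return U @ V.T   # completed matrix, including previously missing cells
```

The unobserved cells of the returned matrix are the imputed interaction scores.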
Kumar, Keshav
2017-11-01
Multivariate curve resolution alternating least squares (MCR-ALS) analysis is the most commonly used curve resolution technique. The MCR-ALS model is fitted using the alternating least squares (ALS) algorithm, which needs initialisation of either the contribution profiles or the spectral profiles of each factor. The contribution profiles can be initialised using evolving factor analysis; however, in principle, this approach requires that the data belong to a sequential process. The initialisation of the spectral profiles is usually carried out using a pure-variable approach such as the SIMPLISMA algorithm, which demands that each factor have pure variables in the data sets. Despite these limitations, the existing approaches have been quite successful for initiating the MCR-ALS analysis. However, the present work proposes an alternative approach for the initialisation of the spectral variables by generating random variables within the limits spanned by the maxima and minima of each spectral variable of the data set. The proposed approach does not require that there be pure variables for each component of the multicomponent system, or that the concentration direction follow a sequential process. The proposed approach is successfully validated using excitation-emission matrix fluorescence data sets acquired for certain fluorophores with significant spectral overlap. The calculated contribution and spectral profiles of these fluorophores are found to correlate well with the experimental results. In summary, the present work proposes an alternative way to initiate the MCR-ALS analysis.
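The proposed initialisation can be sketched as follows: draw each spectral variable of each factor uniformly between that variable's observed minimum and maximum, then hand the random spectra to a plain ALS loop. This is a minimal illustration under assumed names and shapes, with simple non-negativity clipping standing in for whatever constraints the actual analysis applies; it is not the authors' code.

```python
import numpy as np

def random_spectral_init(D, n_factors, rng=None):
    """Initialise spectral profiles by uniform random draws between the
    column-wise minima and maxima of the data matrix D (samples x variables)."""
    rng = np.random.default_rng(rng)
    lo, hi = D.min(axis=0), D.max(axis=0)
    # one random spectrum per factor, each variable within its observed range
    return rng.uniform(lo, hi, size=(n_factors, D.shape[1]))

def mcr_als(D, S, n_iter=100):
    """Plain alternating least squares with non-negativity clipping:
    D ~ C @ S, alternately solving for contributions C and spectra S."""
    for _ in range(n_iter):
        C = np.clip(D @ np.linalg.pinv(S), 0.0, None)
        S = np.clip(np.linalg.pinv(C) @ D, 0.0, None)
    return C, S
```

No pure variables and no sequential (evolving) concentration structure are assumed anywhere in the initialisation, which is the point of the approach.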
Buscema, Massimo; Grossi, Enzo; Montanini, Luisa; Street, Maria E.
2015-01-01
Objectives Intra-uterine growth retardation is often of unknown origin, and is of great interest as a “Fetal Origin of Adult Disease” has now been well recognized. We built a benchmark based upon a previously analysed data set related to intrauterine growth retardation, with 46 subjects described by 14 variables related to the insulin-like growth factor system and the pro-inflammatory cytokines interleukin-6 and tumor necrosis factor-α. Design and Methods We used new algorithms for optimal information sorting based on the combination of two neural network algorithms: the Auto-Contractive Map and the Activation and Competition System. The Auto-Contractive Map spatializes the relationships among variables or records by constructing a suitable embedding space where ‘closeness’ among variables or records accurately reflects their associations. The Activation and Competition System algorithm instead works as a dynamic nonlinear associative memory on the weight matrices of other algorithms, and is able to produce a prototypical variable profile of a given target. Results Classical statistical analysis proved unable to distinguish intrauterine growth retardation from appropriate-for-gestational-age (AGA) subjects due to the high non-linearity of the underlying functions. The Auto-Contractive Map succeeded in completely clustering and differentiating the conditions under study, while the Activation and Competition System made it possible to develop a profile of the variables that discriminated the two conditions better than any previous attempt. In particular, the Activation and Competition System showed that appropriateness for gestational age was explained by IGF-2 relative gene expression, and by IGFBP-2 and TNF-α placental contents. IUGR instead was explained by IGF-I, IGFBP-1, IGFBP-2 and IL-6 gene expression in placenta. 
Conclusion This further analysis provided further insight into the placental key-players of fetal growth within the insulin-like growth factor and cytokine systems. Our previous published analysis could identify only which variables were predictive of fetal growth in general, and identified only some relationships. PMID:26158499
Scintillator-based transverse proton beam profiler for laser-plasma ion sources.
Dover, N P; Nishiuchi, M; Sakaki, H; Alkhimova, M A; Faenov, A Ya; Fukuda, Y; Kiriyama, H; Kon, A; Kondo, K; Nishitani, K; Ogura, K; Pikuz, T A; Pirozhkov, A S; Sagisaka, A; Kando, M; Kondo, K
2017-07-01
A high repetition rate scintillator-based transverse beam profile diagnostic for laser-plasma accelerated proton beams has been designed and commissioned. The proton beam profiler uses differential filtering to provide coarse energy resolution and a flexible design to allow optimisation for expected beam energy range and trade-off between spatial and energy resolution depending on the application. A plastic scintillator detector, imaged with a standard 12-bit scientific camera, allows data to be taken at a high repetition rate. An algorithm encompassing the scintillator non-linearity is described to estimate the proton spectrum at different spatial locations.
Preprocessing for Eddy Dissipation Rate and TKE Profile Generation
NASA Technical Reports Server (NTRS)
Zak, J. Allen; Rodgers, William G., Jr.; McKissick, Burnell T. (Technical Monitor)
2001-01-01
The Aircraft Vortex Spacing System (AVOSS), a set of algorithms to determine aircraft spacing according to wake vortex behavior prediction, requires turbulence profiles to appropriately determine arrival and departure aircraft spacing. The ambient atmospheric turbulence profile must always be produced, even if the result is an arbitrary (canned) profile. The original turbulence profile code was generated by North Carolina State University and was used in a non-real-time environment in the past, where all the input parameters could be carefully selected and screened prior to input. Since this code must run in real time using actual measurements in the field as input, it became imperative to begin a data checking and screening process as part of the real-time implementation. The process described herein is a step towards ensuring that the best possible turbulence profile is always provided to AVOSS. Data fill-ins, constant profiles and arbitrary profiles are used only as a last resort, but are essential to ensure uninterrupted application of AVOSS.
Long-Term Pavement Performance Automated Faulting Measurement
DOT National Transportation Integrated Search
2015-02-01
This study focused on identifying transverse joint locations on jointed plain concrete pavements using an automated joint detection algorithm and computing faulting at these locations using Long-Term Pavement Performance (LTPP) Program profile data c...
NASA Astrophysics Data System (ADS)
Lowden, D. W.
1992-10-01
Disbonds simulated in a composite helicopter rotor blade were profiled using eddy currents. The method is inherently accurate and reproducible. An algorithm is described for calculating disbond margin. Disbond area is estimated assuming in-service disbondments exhibit circular geometry.
Alshamlan, Hala M; Badr, Ghada H; Alohali, Yousef A
2015-06-01
Naturally inspired evolutionary algorithms prove effective when used for solving feature selection and classification problems. Artificial Bee Colony (ABC) is a relatively new swarm intelligence method. In this paper, we propose a new hybrid gene selection method, namely the Genetic Bee Colony (GBC) algorithm. The proposed algorithm combines the use of a Genetic Algorithm (GA) with the Artificial Bee Colony (ABC) algorithm. The goal is to integrate the advantages of both algorithms. The proposed algorithm is applied to microarray gene expression profiles in order to select the most predictive and informative genes for cancer classification. In order to test the accuracy performance of the proposed algorithm, extensive experiments were conducted. Three binary microarray datasets are used: colon, leukemia, and lung. In addition, three multi-class microarray datasets are used: SRBCT, lymphoma, and leukemia. Results of the GBC algorithm are compared with our recently proposed technique, mRMR combined with the Artificial Bee Colony algorithm (mRMR-ABC). We also compared the combination of mRMR with GA (mRMR-GA) and Particle Swarm Optimization (mRMR-PSO) algorithms. In addition, we compared the GBC algorithm with other related algorithms that have been recently published in the literature, using all benchmark datasets. The GBC algorithm shows superior performance as it achieved the highest classification accuracy along with the lowest average number of selected genes. This proves that the GBC algorithm is a promising approach for solving the gene selection problem in both binary and multi-class cancer classification. Copyright © 2015 Elsevier Ltd. All rights reserved.
Influence of a priori profiles on trend calculations from Umkehr data
NASA Astrophysics Data System (ADS)
Mateer, C. L.; Dütsch, H. U.; Staehelin, J.; Deluisi, J. J.
1996-07-01
Although the new (1992) ozone profile retrieval algorithm for Umkehr measurements provides much better agreement with ozone sounding results than the old (1964) algorithm, considerable discrepancies remain with respect to ozone trends at different levels in the atmosphere. These discrepancies have been found by the comparison of long-term trends obtained from the Umkehr measurements at Arosa and the ozone balloon soundings at Payerne (Switzerland). It is investigated here whether these obvious discrepancies can be removed by using time-dependent a priori profiles. This procedure is successful only in the lowest part of the atmosphere, below about 19 km. To further explore this problem, synthetic Umkehr observations are calculated from the ozonesonde profiles. Trends are calculated for both the synthetic and actual Umkehr observations. The difference pattern between these Umkehr observation trends is compared with the difference in ozone profile retrieval trends from the synthetic and actual observations. The distinctive difference patterns strongly indicate an inherent disagreement between the Umkehr observations and the ozonesonde profiles. The application of corrections for stratospheric aerosol effects to the Umkehr profiles reduces, but does not eliminate, a discrepancy above 32 km. It is concluded that the discrepancies are due to the constant mixing ratio assumption used in computing the residual ozone above balloon burst level and to the fair-weather bias of Umkehr observations (there are Umkehr observations at Arosa on fewer than 20% of the sonde observation days at Payerne). This sampling difference influences the results for the lower stratosphere. The study furthermore indicates that the ozone trends derived from Umkehr measurements for altitudes above about 32 km are robust to time-dependent changes in the a priori profiles at lower altitudes. 
Based on the results of this study, we conclude with revised recommendations as to which atmospheric layers should be used for Umkehr trend studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroniger, K; Herzog, M; Landry, G
2015-06-15
Purpose: We describe and demonstrate a fast analytical tool for prompt-gamma emission prediction based on filter functions applied on the depth dose profile. We present the implementation in a treatment planning system (TPS) of the same algorithm for positron emitter distributions. Methods: The prediction of the desired observable is based on the convolution of filter functions with the depth dose profile. For both prompt-gammas and positron emitters, the results of Monte Carlo simulations (MC) are compared with those of the analytical tool. For prompt-gamma emission from inelastic proton-induced reactions, homogeneous and inhomogeneous phantoms alongside with patient data are used as irradiation targets of mono-energetic proton pencil beams. The accuracy of the tool is assessed in terms of the shape of the analytically calculated depth profiles and their absolute yields, compared to MC. For the positron emitters, the method is implemented in a research RayStation TPS and compared to MC predictions. Digital phantoms and patient data are used and positron emitter spatial density distributions are analyzed. Results: Calculated prompt-gamma profiles agree with MC within 3 % in terms of absolute yield and reproduce the correct shape. Based on an arbitrary reference material and by means of 6 filter functions (one per chemical element), profiles in any other material composed of those elements can be predicted. The TPS implemented algorithm is accurate enough to enable, via the analytically calculated positron emitter profiles, detection of range differences between the TPS and MC with errors of the order of 1–2 mm. Conclusion: The proposed analytical method predicts prompt-gamma and positron emitter profiles which generally agree with the distributions obtained by a full MC. The implementation of the tool in a TPS shows that reliable profiles can be obtained directly from the dose calculated by the TPS, without the need of a full MC simulation.
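The filter-function idea (one convolution kernel per chemical element, applied to the depth dose profile and weighted by the elemental composition of the target) can be sketched as follows. The kernels, element fractions, and names here are placeholders, not the published filter functions, and the weighting scheme is an assumed simplification.

```python
import numpy as np

def predict_profile(depth_dose, filters, fractions):
    """Predict an emission depth profile (prompt-gamma or positron emitter)
    as a weighted sum over chemical elements of the depth dose profile
    convolved with one filter function per element.
    depth_dose: 1-D dose vs depth; filters: dict element -> 1-D kernel;
    fractions: dict element -> elemental weight fraction of the target."""
    profile = np.zeros_like(depth_dose, dtype=float)
    for elem, kernel in filters.items():
        profile += fractions.get(elem, 0.0) * np.convolve(
            depth_dose, kernel, mode="same")
    return profile
```

Because the kernels are fixed per element for a reference material, a profile in any other material composed of the same elements only requires new weight fractions, which is what makes the method fast compared with a full MC run.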
A transform from absorption to Raman excitation profile. A time-dependent approach
NASA Astrophysics Data System (ADS)
Lee, Soo-Y.; Yeo, Robert C. K.
1994-04-01
An alternative time-frame approach, which is canonically conjugate to the energy-frame approach, for implementing the transform relations for calculating Raman excitation profiles directly from the optical absorption spectrum is presented. Practical and efficient fast Fourier transformation in the time frame replaces the widely used Chan and Page algorithm for evaluating the Hilbert transform in the energy frame. The time-frame approach is applied to: (a) a two-mode model which illustrates the missing mode effect in both absorption and Raman excitation profiles, (b) carotene, in which both the absorption spectrum and the Raman excitation profile show vibrational structure and (c) the hexamethylbenzene:TCNE electron donor-acceptor complex, where the same spectra are structureless and the Raman excitation profile for the 168 cm⁻¹ mode poses a problem for the energy-frame approach. A similar time-frame approach can be used for the inverse transform from the Raman excitation profile to the optical absorption spectrum.
Radiofrequency pulse design using nonlinear gradient magnetic fields.
Kopanoglu, Emre; Constable, R Todd
2015-09-01
An iterative k-space trajectory and radiofrequency (RF) pulse design method is proposed for excitation using nonlinear gradient magnetic fields. The spatial encoding functions (SEFs) generated by nonlinear gradient fields are linearly dependent in Cartesian coordinates. Left uncorrected, this may lead to flip angle variations in excitation profiles. In the proposed method, SEFs (k-space samples) are selected using a matching pursuit algorithm, and the RF pulse is designed using a conjugate gradient algorithm. Three variants of the proposed approach are given: the full algorithm, a computationally cheaper version, and a third version for designing spoke-based trajectories. The method is demonstrated for various target excitation profiles using simulations and phantom experiments. The method is compared with other iterative (matching pursuit and conjugate gradient) and noniterative (coordinate-transformation and Jacobian-based) pulse design methods as well as uniform density spiral and EPI trajectories. The results show that the proposed method can increase excitation fidelity. An iterative method for designing k-space trajectories and RF pulses using nonlinear gradient fields is proposed. The method can either be used for selecting the SEFs individually to guide trajectory design, or can be adapted to design and optimize specific trajectories of interest. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Hong, Junseok; Kim, Yong Ha; Chung, Jong-Kyun; Ssessanga, Nicholas; Kwak, Young-Sil
2017-03-01
In South Korea, there are about 80 Global Positioning System (GPS) monitoring stations providing total electron content (TEC) every 10 min, which can be accessed through Korea Astronomy and Space Science Institute (KASI) for scientific use. We applied the computerized ionospheric tomography (CIT) algorithm to the TEC dataset from this GPS network for monitoring the regional ionosphere over South Korea. The algorithm utilizes multiplicative algebraic reconstruction technique (MART) with an initial condition of the latest International Reference Ionosphere-2016 model (IRI-2016). In order to reduce the number of unknown variables, the vertical profiles of electron density are expressed with a linear combination of empirical orthonormal functions (EOFs) that were derived from the IRI empirical profiles. Although the number of receiver sites is much smaller than that of Japan, the CIT algorithm yielded reasonable structure of the ionosphere over South Korea. We verified the CIT results with NmF2 from ionosondes in Icheon and Jeju and also with GPS TEC at the center of South Korea. In addition, the total time required for CIT calculation was only about 5 min, enabling the exploration of the vertical ionospheric structure in near real time.
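A minimal sketch of the multiplicative ART update of the kind used in such CIT schemes is given below. The geometry matrix, relaxation factor, and initial field are illustrative only, and the EOF parametrisation of the vertical profiles described above is omitted; the initial guess x0 stands in for the IRI-2016-based starting electron density field.

```python
import numpy as np

def mart(A, y, x0, n_sweeps=20, relax=1.0):
    """Multiplicative algebraic reconstruction technique (MART).
    For each ray i, every voxel j is scaled by the measured/predicted
    ratio raised to the power relax * A[i, j], where A[i, j] is the
    (normalised) path length of ray i through voxel j, y[i] is the
    slant TEC measurement, and x0 the initial density field (flattened)."""
    x = x0.astype(float).copy()
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            pred = A[i] @ x
            if pred > 0 and y[i] > 0:           # skip empty / zero rays
                x *= (y[i] / pred) ** (relax * A[i])
    return x
```

Because the update is multiplicative, a positive initial field stays positive throughout, which is one reason MART is popular for electron density reconstruction.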
Raposo, Letícia M; Nobre, Flavio F
2017-08-30
Resistance to antiretrovirals (ARVs) is a major problem faced by HIV-infected individuals. Different rule-based algorithms were developed to infer HIV-1 susceptibility to antiretrovirals from genotypic data. However, there is discordance between them, resulting in difficulties for clinical decisions about which treatment to use. Here, we developed ensemble classifiers integrating three interpretation algorithms: Agence Nationale de Recherche sur le SIDA (ANRS), Rega, and the genotypic resistance interpretation system from Stanford HIV Drug Resistance Database (HIVdb). Three approaches were applied to develop a classifier with a single resistance profile: stacked generalization, a simple plurality vote scheme and the selection of the interpretation system with the best performance. The strategies were compared with the Friedman's test and the performance of the classifiers was evaluated using the F-measure, sensitivity and specificity values. We found that the three strategies had similar performances for the selected antiretrovirals. For some cases, the stacking technique with naïve Bayes as the learning algorithm showed a statistically superior F-measure. This study demonstrates that ensemble classifiers can be an alternative tool for clinical decision-making since they provide a single resistance profile from the most commonly used resistance interpretation systems.
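The simple plurality vote scheme among the three interpretation systems can be sketched as follows. The tie-breaking rule (falling back to the first system listed) is an assumption for the sake of a deterministic sketch, not necessarily the authors' choice.

```python
from collections import Counter

def plurality_vote(*calls):
    """Combine resistance calls from several interpretation systems
    (e.g. ANRS, Rega, HIVdb) into a single label by simple plurality.
    Ties fall back to the earliest system in the argument order."""
    counts = Counter(calls)
    top = counts.most_common(1)[0][1]   # size of the largest bloc
    for call in calls:                  # preserve input order on ties
        if counts[call] == top:
            return call
```

For example, `plurality_vote("R", "R", "S")` yields the majority call, while the stacked-generalization variant in the study would instead feed the three calls as features to a learner such as naïve Bayes.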
Cloud Properties and Radiative Heating Rates for TWP
Comstock, Jennifer
2013-11-07
A cloud properties and radiative heating rates dataset is presented where cloud properties retrieved using lidar and radar observations are input into a radiative transfer model to compute radiative fluxes and heating rates at three ARM sites located in the Tropical Western Pacific (TWP) region. The cloud properties retrieval is a conditional retrieval that applies various retrieval techniques depending on the available data, that is, whether lidar, radar, or both instruments detect cloud. This Combined Remote Sensor Retrieval Algorithm (CombRet) produces vertical profiles of liquid or ice water content (LWC or IWC), droplet effective radius (re), ice crystal generalized effective size (Dge), cloud phase, and cloud boundaries. The algorithm was compared with 3 other independent algorithms to help estimate the uncertainty in the cloud properties, fluxes, and heating rates (Comstock et al. 2013). The dataset is provided at 2 min temporal and 90 m vertical resolution. The current dataset is applied to time periods when the MMCR (Millimeter Cloud Radar) version of the ARSCL (Active Remotely-Sensed Cloud Locations) Value Added Product (VAP) is available. The MERGESONDE VAP is utilized where temperature and humidity profiles are required. Future additions to this dataset will utilize the new KAZR instrument and its associated VAPs.
A Two-dimensional Version of the Niblett-Bostick Transformation for Magnetotelluric Interpretations
NASA Astrophysics Data System (ADS)
Esparza, F.
2005-05-01
An imaging technique for two-dimensional magnetotelluric interpretations is developed following the well known Niblett-Bostick transformation for one-dimensional profiles. The algorithm uses a Hopfield artificial neural network to process series and parallel magnetotelluric impedances along with their analytical influence functions. The adaptive, weighted average approximation preserves part of the nonlinearity of the original problem. No initial model in the usual sense is required for the recovery of a functional model. Rather, the built-in relationship between model and data considers automatically, all at the same time, many half spaces whose electrical conductivities vary according to the data. The use of series and parallel impedances, a self-contained pair of invariants of the impedance tensor, avoids the need to decide on best angles of rotation for TE and TM separations. Field data from a given profile can thus be fed directly into the algorithm without much processing. The solutions offered by the Hopfield neural network correspond to spatial averages computed through rectangular windows that can be chosen at will. Applications of the algorithm to simple synthetic models and to the COPROD2 data set illustrate the performance of the approximation.
NASA Technical Reports Server (NTRS)
Mannucci, A. J.; Anderson, D. N.; Abdu, A. M.
1994-01-01
The Parametrized Real-Time Ionosphere Specification Model (PRISM) is a global ionospheric specification model that can incorporate real-time data to compute accurate electron density profiles. Time series of computed and measured data are compared in this paper. This comparison can be used to suggest methods of optimizing the PRISM adjustment algorithm for TEC data obtained at low altitudes.
NASA Astrophysics Data System (ADS)
Massaro, G.; Stiperski, I.; Pospichal, B.; Rotach, M. W.
2015-08-01
Within the Innsbruck Box project, a ground-based microwave radiometer (RPG-HATPRO) was operated in the Inn Valley (Austria), in very complex terrain, between September 2012 and May 2013 to obtain temperature and humidity vertical profiles of the full troposphere with a specific focus on the valley boundary layer. In order to assess its performance in a deep alpine valley, the profiles obtained by the radiometer with different retrieval algorithms based on different climatologies are compared to local radiosonde data. A retrieval that is improved with respect to the one provided by the manufacturer, based on better resolved data, shows a significantly smaller root mean square error (RMSE), both for the temperature and humidity profiles. The improvement is particularly substantial at the heights close to the mountaintop level and in the upper troposphere. Lower-level inversions, common in an alpine valley, are resolved to a satisfactory degree. On the other hand, upper-level inversions (above 1200 m) still pose a significant challenge for retrieval. For this purpose, specialized retrieval algorithms were developed by classifying the radiosonde climatologies into specialized categories according to different criteria (seasons, daytime, nighttime) and using additional regressors (e.g., measurements from mountain stations). The training and testing on the radiosonde data for these specialized categories suggests that a classification of profiles that reproduces meaningful physical characteristics can yield improved targeted specialized retrievals. A novel and very promising method of improving the profile retrieval in a mountainous region is adding further information in the retrieval, such as the surface temperature at fixed levels along a topographic slope or from nearby mountaintops.
CO2 profile retrievals from TCCON spectra
NASA Astrophysics Data System (ADS)
Dohe, Susanne; Hase, Frank; Sepúlveda, Eliezer; García, Omaira; Wunch, Debra; Wennberg, Paul; Gómez-Peláez, Angel; Abshire, James B.; Wofsy, Steven C.; Schneider, Matthias; Blumenstock, Thomas
2014-05-01
The Total Carbon Column Observing Network (TCCON) is a global network of ground-based Fourier Transform Spectrometers recording direct solar spectra in the near-infrared spectral region. With stringent requirements on the instrumentation, data processing and calibration, accurate and precise column-averaged abundances of CO2, CH4, N2O, HF, CO, H2O, and HDO are retrieved, providing an essential contribution to the validation of satellite data (e.g. GOSAT, OCO-2) and carbon cycle research (Olsen and Randerson, 2004). However, the determined column-averaged dry air mole fraction (DMF) contains no information about the vertical CO2 profile, due to the use of a simple scaling retrieval within the common TCCON analysis, where the fitting algorithm GFIT (e.g. Yang et al., 2005) is used. In this presentation we will apply a different procedure for calculating trace gas abundances from the measured spectra, the fitting algorithm PROFFIT (Hase et al., 2004), which has been shown to be in very good agreement with GFIT. PROFFIT additionally offers the ability to perform profile retrievals in which the pressure broadening effect of absorption lines is used to retrieve vertical gas profiles, which is of great interest especially for the CO2 modelling community. A new analyzing procedure will be shown, and retrieved vertical CO2 profiles of the TCCON sites Izaña (Tenerife, Canary Islands, Spain) and Lamont (Oklahoma, USA) will be presented and compared with simultaneously performed surface in-situ measurements and CO2 profiles from different aircraft campaigns. References: - Hase, F. et al., J.Q.S.R.T. 87, 25-52, 2004. - Olsen, S.C. and Randerson, J.T., J.G.Res., 109, D023012, 2004. - Yang, Z. et al., J.Q.S.R.T., 90, 309-321, 2005.
NASA Astrophysics Data System (ADS)
Kramarova, Natalya A.; Bhartia, Pawan K.; Jaross, Glen; Moy, Leslie; Xu, Philippe; Chen, Zhong; DeLand, Matthew; Froidevaux, Lucien; Livesey, Nathaniel; Degenstein, Douglas; Bourassa, Adam; Walker, Kaley A.; Sheese, Patrick
2018-05-01
The Limb Profiler (LP) is part of the Ozone Mapping and Profiler Suite launched on board the Suomi NPP satellite in October 2011. The LP measures solar radiation scattered from the atmospheric limb in the ultraviolet and visible spectral ranges between the surface and 80 km. These measurements of scattered solar radiances allow for the retrieval of ozone profiles from cloud tops up to 55 km. The LP started operational observations in April 2012. In this study we evaluate more than 5.5 years of ozone profile measurements from the OMPS LP processed with the new NASA GSFC version 2.5 retrieval algorithm. We provide a brief description of the key changes implemented in this new algorithm, including a pointing correction, new cloud height detection, explicit aerosol correction and a reduction of the number of wavelengths used in the retrievals. The OMPS LP ozone retrievals have been compared with independent satellite profile measurements obtained from the Aura Microwave Limb Sounder (MLS), the Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS) and the Odin Optical Spectrograph and InfraRed Imaging System (OSIRIS). We document observed biases and seasonal differences and evaluate the stability of the version 2.5 ozone record over 5.5 years. Our analysis indicates that the mean differences between LP and correlative measurements are well within the required ±10 % between 18 and 42 km. In the upper stratosphere and lower mesosphere (> 43 km) LP tends to have a negative bias. We find larger biases in the lower stratosphere and upper troposphere, but LP ozone retrievals have significantly improved in version 2.5 compared to version 2 due to the implemented aerosol correction. In the northern high latitudes we observe larger biases between 20 and 32 km due to the remaining thermal sensitivity issue. 
Our analysis shows that LP ozone retrievals agree well with the correlative satellite observations in characterizing the vertical, spatial and temporal ozone distribution associated with natural processes, such as the seasonal cycle and the quasi-biennial oscillation. We find a small positive drift of ~0.5 % yr-1 in the LP ozone record against MLS and OSIRIS that is more pronounced at altitudes above 35 km. This pattern in the relative drift is consistent with a possible 100 m drift in the LP sensor pointing detected by one of our altitude-resolving methods.
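A drift of the kind quoted above (~0.5 % yr-1 against MLS) is typically the slope of a straight-line fit to the relative-difference time series. A minimal sketch with synthetic numbers (the series, units, and built-in drift below are invented for illustration):

```python
import numpy as np

def relative_drift(t_years, lp_ozone, ref_ozone):
    """Slope, in percent per year, of a linear fit to the relative
    difference (LP - reference) / reference over time."""
    rel_diff_pct = 100.0 * (lp_ozone - ref_ozone) / ref_ozone
    slope, _intercept = np.polyfit(t_years, rel_diff_pct, 1)
    return slope

# synthetic monthly series over 5.5 years with a built-in 0.5 %/yr drift
t = np.arange(0.0, 5.5, 1.0 / 12.0)
ref = np.full_like(t, 3.0e12)          # arbitrary ozone units, constant reference
lp = ref * (1.0 + 0.005 * t)           # LP drifts upward at 0.5 % per year
drift = relative_drift(t, lp, ref)     # recovers ~0.5
```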
Introductory review on `Flying Triangulation': a motion-robust optical 3D measurement principle
NASA Astrophysics Data System (ADS)
Ettl, Svenja
2015-04-01
'Flying Triangulation' (FlyTri) is a recently developed principle that allows motion-robust optical 3D measurement of rough surfaces. It combines a simple sensor with sophisticated algorithms: a single-shot sensor acquires 2D camera images, and from each camera image a 3D profile is generated. The series of 3D profiles is then aligned by algorithms, without relying on any external tracking device. The system delivers real-time feedback of the measurement process, which enables an all-around measurement of objects. The principle has great potential for small-space acquisition environments, such as the measurement of the interior of a car, and for motion-sensitive measurement tasks, such as the intraoral measurement of teeth. This article gives an overview of the basic ideas and applications of FlyTri. The main challenges and their solutions are discussed. Measurement examples are also given to demonstrate the potential of the measurement principle.
A community effort to assess and improve drug sensitivity prediction algorithms
Costello, James C; Heiser, Laura M; Georgii, Elisabeth; Gönen, Mehmet; Menden, Michael P; Wang, Nicholas J; Bansal, Mukesh; Ammad-ud-din, Muhammad; Hintsanen, Petteri; Khan, Suleiman A; Mpindi, John-Patrick; Kallioniemi, Olli; Honkela, Antti; Aittokallio, Tero; Wennerberg, Krister; Collins, James J; Gallahan, Dan; Singer, Dinah; Saez-Rodriguez, Julio; Kaski, Samuel; Gray, Joe W; Stolovitzky, Gustavo
2015-01-01
Predicting the best treatment strategy from genomic information is a core goal of precision medicine. Here we focus on predicting drug response based on a cohort of genomic, epigenomic and proteomic profiling data sets measured in human breast cancer cell lines. Through a collaborative effort between the National Cancer Institute (NCI) and the Dialogue on Reverse Engineering Assessment and Methods (DREAM) project, we analyzed a total of 44 drug sensitivity prediction algorithms. The top-performing approaches modeled nonlinear relationships and incorporated biological pathway information. We found that gene expression microarrays consistently provided the best predictive power of the individual profiling data sets; however, performance was increased by including multiple, independent data sets. We discuss the innovations underlying the top-performing methodology, Bayesian multitask MKL, and we provide detailed descriptions of all methods. This study establishes benchmarks for drug sensitivity prediction and identifies approaches that can be leveraged for the development of new methods. PMID:24880487
Estimation of road profile variability from measured vehicle responses
NASA Astrophysics Data System (ADS)
Fauriat, W.; Mattrand, C.; Gayton, N.; Beakou, A.; Cembrzynski, T.
2016-05-01
When assessing the statistical variability of fatigue loads acting throughout the life of a vehicle, the question of the variability of road roughness naturally arises, as both quantities are strongly related. For car manufacturers, gathering information on the environment in which vehicles evolve is a long and costly but necessary process to adapt their products to durability requirements. In the present paper, a data processing algorithm is proposed in order to estimate the road profiles covered by a given vehicle, from the dynamic responses measured on this vehicle. The algorithm based on Kalman filtering theory aims at solving a so-called inverse problem, in a stochastic framework. It is validated using experimental data obtained from simulations and real measurements. The proposed method is subsequently applied to extract valuable statistical information on road roughness from an existing load characterisation campaign carried out by Renault within one of its markets.
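The paper's estimator inverts full vehicle dynamics in a stochastic framework; as a much-reduced illustration of the Kalman filtering machinery it builds on, here is a scalar random-walk filter applied to a synthetic noisy elevation signal. All parameters and the road model are invented for the sketch and are not the authors' vehicle model.

```python
import numpy as np

def kalman_1d(z, q, r, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter: state x_k = x_{k-1} + w (var q),
    measurement z_k = x_k + v (var r).  Returns the filtered estimates."""
    x, p, est = x0, p0, []
    for zk in z:
        p = p + q                      # predict: variance grows by process noise
        k = p / (p + r)                # Kalman gain
        x = x + k * (zk - x)           # update with the measurement innovation
        p = (1.0 - k) * p
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(0)
true_profile = np.cumsum(rng.normal(0.0, 0.01, 500))   # synthetic rough "road"
noisy = true_profile + rng.normal(0.0, 0.05, 500)      # noisy response signal
smoothed = kalman_1d(noisy, q=1e-4, r=2.5e-3)          # q, r match the noise vars
```

With q and r matched to the generating variances, the filtered estimate has a markedly lower error than the raw measurements.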
A Strategy toward Collaborative Filter Recommended Location Service for Privacy Protection
Wang, Peng; Yang, Jing; Zhang, Jianpei
2018-01-01
A new collaborative filtering recommendation strategy was proposed for existing privacy and security issues in location services. In this strategy, every user establishes his/her own position profile from daily position data, preprocessed using a density clustering method. Density prioritization was then used to choose similar user groups as service request responders, and the neighboring users in the chosen groups recommended appropriate location services using a collaborative filtering recommendation algorithm. Two filter algorithms, based on position-profile similarity and position-point similarity measures respectively, were designed for the recommendation. At the same time, a homomorphic encryption method was used to transfer location data for effective protection of privacy and security. A real location dataset was applied to test the proposed strategy, and the results showed that the strategy provides better location service while protecting users' privacy. PMID:29751670
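User-based collaborative filtering of the kind described rests on a similarity measure between user profiles and a similarity-weighted vote among neighbors. A toy sketch with cosine similarity; the position-profile vectors, ratings, and the whole setup are invented for illustration (the paper's algorithms additionally operate on encrypted data, which is omitted here):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two profile vectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def recommend(target, others, ratings, k=2):
    """Score a service for `target` as the similarity-weighted average of
    the k most similar users' ratings (user-based CF sketch)."""
    sims = np.array([cosine_sim(target, u) for u in others])
    top = np.argsort(sims)[::-1][:k]       # indices of the k nearest users
    w = sims[top]
    return (w @ ratings[top]) / w.sum()

# hypothetical position profiles (visit counts per place) and service ratings
target = np.array([5.0, 0.0, 3.0])
others = np.array([[4.0, 0.0, 3.0],        # very similar user
                   [0.0, 5.0, 0.0]])       # dissimilar user
ratings = np.array([4.0, 1.0])             # each neighbor's rating of a service
score = recommend(target, others, ratings, k=1)   # driven by the similar user
```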
NASA Astrophysics Data System (ADS)
Dempsey, M. J.; Booth, J.; Arend, M.; Melecio-Vazquez, D.
2016-12-01
The radar wind profiler (RWP) located on the Liberty Science Center in Jersey City, NJ is part of the New York City Meteorological Network (NYCMetNet). An automatic algorithm, expanding on those of Angevine [1] and Molod [2], is implemented to derive an urban boundary layer (UBL) height product from RWP signal-to-noise ratio (SNR) data. Time series of the RWP UBL heights from clear and cloudy days are examined and compared to UBL height time series calculated from thermal data obtained from a NYCMetNet radiometer located on the roof of the Grove School of Engineering at The City College of New York. UBL data from the RWP are also compared to the MERRA (Modern-Era Retrospective Analysis for Research and Applications) planetary boundary layer height time series product. A limited seasonal climatology is created from the available RWP data for clear and cloudy days and then compared to limited seasonal climatologies produced from MERRA boundary layer data and from boundary layer data calculated from the CCNY radiometer. As with wind profilers in the NOAA wind profiler network, the signal return in the lowest range gates is not always the result of turbulent scattering, but can come from other targets such as the building itself, birds and insects. The algorithm attempts to address this during the daytime, when strong signal returns at the lowest range gates mask the SNR maxima above, which are representative of the actual UBL height. The algorithm also has limited success detecting the collapse and fall of the boundary layer between the hours of 2:30 pm and 5:00 pm. The profiler's range gates limit observation of the nighttime boundary layer when it falls below the lowest gate, and of daytime convective boundary layer maxima that rise above the highest gate. Given these constraints of the instrument and the algorithm, it is recommended that the boundary layer height product be restricted to the hours of 8 am to 7 pm.
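The core of such an algorithm — taking the elevated SNR maximum as the boundary layer top while skipping clutter-contaminated low gates — can be sketched in a few lines. The gate spacing, heights, and SNR profile below are synthetic, not NYCMetNet data:

```python
import numpy as np

def ubl_height(snr, heights, min_gate=3):
    """Pick the boundary-layer height as the range gate with maximum SNR,
    skipping the lowest gates, where returns are often contaminated by
    ground clutter (the building itself), birds, or insects."""
    idx = min_gate + int(np.argmax(snr[min_gate:]))
    return heights[idx]

heights = np.arange(100.0, 3100.0, 100.0)          # 30 range gates, meters
snr = np.exp(-((heights - 1200.0) / 300.0) ** 2)   # synthetic SNR peak at 1.2 km
snr[:2] += 5.0                                     # spurious clutter in low gates
h = ubl_height(snr, heights, min_gate=3)           # ignores the clutter, finds 1200 m
```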
Computing the Envelope for Stepwise Constant Resource Allocations
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Clancy, Daniel (Technical Monitor)
2001-01-01
Estimating tight resource-level bounds is a fundamental problem in the construction of flexible plans with resource utilization. In this paper we describe an efficient algorithm that builds a resource envelope, the tightest possible such bound. The algorithm is based on transforming the temporal network of resource-consuming and resource-producing events into a flow network, with nodes corresponding to the events and edges to the necessary predecessor links between events. The incremental solution of a staged maximum-flow problem on the network is then used to compute the time of occurrence and the height of each step of the resource envelope profile. The staged algorithm has the same computational complexity as solving a maximum-flow problem on the entire flow network. This makes the method computationally feasible for use in the inner loop of search-based scheduling algorithms.
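The envelope computation rests on repeated maximum-flow solves over the event network. As a generic building block (not the paper's staged incremental variant), an Edmonds-Karp maximum-flow routine on a toy four-node capacity network might look like:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp maximum flow on an adjacency-matrix capacity graph."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:          # no augmenting path left: flow is maximal
            return total
        # bottleneck capacity along the path, then augment
        v, bottleneck = t, float("inf")
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += bottleneck
            flow[v][u] -= bottleneck  # residual reverse edge
            v = u
        total += bottleneck

# tiny network: source 0, sink 3; max flow here is 4
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
f = max_flow(cap, 0, 3)
```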
Using the Multilayer Free-Surface Flow Model to Solve Wave Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prokof’ev, V. A., E-mail: ProkofyevVA@vniig.ru
2017-01-15
A method is presented for changing over from a single-layer shallow-water model to a multilayer model with a hydrostatic pressure profile and, then, to a multilayer model with a nonhydrostatic pressure profile. The method does not require complex procedures for solving the discrete Poisson’s equation and offers high computational efficiency. Results of validating the algorithm against experimental data that are sensitive to the numerical dissipation of the scheme are presented. Examples are considered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilmore, Mark Allen
Turbulence, and turbulence-driven transport, are ubiquitous in magnetically confined plasmas, where there is an intimate relationship between turbulence, transport, instability driving mechanisms (such as gradients), plasma flows, and flow shear. Though much of the detailed physics of the interrelationship between turbulence, transport, drive mechanisms, and flow remains unclear, there have been many demonstrations that transport and/or turbulence can be suppressed or reduced via manipulation of plasma flow profiles. This is well known in magnetic fusion plasmas [e.g., the high-confinement mode (H-mode) and internal transport barriers (ITBs)], and has also been demonstrated in laboratory plasmas. However, it may be that the levels of particle transport obtained in such cases [e.g., H-mode, ITBs] are actually lower than is desirable for a practical fusion device. Ideally, one would be able to actively feedback-control the turbulent transport via manipulation of the flow profiles. The purpose of this research was to investigate the feasibility of using both advanced model-based control algorithms and non-model-based algorithms to control cross-field turbulence-driven particle transport through appropriate manipulation of radial plasma flow profiles. The University of New Mexico was responsible for the experimental portion of the project, while collaborators at the University of Montana provided plasma transport modeling, and collaborators at Lehigh University developed and explored control methods.
Stargate GTM: Bridging Descriptor and Activity Spaces.
Gaspar, Héléna A; Baskin, Igor I; Marcou, Gilles; Horvath, Dragos; Varnek, Alexandre
2015-11-23
Predicting the activity profile of a molecule or discovering structures possessing a specific activity profile are two important goals in chemoinformatics, which could be achieved by bridging activity and molecular descriptor spaces. In this paper, we introduce the "Stargate" version of the Generative Topographic Mapping approach (S-GTM) in which two different multidimensional spaces (e.g., structural descriptor space and activity space) are linked through a common 2D latent space. In the S-GTM algorithm, the manifolds are trained simultaneously in two initial spaces using the probabilities in the 2D latent space calculated as a weighted geometric mean of probability distributions in both spaces. S-GTM has the following interesting features: (1) activities are involved during the training procedure; therefore, the method is supervised, unlike conventional GTM; (2) using molecular descriptors of a given compound as input, the model predicts a whole activity profile, and (3) using an activity profile as input, areas populated by relevant chemical structures can be detected. To assess the performance of S-GTM prediction models, a descriptor space (ISIDA descriptors) of a set of 1325 GPCR ligands was related to a B-dimensional (B = 1 or 8) activity space corresponding to pKi values for eight different targets. S-GTM outperforms conventional GTM for individual activities and performs similarly to the Lasso multitask learning algorithm, although it is still slightly less accurate than the Random Forest method.
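The weighted geometric mean that couples the two latent-space probability distributions in S-GTM can be sketched in a few lines. The probability vectors and the equal weighting alpha = 0.5 below are toy values assumed for illustration:

```python
import numpy as np

def joint_responsibility(p_desc, p_act, alpha=0.5):
    """Combine latent-node probabilities from the descriptor space and the
    activity space via a weighted geometric mean, then renormalize."""
    p = (p_desc ** alpha) * (p_act ** (1.0 - alpha))
    return p / p.sum()

p_d = np.array([0.7, 0.2, 0.1])   # responsibilities over 3 latent nodes (descriptors)
p_a = np.array([0.1, 0.8, 0.1])   # responsibilities over the same nodes (activities)
p = joint_responsibility(p_d, p_a, alpha=0.5)
```

The combined distribution favors nodes that are plausible in both spaces at once, which is what lets the activity data supervise the manifold training.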
Is risk stratification ever the same as 'profiling'?
Braithwaite, R Scott; Stevens, Elizabeth R; Caplan, Arthur
2016-05-01
Physicians engage in risk stratification as a normative part of their professional duties. Risk stratification has the potential to be beneficial in many ways, and implicit recognition of this potential benefit underlies its acceptance as a cornerstone of the medical profession. However, risk stratification also has the potential to be harmful. We argue that 'profiling' is a term that corresponds to risk stratification strategies in which there is concern that ethical harms exceed likely or proven benefits. In the case of risk stratification for health goals, this would occur most frequently if benefits were obtained by threats to justice, autonomy or privacy. We discuss implications of the potential overlap between risk stratification and profiling for researchers and for clinicians, and we consider whether there are salient characteristics that make a particular risk stratification algorithm more or less likely to overlap with profiling, such as whether the risk stratification algorithm is based on voluntary versus non-voluntary characteristics, based on causal versus non-causal characteristics, or based on signifiers of historical disadvantage. We also discuss the ethical challenges created when a risk stratification scheme helps all subgroups but some more than others, or when risk stratification harms some subgroups but benefits the aggregate group.
Road profile estimation of city roads using DTPS
NASA Astrophysics Data System (ADS)
Wang, Qi; McDaniel, J. Gregory; Sun, Nian X.; Wang, Ming L.
2013-04-01
This work presents a non-destructive, non-contact acoustic sensing approach for measuring the profiles of roads and bridge decks with vehicles running at normal speed, without stopping traffic. The approach uses an instantaneous, real-time dynamic tire pressure sensor (DTPS) that measures the dynamic response of the tire-road interaction, improving the efficiency of currently used road profile measuring systems based on vehicle body-mounted profilers and axle-mounted accelerometers. In this work, a prototype real-time DTPS system has been developed and demonstrated on a testing van at speeds from 5 to 80 miles per hour (mph). A data analysis algorithm has been developed to remove axle dynamic motions from the measured DTPS data and to find the transfer function between the dynamic tire pressure change and the road profile. Field tests have been performed to estimate road profiles. The road profile resolution is approximately 5 to 10 cm in width, and the sensitivity is 0.3 cm for the height of road surface features at driving speeds of 5 to 80 mph.
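Estimating a transfer function between an input signal (road profile) and an output signal (dynamic tire pressure) is commonly done with an H1-type spectral estimate, H(f) = S_xy(f)/S_xx(f). A synthetic sketch of that generic step — not the authors' full algorithm, which also removes axle motions first:

```python
import numpy as np

def transfer_function(x, y, fs):
    """H1 frequency-response estimate between input x and output y:
    H(f) = conj(X) * Y / (conj(X) * X), with a tiny regularizer."""
    X = np.fft.rfft(x)
    Y = np.fft.rfft(y)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    h = (np.conj(X) * Y) / (np.conj(X) * X + 1e-12)
    return freqs, h

# synthetic check: the "output" is the input scaled by 2, delayed one sample
fs = 100.0
rng = np.random.default_rng(1)
x = rng.normal(size=1024)
y = 2.0 * np.roll(x, 1)              # circular delay keeps |H(f)| exactly 2
freqs, h = transfer_function(x, y, fs)
```

In practice the estimate would be averaged over many windowed segments (Welch-style) to suppress noise; the single-block version here keeps the idea visible.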
The profile algorithm for microwave delay estimation from water vapor radiometer data
NASA Technical Reports Server (NTRS)
Robinson, Steven E.
1988-01-01
A new algorithm has been developed for the estimation of tropospheric microwave path delays from water vapor radiometer (WVR) data, which does not require site- and weather-dependent empirical parameters to produce accuracy better than 0.3 cm of delay. Instead of taking the conventional linear approach, the new algorithm first uses the observables with an emission model to determine an approximate form of the vertical water vapor distribution, which is then explicitly integrated to estimate wet path delays in a second step. The intrinsic accuracy of this algorithm, excluding uncertainties caused by the radiometers and the emission model, has been examined for two-channel WVR data using path delays and corresponding simulated observables computed from archived radiosonde data. It is found that annual rms errors for a wide range of sites average 0.18 cm in the absence of clouds, 0.22 cm in cloudy weather, and 0.19 cm overall. In clear weather, the new algorithm's accuracy is comparable to the best that can be obtained from conventional linear algorithms, while in cloudy weather it offers a 35 percent improvement.
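The second step — explicit integration of the retrieved vapor distribution to a wet delay — can be illustrated with a toy exponential profile. The conversion factor of roughly 6.3 cm of delay per g/cm^2 of precipitable water vapor is a common rule of thumb assumed for this sketch, not the paper's emission-model-based computation:

```python
import numpy as np

def wet_delay_cm(heights_m, vapor_density_g_m3):
    """Integrate a water-vapor density profile (g/m^3) over height with the
    trapezoid rule to get precipitable water vapor, then convert to wet
    path delay using ~6.3 cm of delay per g/cm^2 of vapor."""
    dz = np.diff(heights_m)
    pwv_g_m2 = np.sum(0.5 * (vapor_density_g_m3[1:] + vapor_density_g_m3[:-1]) * dz)
    return 6.3 * pwv_g_m2 * 1e-4      # g/m^2 -> g/cm^2, then to cm of delay

# toy exponential profile: 10 g/m^3 at the surface, 2 km scale height
z = np.linspace(0.0, 10000.0, 1000)
rho = 10.0 * np.exp(-z / 2000.0)
delay = wet_delay_cm(z, rho)          # roughly 12.5 cm for this profile
```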
NASA Technical Reports Server (NTRS)
Liu, X.; Kizer, S.; Barnet, C.; Dvakarla, M.; Zhou, D. K.; Larar, A. M.
2012-01-01
The Joint Polar Satellite System (JPSS) is a U.S. National Oceanic and Atmospheric Administration (NOAA) mission in collaboration with the U.S. National Aeronautics and Space Administration (NASA) and international partners. The NPP Cross-track Infrared Microwave Sounding Suite (CrIMSS) consists of the infrared (IR) Cross-track Infrared Sounder (CrIS) and the microwave (MW) Advanced Technology Microwave Sounder (ATMS). The CrIS instrument is a hyperspectral interferometer, which measures high spectral and spatial resolution upwelling infrared radiances. The ATMS is a 22-channel radiometer similar to the Advanced Microwave Sounding Units (AMSU) A and B; it measures top-of-atmosphere MW upwelling radiation and provides the capability of sounding below clouds. The CrIMSS Environmental Data Record (EDR) algorithm provides three EDRs, namely the atmospheric vertical temperature, moisture and pressure profiles (AVTP, AVMP and AVPP, respectively), with the lower-tropospheric AVTP and the AVMP being JPSS Key Performance Parameters (KPPs). The operational CrIMSS EDR algorithm was originally designed to run on large IBM computers with a dedicated data management subsystem (DMS). We have ported the operational code to simple Linux systems by replacing the DMS with appropriate interfaces. We also changed the interface of the operational code so that we can read data from both the CrIMSS science code and the operational code, and compare lookup tables, parameter files, and output results. The details of the CrIMSS EDR algorithm are described in reference [1]. We will present results of testing the CrIMSS EDR operational algorithm using proxy data generated from Infrared Atmospheric Sounding Interferometer (IASI) satellite data and from NPP CrIS/ATMS data.
Bartsch, Georg; Mitra, Anirban P; Mitra, Sheetal A; Almal, Arpit A; Steven, Kenneth E; Skinner, Donald G; Fry, David W; Lenehan, Peter F; Worzel, William P; Cote, Richard J
2016-02-01
Due to the high recurrence risk of nonmuscle invasive urothelial carcinoma, it is crucial to distinguish patients at high risk from those with indolent disease. In this study we used a machine learning algorithm to identify the genes in patients with nonmuscle invasive urothelial carcinoma at initial presentation that were most predictive of recurrence. We used the genes in a molecular signature to predict recurrence risk within 5 years after transurethral resection of bladder tumor. Whole genome profiling was performed on 112 frozen nonmuscle invasive urothelial carcinoma specimens obtained at first presentation on Human WG-6 BeadChips (Illumina®). A genetic programming algorithm was applied to evolve classifier mathematical models for outcome prediction. Cross-validation based resampling and gene use frequencies were used to identify the most prognostic genes, which were combined into rules used in a voting algorithm to predict the sample target class. Key genes were validated by quantitative polymerase chain reaction. The classifier set included 21 genes that predicted recurrence. Quantitative polymerase chain reaction was done for these genes in a subset of 100 patients. A 5-gene combined rule incorporating a voting algorithm yielded 77% sensitivity and 85% specificity for predicting recurrence in the training set, and 69% and 62%, respectively, in the test set. A singular 3-gene rule was constructed that predicted recurrence with 80% sensitivity and 90% specificity in the training set, and 71% and 67%, respectively, in the test set. Using primary nonmuscle invasive urothelial carcinoma specimens from initial occurrences, genetic programming reproducibly identified transcripts predictive of recurrence. These findings could potentially impact nonmuscle invasive urothelial carcinoma management.
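The voting step over simple per-gene rules can be sketched as follows. The gene names and thresholds are entirely hypothetical, standing in for (and not reproducing) the published 5-gene signature:

```python
def vote_predict(sample, rules):
    """Predict recurrence by majority vote over simple expression rules;
    each rule maps an expression dict to True (recur) or False."""
    votes = sum(1 for rule in rules if rule(sample))
    return votes > len(rules) / 2

# illustrative threshold rules on hypothetical genes
rules = [
    lambda s: s["GENE_A"] > 2.0,   # over-expression votes for recurrence
    lambda s: s["GENE_B"] < 0.5,   # under-expression votes for recurrence
    lambda s: s["GENE_C"] > 1.0,
]
high_risk = vote_predict({"GENE_A": 3.1, "GENE_B": 0.2, "GENE_C": 0.4}, rules)
```

Here two of three rules fire, so the majority vote flags the sample as high risk.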
NASA Astrophysics Data System (ADS)
Veselovskii, I.; Dubovik, O.; Kolgotin, A.; Lapyonok, T.; di Girolamo, P.; Summa, D.; Whiteman, D. N.; Mishchenko, M.; Tanré, D.
2010-11-01
Multiwavelength (MW) Raman lidars have demonstrated their potential to profile particle parameters; however, until now, the physical models used in retrieval algorithms for processing MW lidar data have been predominantly based on the Mie theory. This approach is applicable to the modeling of light scattering by spherically symmetric particles only and does not adequately reproduce the scattering by generally nonspherical desert dust particles. Here we present an algorithm based on a model of randomly oriented spheroids for the inversion of multiwavelength lidar data. The aerosols are modeled as a mixture of two aerosol components: one composed only of spherical and the second composed of nonspherical particles. The nonspherical component is an ensemble of randomly oriented spheroids with size-independent shape distribution. This approach has been integrated into an algorithm retrieving aerosol properties from the observations with a Raman lidar based on a tripled Nd:YAG laser. Such a lidar provides three backscattering coefficients, two extinction coefficients, and the particle depolarization ratio at a single or multiple wavelengths. Simulations were performed for a bimodal particle size distribution typical of desert dust particles. The uncertainty of the retrieved particle surface, volume concentration, and effective radius for 10% measurement errors is estimated to be below 30%. We show that if the effect of particle nonsphericity is not accounted for, the errors in the retrieved aerosol parameters increase notably. The algorithm was tested with experimental data from a Saharan dust outbreak episode, measured with the BASIL multiwavelength Raman lidar in August 2007. The vertical profiles of particle parameters as well as the particle size distributions at different heights were retrieved. It was shown that the algorithm developed provided reasonable results, consistent with the available independent information about the observed aerosol event.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menikoff, Ralph
2012-04-03
Shock initiation in a plastic-bonded explosive (PBX) is due to hot spots. Current reactive burn models are based, at least heuristically, on the ignition and growth concept. The ignition phase occurs when a small localized region of high temperature (or hot spot) burns on a fast time scale. This is followed by a growth phase in which a reactive front spreads out from the hot spot. Propagating reactive fronts are deflagration waves. A key question is the deflagration speed in a PBX compressed and heated by the shock wave that generated the hot spot. Here, the ODEs for a steady deflagration wave profile in a compressible fluid are derived, along with the needed thermodynamic quantities of realistic equations of state corresponding to the reactants and products of a PBX. The properties of the wave profile equations are analyzed and an algorithm is derived for computing the deflagration speed. As an illustrative example, the algorithm is applied to compute the deflagration speed in shock-compressed PBX 9501 as a function of shock pressure. The calculated deflagration speed, even at the CJ pressure, is low compared to the detonation speed. The implications of this are briefly discussed.
Laser guide star wavefront sensing for ground-layer adaptive optics on extremely large telescopes.
Clare, Richard M; Le Louarn, Miska; Béchet, Clementine
2011-02-01
We propose ground-layer adaptive optics (GLAO) to improve the seeing on the 42 m European Extremely Large Telescope. Shack-Hartmann wavefront sensors (WFSs) with laser guide stars (LGSs) will experience significant spot elongation due to off-axis observation. This spot elongation influences the design of the laser launch location, laser power, WFS detector, and centroiding algorithm for LGS GLAO on an extremely large telescope. We show, using end-to-end numerical simulations, that with a noise-weighted matrix-vector-multiply reconstructor, the performance in terms of 50% ensquared energy (EE) of the side and central launch of the lasers is equivalent, the matched filter and weighted center of gravity centroiding algorithms are the most promising, and approximately 10×10 undersampled pixels are optimal. Significant improvement in the 50% EE can be observed with a few tens of photons/subaperture/frame, and no significant gain is seen by adding more than 200 photons/subaperture/frame. The LGS GLAO is not particularly sensitive to the sodium profile present in the mesosphere nor to a short-timescale (less than 100 s) evolution of the sodium profile. The performance of LGS GLAO is, however, sensitive to the atmospheric turbulence profile.
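A weighted center-of-gravity centroid differs from a plain center of gravity in that the spot image is multiplied by a weighting map (often the expected spot profile) before the moment sums are taken, which suppresses noisy wing pixels. A minimal sketch on a synthetic symmetric spot (the subaperture size and spot shape are invented for illustration):

```python
import numpy as np

def weighted_cog(img, weights):
    """Weighted center-of-gravity centroid: weight the spot image before
    computing the first-moment sums in y and x."""
    w = img * weights
    total = w.sum()
    ys, xs = np.indices(img.shape)
    return (w * ys).sum() / total, (w * xs).sum() / total

# symmetric Gaussian spot centered at (5, 5) on an 11x11 subaperture
ys, xs = np.indices((11, 11))
spot = np.exp(-((ys - 5.0) ** 2 + (xs - 5.0) ** 2) / 4.0)
weights = spot                      # weight by the expected spot shape
cy, cx = weighted_cog(spot, weights)
```

For elongated LGS spots, the weighting map would instead follow the elongation direction per subaperture, which is what makes the weighted variant competitive with the matched filter.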
Description of data on the Nimbus 7 LIMS map archive tape: Ozone and nitric acid
NASA Technical Reports Server (NTRS)
Remsberg, E. E.; Kurzeja, R. J.; Haggard, K. V.; Russell, J. M., III; Gordley, L. L.
1986-01-01
The Nimbus 7 Limb Infrared Monitor of the Stratosphere (LIMS) data set has been processed into a Fourier coefficient representation with a Kalman filter algorithm applied to profile data at individual latitudes and pressure levels. The algorithm produces synoptic data at noon Greenwich Mean Time (GMT) from the asynoptic orbital profiles. This form of the data set is easy to use and is appropriate for time series analysis and further data manipulation and display. Ozone and nitric acid results are grouped together in this report because the LIMS vertical fields of view (FOVs) and analysis characteristics for these species are similar. A comparison of the orbital input data with mixing ratios derived from Kalman filter coefficients indicates errors in mixing ratio of generally less than 5 percent, with 15 percent being a maximum error. The high quality of the mapped data was indicated by coherence of both the phases and the amplitudes of waves with latitude and pressure. Examples of the mapped fields are presented, and details are given concerning the importance of diurnal variations, the removal of polar stratospheric cloud signatures, and the interpretation of bias effects in the data near the tops of profiles.
On the stability analysis of sharply stratified shear flows
NASA Astrophysics Data System (ADS)
Churilov, Semyon
2018-05-01
When the stability of a sharply stratified shear flow is studied, the density profile is usually taken stepwise and a weak stratification between pycnoclines is neglected. As a consequence, in the instability domain of the flow two-sided neutral curves appear such that the waves corresponding to them are neutrally stable, whereas the neighboring waves on either side of the curve are unstable, in contrast with the classical result of Miles (J Fluid Mech 16:209-227, 1963), who proved that in stratified flows unstable oscillations can occur only on one side of the neutral curve. In the paper, the contradiction is resolved and changes in the flow stability pattern under transition from a model stepwise to a continuous density profile are analyzed. On this basis, a simple self-consistent algorithm is proposed for studying the stability of sharply stratified shear flows with a continuous density variation and an arbitrary monotonic velocity profile without inflection points. Because our calculations and the algorithm are both based on the method of stability analysis (Churilov J Fluid Mech 539:25-55, 2005; ibid, 617, 301-326, 2008), which differs essentially from those usually used, the paper starts with a brief review of the method and results obtained with it.
Thermal depth profiling of vascular lesions: automated regularization of reconstruction algorithms
NASA Astrophysics Data System (ADS)
Verkruysse, Wim; Choi, Bernard; Zhang, Jenny R.; Kim, Jeehyun; Nelson, J. Stuart
2008-03-01
Pulsed photo-thermal radiometry (PPTR) is a non-invasive, non-contact diagnostic technique used to locate cutaneous chromophores such as melanin (epidermis) and hemoglobin (vascular structures). Clinical utility of PPTR is limited because it typically requires trained user intervention to regularize the inversion solution. Herein, the feasibility of automated regularization was studied. A second objective of this study was to depart from modeling port wine stains (PWS), a vascular skin lesion frequently studied with PPTR, as strictly layered structures, since this may influence conclusions regarding PPTR reconstruction quality. Average blood vessel depths, diameters and densities derived from histology of 30 PWS patients were used to generate 15 randomized lesion geometries for which we simulated PPTR signals. Reconstruction accuracy for subjective regularization was compared with that for automated regularization methods. The automated regularization approach performed better. However, the average difference was much smaller than the variation between the 15 simulated profiles. Reconstruction quality depended more on the actual profile to be reconstructed than on the reconstruction algorithm or regularization method. Similar or better reconstruction accuracy can be achieved with an automated regularization procedure, which enhances prospects for user-friendly implementation of PPTR to optimize laser therapy on an individual patient basis.
Validation of Suomi NPP OMPS Limb Profiler Ozone Measurements
NASA Astrophysics Data System (ADS)
Buckner, S. N.; Flynn, L. E.; McCormick, M. P.; Anderson, J.
2017-12-01
The Ozone Mapping and Profiler Suite (OMPS) Limb Profiler onboard the Suomi National Polar-Orbiting Partnership satellite (SNPP) makes measurements of limb-scattered solar radiances over ultraviolet and visible wavelengths. These measurements are used in retrieval algorithms to create high vertical resolution ozone profiles, helping monitor the evolution of the atmospheric ozone layer. NOAA is in the process of implementing these algorithms to make near-real-time versions of these products. The main objective of this project is to generate estimates of the accuracy and precision of the OMPS Limb products by analysis of matchup comparisons with similar products from the Earth Observing System Microwave Limb Sounder (EOS Aura MLS). The studies investigated the sources of errors, and classified them with respect to height, geographic location, and atmospheric and observation conditions. In addition, this project included working with the algorithm developers in an attempt to develop corrections and adjustments. Collocation and zonal mean comparisons were made and statistics were gathered on both a daily and monthly basis encompassing the entire OMPS data record. This validation effort of the OMPS-LP data will be used to help validate data from the Stratosphere Aerosol and Gas Experiment III on the International Space Station (SAGE III ISS) and will also be used in conjunction with the NOAA Total Ozone from Assimilation of Stratosphere and Troposphere (TOAST) product to develop a new a priori for the NOAA Unique Combined Atmosphere Processing System (NUCAPS) ozone product. The current NUCAPS ozone product uses a combination of Cross-track Infrared Sounder (CrIS) data for the troposphere and a tropopause-based climatology derived from ozonesonde data as the stratospheric a priori. The latest version of TOAST uses a combination of both CrIS and OMPS-LP data.
We will further develop the newest version of TOAST and incorporate it into the NUCAPS system as a new a priori, in hopes of creating a better global ozone product.
A novel approach to selecting and weighting nutrients for nutrient profiling of foods and diets.
Arsenault, Joanne E; Fulgoni, Victor L; Hersey, James C; Muth, Mary K
2012-12-01
Nutrient profiling of foods is the science of ranking or classifying foods based on their nutrient composition. Most profiling systems use similar weighting factors across nutrients due to lack of scientific evidence to assign levels of importance to nutrients. Our aim was to use a statistical approach to determine the nutrients that best explain variation in Healthy Eating Index (HEI) scores and to obtain β-coefficients for the nutrients for use as weighting factors for a nutrient-profiling algorithm. We used a cross-sectional analysis of nutrient intakes and HEI scores. Our subjects included 16,587 individuals from the National Health and Nutrition Examination Survey 2005-2008 who were 2 years of age or older and not pregnant. Our main outcome measure was variation (R²) in HEI scores. Linear regression analyses were conducted with HEI scores as the dependent variable and all possible combinations of 16 nutrients of interest as independent variables, with covariates age, sex, and ethnicity. The analyses identified the best 1-nutrient variable model (with the highest R²), the best 2-nutrient variable model, and up to the best 16-nutrient variable model. The model with 8 nutrients explained 65% of the variance in HEI scores, similar to the models with 9 to 16 nutrients, but substantially higher than previous algorithms reported in the literature. The model contained five nutrients with positive β-coefficients (i.e., protein, fiber, calcium, unsaturated fat, and vitamin C) and three nutrients with negative coefficients (i.e., saturated fat, sodium, and added sugar). β-coefficients from the model were used as weighting factors to create an algorithm that generated a weighted nutrient density score representing the overall nutritional quality of a food. The weighted nutrient density score can be easily calculated and is useful for describing the overall nutrient quality of both foods and diets.
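The weighted nutrient density score described above is a simple linear combination of nutrient amounts. A hedged sketch follows; only the coefficient signs follow the abstract, while the magnitudes, nutrient names, and units are invented for illustration and are not the paper's fitted β-coefficients:

```python
# Signs follow the abstract; magnitudes are illustrative assumptions only.
BETAS = {
    "protein": 0.4, "fiber": 0.9, "calcium": 0.3,
    "unsaturated_fat": 0.2, "vitamin_c": 0.3,
    "saturated_fat": -0.8, "sodium": -0.5, "added_sugar": -0.7,
}

def nutrient_density_score(nutrients):
    """Weighted sum of standardized nutrient amounts for one food."""
    return sum(BETAS[name] * amount for name, amount in nutrients.items())

score = nutrient_density_score({"protein": 1.0, "fiber": 2.0, "sodium": 1.0})
```

Foods dense in the positively weighted nutrients score high; those dense in saturated fat, sodium, or added sugar are penalized.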
Zou, Lingyun; Wang, Zhengzhi; Huang, Jiaomin
2007-12-01
Subcellular location is one of the key biological characteristics of proteins. Position-specific profiles (PSP) are introduced as important characteristics of proteins in this article. In this study, the Position-Specific Iterative Basic Local Alignment Search Tool (PSI-BLAST) was used to search protein sequences against a database and obtain position-specific profiles. Position-specific scoring matrices are extracted from the profiles as one class of characteristics. Four-part amino acid compositions and 1st-7th order dipeptide compositions are also calculated as the other two classes of characteristics. In total, twelve characteristic vectors are extracted from each protein sequence. Next, the characteristic vectors are weighted by a simple weighting function and input into a BP neural network predictor named PSP-Weighted Neural Network (PSP-WNN). The Levenberg-Marquardt algorithm is employed to adjust the weight matrices and thresholds during network training instead of the error back-propagation algorithm. In a jackknife test on the RH2427 dataset, PSP-WNN achieved an overall prediction accuracy of 88.4%, higher than the results of the general BP neural network, the Markov model, and the fuzzy k-nearest neighbors algorithm on this dataset. In addition, the prediction performance of PSP-WNN was evaluated with a five-fold cross-validation test on the PK7579 dataset, and the results were consistently better than those of the previous method based on several support vector machines using compositions of both amino acids and amino acid pairs. These results indicate that PSP-WNN is a powerful tool for subcellular localization prediction. Finally, the influence of different weighting proportions among the three categories of characteristic vectors on prediction accuracy is discussed, and an appropriate proportion is chosen to improve prediction accuracy.
Characterizing the Vertical Distribution of Aerosols using Ground-based Multiwavelength Lidar Data
NASA Astrophysics Data System (ADS)
Ferrare, R. A.; Thorsen, T. J.; Clayton, M.; Mueller, D.; Chemyakin, E.; Burton, S. P.; Goldsmith, J.; Holz, R.; Kuehn, R.; Eloranta, E. W.; Marais, W.; Newsom, R. K.; Liu, X.; Sawamura, P.; Holben, B. N.; Hostetler, C. A.
2016-12-01
Observations of aerosol optical and microphysical properties are critical for developing and evaluating aerosol transport model parameterizations and assessing global aerosol-radiation impacts on climate. During the Combined HSRL And Raman lidar Measurement Study (CHARMS), we investigated the synergistic use of ground-based Raman lidar and High Spectral Resolution Lidar (HSRL) measurements to retrieve aerosol properties aloft. Continuous (24/7) operation of these co-located lidars during the ten-week CHARMS mission (mid-July through September 2015) allowed the acquisition of a unique, multiwavelength ground-based lidar dataset for studying aerosol properties above the Southern Great Plains (SGP) site. The ARM Raman lidar measured profiles of aerosol backscatter, extinction and depolarization at 355 nm as well as profiles of water vapor mixing ratio and temperature. The University of Wisconsin HSRL simultaneously measured profiles of aerosol backscatter, extinction and depolarization at 532 nm and aerosol backscatter at 1064 nm. Recent advances in both lidar retrieval theory and algorithm development demonstrate that vertically-resolved retrievals using such multiwavelength lidar measurements of aerosol backscatter and extinction can help constrain both the aerosol optical (e.g. complex refractive index, scattering, etc.) and microphysical properties (e.g. effective radius, concentrations) as well as provide qualitative aerosol classification. Based on this work, the NASA Langley Research Center (LaRC) HSRL group developed automated algorithms for classifying and retrieving aerosol optical and microphysical properties, demonstrated these retrievals using data from the unique NASA/LaRC airborne multiwavelength HSRL-2 system, and validated the results using coincident airborne in situ data. We apply these algorithms to the CHARMS multiwavelength (Raman+HSRL) lidar dataset to retrieve aerosol properties above the SGP site. 
We present some profiles of aerosol effective radius and concentration retrieved from the CHARMS data and compare column-average aerosol properties derived from the multiwavelength lidar aerosol retrievals to corresponding values retrieved from AERONET measurements.
Design optimization of steel frames using an enhanced firefly algorithm
NASA Astrophysics Data System (ADS)
Carbas, Serdar
2016-12-01
Mathematical modelling of real-world-sized steel frames under the Load and Resistance Factor Design-American Institute of Steel Construction (LRFD-AISC) steel design code provisions, where the steel profiles for the members are selected from a table of steel sections, turns out to be a discrete nonlinear programming problem. Finding the optimum design of such design optimization problems using classical optimization techniques is difficult. Metaheuristic algorithms provide an alternative way of solving such problems. The firefly algorithm (FFA) belongs to the swarm intelligence group of metaheuristics. The standard FFA has the drawback of becoming trapped in local optima in large-sized steel frame design problems. This study attempts to enhance the performance of the FFA by suggesting two new expressions for the attractiveness and randomness parameters of the algorithm. Two real-world-sized design examples are designed by the enhanced FFA and its performance is compared with the standard FFA as well as with particle swarm and cuckoo search algorithms.
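The standard FFA update that such enhancements modify can be sketched as below. The decaying-randomness schedule shown is one common variant and stands in for, but is not, the paper's two proposed expressions; the sphere test function and all parameter values are illustrative assumptions:

```python
import math, random

def firefly(objective, dim, n=15, iters=100, beta0=1.0, gamma=0.5,
            alpha=0.2, seed=1):
    """Standard firefly algorithm minimizing `objective` over [-2, 2]^dim."""
    rng = random.Random(seed)
    X = [[rng.uniform(-2.0, 2.0) for _ in range(dim)] for _ in range(n)]
    F = [objective(x) for x in X]
    best_x, best_f = min(zip(X, F), key=lambda p: p[1])
    best_x = best_x[:]
    for t in range(iters):
        alpha_t = alpha * 0.97 ** t      # decaying randomness (a common variant)
        for i in range(n):
            for j in range(n):
                if F[j] < F[i]:          # move firefly i toward brighter firefly j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # attractiveness decays with distance
                    X[i] = [a + beta * (b - a) + alpha_t * (rng.random() - 0.5)
                            for a, b in zip(X[i], X[j])]
                    F[i] = objective(X[i])
                    if F[i] < best_f:
                        best_x, best_f = X[i][:], F[i]
    return best_x, best_f

sphere = lambda x: sum(v * v for v in x)
best_x, best_f = firefly(sphere, dim=2)
```

For discrete frame design, the continuous positions would additionally be mapped to section indices in the steel table; the attractiveness (beta) and randomness (alpha) terms are exactly the parameters the enhanced FFA reformulates.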
Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.
2012-06-15
In geostatistics, most stochastic algorithms for simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimate, using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher-order marginal probability constraints as used in multiple-point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
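The core of IPF is a loop that rescales an initial probability table until it matches the imposed lower-order marginals. A minimal bivariate sketch follows; the geostatistical implementation fits many bivariate constraints against a higher-order multivariate table, but the rescaling mechanism is the same:

```python
import numpy as np

def ipf(initial, row_marginal, col_marginal, iters=50):
    """Iteratively rescale a joint probability table until its row and
    column sums match the imposed marginal constraints."""
    p = initial.astype(float).copy()
    for _ in range(iters):
        p *= (row_marginal / p.sum(axis=1))[:, None]   # fit row marginal
        p *= (col_marginal / p.sum(axis=0))[None, :]   # fit column marginal
    return p

p = ipf(np.full((2, 2), 0.25),           # uniform initial joint table
        np.array([0.3, 0.7]),            # imposed row marginal
        np.array([0.4, 0.6]))            # imposed column marginal
```

Each pass leaves the table consistent with the last-fitted marginal; alternating passes converge to a joint distribution honoring all constraints while staying as close as possible to the initial estimate.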
PSO-Based Smart Grid Application for Sizing and Optimization of Hybrid Renewable Energy Systems
Mohamed, Mohamed A.; Eltamaly, Ali M.; Alolah, Abdulrahman I.
2016-01-01
This paper introduces an optimal sizing algorithm for a hybrid renewable energy system using smart grid load management application based on the available generation. This algorithm aims to maximize the system energy production and meet the load demand with minimum cost and highest reliability. This system is formed by photovoltaic array, wind turbines, storage batteries, and diesel generator as a backup source of energy. Demand profile shaping as one of the smart grid applications is introduced in this paper using load shifting-based load priority. Particle swarm optimization is used in this algorithm to determine the optimum size of the system components. The results obtained from this algorithm are compared with those from the iterative optimization technique to assess the adequacy of the proposed algorithm. The study in this paper is performed in some of the remote areas in Saudi Arabia and can be expanded to any similar regions around the world. Numerous valuable results are extracted from this study that could help researchers and decision makers. PMID:27513000
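The particle swarm step used to size the system components can be sketched as a standard PSO loop. This is a generic illustration, not the paper's implementation; the toy quadratic stands in for the real sizing objective, which would model PV/wind/battery/diesel costs and reliability penalties:

```python
import random

def pso(cost, bounds, n_particles=20, iters=80, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Standard particle swarm optimization minimizing `cost` within `bounds`."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [cost(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]                                   # inertia
                           + c1 * rng.random() * (pbest[i][d] - X[i][d]) # cognitive pull
                           + c2 * rng.random() * (gbest[d] - X[i][d]))   # social pull
                lo, hi = bounds[d]
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
            f = cost(X[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = X[i][:], f
    return gbest, gbest_f

# Hypothetical stand-in for the sizing cost (dimensions might be, e.g.,
# PV array size and number of wind turbines).
toy_cost = lambda x: (x[0] - 3.0) ** 2 + (x[1] - 5.0) ** 2
gbest, gbest_f = pso(toy_cost, bounds=[(0.0, 10.0), (0.0, 10.0)])
```

In the sizing application, each particle encodes one candidate component mix, and the swarm converges on the mix with minimum cost subject to the reliability constraint.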
NASA Astrophysics Data System (ADS)
Rajalakshmi, N.; Padma Subramanian, D.; Thamizhavel, K.
2015-03-01
The extent of real power loss and voltage deviation associated with overloaded feeders in a radial distribution system can be reduced by reconfiguration. Reconfiguration is normally achieved by changing the open/closed state of tie/sectionalizing switches. Finding the optimal switch combination is a complicated problem, as there are many switching combinations possible in a distribution system. Hence optimization techniques are finding greater importance in reducing the complexity of the reconfiguration problem. This paper presents the application of the firefly algorithm (FA) for optimal reconfiguration of a radial distribution system with distributed generators (DG). The algorithm is tested on the IEEE 33-bus system installed with DGs and the results are compared with a binary genetic algorithm. It is found that the binary FA is more effective than the binary genetic algorithm in achieving real power loss reduction and improving the voltage profile, and hence enhancing the performance of the radial distribution system. Results are found to be optimum when DGs are added to the test system, demonstrating the impact of DGs on the distribution system.
Optimization in optical systems revisited: Beyond genetic algorithms
NASA Astrophysics Data System (ADS)
Gagnon, Denis; Dumont, Joey; Dubé, Louis
2013-05-01
Designing integrated photonic devices such as waveguides, beam-splitters and beam-shapers often requires optimization of a cost function over a large solution space. Metaheuristics - algorithms based on empirical rules for exploring the solution space - are specifically tailored to those problems. One of the most widely used metaheuristics is the standard genetic algorithm (SGA), based on the evolution of a population of candidate solutions. However, the stochastic nature of the SGA sometimes prevents access to the optimal solution. Our goal is to show that a parallel tabu search (PTS) algorithm is more suited to optimization problems in general, and to photonics in particular. PTS is based on several search processes using a pool of diversified initial solutions. To assess the performance of both algorithms (SGA and PTS), we consider an integrated photonics design problem, the generation of arbitrary beam profiles using a two-dimensional waveguide-based dielectric structure. The authors acknowledge financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC).
On the suitability of the connection machine for direct particle simulation
NASA Technical Reports Server (NTRS)
Dagum, Leonard
1990-01-01
The algorithmic structure of the vectorizable Stanford particle simulation (SPS) method was examined and reformulated in data parallel form. Some of the SPS algorithms can be directly translated to data parallel, but several of the vectorizable algorithms have no direct data parallel equivalent. This requires the development of new, strictly data parallel algorithms. In particular, a new sorting algorithm is developed to identify collision candidates in the simulation and a master/slave algorithm is developed to minimize communication cost in large table look-ups. Validation of the method is undertaken through test calculations for thermal relaxation of a gas, shock wave profiles, and shock reflection from a stationary wall. A qualitative measure of the performance of the Connection Machine for direct particle simulation is provided. The massively parallel architecture of the Connection Machine is found quite suitable for this type of calculation. However, there are difficulties in taking full advantage of this architecture because of the lack of a broad-based tradition of data parallel programming. An important outcome of this work has been new data parallel algorithms specifically of use for direct particle simulation but which also expand the data parallel diction.
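The sort-based identification of collision candidates can be illustrated with a small array-parallel sketch: sorting particles by cell key makes same-cell particles contiguous, so candidate groups fall out of run boundaries. This is an illustrative reconstruction of the idea, not the Connection Machine code; the hash constant and 2-D geometry are assumptions:

```python
import numpy as np

def collision_candidates_by_cell(positions, cell_size):
    """Sort particles by cell index so collision candidates (particles
    sharing a cell) become contiguous -- the data-parallel-friendly step."""
    cells = np.floor(positions / cell_size).astype(np.int64)
    # linearize 2-D cell coordinates into a single sortable key
    keys = cells[:, 0] * 1_000_003 + cells[:, 1]
    order = np.argsort(keys, kind="stable")
    sorted_keys = keys[order]
    # boundaries of runs of equal keys delimit groups of candidates
    starts = np.flatnonzero(np.r_[True, sorted_keys[1:] != sorted_keys[:-1]])
    groups = np.split(order, starts[1:])
    return [g for g in groups if len(g) > 1]

pos = np.array([[0.1, 0.1], [0.2, 0.2], [5.0, 5.0]])
groups = collision_candidates_by_cell(pos, cell_size=1.0)
```

Every operation here (elementwise arithmetic, sort, segmented reduction) maps naturally onto a data parallel machine, which is exactly why the sort replaces the scatter-based bookkeeping of the vectorized version.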
NASA Astrophysics Data System (ADS)
Li Voti, R.; Sibilia, C.; Bertolotti, M.
2003-01-01
Photothermal depth profiling has been the subject of many papers in the last years. Inverse problems on different kinds of materials have been identified, classified, and solved. A first classification has been done according to the type of depth profile: the physical quantity to be reconstructed is the optical absorption in the problems of type I, the thermal effusivity for type II, and both of them for type III. Another classification may be done depending on the time scale of the pump beam heating (frequency scan, time scan), or on its geometrical symmetry (one- or three-dimensional). In this work we want to discuss two different approaches, the genetic algorithms (GA) [R. Li Voti, C. Melchiorri, C. Sibilia, and M. Bertolotti, Anal. Sci. 17, 410 (2001); R. Li Voti, Proceedings, IV Int. Workshop on Advances in Signal Processing for Non-Destructive Evaluation of Materials, Quebec, August 2001] and the thermal wave backscattering (TWBS) [R. Li Voti, G. L. Liakhou, S. Paoloni, C. Sibilia, and M. Bertolotti, Anal. Sci. 17, 414 (2001); J. C. Krapez and R. Li Voti, Anal. Sci. 17, 417 (2001)], showing their performances and limits of validity for several kinds of photothermal depth profiling problems: The two approaches are based on different mechanisms and exhibit obviously different features. GA may be implemented on the exact heat diffusion equation as follows: one chromosome is associated to each profile. The genetic evolution of the chromosome allows one to find better and better profiles, eventually converging towards the solution of the inverse problem. The main advantage is that GA may be applied to any arbitrary profile, but several disadvantages exist; for example, the complexity of the algorithm, the slow convergence, and consequently the computer time consumed. On the contrary, TWBS uses a simplified theoretical model of heat diffusion in inhomogeneous materials. 
According to such a model, the photothermal signal depends linearly on the thermal effusivity inhomogeneities, which may be detected because they act as backscattering centers for the heat flux. The physical problem is reduced to the inversion of an algebraic linear system. The advantage is that TWBS allows excellent reconstructions, but only within the limits of validity of the approximate model, which include any slowly varying profile. Recently we have tested the performance of both TWBS and GA on linear conductivity profiles. In other words, we have done the numerical simulations of the photothermal measurements coming from a film over a substrate, where the conductivity in the film changes linearly from k1 at the surface, to k2 at the substrate. TWBS and GA have been used to reconstruct the original profiles. If the conductivity mismatch ranges as 0.2
NASA Technical Reports Server (NTRS)
Bak, Juseon; Liu, X.; Wei, J.; Kim, J. H.; Chance, K.; Barnet, C.
2011-01-01
An advanced algorithm based on the optimal estimation technique has been developed to derive ozone profiles from GOME UV radiances and has been adapted to OMI UV radiances. The OMI vertical resolution is 7-11 km in the troposphere and 10-14 km in the stratosphere. Satellite ultraviolet measurements (GOME, OMI) contain little vertical information on small-scale ozone structure, especially in the upper troposphere (UT) and lower stratosphere (LS), where a sharp O3 gradient across the tropopause and large ozone variability are observed. Therefore, retrievals depend greatly on the a priori knowledge in the UTLS.
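The optimal estimation step underlying such retrievals can be sketched for the linear case. This is a generic, Rodgers-style illustration of how measurement and a priori information are blended, not the GOME/OMI production algorithm:

```python
import numpy as np

def oe_retrieval(y, K, xa, Sa, Se):
    """Linear optimal-estimation retrieval: blend measurement y with the
    a priori xa, weighted by measurement (Se) and a priori (Sa) covariances."""
    SaI = np.linalg.inv(Sa)
    SeI = np.linalg.inv(Se)
    Shat = np.linalg.inv(K.T @ SeI @ K + SaI)   # posterior covariance
    return xa + Shat @ K.T @ SeI @ (y - K @ xa)

# With a near-perfect measurement (tiny Se) the retrieval follows y;
# with a poor measurement it relaxes toward the a priori xa.
K = np.eye(2)
xhat = oe_retrieval(np.array([1.0, 2.0]), K, np.zeros(2),
                    Sa=np.eye(2), Se=1e-6 * np.eye(2))
```

Where the radiances carry little vertical information, as in the UTLS, the Jacobian K contributes weakly and the solution is dominated by the xa/Sa term, which is precisely the a priori dependence the abstract describes.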
Diffractive optical elements for transformation of modes in lasers
Sridharan, Arun K.; Pax, Paul H.; Heebner, John E.; Drachenberg, Derrek R.; Armstrong, James P.; Dawson, Jay W.
2015-09-01
Spatial mode conversion modules are described, with the capability of efficiently transforming a given optical beam profile, at one plane in space into another well-defined optical beam profile at a different plane in space, whose detailed spatial features and symmetry properties can, in general, differ significantly. The modules are comprised of passive, high-efficiency, low-loss diffractive optical elements, combined with Fourier transform optics. Design rules are described that employ phase retrieval techniques and associated algorithms to determine the necessary profiles of the diffractive optical components. System augmentations are described that utilize real-time adaptive optical techniques for enhanced performance as well as power scaling.
Rain rate range profiling from a spaceborne radar
NASA Technical Reports Server (NTRS)
Meneghini, R.
1980-01-01
At certain frequencies and incidence angles the relative invariance of the surface scattering properties over land can be used to estimate the total attenuation and the integrated rain from a spaceborne attenuating-wavelength radar. The technique is generalized so that rain rate profiles along the radar beam can be estimated, i.e., rain rate determination at each range bin. This is done by modifying the standard algorithm for an attenuating-wavelength radar to include in it the measurement of the total attenuation. Simple error analyses of the estimates show that this type of profiling is possible if the total attenuation can be measured with a modest degree of accuracy.
Effect of out-of-roundness on the performance of a diesel engine connecting-rod bearing
NASA Technical Reports Server (NTRS)
Vijayaraghavan, D.; Brewe, D. E.; Keith, T. G., Jr.
1993-01-01
In this paper, the dynamic performance of the Ruston and Hornsby VEB diesel engine connecting-rod bearing with circular and out-of-round profiles is analyzed. The effect of cavitation is considered by using a cavitation algorithm, which mimics JFO boundary conditions. The effect of mass inertia is accounted for by solving coupled nonlinear equations of motion. The journal profiles considered are circular, elliptical, semi-elliptical, and three lobe epicycloid. The predicted journal trajectory and other performance parameters for one complete load cycle are presented for all of the out-of-round profiles and are also compared with the predictions for the circular bearing.
Electron Density Profiles of the Topside Ionosphere
NASA Technical Reports Server (NTRS)
Huang, Xue-Qin; Reinisch, Bodo W.; Bilitza, Dieter; Benson, Robert F.
2002-01-01
The existing uncertainties about the electron density profiles in the topside ionosphere, i.e., in the height region from hmF2 to ~2000 km, require the search for new data sources. The ISIS and Alouette topside sounder satellites from the sixties to the eighties recorded millions of ionograms, but most were not analyzed in terms of electron density profiles. In recent years an effort started to digitize the analog recordings to prepare the ionograms for computerized analysis. As of November 2001, about 350,000 ionograms had been digitized from the original 7-track analog tapes. These data are available in binary and CDF format from the anonymous ftp site of the National Space Science Data Center. A search site and browse capabilities on CDAWeb assist the scientific usage of these data. All information and access links can be found at http://nssdc.gsfc.nasa.gov/space/isis/isis-status.html. This paper describes the ISIS data restoration effort and shows how the digital ionograms are automatically processed into electron density profiles from satellite orbit altitude (1400 km for ISIS-2) down to the F peak. Because of the large volume of data, an automated processing algorithm is imperative. The TOPside Ionogram Scaler with True height algorithm (TOPIST) software developed for this task successfully scales ~70% of the ionograms.
Reynier, Frédéric; Petit, Fabien; Paye, Malick; Turrel-Davin, Fanny; Imbert, Pierre-Emmanuel; Hot, Arnaud; Mougin, Bruno; Miossec, Pierre
2011-01-01
The analysis of gene expression data shows that many genes display similarity in their expression profiles suggesting some co-regulation. Here, we investigated the co-expression patterns in gene expression data and proposed a correlation-based research method to stratify individuals. Using blood from rheumatoid arthritis (RA) patients, we investigated the gene expression profiles from whole blood using Affymetrix microarray technology. Co-expressed genes were analyzed by a biclustering method, followed by gene ontology analysis of the relevant biclusters. Taking the type I interferon (IFN) pathway as an example, a classification algorithm was developed from the 102 RA patients and extended to 10 systemic lupus erythematosus (SLE) patients and 100 healthy volunteers to further characterize individuals. We developed a correlation-based algorithm referred to as Classification Algorithm Based on a Biological Signature (CABS), an alternative to other approaches focused specifically on the expression levels. This algorithm applied to the expression of 35 IFN-related genes showed that the IFN signature presented a heterogeneous expression between RA, SLE and healthy controls which could reflect the level of global IFN signature activation. Moreover, the monitoring of the IFN-related genes during the anti-TNF treatment identified changes in type I IFN gene activity induced in RA patients. In conclusion, we have proposed an original method to analyze genes sharing an expression pattern and a biological function showing that the activation levels of a biological signature could be characterized by its overall state of correlation.
Zanderigo, Francesca; Sparacino, Giovanni; Kovatchev, Boris; Cobelli, Claudio
2007-09-01
The aim of this article was to use continuous glucose error-grid analysis (CG-EGA) to assess the accuracy of two time-series modeling methodologies recently developed to predict glucose levels ahead of time using continuous glucose monitoring (CGM) data. We considered subcutaneous time series of glucose concentration monitored every 3 minutes for 48 hours by the minimally invasive CGM sensor Glucoday® (Menarini Diagnostics, Florence, Italy) in 28 type 1 diabetic volunteers. Two prediction algorithms, based on first-order polynomial and autoregressive (AR) models, respectively, were considered with prediction horizons of 30 and 45 minutes and forgetting factors (ff) of 0.2, 0.5, and 0.8. CG-EGA was used on the predicted profiles to assess their point and dynamic accuracies using the original CGM profiles as reference. Continuous glucose error-grid analysis showed that the accuracy of both prediction algorithms is overall very good and that their performance is similar from a clinical point of view. However, the AR model seems preferable for hypoglycemia prevention. CG-EGA also suggests that, irrespective of the time-series model, the use of ff = 0.8 yields the most accurate readings in all glucose ranges. For the first time, CG-EGA is proposed as a tool to assess the clinically relevant performance of a prediction method separately at hypoglycemia, euglycemia, and hyperglycemia. In particular, we have shown that CG-EGA can be helpful in comparing different prediction algorithms, as well as in optimizing their parameters.
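The first-order polynomial predictor with a forgetting factor can be sketched as a weighted least-squares line fit in which sample weights decay geometrically into the past, extrapolated ahead by the prediction horizon. This is a generic illustration under those assumptions, not the authors' exact implementation; the function name and defaults are hypothetical:

```python
def predict_glucose(samples, sample_min=3.0, horizon_min=30.0, ff=0.8):
    """Extrapolate glucose `horizon_min` minutes ahead by fitting a
    first-order polynomial (line) to past CGM samples, weighting sample k
    by ff**(n-1-k) so that recent readings dominate."""
    n = len(samples)
    t = [i * sample_min for i in range(n)]
    w = [ff ** (n - 1 - k) for k in range(n)]
    # Closed-form weighted least squares for y = a + b*t.
    sw = sum(w)
    st = sum(wi * ti for wi, ti in zip(w, t))
    sy = sum(wi * yi for wi, yi in zip(w, samples))
    stt = sum(wi * ti * ti for wi, ti in zip(w, t))
    sty = sum(wi * ti * yi for wi, ti, yi in zip(w, t, samples))
    b = (sw * sty - st * sy) / (sw * stt - st * st)
    a = (sy - b * st) / sw
    return a + b * (t[-1] + horizon_min)
```

On a perfectly linear trace the extrapolation is exact regardless of ff; the forgetting factor only matters once the trend changes over the fitting window.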
Assimilation of Atmospheric InfraRed Sounder (AIRS) Profiles using WRF-Var
NASA Technical Reports Server (NTRS)
Zavodsky, Brad; Jedlovec, Gary J.; Lapenta, William
2008-01-01
The Weather Research and Forecasting (WRF) model contains a three-dimensional variational (3DVAR) assimilation system (WRF-Var), which allows a user to join data from multiple sources into one coherent analysis. WRF-Var combines observations with a background field, traditionally generated from a previous model forecast, through minimization of a cost function. In data-sparse regions, remotely sensed observations may be able to improve analyses and produce improved forecasts. One such source comes from the Atmospheric Infrared Sounder (AIRS), which together with the Advanced Microwave Sounding Unit (AMSU) represents one of the most advanced space-based atmospheric sounding systems. The combined AIRS/AMSU system provides radiance measurements used as input to a sophisticated retrieval scheme, which has been shown to produce temperature profiles with an accuracy of 1 K over 1 km layers and humidity profiles with an accuracy of 15% over 2 km layers in both clear and partly cloudy conditions. The retrieval algorithm also provides estimates of the accuracy of the retrieved values at each pressure level, allowing the user to select profiles based on the required error tolerances of the application. The purpose of this paper is to describe a procedure to optimally assimilate high-resolution AIRS profile data into a regional configuration of the Advanced Research WRF (ARW) version 2.2 using WRF-Var. The paper focuses on development of background error covariances for the regional domain and background field type using gen_be, and on an optimal methodology for ingesting AIRS temperature and moisture profiles as separate overland and overwater retrievals with different error characteristics in WRF-Var. The AIRS thermodynamic profiles are obtained from the version 5.0 Earth Observing System (EOS) science team retrieval algorithm and contain information about the quality of each temperature layer.
The quality indicators are used to select the highest quality temperature and moisture data for each profile location and pressure level. Analyses are run to produce quasi-real-time regional weather forecasts over the continental U.S. The preliminary assessment of the impact of the AIRS profiles will focus on intelligent use of the quality indicators, optimized tuning of the WRF-Var, and comparison of analysis soundings to radiosondes.
Spettell, Claire M; Wall, Terry C; Allison, Jeroan; Calhoun, Jaimee; Kobylinski, Richard; Fargason, Rachel; Kiefe, Catarina I
2003-01-01
Background Multiple factors limit identification of patients with depression from administrative data. However, administrative data drives many quality measurement systems, including the Health Plan Employer Data and Information Set (HEDIS®). Methods We investigated two algorithms for identification of physician-recognized depression. The study sample was drawn from primary care physician member panels of a large managed care organization. All members were continuously enrolled between January 1 and December 31, 1997. Algorithm 1 required at least two criteria in any combination: (1) an outpatient diagnosis of depression or (2) a pharmacy claim for an antidepressant. Algorithm 2 included the same criteria as algorithm 1, but required a diagnosis of depression for all patients. With algorithm 1, we identified the medical records of a stratified, random subset of patients with and without depression (n=465). We also identified patients of primary care physicians with a minimum of 10 depressed members by algorithm 1 (n=32,819) and algorithm 2 (n=6,837). Results The sensitivity, specificity, and positive predictive values were: Algorithm 1: 95 percent, 65 percent, 49 percent; Algorithm 2: 52 percent, 88 percent, 60 percent. Compared to algorithm 1, profiles from algorithm 2 revealed higher rates of follow-up visits (43 percent, 55 percent) and appropriate antidepressant dosage acutely (82 percent, 90 percent) and chronically (83 percent, 91 percent) (p<0.05 for all). Conclusions Both algorithms had high false positive rates. Denominator construction (algorithm 1 versus 2) contributed significantly to variability in measured quality. Our findings raise concern about interpreting depression quality reports based upon administrative data. PMID:12968818
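The reported accuracy figures follow directly from the 2x2 confusion table of algorithm flags against chart-review diagnosis. As a minimal illustration (the function name and example counts are hypothetical, chosen only to echo algorithm 1's sensitivity and specificity):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and positive predictive value from the
    2x2 confusion table of an identification algorithm vs. a gold standard
    (here, medical record review)."""
    sensitivity = tp / (tp + fn)   # flagged among truly depressed
    specificity = tn / (tn + fp)   # not flagged among not depressed
    ppv = tp / (tp + fp)           # truly depressed among flagged
    return sensitivity, specificity, ppv
```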
NASA Astrophysics Data System (ADS)
Schwarz, Jakob; Kirchengast, Gottfried; Schwaerz, Marc
2018-05-01
Global Navigation Satellite System (GNSS) radio occultation (RO) observations are highly accurate, long-term stable data sets and are globally available as a continuous record from 2001. Essential climate variables for the thermodynamic state of the free atmosphere - such as pressure, temperature, and tropospheric water vapor profiles (involving background information) - can be derived from these records, which therefore have the potential to serve as climate benchmark data. However, to exploit this potential, atmospheric profile retrievals need to be very accurate and the remaining uncertainties quantified and traced throughout the retrieval chain from raw observations to essential climate variables. The new Reference Occultation Processing System (rOPS) at the Wegener Center aims to deliver such an accurate RO retrieval chain with integrated uncertainty propagation. Here we introduce and demonstrate the algorithms implemented in the rOPS for uncertainty propagation from excess phase to atmospheric bending angle profiles, for estimated systematic and random uncertainties, including vertical error correlations and resolution estimates. We estimated systematic uncertainty profiles with the same operators as used for the basic state profiles retrieval. The random uncertainty is traced through covariance propagation and validated using Monte Carlo ensemble methods. The algorithm performance is demonstrated using test day ensembles of simulated data as well as real RO event data from the satellite missions CHAllenging Minisatellite Payload (CHAMP); Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC); and Meteorological Operational Satellite A (MetOp). The results of the Monte Carlo validation show that our covariance propagation delivers correct uncertainty quantification from excess phase to bending angle profiles. The results from the real RO event ensembles demonstrate that the new uncertainty estimation chain performs robustly. 
Together with the other parts of the rOPS processing chain this part is thus ready to provide integrated uncertainty propagation through the whole RO retrieval chain for the benefit of climate monitoring and other applications.
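The covariance-propagation step described here can be sketched in a few lines: push an input covariance through a linear(ized) operator and cross-check the result with a Monte Carlo ensemble, which is the validation logic the authors describe. This is a generic illustration, not the rOPS code; the operator and covariance below are toy examples:

```python
import numpy as np

def propagate_covariance(A, C):
    """Linearized covariance propagation: y = A x  =>  C_y = A C_x A^T."""
    return A @ C @ A.T

def monte_carlo_covariance(A, C, n=200000, seed=0):
    """Validate the propagation with an ensemble of perturbed inputs,
    as in a Monte Carlo consistency check."""
    rng = np.random.default_rng(seed)
    x = rng.multivariate_normal(np.zeros(C.shape[0]), C, size=n)
    y = x @ A.T
    return np.cov(y, rowvar=False)
```

Agreement between the two estimates, within sampling noise, is exactly the kind of evidence the Monte Carlo validation provides.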
Importance of A Priori Vertical Ozone Profiles for TEMPO Air Quality Retrievals
NASA Astrophysics Data System (ADS)
Johnson, M. S.; Sullivan, J. T.; Liu, X.; Zoogman, P.; Newchurch, M.; Kuang, S.; McGee, T. J.; Leblanc, T.
2017-12-01
Ozone (O3) is a toxic pollutant which plays a major role in air quality. Typically, monitoring of surface air quality and O3 mixing ratios is conducted using in situ measurement networks. This is partially because high-quality information related to air quality is limited from space-borne platforms due to coarse spatial resolution, limited temporal frequency, and minimal sensitivity to lower-tropospheric and surface-level O3. The Tropospheric Emissions: Monitoring of Pollution (TEMPO) satellite is designed to address the limitations of current space-based platforms and to improve our ability to monitor North American air quality. TEMPO will provide hourly data of total column and vertical profiles of O3 with high spatial resolution to be used as a near-real-time air quality product. TEMPO O3 retrievals will apply the Smithsonian Astrophysical Observatory profile algorithm developed based on work from GOME, GOME-2, and OMI. This algorithm is suggested to use a priori O3 profile information from a climatological database developed from long-term ozone-sonde measurements (the tropopause-based (TB-Clim) O3 climatology). This study evaluates the TB-Clim dataset and model-simulated O3 profiles, which could potentially serve as a priori O3 profile information in TEMPO retrievals, from near-real-time data assimilation model products (NASA GMAO's operational GEOS-5 FP model and reanalysis data from MERRA2) and a full chemical transport model (CTM), GEOS-Chem. In this study, vertical profile products are evaluated with surface (0-2 km) and tropospheric (0-10 km) TOLNet observations, and the theoretical impact of individual a priori profile sources on the accuracy of TEMPO O3 retrievals in the troposphere and at the surface is presented.
Results indicate that while the TB-Clim climatological dataset can replicate seasonally-averaged tropospheric O3 profiles, model-simulated profiles from a full CTM resulted in more accurate tropospheric and surface-level O3 retrievals from TEMPO when compared to hourly and daily-averaged TOLNet observations. Furthermore, it is shown that when large surface O3 mixing ratios are observed, TEMPO retrieval values at the surface are most accurate when applying CTM a priori profile information compared to all other data products.
Langenbucher, Frieder
2003-01-01
MS Excel is a useful tool to handle in vitro/in vivo correlation (IVIVC) distribution functions, with emphasis on the Weibull and the biexponential distribution, which are most useful for the presentation of cumulative profiles, e.g. release in vitro or urinary excretion in vivo, and differential profiles such as the plasma response in vivo. The discussion includes moments (AUC and mean) as summarizing statistics, and data-fitting algorithms for parameter estimation.
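The cumulative Weibull distribution mentioned here is typically written as F(t) = Fmax (1 - exp(-((t - t0)/td)^b)), where td is the time at which 63.2% of Fmax is reached and b is the shape parameter. A small sketch of evaluating that profile (parameter names are illustrative, not the article's spreadsheet layout):

```python
import math

def weibull_release(t, f_max=100.0, t0=0.0, td=1.0, b=1.5):
    """Cumulative Weibull profile F(t) = Fmax * (1 - exp(-((t-t0)/td)**b)),
    a common parameterization for in vitro release or urinary excretion
    curves. Returns 0 before the lag time t0."""
    if t <= t0:
        return 0.0
    return f_max * (1.0 - math.exp(-(((t - t0) / td) ** b)))
```

A data-fitting step, as discussed in the article, would estimate (Fmax, t0, td, b) by least squares against observed cumulative data.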
2006-05-11
examined. These data were processed by the Automatic Real Time Ionogram Scaler with True Height (ARTIST) [Reinisch and Huang, 1983] program into electron...IDA3D. The data is locally available and previously quality checked. In addition, IDA3D maps using ARTIST-calculated profiles from hand scaled...ionograms are available for comparison. The first test run of the IDA3D used only O-mode autoscaled virtual height profiles from five different digisondes
2017-11-01
inversion layer, or the well-mixed boundary layer. In such cases a low cloud ceiling is not present. In all instances the atmospheric extinction profiles...height, radiation fog depth, or the inversion layer height. The visibility regions and several representative vertical profiles of extinction are...the coefficient B can be found by B = ln(D/A). The coefficient B is sometimes a function of the cloud ceiling height, the inversion layer height
Fernandez, M Castrillon; Venencia, C; Garrigó, E; Caussa, L
2012-06-01
To compare measured and calculated doses using Pencil Beam (PB) and Monte Carlo (MC) algorithms on a CIRS thorax phantom for SBRT lung treatments. A 6MV photon beam generated by a Primus linac with an Optifocus MLC (Siemens) was used. Dose calculation was done using the iPlan v4.1.2 TPS (BrainLAB) with the PB and MC (dose to water and dose to medium) algorithms. The commissioning of both algorithms was done by reproducing experimental measurements in water. A CIRS thorax phantom was used to compare doses using a Farmer-type ion chamber (PTW) and EDR2 radiographic films (KODAK). The ionization chamber, in a tissue-equivalent insert, was placed in two positions of lung tissue and was irradiated using three treatment plans. Axial dose distributions were measured for four treatment plans using conformal and IMRT techniques. Dose distribution comparisons were done by dose profiles and the gamma index (3%/3mm). For the studied beam configurations, ion chamber measurements show that PB overestimates the dose by up to 8.5%, whereas MC has a maximum variation of 1.6%. Dosimetric analysis using dose profiles shows that PB overestimates the dose in the region corresponding to the lung by up to 16%. For the axial dose distribution comparison, the percentage of pixels with gamma index lower than one for MC versus PB was, plan 1: 95.6% versus 87.4%, plan 2: 91.2% versus 77.6%, plan 3: 99.7% versus 93.1%, and plan 4: 98.8% versus 91.7%. It was confirmed that the lower dosimetric errors of the MC algorithm appear when the spatial resolution and variance decrease, at the expense of increased computation time. The agreement between measured and calculated doses, in a phantom with lung heterogeneities, is better with the MC algorithm. The PB algorithm overestimates doses in lung tissue, which could have a clinical impact in SBRT lung treatments. © 2012 American Association of Physicists in Medicine.
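The 3%/3 mm gamma criterion used for these comparisons combines a dose-difference test with a distance-to-agreement test. A simplified 1-D global-gamma sketch (brute-force search over evaluated points; clinical tools use optimized 2-D/3-D versions):

```python
def gamma_index_1d(ref, evalp, spacing_mm, dose_tol=0.03, dta_mm=3.0):
    """1-D global gamma analysis (default 3%/3 mm): for each reference
    point, the minimum over evaluated points of
    sqrt((distance/DTA)^2 + (dose difference/tolerance)^2).
    Dose difference is normalized to the reference maximum (global gamma);
    a point passes when gamma <= 1."""
    d_max = max(ref)
    gammas = []
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(evalp):
            dist = (i - j) * spacing_mm
            ddose = (de - dr) / (dose_tol * d_max)
            best = min(best, (dist / dta_mm) ** 2 + ddose ** 2)
        gammas.append(best ** 0.5)
    return gammas
```

The pass rate quoted in such studies is the fraction of points with gamma at or below one.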
An OMIC biomarker detection algorithm TriVote and its application in methylomic biomarker detection.
Xu, Cheng; Liu, Jiamei; Yang, Weifeng; Shu, Yayun; Wei, Zhipeng; Zheng, Weiwei; Feng, Xin; Zhou, Fengfeng
2018-04-01
Transcriptomic and methylomic patterns represent two major OMIC data sources impacted by both inheritable genetic information and environmental factors, and have been widely used as disease diagnosis and prognosis biomarkers. Modern transcriptomic and methylomic profiling technologies detect the status of tens of thousands or even millions of probing residues in the human genome, and introduce a major computational challenge for existing feature selection algorithms. This study proposes a three-step feature selection algorithm, TriVote, to detect a subset of transcriptomic or methylomic residues with highly accurate binary classification performance. TriVote outperforms both filter and wrapper feature selection algorithms, achieving higher classification accuracy with fewer features on 17 transcriptomes and two methylomes. Biological functions of the methylome biomarkers detected by TriVote are discussed for their disease associations. An easy-to-use Python package is also released to facilitate further applications.
Extremum Seeking Control of Smart Inverters for VAR Compensation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnold, Daniel; Negrete-Pincetic, Matias; Stewart, Emma
2015-09-04
Reactive power compensation is used by utilities to ensure customer voltages are within pre-defined tolerances and reduce system resistive losses. While much attention has been paid to model-based control algorithms for reactive power support and Volt Var Optimization (VVO), these strategies typically require relatively large communications capabilities and accurate models. In this work, a non-model-based control strategy for smart inverters is considered for VAR compensation. An Extremum Seeking control algorithm is applied to modulate the reactive power output of inverters based on real power information from the feeder substation, without an explicit feeder model. Simulation results using utility demand information confirm the ability of the control algorithm to inject VARs to minimize feeder head real power consumption. In addition, we show that the algorithm is capable of improving feeder voltage profiles and reducing reactive power supplied by the distribution substation.
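Extremum seeking in this spirit can be sketched as a dither-demodulate-integrate loop that descends the gradient of a measured cost without any model of it. The cost function, gains, and dither parameters below are illustrative toys, not the feeder setup of the paper; in the paper's context the "cost" would be the measured feeder-head real power and the decision variable the inverter reactive power setpoint:

```python
import math

def extremum_seek(cost, q0=0.0, a=0.2, omega=20 * math.pi, gain=1.0,
                  dt=0.001, steps=100000):
    """Perturbation-based extremum seeking: inject a small sinusoidal
    dither a*sin(w t) into the setpoint, demodulate the measured cost with
    the same sinusoid to estimate the local gradient, and integrate against
    it. No model of `cost` is required."""
    q = q0
    for n in range(steps):
        t = n * dt
        y = cost(q + a * math.sin(omega * t))     # measured cost
        grad_est = y * math.sin(omega * t)        # correlates to (a/2) dJ/dq
        q -= gain * grad_est * dt                 # gradient descent
        # A high-pass filter on y is often added to remove the DC component.
    return q
```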
Evolutionary computation applied to the reconstruction of 3-D surface topography in the SEM.
Kodama, Tetsuji; Li, Xiaoyuan; Nakahira, Kenji; Ito, Dai
2005-10-01
A genetic algorithm has been applied to the line profile reconstruction from the signals of the standard secondary electron (SE) and/or backscattered electron detectors in a scanning electron microscope. This method solves the topographical surface reconstruction problem as one of combinatorial optimization. To extend this optimization approach for three-dimensional (3-D) surface topography, this paper considers the use of a string coding where a 3-D surface topography is represented by a set of coordinates of vertices. We introduce the Delaunay triangulation, which attains the minimum roughness for any set of height data to capture the fundamental features of the surface being probed by an electron beam. With this coding, the strings are processed with a class of hybrid optimization algorithms that combine genetic algorithms and simulated annealing algorithms. Experimental results on SE images are presented.
A sonification algorithm for developing the off-roads models for driving simulators
NASA Astrophysics Data System (ADS)
Chiroiu, Veturia; Brişan, Cornel; Dumitriu, Dan; Munteanu, Ligia
2018-01-01
In this paper, a sonification algorithm for developing off-road models for driving simulators is proposed. The aim of this algorithm is to overcome the difficulty of identifying the heuristics best suited to a particular off-road profile built from measurements. The sonification algorithm is based on stochastic polynomial chaos analysis, suitable for solving equations with random input data. The fluctuations are generated by incomplete measurements, leading to inhomogeneities in the cross-sectional curves of off-roads before and after deformation, unstable contact between the tire and the road, and an unrealistic distribution of contact and friction forces in the unknown contact domains. The approach is exercised on two particular problems, and the results compare favorably to existing analytical and numerical solutions. The sonification technique represents a useful multiscale analysis able to build a low-cost virtual-reality environment with increased degrees of realism for driving simulators and higher user flexibility.
NASA Astrophysics Data System (ADS)
González, Diego; Botella, Guillermo; García, Carlos; Prieto, Manuel; Tirado, Francisco
2013-12-01
This contribution focuses on the optimization of matching-based motion estimation algorithms widely used in video coding standards, using an Altera custom-instruction-based paradigm and a combination of synchronous dynamic random access memory (SDRAM) with on-chip memory in Nios II processors. A complete profile of the algorithms is obtained before the optimization to locate code bottlenecks; a custom instruction set is then created and added to the specific design, enhancing the original system. In addition, every possible memory combination between on-chip memory and SDRAM has been tested to achieve the best performance. The final throughputs of the complete designs are shown. This manuscript outlines a low-cost system, mapped using very large scale integration technology, which accelerates software algorithms by converting them into custom hardware logic blocks, and shows the best combination of on-chip memory and SDRAM for the Nios II processor.
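The matching-based motion estimation being accelerated is, at its core, a full search for the block displacement that minimizes the sum of absolute differences (SAD). A plain-software sketch of that kernel, the kind of inner loop one would turn into a custom instruction (frame contents and dimensions are hypothetical):

```python
def best_motion_vector(ref, cur, bx, by, bs, search):
    """Full-search block matching: find the displacement (dx, dy) within
    +/-search pixels that minimizes the SAD between the bs x bs block of
    the current frame at (bx, by) and the reference frame."""
    def sad(dx, dy):
        total = 0
        for y in range(bs):
            for x in range(bs):
                total += abs(cur[by + y][bx + x] - ref[by + y + dy][bx + x + dx])
        return total

    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # Keep the candidate reference block inside the frame.
            if (0 <= bx + dx and bx + dx + bs <= len(ref[0]) and
                    0 <= by + dy and by + dy + bs <= len(ref)):
                s = sad(dx, dy)
                if best is None or s < best[0]:
                    best = (s, dx, dy)
    return best[1], best[2]
```

The SAD accumulation is the hot spot: it is a long chain of subtract/abs/add operations, which is exactly what a custom instruction or hardware logic block can collapse into a few cycles.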
Wang, C. L.
2016-05-17
On the basis of the FluoroBancroft linear-algebraic method [S.B. Andersson, Opt. Exp. 16, 18714 (2008)], three highly resolved positioning methods were proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function (LRF), the non-linear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. The proposed algorithms give an average 0.03-0.08 pixel position error, much smaller than that (0.29 pixel) of a traditional maximum photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and an equivalent or better instrument resolution in powder diffraction than the MPA. Moreover, these characteristics will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis.
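The linearization idea can be illustrated for the Gaussian LRF: the logarithm of the photon-number profile is quadratic in pixel position, so the event position falls out of an ordinary polynomial fit rather than an argmax. This is a generic sketch of that idea, not the paper's exact algorithm:

```python
import numpy as np

def gaussian_lrf_position(counts, pixels):
    """With a Gaussian light-response function
    N(x) = A exp(-(x - mu)^2 / (2 s^2)), the log-counts are quadratic in x:
    ln N = b2 x^2 + b1 x + b0, so the center is mu = -b1 / (2 b2).
    This recovers sub-pixel positions, unlike a maximum-photon argmax."""
    logn = np.log(counts)
    b2, b1, _ = np.polyfit(pixels, logn, 2)
    return -b1 / (2.0 * b2)
```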
NASA Astrophysics Data System (ADS)
Ingacheva, Anastasia; Chukalina, Marina; Khanipov, Timur; Nikolaev, Dmitry
2018-04-01
Motion blur caused by camera vibration is a common source of degradation in photographs. In this paper we study the problem of finding the point spread function (PSF) of a blurred image using a tomography technique. The PSF reconstruction result strongly depends on the particular tomography technique used. We present a tomography algorithm with regularization adapted specifically for this task. We use the algebraic reconstruction technique (ART) as the starting algorithm and introduce regularization, using the conjugate gradient method for the numerical implementation of the proposed approach. The algorithm is tested using a dataset of 9 kernels extracted from real photographs by Adobe, where the point spread function is known. We also investigate the influence of noise on the quality of image reconstruction and how the number of projections influences the reconstruction error.
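The ART starting point is the classic Kaczmarz sweep: project the current estimate onto the hyperplane of each ray equation in turn. A minimal sketch, with an under-relaxation factor standing in (loosely) for regularization; the conjugate-gradient refinement described in the paper is not shown:

```python
def art(A, b, sweeps=200, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz method) for A x = b:
    cyclically update x by projecting it onto each row equation
    a_i . x = b_i. `relax` < 1 under-relaxes the updates, which damps
    noise amplification (a simple regularizing device)."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for a_row, b_i in zip(A, b):
            dot = sum(a * xk for a, xk in zip(a_row, x))
            norm = sum(a * a for a in a_row)
            c = relax * (b_i - dot) / norm
            x = [xk + c * a for xk, a in zip(x, a_row)]
    return x
```

For a consistent system the sweeps converge to a solution; with noisy projection data, stopping early or under-relaxing plays the role of regularization.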
Computational gene expression profiling under salt stress reveals patterns of co-expression
Sanchita; Sharma, Ashok
2016-01-01
Plants respond differently to environmental conditions. Among the various abiotic stresses, salt stress is a condition in which excess salt in the soil inhibits plant growth. To understand the response of plants to stress conditions, identification of the responsible genes is required. Clustering is a data mining technique used to group genes with similar expression; the genes of a cluster show similar expression and function. We applied clustering algorithms to gene expression data of Solanum tuberosum showing differential expression in Capsicum annuum under salt stress. The clusters common to multiple algorithms were taken further for analysis. Principal component analysis (PCA) further validated the findings of the other clustering algorithms by visualizing their clusters in three-dimensional space. Functional annotation results revealed that most of the genes were involved in stress-related responses. Our findings suggest that these algorithms may be helpful in predicting the function of co-expressed genes. PMID:26981411
Cross-modal work helps OMC improve the safety of commercial transportation
DOT National Transportation Integrated Search
1997-01-01
This article describes the Commercial Vehicle Information System (CVIS), designed to deploy a national safety program for the U.S. commercial trucking fleet. CVIS is built around a safety analysis algorithm called SafeStat which constructs a profile ...
Two-wavelength Lidar inversion algorithm for determining planetary boundary layer height
NASA Astrophysics Data System (ADS)
Liu, Boming; Ma, Yingying; Gong, Wei; Jian, Yang; Ming, Zhang
2018-02-01
This study proposes a two-wavelength Lidar inversion algorithm to determine the boundary layer height (BLH) based on particle clustering. Color ratio and depolarization ratio are used to analyze the particle distribution, based on which the proposed algorithm can overcome the effects of complex aerosol layers to calculate the BLH. The algorithm is used to determine the top of the boundary layer under different mixing states. Experimental results demonstrate that the proposed algorithm can determine the top of the boundary layer even in complex cases, and that it better handles weak convection conditions. Finally, experimental data from June 2015 to December 2015 were used to verify the reliability of the proposed algorithm. The correlation between the results of the proposed algorithm and the manual method is R2 = 0.89, with an RMSE of 131 m and a mean bias of 49 m; the correlation between the ideal profile fitting method and the manual method is R2 = 0.64, with an RMSE of 270 m and a mean bias of 165 m; and the correlation between the wavelet covariance transform method and the manual method is R2 = 0.76, with an RMSE of 196 m and a mean bias of 23 m. These findings indicate that the proposed algorithm has better reliability and stability than traditional algorithms.
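For reference, the wavelet covariance transform baseline mentioned in the evaluation locates the BLH at the height where a Haar wavelet correlates most strongly with the sharp drop in the backscatter profile. A toy sketch with a synthetic step profile (the dilation, grid, and profile values are illustrative):

```python
def wct_blh(profile, heights, a=200.0):
    """Wavelet covariance transform with a Haar wavelet of dilation `a`:
    W(b) = (1/a) * integral of f(z) h((z - b)/a) dz, where h is +1 in the
    half-window below the candidate height b and -1 above it. W(b) peaks
    where backscatter drops sharply, taken as the boundary layer top."""
    dz = heights[1] - heights[0]
    best_b, best_w = None, -float("inf")
    for b in heights:
        w = 0.0
        for f, z in zip(profile, heights):
            if b - a / 2 <= z <= b:
                w += f * dz          # +1 lobe (inside the boundary layer)
            elif b < z <= b + a / 2:
                w -= f * dz          # -1 lobe (above it)
        w /= a
        if w > best_w:
            best_b, best_w = b, w
    return best_b
```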
Bidargaddi, Niranjan P; Chetty, Madhu; Kamruzzaman, Joarder
2008-06-01
Profile hidden Markov models (HMMs) based on classical HMMs have been widely applied to protein sequence identification. The formulation of the forward and backward variables in profile HMMs is made under the statistical independence assumption of probability theory. We propose a fuzzy profile HMM to overcome the limitations of that assumption and to achieve an improved alignment for protein sequences belonging to a given family. The proposed model fuzzifies the forward and backward variables by incorporating Sugeno fuzzy measures and Choquet integrals, thus further extending the generalized HMM. Based on the fuzzified forward and backward variables, we propose a fuzzy Baum-Welch parameter estimation algorithm for profiles. The strong correlations and sequence preferences involved in protein structures make this fuzzy-architecture-based model a suitable candidate for building profiles of a given family, since fuzzy sets can handle uncertainties better than classical methods.
Terminal iterative learning control based station stop control of a train
NASA Astrophysics Data System (ADS)
Hou, Zhongsheng; Wang, Yi; Yin, Chenkun; Tang, Tao
2011-07-01
The terminal iterative learning control (TILC) method is introduced for the first time into the field of train station stop control, and three TILC-based algorithms are proposed in this study. The TILC-based train station stop control approach utilises the terminal stop position error of the previous braking process to update the current control profile. The initial braking position, the braking force, or their combination is chosen as the control input, and a corresponding learning law is developed. The terminal stop position error of each algorithm is guaranteed, with rigorous analysis, to converge to a small region related to the initial offset of the braking position. The validity of the proposed algorithms is verified by illustrative numerical examples.
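The TILC update for the braking-position input can be sketched with a toy point-mass braking model, where each run's terminal stop-position error corrects the next run's braking point. The constant-deceleration kinematics and the gain below are illustrative, not the paper's train model:

```python
def tilc_station_stop(target, v0=20.0, decel=1.0, gain=0.8, iters=30):
    """Terminal ILC on the braking point: with constant deceleration the
    stop position is s = p + v0^2 / (2*decel), so each run updates the
    braking point with the previous run's terminal error,
    p_{k+1} = p_k + L * e_k. The error contracts as e_{k+1} = (1 - L) e_k."""
    brake_dist = v0 ** 2 / (2.0 * decel)  # distance travelled while braking
    p = 0.0                               # initial braking position (m)
    for _ in range(iters):
        s = p + brake_dist                # where this run actually stops
        e = target - s                    # terminal stop-position error
        p += gain * e                     # learning update
    return p + brake_dist                 # final stop position
```

Because the stop position is linear in the braking point here, any learning gain in (0, 2) gives geometric convergence of the terminal error, mirroring the convergence result stated in the abstract.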
Fast Optimization for Aircraft Descent and Approach Trajectory
NASA Technical Reports Server (NTRS)
Luchinsky, Dmitry G.; Schuet, Stefan; Brenton, J.; Timucin, Dogan; Smith, David; Kaneshige, John
2017-01-01
We address the problem of on-line scheduling of aircraft descent and approach trajectories. We formulate a general multiphase optimal control problem for optimization of the descent trajectory and review available methods for its solution. We develop a fast algorithm for the solution of this problem using two key components: (i) fast inference of the dynamical and control variables of the descending trajectory from low-dimensional flight profile data, and (ii) efficient local search for the resulting reduced-dimensionality non-linear optimization problem. We compare the performance of the proposed algorithm with a numerical solution obtained using the General Pseudospectral Optimal Control Software toolbox. We present results of the solution of the scheduling problem for aircraft descent using the novel fast algorithm and discuss its future applications.
Utilization of high-frequency Rayleigh waves in near-surface geophysics
Xia, J.; Miller, R.D.; Park, C.B.; Ivanov, J.; Tian, G.; Chen, C.
2004-01-01
Shear-wave velocities can be derived by inverting the dispersive phase velocity of surface waves. The multichannel analysis of surface waves (MASW) is one technique for inverting high-frequency Rayleigh waves. The process includes acquisition of high-frequency broad-band Rayleigh waves, efficient and accurate algorithms designed to extract Rayleigh-wave dispersion curves, and stable and efficient inversion algorithms to obtain near-surface S-wave velocity profiles. MASW estimates S-wave velocity from multichannel vertical-component data and consists of data acquisition, dispersion-curve picking, and inversion.
Creating a Satellite-Based Record of Tropospheric Ozone
NASA Technical Reports Server (NTRS)
Oetjen, Hilke; Payne, Vivienne H.; Kulawik, Susan S.; Eldering, Annmarie; Worden, John; Edwards, David P.; Francis, Gene L.; Worden, Helen M.
2013-01-01
The TES retrieval algorithm has been applied to IASI radiances. We compare the retrieved ozone profiles with ozone sonde profiles for mid-latitudes for the year 2008. We find a positive bias in the IASI ozone profiles in the UTLS region of up to 22 %. The spatial coverage of the IASI instrument allows sampling of effectively the same air mass with several IASI scenes simultaneously. Comparisons of the root-mean-square of an ensemble of IASI profiles to theoretical errors indicate that the measurement noise and the interference of temperature and water vapour on the retrieval together mostly explain the empirically derived random errors. The total degrees of freedom for signal of the retrieval for ozone are 3.1 +/- 0.2 and the tropospheric degrees of freedom are 1.0 +/- 0.2 for the described cases. IASI ozone profiles agree within the error bars with coincident ozone profiles derived from a TES stare sequence for the ozone sonde station at Bratt's Lake (50.2 deg N, 104.7 deg W).
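The degrees of freedom for signal quoted here are, in the usual optimal-estimation formulation, the trace of the averaging kernel matrix. A generic sketch of that calculation (toy Jacobian and covariances, not the TES/IASI operators):

```python
import numpy as np

def degrees_of_freedom(K, S_e, S_a):
    """Degrees of freedom for signal of an optimal-estimation retrieval:
    trace of the averaging kernel A = G K, with gain matrix
    G = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1, where K is the Jacobian,
    S_e the measurement-error covariance, and S_a the prior covariance."""
    Sei = np.linalg.inv(S_e)
    G = np.linalg.inv(K.T @ Sei @ K + np.linalg.inv(S_a)) @ K.T @ Sei
    return float(np.trace(G @ K))
```

With low measurement noise relative to the prior, the trace approaches the number of independently resolved levels; with a dominant prior it approaches zero, which is how partial sensitivities such as "1.0 tropospheric degrees of freedom" arise.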