An analytical method of estimating turbine performance
NASA Technical Reports Server (NTRS)
Kochendorfer, Fred D; Nettles, J Cary
1949-01-01
A method is developed by which the performance of a turbine over a range of operating conditions can be analytically estimated from the blade angles and flow areas. In order to use the method, certain coefficients that determine the weight flow and the friction losses must be approximated. The method is used to calculate the performance of the single-stage turbine of a commercial aircraft gas-turbine engine, and the calculated performance is compared with the performance indicated by experimental data. For the turbine of the typical example, the assumed pressure losses and turning angles give a calculated performance that represents the trends of the experimental performance with reasonable accuracy. Exact agreement between analytical and experimental performance is contingent upon the proper selection of a blading-loss parameter.
Fast analytical scatter estimation using graphics processing units.
Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris
2015-01-01
To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first-order scatter in cone-beam image reconstruction improves the contrast-to-noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and, with further acceleration and a method to account for multiple scatter, may be useful for practical scatter correction schemes.
Estimation of the uncertainty of analyte concentration from the measurement uncertainty.
Brown, Simon; Cooke, Delwyn G; Blackwell, Leonard F
2015-09-01
Ligand-binding assays, such as immunoassays, are usually analysed using standard curves based on the four-parameter and five-parameter logistic models. An estimate of the uncertainty of an analyte concentration obtained from such curves is needed for confidence intervals or precision profiles. Using a numerical simulation approach, it is shown that the uncertainty of the analyte concentration estimate becomes significant at the extremes of the concentration range and that this is affected significantly by the steepness of the standard curve. We also provide expressions for the coefficient of variation of the analyte concentration estimate from which confidence intervals and the precision profile can be obtained. Using three examples, we show that the expressions perform well.
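The back-calculation of concentration from a standard curve, and the blow-up of its uncertainty at the extremes of the concentration range, can be sketched as follows. This is a minimal illustration assuming the common four-parameter logistic (4PL) form with illustrative parameter values, not the authors' expressions:

```python
def fourpl(x, a, b, c, d):
    """Four-parameter logistic standard curve: response as a function of
    concentration x. a = response at zero dose, d = response at infinite
    dose, c = inflection point (EC50), b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_fourpl(y, a, b, c, d):
    """Back-calculate a concentration from a measured response y."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

def concentration_cv(y, cv_y, a, b, c, d, eps=1e-6):
    """Propagate the response CV to a concentration CV numerically:
    CV_x ~= |dx/dy| * sigma_y / x, with dx/dy from a central difference."""
    x = inverse_fourpl(y, a, b, c, d)
    dy = eps * y
    dxdy = (inverse_fourpl(y + dy, a, b, c, d)
            - inverse_fourpl(y - dy, a, b, c, d)) / (2.0 * dy)
    return abs(dxdy) * (cv_y * y) / x
```

At the flat extremes of the curve the derivative dx/dy is large, so a fixed response CV translates into a much larger concentration CV, which is the behaviour the abstract describes.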
Westgard, Sten A
2016-06-01
To assess the analytical performance of instruments and methods through external quality assessment and proficiency testing data on the Sigma scale. A representative report from five different EQA/PT programs around the world (2 US, 1 Canadian, 1 UK, and 1 Australasian) was accessed. The instrument group standard deviations were used as surrogate estimates of instrument imprecision. Performance specifications from the US CLIA proficiency testing criteria were used to establish a common quality goal. Then Sigma-metrics were calculated to grade the analytical performance. Different methods have different Sigma-metrics for each analyte reviewed. Summary Sigma-metrics estimate the percentage of the chemistry analytes that are expected to perform above Five Sigma, which is where optimized QC design can be implemented. The range of performance varies from 37% to 88%, exhibiting significant differentiation between instruments and manufacturers. Median Sigmas for the different manufacturers in three analytes (albumin, glucose, sodium) showed significant differentiation. Chemistry tests are not commodities. Quality varies significantly from manufacturer to manufacturer, instrument to instrument, and method to method. The Sigma-assessments from multiple EQA/PT programs provide more insight into the performance of methods and instruments than any single program by itself. It is possible to produce a ranking of performance by manufacturer, instrument and individual method. Laboratories seeking optimal instrumentation would do well to consult this data as part of their decision-making process. To confirm that these assessments are stable and reliable, a longer term study should be conducted that examines more results over a longer time period. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
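The Sigma-metric calculation behind this kind of assessment is simple; a sketch in Python on the percentage scale, with CLIA-style allowable total error (the example values in the test are illustrative, not from the report):

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma-metric on the percentage scale:
    (allowable total error - |bias|) / imprecision (CV)."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def percent_above_five_sigma(metrics):
    """Share of analytes whose Sigma-metric clears Five Sigma,
    the level at which optimized QC design can be implemented."""
    above = [m for m in metrics if m >= 5.0]
    return 100.0 * len(above) / len(metrics)
```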
Leion, Felicia; Hegbrant, Josefine; den Bakker, Emil; Jonsson, Magnus; Abrahamson, Magnus; Nyman, Ulf; Björk, Jonas; Lindström, Veronica; Larsson, Anders; Bökenkamp, Arend; Grubb, Anders
2017-09-01
Estimating glomerular filtration rate (GFR) in adults by using the average of values obtained by a cystatin C-based (eGFRcystatin C) and a creatinine-based (eGFRcreatinine) equation shows at least the same diagnostic performance as GFR estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparison of eGFRcystatin C and eGFRcreatinine plays a pivotal role in the diagnosis of Shrunken Pore Syndrome, where low eGFRcystatin C compared to eGFRcreatinine has been associated with higher mortality in adults. The present study was undertaken to elucidate whether this concept can also be applied in children. Using iohexol and inulin clearance as gold standard in 702 children, we studied the diagnostic performance of 10 creatinine-based, 5 cystatin C-based and 3 combined cystatin C-creatinine eGFR equations and compared them to the result of the average of 9 pairs of an eGFRcystatin C and an eGFRcreatinine estimate. While creatinine-based GFR estimations are unsuitable in children unless calibrated in a pediatric or mixed pediatric-adult population, cystatin C-based estimations in general performed well in children. The average of a suitable creatinine-based and a cystatin C-based equation generally displayed a better diagnostic performance than estimates obtained by equations using only one of these analytes or by complex equations using both analytes. Comparing eGFRcystatin C and eGFRcreatinine may help identify pediatric patients with Shrunken Pore Syndrome.
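The averaging strategy and the comparison of the two estimates can be sketched in a few lines. The 0.7 ratio cutoff below is an illustrative assumption for flagging a Shrunken Pore Syndrome pattern, not a threshold taken from this study:

```python
def mean_egfr(egfr_cys, egfr_crea):
    """Average of a cystatin C-based and a creatinine-based GFR estimate
    (mL/min/1.73 m^2)."""
    return (egfr_cys + egfr_crea) / 2.0

def shrunken_pore_flag(egfr_cys, egfr_crea, ratio_cutoff=0.7):
    """Flag a possible Shrunken Pore Syndrome pattern: the cystatin C-based
    estimate is disproportionately low relative to the creatinine-based one.
    The cutoff is a hypothetical illustrative choice."""
    return egfr_cys / egfr_crea < ratio_cutoff
```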
Mackay, Michael M
2016-09-01
This article offers a correlation matrix of meta-analytic estimates between various employee job attitudes (i.e., employee engagement, job satisfaction, job involvement, and organizational commitment) and indicators of employee effectiveness (i.e., focal performance, contextual performance, turnover intention, and absenteeism). The meta-analytic correlations in the matrix are based on over 1100 individual studies representing over 340,000 employees. Data were collected worldwide via employee self-report surveys. Structural path analyses based on the matrix, and the interpretation of the data, can be found in "Investigating the incremental validity of employee engagement in the prediction of employee effectiveness: a meta-analytic path analysis" (Mackay et al., 2016) [1].
Maximum Likelihood Estimation in Meta-Analytic Structural Equation Modeling
ERIC Educational Resources Information Center
Oort, Frans J.; Jak, Suzanne
2016-01-01
Meta-analytic structural equation modeling (MASEM) involves fitting models to a common population correlation matrix that is estimated on the basis of correlation coefficients reported by a number of independent studies. MASEM typically consists of two stages. The method that has been found to perform best in terms of statistical…
A Simple Analytic Model for Estimating Mars Ascent Vehicle Mass and Performance
NASA Technical Reports Server (NTRS)
Woolley, Ryan C.
2014-01-01
The Mars Ascent Vehicle (MAV) is a crucial component in any sample return campaign. In this paper we present a universal model for a two-stage MAV along with the analytic equations and simple parametric relationships necessary to quickly estimate MAV mass and performance. Ascent trajectories can be modeled as two-burn transfers from the surface with appropriate loss estimations for finite burns, steering, and drag. Minimizing lift-off mass is achieved by balancing optimized staging and an optimized path-to-orbit. This model allows designers to quickly find optimized solutions and to see the effects of design choices.
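The core of such a model is the rocket equation applied stage by stage; a minimal two-stage sketch under simple assumptions (all masses, propellant loads and specific impulses below are illustrative placeholders, not the paper's MAV data, and losses are ignored):

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def stage_dv(isp_s, m0, mf):
    """Ideal delta-v of one stage from the Tsiolkovsky rocket equation."""
    return G0 * isp_s * math.log(m0 / mf)

def two_stage_dv(payload, prop=(200.0, 60.0), dry=(40.0, 15.0), isp=(290.0, 290.0)):
    """Total ideal delta-v of a two-stage vehicle (masses in kg).
    prop/dry/isp are (stage 1, stage 2); numbers are illustrative only."""
    m0_2 = payload + dry[1] + prop[1]   # stage-2 ignition mass
    mf_2 = m0_2 - prop[1]               # stage-2 burnout mass
    m0_1 = m0_2 + dry[0] + prop[0]      # lift-off mass
    mf_1 = m0_1 - prop[0]               # stage-1 burnout mass
    return stage_dv(isp[0], m0_1, mf_1) + stage_dv(isp[1], m0_2, mf_2)
```

A real estimate along the lines of the paper would subtract finite-burn, steering and drag losses from the orbital delta-v requirement before sizing the stages.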
Ming, Y; Peiwen, Q
2001-03-01
The understanding of ultrasonic motor performance as a function of input parameters, such as the voltage amplitude, the driving frequency, and the preload on the rotor, is key to many applications and to the control of ultrasonic motors. This paper presents performance estimation of the piezoelectric rotary traveling wave ultrasonic motor as a function of input voltage amplitude, driving frequency and preload. The Love equation is used to derive the traveling wave amplitude on the stator surface. With a distributed spring-rigid body contact model between the stator and rotor, a two-dimensional analytical model of the rotary traveling wave ultrasonic motor is constructed. The performance in terms of steady rotation speed and stall torque is then deduced. Using MATLAB and an iteration algorithm, we estimate the rotation speed and stall torque versus each input parameter. The corresponding experiments are completed with an optoelectronic tachometer and standard weights. Both estimation and experimental results reveal the pattern of performance variation as a function of the input parameters.
Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György
2018-01-01
Background Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, which is the aim of this investigation. Methods Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision, and Method 2 is based on the Microsoft Excel formula NORMINV, using the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results Method 2 gives the correct results, with a constant percentage of 4.4% outside the limits for all combinations of bias and imprecision. Conclusion The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
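In Python, statistics.NormalDist.inv_cdf plays the role of Excel's NORMINV. The sketch below illustrates the underlying computation — the fraction of a Gaussian reference population falling outside fixed common reference limits once analytical bias and added imprecision are introduced. The ±1.96 limits and the normalization to the biological standard deviation are illustrative assumptions, not the paper's exact Method 2:

```python
from math import sqrt
from statistics import NormalDist

_STD = NormalDist()  # standard Gaussian; inv_cdf() mirrors Excel's NORMINV

def fraction_outside(bias, imprecision, z=None):
    """Fraction of results outside common reference limits at +/- z,
    given a normalized analytical bias (shift) and added analytical
    imprecision, both in units of the biological SD."""
    if z is None:
        z = _STD.inv_cdf(0.975)  # ~1.96, conventional 95% limits
    s = sqrt(1.0 + imprecision ** 2)  # combined SD of observed results
    return _STD.cdf((-z - bias) / s) + (1.0 - _STD.cdf((z - bias) / s))
```

With zero bias and imprecision this returns the nominal 5%; any added bias or imprecision pushes the fraction up, which is how a maximum allowable combination can be solved for.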
Accuracy of selected techniques for estimating ice-affected streamflow
Walker, John F.
1991-01-01
This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories - subjective and analytical - depending on the degree of judgment required. Discharge measurements were made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques are used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline streamflow records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest that analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.
Petersen, Per H; Lund, Flemming; Fraser, Callum G; Sölétormos, György
2016-11-01
Background The distributions of within-subject biological variation are usually described as coefficients of variation, as are analytical performance specifications for bias, imprecision and other characteristics. Estimation of the specifications required for reference change values is traditionally done using the relationship between the batch-related changes during routine performance, described as Δbias, and the coefficient of variation for analytical imprecision (CVA): the original theory is based on standard deviations or coefficients of variation calculated as if distributions were Gaussian. Methods The distribution of between-subject biological variation can generally be described as log-Gaussian. Moreover, recent analyses of within-subject biological variation suggest that many measurands have log-Gaussian distributions. In consequence, we generated a model for the estimation of analytical performance specifications for reference change value, combining Δbias and CVA based on log-Gaussian distributions of CVI expressed as natural logarithms. The model was tested using plasma prolactin and glucose as examples. Results Analytical performance specifications for reference change value generated using the new model based on log-Gaussian distributions were practically identical to those from the traditional model based on Gaussian distributions. Conclusion The traditional and simple-to-apply model used to generate analytical performance specifications for reference change value, based on the use of coefficients of variation and assuming Gaussian distributions for both CVI and CVA, is generally useful.
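The two ways of computing a reference change value (RCV) can be sketched side by side. The log-Gaussian form below uses the commonly cited transformation sigma^2 = ln(CV^2 + 1); it follows the general RCV literature, not necessarily the authors' exact model. CVs are expressed as fractions:

```python
from math import sqrt, log, exp

def rcv_gaussian(cv_a, cv_i, z=1.96):
    """Symmetric RCV (fraction) assuming Gaussian distributions:
    z * sqrt(2) * sqrt(CV_A^2 + CV_I^2)."""
    return z * sqrt(2.0) * sqrt(cv_a ** 2 + cv_i ** 2)

def rcv_log_gaussian(cv_a, cv_i, z=1.96):
    """Asymmetric (upward, downward) RCVs assuming log-Gaussian
    distributions, via sigma = sqrt(ln(CV_A^2 + CV_I^2 + 1))."""
    sigma = sqrt(log(cv_a ** 2 + cv_i ** 2 + 1.0))
    up = exp(z * sqrt(2.0) * sigma) - 1.0
    down = exp(-z * sqrt(2.0) * sigma) - 1.0
    return up, down
```

For small CVs the two agree closely, consistent with the abstract's finding that the models give practically identical specifications.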
Nielsen, Marie Katrine Klose; Johansen, Sys Stybe; Linnet, Kristian
2014-01-01
Assessment of the total uncertainty of analytical methods for the measurement of drugs in human hair has mainly been derived from the analytical variation. However, in hair analysis several other sources of uncertainty contribute to the total uncertainty. Particularly in segmental hair analysis, pre-analytical variations associated with the sampling and segmentation may be significant factors in the assessment of the total uncertainty budget. The aim of this study was to develop and validate a method for the analysis of 31 common drugs in hair using ultra-high-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) with focus on the assessment of both the analytical and pre-analytical sampling variations. The validated method was specific, accurate (80-120%), and precise (CV≤20%) across a wide linear concentration range from 0.025-25 ng/mg for most compounds. The analytical variation was estimated to be less than 15% for almost all compounds. The method was successfully applied to 25 segmented hair specimens from deceased drug addicts showing a broad pattern of poly-drug use. The pre-analytical sampling variation was estimated from genuine duplicate measurements of two bundles of hair collected from each subject, after subtraction of the analytical component. For the most frequently detected analytes, the pre-analytical variation was estimated to be 26-69%. Thus, the pre-analytical variation was 3-7-fold larger than the analytical variation (7-13%) and hence the dominant component in the total variation (29-70%). The present study demonstrates the importance of including the pre-analytical variation in the assessment of the total uncertainty budget and in the setting of the 95%-uncertainty interval (±2CVT). Excluding the pre-analytical sampling variation could significantly affect the interpretation of results from segmental hair analysis. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
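Combining independent analytical and pre-analytical components into a total CV, and turning it into the ±2CV_T uncertainty interval mentioned above, is a simple root-sum-of-squares; a sketch with CVs as fractions:

```python
from math import sqrt

def total_cv(cv_analytical, cv_preanalytical):
    """Combine independent analytical and pre-analytical variation
    components: CV_T = sqrt(CV_A^2 + CV_P^2)."""
    return sqrt(cv_analytical ** 2 + cv_preanalytical ** 2)

def uncertainty_interval(result, cv_analytical, cv_preanalytical):
    """Approximate 95% uncertainty interval (+/- 2 CV_T) around a result."""
    cv_t = total_cv(cv_analytical, cv_preanalytical)
    return result * (1.0 - 2.0 * cv_t), result * (1.0 + 2.0 * cv_t)
```

With a pre-analytical CV several-fold larger than the analytical CV, as reported in the abstract, the pre-analytical term dominates CV_T.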
The Development of MST Test Information for the Prediction of Test Performances
ERIC Educational Resources Information Center
Park, Ryoungsun; Kim, Jiseon; Chung, Hyewon; Dodd, Barbara G.
2017-01-01
The current study proposes novel methods to predict multistage testing (MST) performance without conducting simulations. This method, called MST test information, is based on analytic derivation of standard errors of ability estimates across theta levels. We compared standard errors derived analytically to the simulation results to demonstrate the…
NASA Astrophysics Data System (ADS)
Ivanova, V.; Surleva, A.; Koleva, B.
2018-06-01
An ion chromatographic method for the determination of fluoride, chloride, nitrate and sulphate in untreated and treated drinking waters is described. An automated 850 IC Professional (Metrohm) system equipped with a conductivity detector and a Metrosep A Supp 7-250 (250 x 4 mm) column was used. The method was validated for simultaneous determination of all studied analytes, and the results showed that the validated method fits the requirements of the current water legislation. The main analytical characteristics were estimated for each of the studied analytes: limits of detection, limits of quantification, working and linear ranges, repeatability and intermediate precision, and recovery. The trueness of the method was estimated by analysis of a certified reference material for soft drinking water. A recovery test was performed on spiked drinking water samples. Measurement uncertainty was estimated. The method was applied to the analysis of drinking waters before and after chlorination.
NASA Technical Reports Server (NTRS)
Everett, L.
1992-01-01
This report documents the performance characteristics of a Targeting Reflective Alignment Concept (TRAC) sensor. The performance is documented for both short and long ranges. For long ranges, the sensor is used without the flat mirror attached to the target. To better understand the capabilities of TRAC-based sensors, an engineering model is required. The model can be used to better design the system for a particular application. This is necessary because there are many interrelated design variables in an application, including the lens parameters, the camera, and the target configuration. The report presents first an analytical development of the performance, and second an experimental verification of the equations. The analytical presentation assumes that the best vision resolution is a single pixel element. The experimental results suggest, however, that the resolution is better than one pixel; hence the analytical results should be considered worst-case conditions. The report also discusses advantages and limitations of the TRAC sensor in light of the performance estimates. Finally, the report discusses potential improvements.
Dawidowicz, Andrzej L; Wianowska, Dorota
2005-04-29
Pressurised liquid extraction (PLE) is recognised as one of the most effective sample preparation methods. Despite the enhanced extraction power of PLE, the full recovery of an analyte from plant material may require multiple extractions of the same sample. The presented investigations show the possibility of estimating the true concentration of an analyte in plant material employing one-cycle PLE in which plant samples of different weight are used. The performed experiments show a linear dependence between the reciprocal of the analyte amount (E*) extracted in single-step PLE from a plant matrix and the ratio of plant material mass to extractant volume (m_p/V_s). Hence, time-consuming multi-step PLE can be replaced by a few single-step PLEs performed at different m_p/V_s ratios. The concentrations of rutin in Sambucus nigra L. and caffeine in tea and coffee estimated by means of the tested procedure are almost the same as their concentrations estimated by multiple PLE.
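One plausible reading of this linear relationship is that plotting the per-mass reciprocal of the extracted amount against the mass-to-volume ratio and extrapolating to an infinitely large extractant volume (intercept at m_p/V_s = 0) yields the true analyte content. A sketch of that extrapolation, which is an interpretation of the abstract rather than the authors' exact procedure:

```python
import numpy as np

def true_content_from_single_step_ple(mass_g, volume_ml, extracted_amount):
    """Fit 1/(E*/m_p) against m_p/V_s for several single-step PLE runs
    and read the true analyte content (amount per gram of plant material)
    off the intercept, i.e. the limit of complete recovery."""
    x = np.asarray(mass_g, float) / np.asarray(volume_ml, float)
    y = np.asarray(mass_g, float) / np.asarray(extracted_amount, float)
    slope, intercept = np.polyfit(x, y, 1)  # linear least-squares fit
    return 1.0 / intercept
```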
Schroder, L.J.; Brooks, M.H.; Malo, B.A.; Willoughby, T.C.
1986-01-01
Five intersite comparison studies for the field determination of pH and specific conductance, using simulated-precipitation samples, were conducted by the U.S. Geological Survey for the National Atmospheric Deposition Program and National Trends Network. These comparisons were performed to estimate the precision of pH and specific conductance determinations made by sampling-site operators. Simulated-precipitation samples were prepared from nitric acid and deionized water. The estimated standard deviation for site-operator determination of pH was 0.25 for pH values ranging from 3.79 to 4.64; the estimated standard deviation for specific conductance was 4.6 microsiemens/cm at 25 C for specific-conductance values ranging from 10.4 to 59.0 microsiemens/cm at 25 C. Performance-audit samples with known analyte concentrations were prepared by the U.S. Geological Survey and distributed to the National Atmospheric Deposition Program's Central Analytical Laboratory. The differences between the National Atmospheric Deposition Program and National Trends Network-reported analyte concentrations and known analyte concentrations were calculated, and the bias and precision were determined. For 1983, concentrations of calcium, magnesium, sodium, and chloride were biased at the 99% confidence limit; concentrations of potassium and sulfate were unbiased at the 99% confidence limit. Four analytical laboratories routinely analyzing precipitation were evaluated in their analysis of identical natural- and simulated-precipitation samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple-range test on data produced by these laboratories from the analysis of identical simulated-precipitation samples. Analyte precision for each laboratory was estimated by calculating a pooled variance for each analyte. Interlaboratory comparability results may be used to normalize natural-precipitation chemistry data obtained from two or more of these laboratories.
(Author's abstract)
NASA Astrophysics Data System (ADS)
Kim, Chul-Ho; Lee, Kee-Man; Lee, Sang-Heon
Power train system design is one of the key R&D areas in the development process of a new automobile, because an optimum size of engine with an adaptable power transmission that can accomplish the design requirements of the new vehicle can be obtained through the system design. For electric vehicle design in particular, a very reliable design algorithm for the power train system is required for energy efficiency. In this study, an analytical simulation algorithm is developed to estimate the driving performance of a designed power train system of an electric vehicle. The principal theory of the simulation algorithm is conservation of energy, combined with analytical and experimental data such as rolling resistance, aerodynamic drag, and mechanical efficiency of the power transmission. From the analytical calculation results, the running resistance of a designed vehicle is obtained as the operating conditions of the vehicle, such as the inclined angle of the road and the vehicle speed, change. The tractive performance of the model vehicle with a given power train system is also calculated at each gear ratio of the transmission. Through analysis of these two calculation results, running resistance and tractive performance, the driving performance of a designed electric vehicle is estimated, and it can be used to evaluate the adaptability of the designed power train system to the vehicle.
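The road-load and tractive-force calculations at the heart of such an algorithm can be sketched compactly. All coefficient values below (rolling resistance, drag coefficient, frontal area, driveline efficiency) are generic illustrative figures, not data from this study:

```python
from math import cos, sin, radians

RHO_AIR = 1.225  # air density at sea level, kg/m^3
G = 9.80665      # standard gravity, m/s^2

def running_resistance(mass_kg, speed_ms, grade_deg,
                       f_roll=0.013, cd=0.30, area_m2=2.2):
    """Road-load force (N): rolling resistance + grade + aerodynamic drag."""
    theta = radians(grade_deg)
    rolling = f_roll * mass_kg * G * cos(theta)
    grade = mass_kg * G * sin(theta)
    drag = 0.5 * RHO_AIR * cd * area_m2 * speed_ms ** 2
    return rolling + grade + drag

def tractive_force(motor_torque_nm, gear_ratio, wheel_radius_m, efficiency=0.92):
    """Tractive force at the wheels (N) for a given overall gear ratio."""
    return motor_torque_nm * gear_ratio * efficiency / wheel_radius_m
```

Comparing tractive force against running resistance over speed and grade, gear by gear, is exactly the two-sided analysis the abstract describes.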
Hansen, Steen Ingemann; Petersen, Per Hyltoft; Lund, Flemming; Fraser, Callum G; Sölétormos, György
2018-04-25
Recently, the use of separate gender-partitioned patient medians of serum sodium has revealed potential for monitoring analytical stability within the optimum analytical performance specifications for laboratory medicine. The serum albumin concentration depends on whether a patient is sitting or recumbent during phlebotomy. We therefore investigated only examinations requested by general practitioners (GPs) to provide data from sitting patients. Weekly and monthly patient medians of serum albumin requested by GPs for both male and female patients were calculated from the raw data obtained from three analysers in the hospital laboratory, using samples from patients >18 years. The half-range of the medians was applied as an estimate of the maximum bias. Further, the ratios between the two medians were calculated (females/males). The medians for male and female patients were closely related despite considerable variation due to the current analytical variation. This relationship was confirmed by the calculated half-range for the monthly ratio between the genders of 0.44%, which surpasses the optimum analytical performance specification for bias of serum albumin (0.72%). The weekly ratio had a half-range of 1.83%, which surpasses the minimum analytical performance specification of 2.15%. Monthly gender-partitioned patient medians of serum albumin are useful for monitoring long-term analytical stability, where the gender medians are two independent estimates of changes in (delta) bias; only results requested by GPs are of value in this application, to ensure that all patients are sitting during phlebotomy.
Brooks, M.H.; Schroder, L.J.; Willoughby, T.C.
1987-01-01
The U.S. Geological Survey operated a blind audit sample program during 1984 to test the effects of the sample handling and shipping procedures used by the National Atmospheric Deposition Program and National Trends Network on the quality of wet deposition data produced by the combined networks. Blind audit samples, which were dilutions of standard reference water samples, were submitted by network site operators to the central analytical laboratory disguised as actual wet deposition samples. Results from the analyses of blind audit samples were used to calculate estimates of analyte bias associated with all network wet deposition samples analyzed in 1984 and to estimate analyte precision. Concentration differences between double-blind samples that were submitted to the central analytical laboratory and separate analyses of aliquots of those blind audit samples that had not undergone network sample handling and shipping were used to calculate analyte masses that apparently were added to each blind audit sample by routine network handling and shipping procedures. These calculated masses indicated statistically significant biases for magnesium, sodium, potassium, chloride, and sulfate. Median calculated masses were 41.4 micrograms (ug) for calcium, 14.9 ug for magnesium, 23.3 ug for sodium, 0.7 ug for potassium, 16.5 ug for chloride and 55.3 ug for sulfate. Analyte precision was estimated using two different sets of replicate measures performed by the central analytical laboratory. Estimated standard deviations were similar to those previously reported. (Author's abstract)
Subirats, Xavier; Bosch, Elisabeth; Rosés, Martí
2007-01-05
The use of methanol-aqueous buffer mobile phases in HPLC is a common choice when performing chromatographic separations of ionisable analytes. The addition of methanol to the aqueous buffer to prepare such a mobile phase changes the buffer capacity and the pH of the solution. In the present work, the variation of these buffer properties is studied for acetic acid-acetate, phosphoric acid-dihydrogenphosphate-hydrogenphosphate, citric acid-dihydrogencitrate-hydrogencitrate-citrate, and ammonium-ammonia buffers. It is well established that the pH change of the buffers depends on the initial concentration and aqueous pH of the buffer, on the percentage of methanol added, and on the particular buffer used. The proposed equations allow the pH estimation of methanol-water buffered mobile phases up to 80% in volume of organic modifier from the initial aqueous buffer pH and buffer concentration (before adding methanol) between 0.001 and 0.01 mol L(-1). From both the estimated pH values of the mobile phase and the estimated pKa of the ionisable analytes, it is possible to predict the degree of ionisation of the analytes and therefore to interpret the acid-base behaviour of analytes in a particular methanol-water buffered mobile phase.
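The final step mentioned here — predicting the degree of ionisation from the mobile-phase pH and the analyte pKa — follows from the Henderson-Hasselbalch relation. A sketch of that step only (the paper's own equations for the methanol-induced pH and pKa shifts are not reproduced):

```python
def acid_ionisation_degree(ph, pka):
    """Fraction of a monoprotic acid present in the ionised (deprotonated)
    form at a given pH: alpha = 1 / (1 + 10**(pKa - pH))."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))
```

At pH = pKa the analyte is half ionised; two pH units above the pKa it is essentially fully ionised, which is why the methanol-induced pH and pKa shifts matter for retention behaviour.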
NASA Astrophysics Data System (ADS)
Zarifi, Keyvan; Gershman, Alex B.
2006-12-01
We analyze the performance of two popular blind subspace-based signature waveform estimation techniques proposed by Wang and Poor and Buzzi and Poor for direct-sequence code division multiple-access (DS-CDMA) systems with unknown correlated noise. Using the first-order perturbation theory, analytical expressions for the mean-square error (MSE) of these algorithms are derived. We also obtain simple high SNR approximations of the MSE expressions which explicitly clarify how the performance of these techniques depends on the environmental parameters and how it is related to that of the conventional techniques that are based on the standard white noise assumption. Numerical examples further verify the consistency of the obtained analytical results with simulation results.
An interactive website for analytical method comparison and bias estimation.
Bahar, Burak; Tuncel, Ayse F; Holmes, Earle W; Holmes, Daniel T
2017-12-01
Regulatory standards mandate laboratories to perform studies to ensure accuracy and reliability of their test results. Method comparison and bias estimation are important components of these studies. We developed an interactive website for evaluating the relative performance of two analytical methods using R programming language tools. The website can be accessed at https://bahar.shinyapps.io/method_compare/. The site has an easy-to-use interface that allows both copy-pasting and manual entry of data. It also allows selection of a regression model and creation of regression and difference plots. Available regression models include Ordinary Least Squares, Weighted-Ordinary Least Squares, Deming, Weighted-Deming, Passing-Bablok and Passing-Bablok for large datasets. The server processes the data and generates downloadable reports in PDF or HTML format. Our website provides clinical laboratories a practical way to assess the relative performance of two analytical methods. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
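Deming regression is one of the models the site offers; a minimal unweighted implementation (with the error-variance ratio delta assumed known, defaulting to 1) might look like this. It is a sketch of the standard estimator, not the site's actual code:

```python
import numpy as np

def deming(x, y, delta=1.0):
    """Unweighted Deming regression: allows measurement error in both
    methods, with delta = var(y errors) / var(x errors).
    Returns (slope, intercept)."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    sxx = ((x - mx) ** 2).sum()
    syy = ((y - my) ** 2).sum()
    sxy = ((x - mx) * (y - my)).sum()
    slope = (syy - delta * sxx
             + np.sqrt((syy - delta * sxx) ** 2 + 4.0 * delta * sxy ** 2)
             ) / (2.0 * sxy)
    intercept = my - slope * mx
    return slope, intercept
```

A slope near 1 and an intercept near 0 indicate the absence of proportional and constant bias, respectively, between the two methods.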
NASA Astrophysics Data System (ADS)
Wang, D.; Cui, Y.
2015-12-01
The objectives of this paper are to validate the applicability of a multi-band quasi-analytical algorithm (QAA) for retrieving absorption coefficients of optically active constituents in turbid coastal waters, and to further improve the model using a proposed semi-analytical model (SAA). Unlike the QAA model, in which ap(531) and ag(531) are derived from the empirical retrieval results of a(531) and a(551), the SAA model derives ap(531) and ag(531) semi-analytically. The two models are calibrated and evaluated against datasets taken from 19 independent cruises in the West Florida Shelf in 1999-2003, provided by SeaBASS. The results indicate that the SAA model produces superior performance to the QAA model in absorption retrieval. Use of the SAA model in retrieving absorption coefficients of optically active constituents from the West Florida Shelf decreases the random uncertainty of estimation by more than 23.05% relative to the QAA model. This study demonstrates the potential of the SAA model for estimating absorption coefficients of optically active constituents even in turbid coastal waters. Keywords: Remote sensing; Coastal Water; Absorption Coefficient; Semi-analytical Model
Sabatini, Angelo Maria; Ligorio, Gabriele; Mannini, Andrea
2015-11-23
In biomechanical studies, Optical Motion Capture Systems (OMCS) are considered the gold standard for determining the orientation and the position (pose) of an object in a global reference frame. However, the use of OMCS can be difficult, which has prompted research on alternative sensing technologies, such as body-worn inertial sensors. We developed a drift-free method to estimate the three-dimensional (3D) displacement of a body part during cyclical motions using body-worn inertial sensors. We performed Fourier analysis of the stride-by-stride estimates of the linear acceleration, which were obtained by transposing the specific forces measured by the tri-axial accelerometer into the global frame using a quaternion-based orientation estimation algorithm, and by detecting when each stride began using a gait-segmentation algorithm. The time integration was performed analytically using the Fourier series coefficients; the inverse Fourier series was then taken to reconstruct the displacement over each single stride. The displacement traces were concatenated and spline-interpolated to obtain the entire trace. The method was applied to estimate the motion of the lower trunk of healthy subjects who walked on a treadmill, and it was validated using OMCS reference 3D displacement data; different approaches were tested for transposing the measured specific force into the global frame, segmenting the gait, and performing the time integration (numerically and analytically). The widths of the limits of agreement between each tested method and the OMCS reference method were computed for each anatomical direction: Medio-Lateral (ML), VerTical (VT), and Antero-Posterior (AP). Using the proposed method, the vertical component of displacement (VT) was within ±4 mm (±1.96 standard deviations) of the OMCS data, and each component of horizontal displacement (ML and AP) was within ±9 mm of the OMCS data.
Fourier harmonic analysis was applied to model stride-by-stride linear accelerations during walking and to perform their analytical integration. Our results showed that analytical integration based on Fourier series coefficients was a useful approach to accurately estimate 3D displacement from noisy acceleration data.
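The analytical integration idea can be sketched in a few lines, assuming the acceleration trace covers exactly one stride and is treated as periodic (the function name and parameter values are illustrative, not the authors' code):

```python
import numpy as np

def integrate_twice_fourier(acc, dt):
    """Analytically double-integrate one period of a cyclical
    acceleration trace via its Fourier series.

    Each Fourier coefficient is divided by (1j*omega_k)**2; the DC
    term is dropped, which suppresses integration drift by
    construction (the motion is assumed periodic over the window)."""
    n = len(acc)
    coeffs = np.fft.fft(acc)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)   # signed angular freqs
    disp = np.zeros_like(coeffs)
    nz = omega != 0.0
    disp[nz] = coeffs[nz] / (1j * omega[nz]) ** 2   # acceleration -> displacement
    return np.fft.ifft(disp).real

# Sanity check: a(t) = -w**2 * A0 * sin(w*t) integrates to A0 * sin(w*t)
T, n, A0 = 1.0, 256, 0.004                          # 4 mm amplitude
t = np.arange(n) * T / n
w = 2.0 * np.pi / T
acc = -(w ** 2) * A0 * np.sin(w * t)
x = integrate_twice_fourier(acc, T / n)
```

Dropping the DC term is what makes the estimate drift-free: any constant acceleration bias, which would grow quadratically under numerical integration, is simply excluded from the series.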
The 2D analytic signal for envelope detection and feature extraction on ultrasound images.
Wachinger, Christian; Klein, Tassilo; Navab, Nassir
2012-08-01
The fundamental property of the analytic signal is the split of identity: the separation of qualitative and quantitative information in the form of the local phase and the local amplitude, respectively. The structural representation provided by the local phase, which is independent of brightness and contrast, is especially interesting for numerous image processing tasks. Recently, an extension of the analytic signal from 1D to 2D, covering also intrinsically 2D structures, was proposed. We show the advantages of this improved concept on ultrasound RF and B-mode images. Specifically, we use the 2D analytic signal for the envelope detection of RF data. This leads to advantages in extracting the information-bearing signal from the modulated carrier wave. We illustrate this, first, by visual assessment of the images and, second, by performing goodness-of-fit tests to a Nakagami distribution, indicating a clear improvement in statistical properties. The evaluation is performed for multiple window sizes and parameter estimation techniques. Finally, we show that the 2D analytic signal allows for an improved estimation of local features on B-mode images. Copyright © 2012 Elsevier B.V. All rights reserved.
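For the 1D case, the analytic signal and the resulting local amplitude (envelope) and local phase can be sketched with an FFT-based Hilbert transform; the 2D extension used in the paper is not shown here, and all names and parameter values are illustrative:

```python
import numpy as np

def analytic_signal(x):
    """1D analytic signal via the FFT: zero the negative frequencies
    and double the positive ones (a discrete Hilbert transform)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

# Simulated RF line: a Gaussian envelope modulating a 5 MHz carrier.
fs = 40e6
t = np.arange(2048) / fs
envelope = np.exp(-((t - t.mean()) ** 2) / (2.0 * (2e-6) ** 2))
rf = envelope * np.cos(2.0 * np.pi * 5e6 * t)

z = analytic_signal(rf)
detected = np.abs(z)                    # local amplitude (envelope)
local_phase = np.unwrap(np.angle(z))    # local phase
```

The magnitude of the analytic signal recovers the envelope without the carrier, which is exactly the "split of identity" the abstract describes, restricted to intrinsically 1D structure.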
Bourget, Philippe; Amin, Alexandre; Vidal, Fabrice; Merlette, Christophe; Troude, Pénélope; Baillet-Guffroy, Arlette
2014-08-15
The purpose of the study was to perform a comparative analysis of the technical performance, respective costs, and environmental effects of two invasive analytical methods (HPLC and UV/visible-FTIR) as compared to a new non-invasive analytical technique (Raman spectroscopy). Three pharmacotherapeutic models were used to compare the analytical performances of the three techniques. Statistical inter-method correlation analysis was performed using non-parametric rank correlation tests. The study's economic component combined calculations relative to the depreciation of the equipment and the estimated cost of an AQC unit of work. In all cases, the analytical validation parameters of the three techniques were satisfactory, and strong correlations were found between the two spectroscopic techniques and HPLC. In addition, Raman spectroscopy was found to be superior to the other techniques on numerous key criteria, including complete safety for operators and their occupational environment, a non-invasive procedure, no need for consumables, and a low operating cost. Finally, Raman spectroscopy appears superior with respect to technical, economic, and environmental objectives, as compared with the other, invasive analytical methods. Copyright © 2014 Elsevier B.V. All rights reserved.
Oyaert, Matthijs; Van Maerken, Tom; Bridts, Silke; Van Loon, Silvi; Laverge, Heleen; Stove, Veronique
2018-03-01
Point-of-care blood gas test results may benefit therapeutic decision making by their immediate impact on patient care. We evaluated the (pre-)analytical performance of a novel cartridge-type blood gas analyzer, the GEM Premier 5000 (Werfen), for the determination of pH, partial carbon dioxide pressure (pCO2), partial oxygen pressure (pO2), sodium (Na+), potassium (K+), chloride (Cl-), ionized calcium (iCa2+), glucose, lactate, and total hemoglobin (tHb). Total imprecision was estimated according to the CLSI EP5-A2 protocol. The estimated total error was calculated based on the mean of the range claimed by the manufacturer. Based on the CLSI EP9-A2 evaluation protocol, a method comparison with the Siemens RapidPoint 500 and Abbott i-STAT CG8+ was performed. The obtained data were compared against preset quality specifications. The interference of potential pre-analytical confounders on co-oximetry and electrolyte measurements was studied. The analytical performance was acceptable for all parameters tested. The method comparison demonstrated good agreement with the RapidPoint 500 and i-STAT CG8+, except for some parameters (RapidPoint 500: pCO2, K+, lactate, and tHb; i-STAT CG8+: pO2, Na+, iCa2+, and tHb) for which significant differences between analyzers were recorded. No interference of lipemia or methylene blue on co-oximetry results was found. In contrast, significant interference of benzalkonium and hemolysis on electrolyte measurements was found, for which the user is notified by an interferent-specific flag. The identification of sample errors from pre-analytical sources, such as interferences, and the automatic corrective actions, along with the analytical performance, ease of use, and low maintenance time of the instrument, make the evaluated instrument a suitable blood gas analyzer for both POCT and laboratory use. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Determination of Uncertainties for the New SSME Model
NASA Technical Reports Server (NTRS)
Coleman, Hugh W.; Hawk, Clark W.
1996-01-01
This report discusses the uncertainty analysis performed in support of a new test analysis and performance prediction model for the Space Shuttle Main Engine. The new model utilizes uncertainty estimates for experimental data and for the analytical model to obtain the most plausible operating condition for the engine system. This report discusses the development of the data sets and uncertainty estimates used in the development of the new model. It also presents the application of uncertainty analysis to analytical models, including the uncertainty analysis for the conservation-of-mass and energy-balance relations. Finally, a new methodology for assessing the uncertainty associated with linear regressions is presented.
Fallback options for airgap sensor fault of an electromagnetic suspension system
NASA Astrophysics Data System (ADS)
Michail, Konstantinos; Zolotas, Argyrios C.; Goodall, Roger M.
2013-06-01
The paper presents a method to recover the performance of an electromagnetic suspension under a faulty airgap sensor. The proposed control scheme is a combination of classical control loops, a Kalman estimator, and analytical redundancy (for the airgap signal). In this way, redundant airgap sensors are not essential for reliable operation of the system. When the airgap sensor fails, the required signal is recovered using a combination of a Kalman estimator and analytical redundancy. The performance of the suspension is optimised using genetic algorithms, and some preliminary robustness issues related to load and operating airgap variations are discussed. Simulations on a realistic model of this type of suspension illustrate the efficacy of the proposed sensor-fault-tolerant control method.
Ozay, Guner; Seyhan, Ferda; Yilmaz, Aysun; Whitaker, Thomas B; Slate, Andrew B; Giesbrecht, Francis
2006-01-01
The variability associated with the aflatoxin test procedure used to estimate aflatoxin levels in bulk shipments of hazelnuts was investigated. Sixteen 10 kg samples of shelled hazelnuts were taken from each of 20 lots that were suspected of aflatoxin contamination. The total variance associated with testing shelled hazelnuts was estimated and partitioned into sampling, sample preparation, and analytical variance components. Each variance component increased as aflatoxin concentration (either B1 or total) increased. With the use of regression analysis, mathematical expressions were developed to model the relationship between aflatoxin concentration and the total, sampling, sample preparation, and analytical variances. The expressions for these relationships were used to estimate the variance for any sample size, subsample size, and number of analyses for a specific aflatoxin concentration. The sampling, sample preparation, and analytical variances associated with estimating aflatoxin in a hazelnut lot at a total aflatoxin level of 10 ng/g and using a 10 kg sample, a 50 g subsample, dry comminution with a Robot Coupe mill, and a high-performance liquid chromatographic analytical method are 174.40, 0.74, and 0.27, respectively. The sampling, sample preparation, and analytical steps of the aflatoxin test procedure accounted for 99.4, 0.4, and 0.2% of the total variability, respectively.
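The quoted percentage contributions follow directly from the reported variance components; a quick check:

```python
# Reported variance components at 10 ng/g total aflatoxin
# (10 kg sample, 50 g subsample, HPLC analysis), from the study above:
sampling, preparation, analytical = 174.40, 0.74, 0.27
total = sampling + preparation + analytical

# Percentage contribution of each step to the total testing variance
shares = [round(100.0 * v / total, 1)
          for v in (sampling, preparation, analytical)]
# shares -> [99.4, 0.4, 0.2], matching the quoted percentages
```

The dominance of the sampling term is the practical takeaway: increasing sample size reduces total testing variability far more than refining the preparation or analytical steps.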
Krishna P. Poudel; Temesgen. Hailemariam
2015-01-01
Performance of three groups of methods to estimate total and/or component aboveground biomass was evaluated using data collected from destructively sampled trees in different parts of Oregon. The first group of methods used an analytical approach to estimate total and component biomass using existing equations, and produced biased estimates for our dataset. The second...
Analytical flow duration curves for summer streamflow in Switzerland
NASA Astrophysics Data System (ADS)
Santos, Ana Clara; Portela, Maria Manuela; Rinaldo, Andrea; Schaefli, Bettina
2018-04-01
This paper proposes a systematic assessment of the performance of an analytical modeling framework for streamflow probability distributions for a set of 25 Swiss catchments. These catchments span a wide range of hydroclimatic regimes, notably including snow-influenced streamflows. The model parameters are calculated from a spatially averaged gridded daily precipitation data set and from observed daily discharge time series, both in a forward estimation mode (direct parameter calculation from observed data) and in an inverse estimation mode (maximum likelihood estimation). The performance of the linear and the nonlinear model versions is assessed in terms of reproducing observed flow duration curves and their natural variability. Overall, the nonlinear model version outperforms the linear model for all regimes, but the linear model shows a notable performance increase with catchment elevation. More importantly, the obtained results demonstrate that the analytical model performs well for summer discharge for all analyzed streamflow regimes, ranging from rainfall-driven regimes with summer low flow to snow and glacier regimes with summer high flow. These results suggest that the model's encoding of discharge-generating events based on stochastic soil moisture dynamics is more flexible than previously thought. As shown in this paper, the presence of snowmelt or ice melt is accommodated by a relative increase in the discharge-generating frequency, a key parameter of the model. Explicit quantification of this frequency increase as a function of mean catchment meteorological conditions is left for future research.
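For reference, an empirical flow duration curve of the kind the model is evaluated against can be computed in a few lines (a generic construction with Weibull plotting positions, not the paper's code; the synthetic discharge series is illustrative only):

```python
import numpy as np

def flow_duration_curve(q):
    """Empirical flow duration curve: discharge sorted in descending
    order against exceedance probabilities (Weibull plotting positions)."""
    q_sorted = np.sort(np.asarray(q, dtype=float))[::-1]
    n = q_sorted.size
    exceedance = np.arange(1, n + 1) / (n + 1.0)
    return exceedance, q_sorted

# Synthetic summer discharge series (the lognormal choice is illustrative)
rng = np.random.default_rng(0)
q = rng.lognormal(mean=1.0, sigma=0.8, size=92)   # 92 summer days
p, q_fdc = flow_duration_curve(q)
```

Plotting `q_fdc` against `p` gives the familiar curve whose shape (steep for flashy rainfall-driven regimes, flat for glacier-fed ones) the analytical framework aims to reproduce.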
Olivieri, Alejandro C
2005-08-01
Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.
Drift-Free Position Estimation of Periodic or Quasi-Periodic Motion Using Inertial Sensors
Latt, Win Tun; Veluvolu, Kalyana Chakravarthy; Ang, Wei Tech
2011-01-01
Position sensing with inertial sensors such as accelerometers and gyroscopes usually requires other aiding sensors or prior knowledge of motion characteristics to remove the position drift resulting from integration of acceleration or velocity, so as to obtain accurate position estimates. A method based on analytical integration has previously been developed to obtain accurate position estimates of periodic or quasi-periodic motion from inertial sensors, using prior knowledge of the motion but without using aiding sensors. In this paper, a new method is proposed which employs a linear filtering stage coupled with an adaptive filtering stage to remove drift and attenuation. The only prior knowledge the proposed method requires is the approximate frequency band of the motion. Existing adaptive filtering methods based on Fourier series, such as the weighted-frequency Fourier linear combiner (WFLC) and the band-limited multiple Fourier linear combiner (BMFLC), are modified to combine with the proposed method. To validate and compare the performance of the proposed method with the method based on analytical integration, a simulation study is performed using periodic signals as well as real physiological tremor data, and real-time experiments are conducted using an ADXL-203 accelerometer. Results demonstrate that the proposed method outperforms the existing analytical integration method. PMID:22163935
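A minimal sketch of the BMFLC idea mentioned above: an LMS-adapted bank of sine/cosine references spanning the assumed frequency band tracks a band-limited signal (parameter values and names are illustrative, not the authors' settings):

```python
import numpy as np

def bmflc(signal, fs, f_lo, f_hi, df=0.5, mu=0.01):
    """Band-limited multiple Fourier linear combiner (BMFLC) sketch:
    an LMS-adapted bank of sine/cosine references spanning the band
    [f_lo, f_hi] tracks a band-limited periodic signal."""
    freqs = np.arange(f_lo, f_hi + df, df)
    w = np.zeros(2 * freqs.size)                # adaptive weights
    est = np.empty_like(signal)
    for k, s in enumerate(signal):
        t = k / fs
        x = np.concatenate([np.sin(2.0 * np.pi * freqs * t),
                            np.cos(2.0 * np.pi * freqs * t)])
        est[k] = w @ x                          # current estimate
        w += 2.0 * mu * (s - est[k]) * x        # LMS weight update
    return est

fs = 100.0
t = np.arange(1000) / fs
tremor = np.sin(2.0 * np.pi * 4.0 * t)          # 4 Hz tone inside the band
est = bmflc(tremor, fs, f_lo=3.0, f_hi=6.0)
err = tremor - est
early_rms = float(np.sqrt(np.mean(err[:200] ** 2)))
late_rms = float(np.sqrt(np.mean(err[-200:] ** 2)))
```

Because the reference frequencies are fixed across the band, the learned weights are directly the Fourier series coefficients that the drift-free displacement reconstruction then operates on.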
DOE Office of Scientific and Technical Information (OSTI.GOV)
2008-01-15
The Verde Analytic Modules permit the user to ingest openly available data feeds about phenomenology (storm tracks, wind, precipitation, earthquakes, wildfires, and similar natural and manmade power grid disruptions) and forecast power outages, restoration times, customers without power, and key facilities that will lose power. Damage areas are predicted using historic damage criteria of the affected area. The modules use a cellular automata approach to estimating the distribution circuits assigned to geo-located substations. Population estimates within the service areas are located within 1 km grid cells and converted to customer counts through demographic estimation of the households and commercial firms within the population cells. Restoration times are estimated by agent-based simulation of restoration crews working according to utility-published prioritization, calibrated by historic performance.
ERIC Educational Resources Information Center
Martin-Fernandez, Manuel; Revuelta, Javier
2017-01-01
This study compares the performance of two recently introduced estimation algorithms, the Metropolis-Hastings Robbins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two algorithms well established in the psychometric literature, marginal maximum likelihood via the EM algorithm (MML-EM) and Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…
Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Moision, Bruce E.
2010-01-01
Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
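A simplified sketch of ML time-of-arrival estimation from photon-counting data, assuming a Gaussian pulse shape, known signal and background levels, and a grid search; all parameter values are illustrative, and this is not the authors' estimator:

```python
import numpy as np

rng = np.random.default_rng(7)

SIGMA = 1.0     # pulse width (time units)
SIGNAL = 200.0  # expected number of signal photons
BG = 1.0        # background rate (photons per unit time)
T = 20.0        # observation window
AMP = SIGNAL / (np.sqrt(2.0 * np.pi) * SIGMA)

def intensity(t, tau):
    """Poisson intensity: background plus a Gaussian pulse centered at tau."""
    return BG + AMP * np.exp(-0.5 * ((t - tau) / SIGMA) ** 2)

def simulate_photons(tau):
    """Photon timestamps from the inhomogeneous Poisson process (thinning)."""
    lam_max = BG + AMP
    n = rng.poisson(lam_max * T)
    t = rng.uniform(0.0, T, n)
    keep = rng.uniform(0.0, lam_max, n) < intensity(t, tau)
    return t[keep]

def ml_toa(times, grid=2000):
    """Grid-search ML estimate of the arrival time. With the pulse well
    inside the window, the integral term of the Poisson log-likelihood
    is nearly constant in tau, so only sum(log intensity) is maximized."""
    taus = np.linspace(2.0, T - 2.0, grid)
    ll = [np.sum(np.log(intensity(times, tau))) for tau in taus]
    return taus[int(np.argmax(ll))]

photons = simulate_photons(tau=8.0)
tau_hat = ml_toa(photons)
```

The log-likelihood here is the standard one for an inhomogeneous Poisson point process, which is exactly the photodetector output model the abstract describes.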
Zollanvari, Amin; Dougherty, Edward R
2014-06-01
The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate, completely model-free error estimation is impossible. This paper provides analytic, asymptotically exact, finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions for the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
Dual nozzle aerodynamic and cooling analysis study
NASA Technical Reports Server (NTRS)
Meagher, G. M.
1981-01-01
Analytical models to predict performance and operating characteristics of dual nozzle concepts were developed and improved. Aerodynamic models are available to define flow characteristics and bleed requirements for both the dual throat and dual expander concepts. Advanced analytical techniques were utilized to provide quantitative estimates of the bleed flow, boundary layer, and shock effects within dual nozzle engines. Thermal analyses were performed to define cooling requirements for baseline configurations, and special studies of unique dual nozzle cooling problems defined feasible means of achieving adequate cooling.
Bassuoni, M M
2014-03-01
The dehumidifier is a key component in liquid desiccant air-conditioning systems. Analytical solutions have advantages over numerical solutions in studying dehumidifier performance parameters. This paper presents the exit-parameter results from an analytical model of an adiabatic cross-flow liquid desiccant air dehumidifier. Calcium chloride is used as the desiccant material in this investigation. A program performing the analytical solution was developed using the Engineering Equation Solver software. Good agreement was found between the analytical solution and reliable experimental results, with a maximum deviation of +6.63% and -5.65% in the moisture removal rate. The method developed here can be used for quick prediction of dehumidifier performance. The exit parameters of the dehumidifier are evaluated under the effects of variables such as air temperature and humidity, desiccant temperature and concentration, and air-to-desiccant flow ratio. The results show that hot humid air and desiccant concentration have the greatest impact on the performance of the dehumidifier. The moisture removal rate decreases with increasing air inlet temperature and desiccant temperature, while it increases with increasing air-to-solution mass ratio, inlet desiccant concentration, and inlet air humidity ratio.
Historical performance evaluation of Iowa pavement treatments using data analytics : final report.
DOT National Transportation Integrated Search
2016-11-01
The pavement network in Iowa has reached a mature state making maintenance and rehabilitation activities more important than new construction. As such, a need exists to evaluate the performance of the pavement treatments and estimate their performanc...
Performance Analysis for Channel Estimation With 1-Bit ADC and Unknown Quantization Threshold
NASA Astrophysics Data System (ADS)
Stein, Manuel S.; Bar, Shahar; Nossek, Josef A.; Tabrikian, Joseph
2018-05-01
In this work, the problem of signal parameter estimation from measurements acquired by a low-complexity analog-to-digital converter (ADC) with 1-bit output resolution and an unknown quantization threshold is considered. Single-comparator ADCs are energy-efficient and can be operated at ultra-high sampling rates. For analysis of such systems, a fixed and known quantization threshold is usually assumed. In the symmetric case, i.e., zero hard-limiting offset, it is known that in the low signal-to-noise ratio (SNR) regime the signal processing performance degrades moderately by 2/π (-1.96 dB) when compared to an ideal ∞-bit converter. Due to hardware imperfections, low-complexity 1-bit ADCs will in practice exhibit an unknown threshold different from zero. Therefore, we study the accuracy which can be obtained with receive data processed by a hard-limiter with an unknown quantization level, using asymptotically optimal channel estimation algorithms. To characterize the estimation performance of these nonlinear algorithms, we employ analytic error expressions for different setups while modeling the offset as a nuisance parameter. In the low SNR regime, we establish the necessary condition for a vanishing loss due to missing offset knowledge at the receiver. As an application, we consider the estimation of single-input single-output wireless channels with inter-symbol interference and validate our analysis by comparing the analytic and experimental performance of the studied estimation algorithms. Finally, we comment on the extension to multiple-input multiple-output channel models.
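The quoted low-SNR loss figure converts to decibels as follows (a quick numerical check):

```python
import math

# Low-SNR performance ratio of a symmetric 1-bit quantizer relative to
# an ideal infinite-resolution receiver, as quoted in the abstract:
loss = 2.0 / math.pi
loss_db = 10.0 * math.log10(loss)   # about -1.96 dB
```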
Horbowy, Jan; Tomczak, Maciej T
2017-01-01
Biomass reconstructions to pre-assessment periods for commercially important and exploitable fish species are important tools for understanding long-term processes and fluctuations at the stock and ecosystem level. For some stocks, only fisheries statistics and fishery-dependent data are available for periods before surveys were conducted. Methods for the backward extension of the analytical assessment of biomass, for years in which only total catch volumes are available, were developed and tested in this paper. Two of the approaches developed apply the concept of the surplus production rate (SPR), which is shown to be stock density dependent if stock dynamics are governed by classical stock-production models. The other approach used a modified form of the Schaefer production model that allows for backward biomass estimation. The performance of the methods was tested on the Arctic cod and North Sea herring stocks, for which analytical biomass estimates extend back to the late 1940s. Next, the methods were applied to extend biomass estimates of the North-east Atlantic mackerel from the 1970s (when analytical biomass estimates become available) back to the 1950s, for which only total catch volumes were available. For comparison, a method which employs a constant SPR, estimated as an average of the observed values, was also applied. The analyses showed that the performance of the methods is stock and data specific; methods that work well for one stock may fail for others. The constant-SPR method is not recommended in cases where the SPR is relatively high and the catch volumes in the reconstructed period are low.
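The backward extension via a Schaefer-type production model amounts to inverting the annual update for the previous year's biomass; a minimal sketch (parameter values are illustrative, and this is a simplified form, not the authors' modified model):

```python
import math

def schaefer_forward(b, r, K, catch):
    """One year of the Schaefer surplus-production update."""
    return b + r * b * (1.0 - b / K) - catch

def schaefer_backward(b_next, r, K, catch):
    """Invert the Schaefer update for the previous year's biomass.

    Solves (1 + r)*B - (r/K)*B**2 = b_next + catch for B, taking the
    root on the ascending limb of the production curve (the relevant
    one while B < K*(1 + r)/(2*r))."""
    a = r / K
    disc = (1.0 + r) ** 2 - 4.0 * a * (b_next + catch)
    return ((1.0 + r) - math.sqrt(disc)) / (2.0 * a)

# Round trip: simulate forward with known catches, then reconstruct the
# starting biomass backward from the final (assessed) year.
r, K = 0.5, 1000.0
catches = [50.0, 60.0, 55.0]
biomass = [400.0]
for c in catches:
    biomass.append(schaefer_forward(biomass[-1], r, K, c))
b_reconstructed = biomass[-1]
for c in reversed(catches):
    b_reconstructed = schaefer_backward(b_reconstructed, r, K, c)
```

The recursion only needs the catch series and the terminal assessed biomass, which is exactly the data situation the paper addresses for the pre-survey years.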
DOT National Transportation Integrated Search
2017-01-01
Evaluate the performance of the most-used pavement treatments in Iowa by considering different parameters such as type of treatment, treatment thickness, traffic, and pavement type : Estimate a service life for each treatment based on the obs...
Reich, Christian G; Ryan, Patrick B; Schuemie, Martijn J
2013-10-01
A systematic risk identification system has the potential to test marketed drugs for important Health Outcomes of Interest, or HOI. For each HOI, multiple definitions are used in the literature, and some of them are validated for certain databases. However, little is known about the effect of different definitions on the ability of methods to estimate their association with medical products. Alternative definitions of HOI were studied for their effect on the performance of analytical methods in observational outcome studies. A set of alternative definitions for three HOI was defined based on literature review and clinical diagnosis guidelines: acute kidney injury, acute liver injury, and acute myocardial infarction. The definitions varied in the choice of diagnostic codes and the inclusion of procedure codes and lab values. They were then used to empirically study an array of analytical methods with various analytical choices in four observational healthcare databases. The methods were executed against predefined drug-HOI pairs to generate an effect estimate and standard error for each pair. These test cases included positive controls (active ingredients with evidence to suspect a positive association with the outcome) and negative controls (active ingredients with no evidence to expect an effect on the outcome). Three different performance metrics were used: (i) Area Under the Receiver Operating Characteristic (ROC) curve (AUC), as a measure of a method's ability to distinguish between positive and negative test cases; (ii) a measure of bias, estimated from the distribution of observed effect estimates for the negative test pairs, where the true relative risk can be assumed to be one; and (iii) the Minimal Detectable Relative Risk (MDRR), as a measure of whether there is sufficient power to generate effect estimates.
In the three outcomes studied, the different outcome definitions showed comparable ability to differentiate true from false control cases (AUC) and similar bias. However, broader definitions, which generate larger outcome cohorts, allowed more drugs to be studied with sufficient statistical power. Broader definitions are therefore preferred, since they allow studying drugs with lower prevalence than the more precise or narrow definitions while showing comparable performance characteristics in differentiating signal from no signal as well as in effect size estimation.
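The AUC over positive and negative controls reduces to the Mann-Whitney statistic on the effect estimates; a minimal sketch (the effect-estimate values are hypothetical, for illustration only, not the study's data or code):

```python
def auc_from_controls(pos, neg):
    """Mann-Whitney form of the AUC: the probability that a positive
    control's effect estimate exceeds a negative control's, with ties
    counted as one half."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical effect estimates (relative risks) for illustration only:
positive_controls = [1.8, 2.4, 1.2, 3.0]
negative_controls = [0.9, 1.1, 1.0, 1.3]
auc = auc_from_controls(positive_controls, negative_controls)
```

An AUC of 1.0 would mean every positive control's estimate exceeds every negative control's; 0.5 is chance-level discrimination.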
Data and Analytics to Inform Energy Retrofit of High Performance Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Yang, Le; Hill, David
Buildings consume more than one-third of the world's primary energy. Reducing energy use in buildings with energy efficient technologies is feasible and is also driven by energy policies such as energy benchmarking, disclosure, rating, and labeling in both developed and developing countries. Current energy retrofits focus on the existing building stock, especially older buildings, but the growing number of new high performance buildings built around the world raises the question of how these buildings perform and whether there are retrofit opportunities to further reduce their energy use. This is a new and unique problem for the building industry. Traditional energy audit or analysis methods are inadequate for looking deeply into the energy use of high performance buildings. This study aims to tackle this problem with a new holistic approach powered by building performance data and analytics. First, three types of measured data are introduced: time series energy use, building systems operating conditions, and indoor and outdoor environmental parameters. An energy data model based on the ISO Standard 12655 is used to represent the energy use in buildings in a three-level hierarchy. Secondly, a suite of analytics is proposed to analyze energy use and to identify retrofit measures for high performance buildings. The data-driven analytics are based on data monitored at short time intervals, and cover three levels of analysis: energy profiling, benchmarking, and diagnostics.
Thirdly, the analytics were applied to a high performance building in California to analyze its energy use and identify retrofit opportunities, including: (1) analyzing patterns of major energy end-use categories at various time scales, (2) benchmarking the whole-building total energy use as well as major end-uses against its peers, (3) benchmarking the power usage effectiveness of the data center, which is the largest electricity consumer in this building, and (4) diagnosing HVAC equipment using detailed time-series operating data. Finally, a few energy efficiency measures were identified for retrofit, and their energy savings were estimated to be 20% of the whole-building electricity consumption. Based on the analyses, the building manager took steps to improve the operation of fans, chillers, and data centers, which will lead to actual energy savings. This study demonstrated that there are energy retrofit opportunities for high performance buildings, and that detailed measured building performance data and analytics can help identify measures, estimate energy savings, and inform decision making during the retrofit process. Challenges of data collection and analytics are also discussed to shape best practice in retrofitting high performance buildings.
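The profiling-and-benchmarking idea can be sketched minimally: aggregate short-interval meter readings into a load profile, then compare the daily total against a peer benchmark. All numbers below are hypothetical, not the California building's data.

```python
# Hypothetical one-day, 15-minute interval electricity readings (kWh),
# aggregated to an hourly load profile and benchmarked against an assumed peer value.
intervals = [0.5] * 24 + [1.5] * 48 + [0.5] * 24   # 96 readings: night / occupied hours / night
hourly = [sum(intervals[i:i + 4]) for i in range(0, 96, 4)]  # 24 hourly totals
daily_kwh = sum(hourly)
peer_median_daily = 80.0                            # assumed peer benchmark, kWh/day
ratio = round(daily_kwh / peer_median_daily, 2)     # > 1 suggests retrofit potential
print(daily_kwh, ratio)
```

The same profile can flag diagnostics candidates, e.g. nighttime base load that stays high when the building is unoccupied.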
Analytical functions to predict cosmic-ray neutron spectra in the atmosphere.
Sato, Tatsuhiko; Niita, Koji
2006-09-01
Estimation of cosmic-ray neutron spectra in the atmosphere has been an essential issue in the evaluation of aircrew doses and the soft-error rates of semiconductor devices. We therefore performed Monte Carlo simulations for estimating neutron spectra using the PHITS code, adopting the nuclear data library JENDL-High-Energy file. Excellent agreement was observed between the calculated and measured spectra over a wide altitude range, even at ground level. Based on a comprehensive analysis of the simulation results, we propose analytical functions that can predict the cosmic-ray neutron spectra for any location in the atmosphere at altitudes below 20 km, considering the influences of local geometries such as the ground and aircraft on the spectra. The accuracy of the analytical functions was well verified by various experimental data.
NASA Astrophysics Data System (ADS)
Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb
2017-10-01
In addition to numerous planning and execution challenges, underground excavation in urban areas is always accompanied by certain destructive effects, especially at the ground surface; ground settlement is the most important of these effects, and different empirical, analytical and numerical methods exist for its estimation. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values of the models were 1.86, 2.02 and 1.52 cm, respectively. The comparison of these predicted values with the actual instrumentation data was used to quantify the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched reality, while the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.
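The model-uncertainty comparison reduces to a relative-error calculation against the instrumented settlement. The sketch below uses an assumed observed value of about 1.58 cm, back-calculated to be consistent with the reported 3.8% and 27.8% errors; the predictions are the abstract's values.

```python
# Assumed instrumented maximum settlement (cm); model predictions from the abstract.
observed = 1.58
predictions = {
    "Peck (empirical)": 1.86,
    "Loganathan-Poulos (analytical)": 2.02,
    "FDM (numerical)": 1.52,
}
for name, s in predictions.items():
    rel_err = abs(s - observed) / observed * 100   # relative error vs. instrumentation, %
    print(f"{name}: {rel_err:.1f}% relative error")
```

Ranking models by this relative error is exactly the comparison that identified the FDM model as the best match.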
Devriendt, Floris; Moldovan, Darie; Verbeke, Wouter
2018-03-01
Prescriptive analytics extends predictive analytics by estimating an outcome as a function of control variables, thereby allowing the required level of the control variables for realizing a desired outcome to be established. Uplift modeling is at the heart of prescriptive analytics and aims at estimating the net difference in an outcome resulting from a specific action or treatment that is applied. In this article, a structured and detailed literature survey on uplift modeling is provided by identifying and contrasting various groups of approaches. In addition, evaluation metrics for assessing the performance of uplift models are reviewed. An experimental evaluation on four real-world data sets provides further insight into their use. Uplift random forests are found to be consistently among the best performing techniques in terms of the Qini and Gini measures, although considerable variability in performance across the various data sets of the experiments is observed. In addition, uplift models are frequently observed to be unstable, displaying strong variability in performance across different folds in the cross-validation experimental setup. This potentially threatens their actual use for business applications. Moreover, it is found that the available evaluation metrics do not provide an intuitively understandable indication of the actual use and performance of a model. Specifically, existing evaluation metrics do not facilitate a comparison of uplift models and predictive models, and they evaluate performance either at an arbitrary cutoff or over the full spectrum of potential cutoffs. In conclusion, we highlight the instability of uplift models and the need for an application-oriented approach to assessing uplift models as prime topics for further research.
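At its core, uplift is the difference in outcome rates between treated and control observations. The minimal sketch below uses hypothetical data and the simplest possible estimator, not the uplift random forests evaluated in the article.

```python
# Each tuple is (treatment flag, observed outcome) for one customer (hypothetical data).
data = [(1, 1), (1, 1), (1, 0), (1, 1), (0, 0), (0, 1), (0, 0), (0, 0)]
treated = [y for t, y in data if t == 1]
control = [y for t, y in data if t == 0]

# Uplift = outcome rate under treatment minus outcome rate without it.
uplift = sum(treated) / len(treated) - sum(control) / len(control)
print(uplift)
```

Real uplift models estimate this quantity conditionally on covariates, so the treatment can be targeted at the subgroups where the estimated net effect is largest.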
A semi-analytical refrigeration cycle modelling approach for a heat pump hot water heater
NASA Astrophysics Data System (ADS)
Panaras, G.; Mathioulakis, E.; Belessiotis, V.
2018-04-01
The use of heat pump systems in applications such as hot water production or space heating makes modelling of the underlying processes important, both for evaluating the performance of existing systems and for design purposes. The proposed semi-analytical model offers the opportunity to estimate the performance of a heat pump system producing hot water without using detailed geometrical data or any performance data. This is important, as for many commercial systems the type and characteristics of the subcomponents involved can hardly be determined, thus preventing the implementation of more analytical approaches or the exploitation of manufacturers' catalogue performance data. The analysis addresses the issues related to developing models of the subcomponents involved in the studied system. Issues not discussed thoroughly in the existing literature, such as the refrigerant mass inventory when an accumulator is present, are examined effectively.
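As a rough illustration of estimating heat pump performance without detailed component data (this is a generic textbook simplification, not the authors' semi-analytical model), one can scale the Carnot COP by an assumed overall efficiency factor:

```python
# Carnot-based heating COP with an assumed overall efficiency factor eta (hypothetical).
def cop_heating(t_evap_c, t_cond_c, eta=0.45):
    t_e, t_c = t_evap_c + 273.15, t_cond_c + 273.15   # convert to kelvin
    return eta * t_c / (t_c - t_e)                     # fraction of the ideal Carnot COP

# Evaporating at 5 C, condensing at 55 C (typical hot-water temperature lift, assumed)
print(round(cop_heating(5.0, 55.0), 2))
```

The semi-analytical approach in the paper replaces the single lumped factor eta with physically based submodels of the cycle components, which is what allows it to track performance over varying operating conditions.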
Experimental and analytical studies for the NASA carbon fiber risk assessment
NASA Technical Reports Server (NTRS)
1980-01-01
Various experimental and analytical studies performed for the NASA carbon fiber risk assessment program are described with emphasis on carbon fiber characteristics, sensitivity of electrical equipment and components to shorting or arcing by carbon fibers, attenuation effect of carbon fibers on aircraft landing aids, impact of carbon fibers on industrial facilities. A simple method of estimating damage from airborne carbon fibers is presented.
Table look-up estimation of signal and noise parameters from quantized observables
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Rodemich, E. R.
1986-01-01
A table look-up algorithm for estimating underlying signal and noise parameters from quantized observables is examined. A general mathematical model is developed, and a look-up table designed specifically for estimating parameters from four-bit quantized data is described. Estimator performance is evaluated both analytically and by means of numerical simulation, and an example is provided to illustrate the use of the look-up table for estimating signal-to-noise ratios commonly encountered in Voyager-type data.
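A toy version of the look-up idea is sketched below. The table values are hypothetical; the actual look-up table described in the report is designed for four-bit quantized data and Voyager-type signal-to-noise ratios. The principle is the same: precompute a statistic-to-parameter mapping, then interpolate at run time.

```python
import bisect

# Hypothetical look-up table: a statistic computed from quantized samples
# (e.g., the fraction of samples in the upper quantization levels) vs. SNR.
table_stat = [0.02, 0.05, 0.10, 0.20, 0.35]   # precomputed statistic values (assumed)
table_snr = [-5.0, -2.0, 0.0, 3.0, 6.0]       # corresponding SNR estimates, dB (assumed)

def lookup_snr(stat):
    """Estimate SNR by linear interpolation between bracketing table entries."""
    i = bisect.bisect_left(table_stat, stat)
    i = min(max(i, 1), len(table_stat) - 1)    # clamp so both neighbors exist
    x0, x1 = table_stat[i - 1], table_stat[i]
    y0, y1 = table_snr[i - 1], table_snr[i]
    return y0 + (stat - x0) * (y1 - y0) / (x1 - x0)

print(lookup_snr(0.15))
```

Because the table is precomputed offline, the run-time cost is just a search and an interpolation, which is the attraction of the table look-up approach.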
Bassuoni, M.M.
2013-01-01
The dehumidifier is a key component in liquid desiccant air-conditioning systems. Analytical solutions have advantages over numerical solutions in studying dehumidifier performance parameters. This paper presents the exit-parameter performance results of an analytical model of an adiabatic cross-flow liquid desiccant air dehumidifier. Calcium chloride is used as the desiccant material in this investigation. A program performing the analytical solution is developed using the Engineering Equation Solver software. Good agreement has been found between the analytical solution and reliable experimental results, with a maximum deviation of +6.63% and −5.65% in the moisture removal rate. The method developed here can be used for quick prediction of dehumidifier performance. The exit parameters of the dehumidifier are evaluated under the effects of variables such as air temperature and humidity, desiccant temperature and concentration, and air-to-desiccant flow rates. The results show that hot humid air and desiccant concentration have the greatest impact on the performance of the dehumidifier. The moisture removal rate decreases with increasing air inlet temperature and desiccant temperature, while it increases with increasing air-to-solution mass ratio, inlet desiccant concentration, and inlet air humidity ratio.
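In its simplest form, the moisture removal rate discussed above is the air mass flow rate times the drop in humidity ratio across the dehumidifier. The values below are assumed for illustration, not the paper's experimental conditions:

```python
# Moisture removal rate = air mass flow rate * humidity-ratio drop across the unit.
def moisture_removal_rate(m_air_kg_s, w_in, w_out):
    """Returns kg of water removed per second; w_in/w_out in kg water per kg dry air."""
    return m_air_kg_s * (w_in - w_out)

# Assumed: 0.5 kg/s of process air dried from 0.018 to 0.012 kg/kg
mrr = moisture_removal_rate(0.5, 0.018, 0.012)
print(round(mrr * 3600, 2), "kg/h")
```

The analytical model's job is to predict the exit humidity ratio w_out from the inlet air and desiccant states; once it is known, the removal rate follows directly from this balance.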
Validating Analytical Protocols to Determine Selected Pesticides and PCBs Using Routine Samples.
Pindado Jiménez, Oscar; García Alonso, Susana; Pérez Pastor, Rosa María
2017-01-01
This study aims at providing recommendations concerning the validation of analytical protocols using routine samples, and is intended as a case study on how to validate analytical methods in different environmental matrices. In order to analyze the selected compounds (pesticides and polychlorinated biphenyls) in two different environmental matrices, the current work developed and validated two analytical procedures by GC-MS. A description is given of the validation of the two protocols through the analysis of more than 30 samples of water and sediments collected over nine months. The present work also covers the uncertainty associated with both analytical protocols. Specifically, the uncertainty for the water samples was estimated through a conventional approach, whereas for the sediment matrices the estimation of proportional/constant bias is also included because of their inhomogeneity. Results for the sediment matrix are reliable, showing a 25-35% range of analytical variability associated with intermediate conditions. The analytical methodology for the water matrix determines the selected compounds with acceptable recoveries, and the combined uncertainty ranges between 20 and 30%. Analysis of routine samples is rarely used to assess the trueness of novel analytical methods, and until now this methodology had not been applied to organochlorine compounds in environmental matrices.
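A combined uncertainty in the 20-30% range is consistent with the conventional approach of summing independent relative uncertainty components in quadrature, as sketched below. The component names and values are hypothetical, not the study's budget:

```python
import math

# Hypothetical relative standard uncertainty components for one analyte
components = {"recovery": 0.12, "calibration": 0.10, "precision": 0.15}

# Conventional approach: combine independent relative components in quadrature
u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
print(round(100 * u_combined, 1), "% combined relative uncertainty")
```

For the sediment matrix, an additional bias term (proportional/constant) would enter this budget, which is why its variability is larger.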
Piezocone Penetration Testing Device
DOT National Transportation Integrated Search
2017-01-03
Hydraulic characteristics of soils can be estimated from piezocone penetration test (called PCPT hereinafter) by performing dissipation test or on-the-fly using advanced analytical techniques. This research report presents a method for fast estimatio...
Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level
Savalei, Victoria; Rhemtulla, Mijke
2017-01-01
In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data—that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study.
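The analytic step at the heart of a two-stage approach can be sketched as follows: given stage-1 saturated estimates of the item-level mean vector and covariance matrix (obtained, for example, by maximum likelihood on the incomplete items), the moments of any linear composites follow in closed form. All numbers below are hypothetical:

```python
import numpy as np

# Stage-1 saturated estimates for four items (assumed values for illustration)
mu = np.array([2.0, 3.0, 2.5, 3.5])            # item means
Sigma = np.array([[1.0, 0.5, 0.2, 0.1],
                  [0.5, 1.2, 0.3, 0.2],
                  [0.2, 0.3, 0.9, 0.4],
                  [0.1, 0.2, 0.4, 1.1]])       # item covariance matrix

# Composite weights: parcel 1 = mean of items 1-2, parcel 2 = mean of items 3-4
A = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5]])

# Stage 2: moments of the composites follow analytically, no imputation needed
mu_c = A @ mu
Sigma_c = A @ Sigma @ A.T
print(mu_c)
print(Sigma_c)
```

The composite-level model is then fitted to mu_c and Sigma_c, with standard errors propagated from stage 1, which is what distinguishes TSML from simply pretending the composites were directly observed.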
The Importance of Method Selection in Determining Product Integrity for Nutrition Research
Mudge, Elizabeth M; Brown, Paula N
2016-01-01
The American Herbal Products Association estimates that there are as many as 3000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application.
Dynamic imaging model and parameter optimization for a star tracker.
Yan, Jinyun; Jiang, Jie; Zhang, Guangjun
2016-03-21
Under dynamic conditions, star spots move across the image plane of a star tracker and form a smeared star image. This smearing effect increases errors in star position estimation and degrades attitude accuracy. First, an analytical energy distribution model of a smeared star spot is established based on a line segment spread function because the dynamic imaging process of a star tracker is equivalent to the static imaging process of linear light sources. The proposed model, which has a clear physical meaning, explicitly reflects the key parameters of the imaging process, including incident flux, exposure time, velocity of a star spot in an image plane, and Gaussian radius. Furthermore, an analytical expression of the centroiding error of the smeared star spot is derived using the proposed model. An accurate and comprehensive evaluation of centroiding accuracy is obtained based on the expression. Moreover, analytical solutions of the optimal parameters are derived to achieve the best performance in centroid estimation. Finally, we perform numerical simulations and a night sky experiment to validate the correctness of the dynamic imaging model, the centroiding error expression, and the optimal parameters.
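A numerical sketch of the smearing model: integrate a Gaussian point-spread function along a line segment representing the star's motion during the exposure, then compute the intensity centroid, which for a symmetric segment falls at its midpoint. The geometry and Gaussian radius below are assumed, not the paper's parameters:

```python
import numpy as np

size, sigma = 64, 1.5                    # image size (pixels) and Gaussian radius (assumed)
x0, x1, y0 = 28.0, 36.0, 32.0            # star motion from (28, 32) to (36, 32) (assumed)
ys, xs = np.mgrid[0:size, 0:size]
img = np.zeros((size, size))

# Dynamic imaging as a static linear light source: integrate the PSF along the path
for t in np.linspace(0.0, 1.0, 200):
    cx = x0 + t * (x1 - x0)
    img += np.exp(-((xs - cx) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))

# Intensity-weighted centroid of the smeared spot
cx_est = (img * xs).sum() / img.sum()
cy_est = (img * ys).sum() / img.sum()
print(round(cx_est, 2), round(cy_est, 2))
```

With noise, windowing, and thresholding added, the centroid shifts away from the true midpoint; the paper's analytical model predicts that error directly from exposure time, velocity, flux, and Gaussian radius instead of simulating it.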
NASA Astrophysics Data System (ADS)
Ribera, Javier; Tahboub, Khalid; Delp, Edward J.
2015-03-01
Video surveillance systems are widely deployed for public safety. Real-time monitoring and alerting are key requirements for building an intelligent video surveillance system, but real-life settings introduce many challenges that can impact the performance of real-time video analytics, which should therefore be resilient to adverse and changing scenarios. In this paper we present various approaches to characterizing the uncertainty of a classifier and incorporating crowdsourcing at the times when the method is uncertain about making a particular decision, an approach known as online active learning from crowds. We evaluate our proposed approach by testing a method we developed previously for crowd flow estimation. We present three different approaches to characterizing the uncertainty of the classifier in the automatic crowd flow estimation method and test them by introducing video quality degradations. Criteria to aggregate crowdsourcing results are also proposed and evaluated. An experimental evaluation is conducted using a publicly available dataset.
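A minimal sketch of the uncertainty-triggered loop, using a confidence-margin criterion and majority-vote aggregation. These are one possible pair of choices; the paper evaluates several uncertainty characterizations and aggregation criteria, and the labels below are hypothetical:

```python
# Decide automatically when confident; otherwise fall back to crowd majority vote.
def decide(probs, crowd_labels, margin_threshold=0.2):
    ranked = sorted(probs.items(), key=lambda kv: -kv[1])
    margin = ranked[0][1] - ranked[1][1]     # gap between top two class probabilities
    if margin >= margin_threshold:
        return ranked[0][0]                  # confident: keep the automatic decision
    # uncertain: query the crowd and aggregate by majority vote
    return max(set(crowd_labels), key=crowd_labels.count)

# Hypothetical crowd-flow classification with a narrow margin, resolved by the crowd
print(decide({"high_flow": 0.45, "low_flow": 0.40},
             ["low_flow", "low_flow", "high_flow"]))
```

The margin threshold controls the trade-off at the heart of online active learning: a higher threshold routes more frames to the crowd, improving robustness to quality degradations at the cost of more human effort.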
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindberg, Michael J.
2010-09-28
Between October 14, 2009 and February 22, 2010, sediment samples were received from the 100-BC Decision Unit for geochemical studies. This is an analytical data report for sediments received from CHPRC at the 100-BC-5 OU. The analyses for this project were performed at the 325 Building, located in the 300 Area of the Hanford Site, according to Pacific Northwest National Laboratory (PNNL) approved procedures and/or nationally recognized test procedures. The data sets include the sample identification numbers, analytical results, estimated quantification limits (EQL), and quality control data. The preparatory and analytical quality control requirements, calibration requirements, acceptance criteria, and failure actions are defined in the on-line QA plan 'Conducting Analytical Work in Support of Regulatory Programs' (CAW). This QA plan implements the Hanford Analytical Services Quality Assurance Requirements Documents (HASQARD) for PNNL.
Multispectral scanner system parameter study and analysis software system description, volume 2
NASA Technical Reports Server (NTRS)
Landgrebe, D. A. (Principal Investigator); Mobasseri, B. G.; Wiersma, D. J.; Wiswell, E. R.; Mcgillem, C. D.; Anuta, P. E.
1978-01-01
The author has identified the following significant results. The integration of the available methods provided the analyst with the unified scanner analysis package (USAP), whose flexibility and versatility were superior to many previous integrated techniques. The USAP consisted of three main subsystems: (1) a spatial path, (2) a spectral path, and (3) a set of analytic classification accuracy estimators which evaluated the system performance. The spatial path consisted of satellite and/or aircraft data, a data correlation analyzer, the scanner IFOV, and a random noise model. The output of the spatial path was fed into the analytic classification and accuracy predictor. The spectral path consisted of laboratory and/or field spectral data, EXOSYS data retrieval, optimum spectral function calculation, data transformation, and statistics calculation. The output of the spectral path was fed into the stratified posterior performance estimator.
Analytical Model For Fluid Dynamics In A Microgravity Environment
NASA Technical Reports Server (NTRS)
Naumann, Robert J.
1995-01-01
Report presents analytical approximation methodology for providing coupled fluid-flow, heat, and mass-transfer equations in microgravity environment. Experimental engineering estimates accurate to within factor of 2 made quickly and easily, eliminating need for time-consuming and costly numerical modeling. Any proposed experiment reviewed to see how it would perform in microgravity environment. Model applied in commercial setting for preliminary design of low-Grashof/Rayleigh-number experiments.
Advanced Video Activity Analytics (AVAA): Human Performance Model Report
2017-12-01
Advanced Video Activity Analytics (AVAA) system. AVAA was designed to help US Army Intelligence Analysts exploit full-motion video more efficiently and
Aquatic concentrations of chemical analytes compared to ecotoxicity estimates
Kostich, Mitchell S.; Flick, Robert W.; Batt, Angela L.; Mash, Heath E.; Boone, J. Scott; Furlong, Edward T.; Kolpin, Dana W.; Glassmeyer, Susan T.
2017-01-01
We describe screening level estimates of potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part of a joint USGS/USEPA drinking water plant study. Measured concentrations were compared to biological effect concentration (EC) estimates, including USEPA aquatic life criteria, effective plasma concentrations of pharmaceuticals, published toxicity data summarized in the USEPA ECOTOX database, and chemical structure-based predictions. Potential dietary exposures were estimated using a generic 3-tiered food web accumulation scenario. For many analytes, few or no measured effect data were found, and for some analytes, reporting limits exceeded EC estimates, limiting the scope of conclusions. Results suggest occasional occurrence above ECs for copper, aluminum, strontium, lead, uranium, and nitrate. Sparse effect data for manganese, antimony, and vanadium suggest that these analytes may occur above ECs, but additional effect data would be desirable to corroborate EC estimates. These conclusions were not affected by bioaccumulation estimates. No organic analyte concentrations were found to exceed EC estimates, but ten analytes had concentrations in excess of 1/10th of their respective EC: triclocarban, norverapamil, progesterone, atrazine, metolachlor, triclosan, para-nonylphenol, ibuprofen, venlafaxine, and amitriptyline, suggesting that these analytes may warrant more detailed characterization.
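The screening comparison amounts to ratios of measured concentration to EC estimate, flagged at the EC and EC/10 thresholds. The sketch below uses purely illustrative numbers, not the study's measured values:

```python
# Illustrative screening: flag analytes by measured-concentration / EC ratio.
measured = {"copper": 12.0, "atrazine": 0.30, "ibuprofen": 0.05}   # ug/L, assumed
ec = {"copper": 9.0, "atrazine": 2.5, "ibuprofen": 0.40}           # ug/L, assumed

for analyte, c in measured.items():
    ratio = c / ec[analyte]
    if ratio > 1:
        flag = "exceeds EC"
    elif ratio > 0.1:
        flag = "exceeds EC/10"       # candidate for more detailed characterization
    else:
        flag = "below EC/10"
    print(analyte, round(ratio, 2), flag)
```

A reporting limit above the EC makes this ratio uncomputable for non-detects, which is exactly the limitation the abstract notes for some analytes.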
Olsen, Morten Tange; Bérubé, Martine; Robbins, Jooke; Palsbøll, Per J
2012-09-06
Telomeres, the protective cap of chromosomes, have emerged as powerful markers of biological age and life history in model and non-model species. The qPCR method is one of the most common methods for telomere length estimation, but has received recent critique for being too error-prone and yielding unreliable results. This critique coincides with an increasing awareness of the potentials and limitations of the qPCR technique in general and the proposal of a general set of guidelines (MIQE) for standardization of the experimental, analytical, and reporting steps of qPCR. In order to evaluate the utility of the qPCR method for telomere length estimation in non-model species, we carried out four different qPCR assays directed at humpback whale telomeres, and subsequently performed a rigorous quality control to evaluate the performance of each assay. Performance differed substantially among assays and only one assay was found useful for telomere length estimation in humpback whales. The most notable factors causing these inter-assay differences were primer design and the choice between singleplex and multiplex assays. Inferred amplification efficiencies differed by up to 40% depending on assay and quantification method; however, this variation only affected telomere length estimates in the worst performing assays. Our results suggest that seemingly well performing qPCR assays may contain biases that will only be detected by extensive quality control. Moreover, we show that the qPCR method for telomere length estimation can be highly precise and accurate, and thus suitable for telomere measurement in non-model species, if effort is devoted to optimization at all experimental and analytical steps. We conclude by highlighting a set of quality controls which may serve for further standardization of the qPCR method for telomere length estimation, and discuss some of the factors that may cause variation in qPCR experiments.
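Amplification efficiency, whose inter-assay variation drove the differences above, is conventionally derived from the slope of a standard curve of quantification cycle versus log input quantity; a sketch:

```python
# Amplification efficiency from a qPCR standard-curve slope:
# E = 10**(-1/slope) - 1, where E = 1.0 means perfect doubling each cycle.
def efficiency(slope):
    return 10 ** (-1.0 / slope) - 1.0

print(round(efficiency(-3.32), 3))   # slope near -3.32 corresponds to ~100% efficiency
```

Because telomere length is inferred from the ratio of telomere to single-copy-gene amplification, an efficiency mismatch between the two reactions biases the estimate multiplicatively per cycle, which is why efficiency checks belong in the quality-control set the authors highlight.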
Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2015-01-01
This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
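The exhaustive search over candidate sensor suites can be illustrated with a toy linear model. Everything below is hypothetical — the sensor names, the random measurement matrix, and the noise levels — and the metric shown is the trace of the MAP posterior covariance, one form of the theoretical sum of squared estimation errors:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

n_health = 3                                       # health parameters (e.g. module efficiencies)
sensors = ["T25", "P25", "T3", "P3", "T45", "N1"]  # hypothetical gas-path sensors
H = rng.normal(size=(len(sensors), n_health))      # hypothetical linear measurement matrix
R = np.diag(rng.uniform(0.5, 2.0, len(sensors)))   # hypothetical sensor noise covariance
P0 = np.eye(n_health)                              # prior covariance of health parameters

def sse_metric(idx):
    """Theoretical sum of squared estimation errors for a MAP estimator:
    trace of the posterior covariance (H' R^-1 H + P0^-1)^-1."""
    i = list(idx)
    P = np.linalg.inv(H[i].T @ np.linalg.inv(R[np.ix_(i, i)]) @ H[i] + np.linalg.inv(P0))
    return float(np.trace(P))

baseline = (0, 1)                                  # sensors assumed always installed
candidates = [baseline + extra
              for k in range(len(sensors) - len(baseline) + 1)
              for extra in itertools.combinations(range(len(baseline), len(sensors)), k)]
best = min(candidates, key=sse_metric)
print([sensors[i] for i in best])
```

Because the optional sensors carry no cost in this sketch, the search always ends at the full suite; a realistic study would trade the metric against sensor count, weight, or cost.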
A genetic algorithm-based job scheduling model for big data analytics.
Lu, Qinghua; Li, Shanshan; Zhang, Weishan; Zhang, Lei
Big data analytics (BDA) applications are a new category of software applications that process large amounts of data using scalable parallel processing infrastructure to obtain hidden value. Hadoop is the most mature open-source big data analytics framework, which implements the MapReduce programming model to process big data with MapReduce jobs. Big data analytics jobs are often continuous and not mutually separated. Existing work mainly focuses on executing jobs in sequence, which is often inefficient and consumes high energy. In this paper, we propose a genetic algorithm-based job scheduling model for big data analytics applications to improve the efficiency of big data analytics. To implement the job scheduling model, we leverage an estimation module to predict the performance of clusters when executing analytics jobs. We have evaluated the proposed job scheduling model in terms of feasibility and accuracy.
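As a rough illustration of the approach (not the authors' model), a permutation-coded genetic algorithm can order jobs using runtimes supplied by an estimation module. The objective here is total completion time, and all runtimes are made up:

```python
import random

random.seed(1)
runtimes = [5, 2, 8, 3, 6]       # hypothetical estimated job runtimes

def total_completion_time(order):
    # Sum of job finishing times when jobs run back-to-back in this order.
    t = total = 0
    for j in order:
        t += runtimes[j]
        total += t
    return total

def crossover(a, b):
    # Keep a prefix of parent a; fill the rest in parent b's order.
    cut = random.randrange(1, len(a))
    return a[:cut] + [j for j in b if j not in a[:cut]]

def mutate(p):
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]

pop = [random.sample(range(len(runtimes)), len(runtimes)) for _ in range(20)]
for _ in range(50):
    pop.sort(key=total_completion_time)                 # elitist selection
    pop = pop[:10] + [crossover(*random.sample(pop[:10], 2)) for _ in range(10)]
    for child in pop[10:]:
        if random.random() < 0.3:
            mutate(child)

best = min(pop, key=total_completion_time)
print(best, total_completion_time(best))
```

For this objective the optimum is shortest-job-first; a real BDA scheduler would use a richer fitness function (cluster load, data locality, energy).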
Analytical Study on Flight Performance of a RP Laser Launcher
NASA Astrophysics Data System (ADS)
Katsurayama, H.; Ushio, M.; Komurasaki, K.; Arakawa, Y.
2005-04-01
An air-breathing RP laser launcher has been proposed as an alternative to conventional chemical launch systems. This paper analytically examines the feasibility of an SSTO system powered by RP lasers. The trajectory from the ground to geosynchronous orbit is computed and the launch cost, including laser-base development, is estimated. The engine performance is evaluated by CFD computations and a cycle analysis. The results show that a beam power of 2.3 MW per unit initial vehicle mass is optimal for reaching a geosynchronous transfer orbit, and that 3,000 launches are necessary to recoup the cost of the laser transmitter.
High-performance heat pipes for heat recovery applications
NASA Technical Reports Server (NTRS)
Saaski, E. W.; Hartl, J. H.
1980-01-01
Methods to improve the performance of reflux heat pipes for heat recovery applications were examined both analytically and experimentally. Various models for the estimation of reflux heat pipe transport capacity were surveyed in the literature and compared with experimental data. A high transport capacity reflux heat pipe was developed that provides up to a factor of 10 capacity improvement over conventional open tube designs; analytical models were developed for this device and incorporated into a computer program HPIPE. Good agreement of the model predictions with data for R-11 and benzene reflux heat pipes was obtained.
Aquatic concentrations of chemical analytes compared to ...
We describe screening level estimates of potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part of a joint USGS/USEPA drinking water plant study. Measured concentrations were compared to biological effect concentration (EC) estimates, including USEPA aquatic life criteria, effective plasma concentrations of pharmaceuticals, published toxicity data summarized in the USEPA ECOTOX database, and chemical structure-based predictions. Potential dietary exposures were estimated using a generic 3-tiered food web accumulation scenario. For many analytes, few or no measured effect data were found, and for some analytes, reporting limits exceeded EC estimates, limiting the scope of conclusions. Results suggest occasional occurrence above ECs for copper, aluminum, strontium, lead, uranium, and nitrate. Sparse effect data for manganese, antimony, and vanadium suggest that these analytes may occur above ECs, but additional effect data would be desirable to corroborate EC estimates. These conclusions were not affected by bioaccumulation estimates. No organic analyte concentrations were found to exceed EC estimates, but ten analytes had concentrations in excess of 1/10th of their respective EC: triclocarban, norverapamil, progesterone, atrazine, metolachlor, triclosan, para-nonylphenol, ibuprofen, venlafaxine, and amitriptyline, suggesting that more detailed characterization of these analytes may be warranted.
Acoustic fatigue life prediction for nonlinear structures with multiple resonant modes
NASA Technical Reports Server (NTRS)
Miles, R. N.
1992-01-01
This report documents an effort to develop practical and accurate methods for estimating the fatigue lives of complex aerospace structures subjected to intense random excitations. The emphasis of the current program is to construct analytical schemes for performing fatigue life estimates for structures that exhibit nonlinear vibration behavior and that have numerous resonant modes contributing to the response.
NASA Technical Reports Server (NTRS)
Esaias, Wayne E.; Abbott, Mark; Carder, Kendall; Campbell, Janet; Clark, Dennis; Evans, Robert; Brown, Otis; Kearns, Ed; Kilpatrick, Kay; Balch, W.
2003-01-01
Simplistic models relating global satellite ocean color, temperature, and light to ocean net primary production (ONPP) are sensitive to the accuracy and limitations of the satellite estimate of chlorophyll and other input fields, as well as the primary productivity model. The standard MODIS ONPP product uses the new semi-analytic chlorophyll algorithm as its input for two ONPP indexes. The three primary MODIS chlorophyll estimates, as well as the SeaWiFS chlorophyll product, were used to assess global and regional performance in estimating ONPP for the full mission, concentrating on 2001. The two standard ONPP algorithms were examined at 8-day and 39 kilometer resolution to quantify the chlorophyll algorithm dependency of ONPP. Ancillary data (MLD from FNMOC, MODIS SSTD1, and PAR from the GSFC DAO) were identical. The standard MODIS ONPP estimates for annual production in 2001 were 59 and 58 Gt C for the two ONPP algorithms. Differences in ONPP using alternate chlorophylls were on the order of 10% for global annual ONPP, but ranged to 100% regionally. On all scales the differences in ONPP were smaller between MODIS and SeaWiFS than between ONPP models, or among chlorophyll algorithms within MODIS. The largest regional ONPP differences were found in the Southern Ocean (SO). In the SO, application of the semi-analytic chlorophyll resulted in not only a magnitude difference in ONPP (2x), but also a temporal shift in the time of maximum production compared to empirical algorithms when summed over standard oceanic areas. The resulting increase in global ONPP (6-7 Gt) is supported by better performance of the semi-analytic chlorophyll in the SO and other high chlorophyll regions. The differences are significant in terms of understanding regional differences and dynamics of ocean carbon transformations.
Hansen, Steen Ingemann; Petersen, Per Hyltoft; Lund, Flemming; Fraser, Callum G; Sölétormos, György
2017-10-26
During monitoring of monthly medians of patient results undertaken to assess analytical stability in routine laboratory performance, the medians for serum sodium for male and female patients were found to be significantly related. Daily, weekly and monthly patient medians of serum sodium for both male and female patients were calculated from results obtained on samples from the population >18 years on three analysers in the hospital laboratory. The half-range of medians was applied as an estimate of the maximum bias. Further, the ratios between the two medians were calculated. The medians of both genders demonstrated dispersions over time, but they were closely connected in like patterns, which was confirmed by the half-range of the ratios of medians for males and females, which varied from 0.36% for daily, 0.14% for weekly and 0.036% for monthly ratios over all instruments. The tight relationship between the gender medians for serum sodium is only possible when raw laboratory data are used for calculation. The two patient medians can be used to confirm each other and are useful as independent estimates of analytical bias during constant calibration periods. In contrast to the gender-combined median, the estimate of analytical bias can be confirmed further by calculation of the ratios of medians for males and females.
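The half-range statistic used above as a bias bound is simple to compute. The monthly medians below are hypothetical, chosen only to show that the male/female ratio varies far less than either gender's median alone:

```python
import statistics

def half_range_pct(series):
    # Half-range as a percentage of the mean: an estimate of maximum bias.
    return 100.0 * (max(series) - min(series)) / 2.0 / statistics.mean(series)

# Hypothetical monthly serum sodium medians (mmol/L)
male   = [139.0, 139.2, 138.9, 139.1]
female = [138.6, 138.8, 138.5, 138.7]
ratios = [m / f for m, f in zip(male, female)]

# The ratio of gender medians is much more stable than either median alone:
print(round(half_range_pct(male), 3), round(half_range_pct(ratios), 4))
```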
2010-01-01
Background The Oncotype DX® Colon Cancer Assay is a new diagnostic test for determining the likelihood of recurrence in stage II colon cancer patients after surgical resection using fixed paraffin embedded (FPE) primary colon tumor tissue. Like the Oncotype DX Breast Cancer Assay, this is a high complexity, multi-analyte, reverse transcription (RT) polymerase chain reaction (PCR) assay that measures the expression levels of specific cancer-related genes. By capturing the biology underlying each patient's tumor, the Oncotype DX Colon Cancer Assay provides a Recurrence Score (RS) that reflects an individualized risk of disease recurrence. Here we describe its analytical performance using pre-determined performance criteria, which is a critical component of molecular diagnostic test validation. Results All analytical measurements met pre-specified performance criteria. PCR amplification efficiency for all 12 assays was high, ranging from 96% to 107%, while linearity was demonstrated over an 11 log2 concentration range for all assays. Based on estimated components of variance for FPE RNA pools, analytical reproducibility and precision demonstrated low SDs for individual genes (0.16 to 0.32 CTs), gene groups (≤0.05 normalized/aggregate CTs) and RS (≤1.38 RS units). Conclusions Analytical performance characteristics shown here for both individual genes and gene groups in the Oncotype DX Colon Cancer Assay demonstrate consistent translation of specific biology of individual tumors into clinically useful diagnostic information. The results of these studies illustrate how the analytical capability of the Oncotype DX Colon Cancer Assay has enabled clinical validation of a test to determine individualized recurrence risk after colon cancer surgery. PMID:21176237
Analytical performance evaluation of SAR ATR with inaccurate or estimated models
NASA Astrophysics Data System (ADS)
DeVore, Michael D.
2004-09-01
Hypothesis testing algorithms for automatic target recognition (ATR) are often formulated in terms of some assumed distribution family. The parameter values corresponding to a particular target class together with the distribution family constitute a model for the target's signature. In practice such models exhibit inaccuracy because of incorrect assumptions about the distribution family and/or because of errors in the assumed parameter values, which are often determined experimentally. Model inaccuracy can have a significant impact on performance predictions for target recognition systems. Such inaccuracy often causes model-based predictions that ignore the difference between assumed and actual distributions to be overly optimistic. This paper reports on research to quantify the effect of inaccurate models on performance prediction and to estimate the effect using only trained parameters. We demonstrate that for large observation vectors the class-conditional probabilities of error can be expressed as a simple function of the difference between two relative entropies. These relative entropies quantify the discrepancies between the actual and assumed distributions and can be used to express the difference between actual and predicted error rates. Focusing on the problem of ATR from synthetic aperture radar (SAR) imagery, we present estimators of the probabilities of error in both ideal and plug-in tests expressed in terms of the trained model parameters. These estimators are defined in terms of unbiased estimates for the first two moments of the sample statistic. We present an analytical treatment of these results and include demonstrations from simulated radar data.
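The central quantity above, a difference between two relative entropies, can be made concrete for univariate Gaussian class models. The distribution parameters below are invented for illustration and this is only a sketch of the idea, not the paper's estimator:

```python
import math

def kl_gauss(mu_p, var_p, mu_q, var_q):
    # Relative entropy D(p||q) between univariate Gaussians, in nats.
    return 0.5 * (math.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

actual = (0.0, 1.0)        # hypothetical true class-0 distribution
model0 = (0.1, 1.2)        # inaccurate trained model of class 0
model1 = (1.0, 1.0)        # trained model of the competing class

# The class-0 error behaviour is governed by the difference of two relative
# entropies: discrepancy to the competing class minus discrepancy to own model.
exponent = kl_gauss(*actual, *model1) - kl_gauss(*actual, *model0)
print(round(exponent, 4))  # → 0.488
```

Shrinking the own-model discrepancy (better training) widens the gap and lowers the predicted error rate, matching the intuition that ignoring model inaccuracy yields optimistic predictions.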
A complex symbol signal-to-noise ratio estimator and its performance
NASA Technical Reports Server (NTRS)
Feria, Y.
1994-01-01
This article presents an algorithm for estimating the signal-to-noise ratio (SNR) of signals that contain data on a downconverted suppressed carrier or the first harmonic of a square-wave subcarrier. This algorithm can be used to determine the performance of the full-spectrum combiner for the Galileo S-band (2.2- to 2.3-GHz) mission by measuring the input and output symbol SNR. A performance analysis of the algorithm shows that the estimator can estimate the complex symbol SNR using 10,000 symbols at a true symbol SNR of -5 dB with a mean of -4.9985 dB and a standard deviation of 0.2454 dB, and these analytical results are checked by simulations of 100 runs with a mean of -5.06 dB and a standard deviation of 0.2506 dB.
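A moment-based (M2M4) estimator is one generic way to estimate SNR from complex symbols without knowing the data; this sketch is not necessarily the article's algorithm, but it is run at the same -5 dB, 10,000-symbol operating point cited above:

```python
import numpy as np

rng = np.random.default_rng(42)

def m2m4_snr(r):
    """Moment-based (M2M4) SNR estimate for constant-modulus complex symbols."""
    m2 = np.mean(np.abs(r) ** 2)               # signal power + noise power
    m4 = np.mean(np.abs(r) ** 4)
    s = np.sqrt(max(2.0 * m2 ** 2 - m4, 0.0))  # estimated signal power
    return s / (m2 - s)                        # SNR = S / N

true_snr_db = -5.0
n = 10_000                                     # symbol count used in the article
noise_power = 10.0 ** (-true_snr_db / 10.0)    # unit-power symbols assumed
symbols = rng.choice([-1.0, 1.0], n)           # BPSK data symbols
noise = rng.normal(scale=np.sqrt(noise_power / 2.0), size=(2, n))
r = symbols + noise[0] + 1j * noise[1]

est_db = 10.0 * np.log10(m2m4_snr(r))
print(round(est_db, 2))
```

With 10,000 symbols the estimate lands within a few tenths of a dB of the true -5 dB, consistent with the standard deviations reported in the abstract.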
Merritt, Michael L.
2004-01-01
Aquifers are subjected to mechanical stresses from natural, non-anthropogenic, processes such as pressure loading or mechanical forcing of the aquifer by ocean tides, earth tides, and pressure fluctuations in the atmosphere. The resulting head fluctuations are evident even in deep confined aquifers. The present study was conducted for the purpose of reviewing the research that has been done on the use of these phenomena for estimating the values of aquifer properties, and determining which of the analytical techniques might be useful for estimating hydraulic properties in the dissolved-carbonate hydrologic environment of southern Florida. Fifteen techniques are discussed in this report, of which four were applied. An analytical solution for head oscillations in a well near enough to the ocean to be influenced by ocean tides was applied to data from monitor zones in a well near Naples, Florida. The solution assumes a completely non-leaky confining unit of infinite extent. Resulting values of transmissivity are in general agreement with the results of aquifer performance tests performed by the South Florida Water Management District. There seems to be an inconsistency between results of the amplitude ratio analysis and independent estimates of loading efficiency. A more general analytical solution that takes leakage through the confining layer into account yielded estimates that were lower than those obtained using the non-leaky method, and closer to the South Florida Water Management District estimates. A numerical model with a cross-sectional grid design was applied to explore additional aspects of the problem. A relation between specific storage and the head oscillation observed in a well provided estimates of specific storage that were considered reasonable. Porosity estimates based on the specific storage estimates were consistent with values obtained from measurements on core samples.
Methods are described for determining aquifer diffusivity by comparing the time-varying drawdown in an open well with periodic pressure-head oscillations in the aquifer, but the applicability of such methods might be limited in studies of the Floridan aquifer system.
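The amplitude-ratio technique for tide-influenced wells can be sketched with the classical Jacob-Ferris attenuation formula for a non-leaky confined aquifer; the well distance, tidal period, and amplitude ratio below are hypothetical, not the Naples data:

```python
import math

def tidal_diffusivity(x, amplitude_ratio, tidal_period):
    """Jacob-Ferris amplitude-ratio method: well-head oscillations attenuate as
    ratio = exp(-x * sqrt(pi * S / (t0 * T))), so the hydraulic diffusivity is
    T/S = pi * x**2 / (t0 * ln(ratio)**2)."""
    return math.pi * x ** 2 / (tidal_period * math.log(amplitude_ratio) ** 2)

# Hypothetical well 500 m inland, semidiurnal tide (12.42 h), 10% amplitude ratio
d = tidal_diffusivity(500.0, 0.10, 12.42 * 3600.0)
print(f"{d:.2f} m2/s")  # → 3.31 m2/s
```

Only the ratio T/S is identifiable from the amplitude ratio alone, which is why the report pairs this analysis with independent estimates of storage and loading efficiency.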
Low cost microfluidic device based on cotton threads for electroanalytical application.
Agustini, Deonir; Bergamini, Márcio F; Marcolino-Junior, Luiz Humberto
2016-01-21
Microfluidic devices are an interesting alternative for performing analytical assays, due to the speed of analyses, reduced sample, reagent and solvent consumption and less waste generation. However, the high manufacturing costs still prevent the massive use of these devices worldwide. Here, we present the construction of a low cost microfluidic thread-based electroanalytical device (μTED), employing extremely cheap materials and a manufacturing process free of equipment. The microfluidic channels were built with cotton threads and the estimated cost per device was only $0.39. The flow of solutions (1.12 μL s⁻¹) is generated spontaneously due to the capillary forces, eliminating the use of any pumping system. To demonstrate the analytical performance of the μTED, a simultaneous determination of acetaminophen (ACT) and diclofenac (DCF) was performed by multiple pulse amperometry (MPA). A linear dynamic range (LDR) of 10 to 320 μmol L⁻¹ for both species, a limit of detection (LOD) and a limit of quantitation (LOQ) of 1.4 and 4.7 μmol L⁻¹ and 2.5 and 8.3 μmol L⁻¹ for ACT and DCF, respectively, as well as an analytical frequency of 45 injections per hour were reached. Thus, the proposed device has shown potential to extend the use of microfluidic analytical devices, due to its simplicity, low cost and good analytical performance.
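Detection and quantitation limits like those quoted above are conventionally derived from blank noise and calibration slope; the slope and blank SD below are invented to show the arithmetic (ICH-style factors 3.3 and 10):

```python
def lod_loq(sd_blank, slope):
    # ICH-style limits: LOD = 3.3*sd/slope, LOQ = 10*sd/slope.
    return 3.3 * sd_blank / slope, 10.0 * sd_blank / slope

# Hypothetical amperometric calibration: slope 0.021 uA per umol/L, blank SD 0.009 uA
lod, loq = lod_loq(0.009, 0.021)
print(round(lod, 2), round(loq, 2))  # → 1.41 4.29
```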
Solid-State Thermionic Power Generators: An Analytical Analysis in the Nonlinear Regime
NASA Astrophysics Data System (ADS)
Zebarjadi, M.
2017-07-01
Solid-state thermionic power generators are an alternative to thermoelectric modules. In this paper, we develop an analytical model to investigate the performance of these generators in the nonlinear regime. We identify dimensionless parameters determining their performance and provide measures to estimate an acceptable range of thermal and electrical resistances of thermionic generators. We find the relation between the optimum load resistance and the internal resistance and suggest guidelines for the design of thermionic power generators. Finally, we show that in the nonlinear regime, thermionic power generators can have efficiency values higher than the state-of-the-art thermoelectric modules.
A review of the analytical simulation of aircraft crash dynamics
NASA Technical Reports Server (NTRS)
Fasanella, Edwin L.; Carden, Huey D.; Boitnott, Richard L.; Hayduk, Robert J.
1990-01-01
A large number of full scale tests of general aviation aircraft, helicopters, and one unique air-to-ground controlled impact of a transport aircraft were performed. Additionally, research was also conducted on seat dynamic performance, load-limiting seats, load-limiting subfloor designs, and emergency locator transmitters (ELTs). Computer programs were developed to provide designers with methods for predicting accelerations, velocities, and displacements of collapsing structure and for estimating the human response to crash loads. The results of full scale aircraft and component tests were used to verify and guide the development of analytical simulation tools and to demonstrate impact load attenuating concepts. Analytical simulation of metal and composite aircraft crash dynamics is addressed. Finite element models are examined to determine their degree of corroboration by experimental data and to reveal deficiencies requiring further development.
Pilot testing of SHRP 2 reliability data and analytical products: Florida.
DOT National Transportation Integrated Search
2015-01-01
Transportation agencies have realized the importance of performance estimation, measurement, and management. The Moving Ahead for Progress in the 21st Century Act legislation identifies travel time reliability as one of the goals of the federal highw...
Estimating and Enhancing Public Transit Accessibility for People with Mobility Limitations
DOT National Transportation Integrated Search
2017-06-30
This two-part study employs fine-scale performance measures and analytical techniques designed to evaluate and improve transit services for people experiencing disability. Part one puts forth a series of time-sensitive, general transit feed system (G...
Keiter, David A.; Davis, Amy J.; Rhodes, Olin E.; ...
2017-08-25
Knowledge of population density is necessary for effective management and conservation of wildlife, yet rarely are estimators compared in their robustness to effects of ecological and observational processes, which can greatly influence accuracy and precision of density estimates. For this study, we simulate biological and observational processes using empirical data to assess effects of animal scale of movement, true population density, and probability of detection on common density estimators. We also apply common data collection and analytical techniques in the field and evaluate their ability to estimate density of a globally widespread species. We find that animal scale of movement had the greatest impact on accuracy of estimators, although all estimators suffered reduced performance when detection probability was low, and we provide recommendations as to when each field and analytical technique is most appropriately employed. The large influence of scale of movement on estimator accuracy emphasizes the importance of effective post-hoc calculation of area sampled or use of methods that implicitly account for spatial variation. In particular, scale of movement impacted estimators substantially, such that area covered and spacing of detectors (e.g. cameras, traps, etc.) must reflect movement characteristics of the focal species to reduce bias in estimates of movement and thus density.
Assessing precision, bias and sigma-metrics of 53 measurands of the Alinity ci system.
Westgard, Sten; Petrides, Victoria; Schneider, Sharon; Berman, Marvin; Herzogenrath, Jörg; Orzechowski, Anthony
2017-12-01
Assay performance is dependent on the accuracy and precision of a given method. These attributes can be combined into an analytical Sigma-metric, providing a simple value for laboratorians to use in evaluating a test method's capability to meet its analytical quality requirements. Sigma-metrics were determined for 37 clinical chemistry assays, 13 immunoassays, and 3 ICT methods on the Alinity ci system. Analytical performance specifications were defined for the assays, following a rationale of using CLIA goals first, then Ricos desirable goals when CLIA did not regulate the method, and then other sources if the Ricos desirable goal was unrealistic. A precision study was conducted at Abbott on each assay using the Alinity ci system following the CLSI EP05-A2 protocol. Bias was estimated following the CLSI EP09-A3 protocol using samples with concentrations spanning the assay's measuring interval, tested in duplicate on the Alinity ci system and the ARCHITECT c8000 and i2000 SR systems; this testing was also performed at Abbott. Using the regression model, the %bias was estimated at an important medical decision point, and the Sigma-metric for each assay was plotted on a method decision chart. The Sigma-metric was calculated using the equation: Sigma-metric = (%TEa - |%bias|)/%CV. The Sigma-metrics and normalized method decision charts demonstrate that a majority of the Alinity assays perform at five sigma or higher, at or near critical medical decision levels. More than 90% of the assays performed at five or six sigma; none performed below three sigma. Sigma-metrics plotted on normalized method decision charts provide useful evaluations of performance. The majority of Alinity ci system assays had sigma values >5, so laboratories can expect excellent or world-class performance. Laboratorians can use these tools as aids in choosing high-quality products, further contributing to the delivery of excellent quality healthcare for patients.
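The Sigma-metric equation in the abstract is a one-liner; the allowable error, bias, and CV below are hypothetical:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    # Sigma-metric = (%TEa - |%bias|) / %CV
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical assay: allowable total error 10%, observed bias 1.5%, CV 1.4%
print(round(sigma_metric(10.0, 1.5, 1.4), 2))  # → 6.07
```

A value of six or more means twelve standard deviations fit inside the allowable error band around the bias, i.e. defects are practically negligible.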
Assessment of catchments' flooding potential: a physically-based analytical tool
NASA Astrophysics Data System (ADS)
Botter, G.; Basso, S.; Schirmer, M.
2016-12-01
The assessment of the flooding potential of river catchments is critical in many research and applied fields, ranging from river science and geomorphology to urban planning and the insurance industry. Predicting the magnitude and frequency of floods is key to preventing and mitigating the negative effects of high flows, and has therefore long been a focus of hydrologic research. Here, the recurrence intervals of seasonal flow maxima are estimated through a novel physically-based analytical approach, which links the extremal distribution of streamflows to the stochastic dynamics of daily discharge. An analytical expression of the seasonal flood-frequency curve is provided, whose parameters embody climate and landscape attributes of the contributing catchment and can be estimated from daily rainfall and streamflow data. Only one parameter, which expresses catchment saturation prior to rainfall events, needs to be calibrated on the observed maxima. The method has been tested in a set of catchments featuring heterogeneous daily flow regimes. The model is able to reproduce the characteristic shapes of flood-frequency curves emerging in erratic and persistent flow regimes and provides good estimates of seasonal flow maxima in different climatic regions. Performance remains steady when estimating the magnitude of events with return times longer than the available sample size, which makes the approach especially valuable for regions affected by data scarcity.
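The paper derives an analytical flood-frequency curve; as a point of comparison, the purely empirical recurrence interval of seasonal maxima is often computed with a plotting position, sketched here with invented peak flows:

```python
def return_periods(seasonal_maxima):
    # Weibull plotting position: T = (n + 1) / rank, maxima ranked descending.
    ranked = sorted(seasonal_maxima, reverse=True)
    n = len(ranked)
    return [(q, (n + 1) / rank) for rank, q in enumerate(ranked, start=1)]

# Hypothetical seasonal peak flows (m3/s)
maxima = [310.0, 145.0, 220.0, 180.0, 400.0]
for q, t in return_periods(maxima):
    print(f"{q:6.1f} m3/s  T = {t:.1f} seasons")
```

The empirical approach cannot extrapolate beyond roughly n + 1 seasons of record, which is precisely the regime where the analytical curve is claimed to stay reliable.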
NASA Technical Reports Server (NTRS)
Bowles, Roland L.; Buck, Bill K.
2009-01-01
The objective of the research developed and presented in this document was to statistically assess turbulence hazard detection performance employing airborne pulse Doppler radar systems. The FAA certification methodology for forward-looking airborne turbulence radars will require estimating the probabilities of missed and false hazard indications under operational conditions. Analytical approaches must be used due to the near impossibility of obtaining sufficient statistics experimentally. This report describes an end-to-end analytical technique for estimating these probabilities for Enhanced Turbulence (E-Turb) Radar systems under noise-limited conditions, for a variety of aircraft types, as defined in FAA TSO-C134. This technique provides one means, but not the only means, by which an applicant can demonstrate compliance with the FAA-directed ATDS Working Group performance requirements. Turbulence hazard algorithms were developed that derived predictive estimates of aircraft hazards from basic radar observables. These algorithms were designed to prevent false turbulence indications while accurately predicting areas of elevated turbulence risk to aircraft, passengers, and crew; they were successfully flight tested on a NASA B757-200 and a Delta Air Lines B737-800. Application of this methodology for calculating the probability of missed and false hazard indications, taking into account the effects of the various algorithms used, is demonstrated for representative transport aircraft and radar performance characteristics.
Hawkins, Robert C; Badrick, Tony
2015-08-01
In this study we aimed to compare the reporting unit size used by Australian laboratories for routine chemistry and haematology tests to the unit size used by learned authorities and in standard laboratory textbooks and to the justified unit size based on measurement uncertainty (MU) estimates from quality assurance program data. MU was determined from Royal College of Pathologists of Australasia (RCPA) - Australasian Association of Clinical Biochemists (AACB) and RCPA Haematology Quality Assurance Program survey reports. The reporting unit size implicitly suggested in authoritative textbooks, the RCPA Manual, and the General Serum Chemistry program itself was noted. We also used published data on Australian laboratory practices. The best performing laboratories could justify their chemistry unit size for 55% of analytes, while comparable figures for the 50% and 90% laboratories were 14% and 8%, respectively. Reporting unit size was justifiable for all laboratories for red cell count, for more than 50% of laboratories for haemoglobin, but only for the top 10% for haematocrit. Few, if any, could justify their mean cell volume (MCV) and mean cell haemoglobin concentration (MCHC) reporting unit sizes. The reporting unit size used by many laboratories is not justified by present analytical performance. Using MU estimates to determine the reporting interval for quantitative laboratory results ensures reporting practices match local analytical performance and recognises the inherent error of the measurement process.
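One simple decision rule, an assumption made here for illustration and not necessarily the authors' exact criterion, is that the reporting unit should be no finer than the analytical standard deviation implied by the MU estimate:

```python
def justified_unit(analytical_sd, candidate_units):
    # Smallest candidate reporting unit that is not finer than the analytical SD
    # (assumed rule: finer units imply spurious, unjustified precision).
    supported = [u for u in sorted(candidate_units) if u >= analytical_sd]
    return supported[0] if supported else max(candidate_units)

# Hypothetical serum sodium: SD of 0.8 mmol/L from QA program data
unit = justified_unit(0.8, [0.1, 0.5, 1.0, 5.0])
print(unit)  # → 1.0
```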
NASA Astrophysics Data System (ADS)
Lyubimov, V. V.; Kurkina, E. V.
2018-05-01
The authors consider the problem of a dynamic system passing through a low-order resonance, describing an uncontrolled atmospheric descent of an asymmetric nanosatellite in the Earth's atmosphere. The authors perform mathematical and numerical modeling of the motion of the nanosatellite with a small mass-aerodynamic asymmetry relative to the center of mass. The aim of the study is to obtain new reliable approximate analytical estimates of perturbations of the angle of attack of a nanosatellite passing through resonance at angles of attack of not more than 0.5π. By using the stationary phase method, the authors were able to investigate a discontinuous perturbation in the angle of attack of a nanosatellite passing through a resonance with two different nanosatellite designs. Comparison of the results of the numerical modeling and new approximate analytical estimates of the perturbation of the angle of attack confirms the reliability of the said estimates.
Pateras, Konstantinos; Nikolakopoulos, Stavros; Mavridis, Dimitris; Roes, Kit C B
2018-03-01
When a meta-analysis consists of a few small trials that report zero events, accounting for heterogeneity in the (interval) estimation of the overall effect is challenging. Typically, meta-analytical methods are predefined; in practice, the data pose restrictions that lead to deviations from the pre-planned analysis, such as the presence of zero events in at least one study arm. We aimed to explore the behaviour of heterogeneity estimators when estimating the overall effect across different levels of sparsity of events. We performed a simulation study consisting of two evaluations: an overall comparison of estimators unconditional on the number of observed zero cells, and an additional comparison conditioning on the number of observed zero cells. Estimators that were modestly robust when (interval) estimating the overall treatment effect across a range of heterogeneity assumptions were the Sidik-Jonkman, Hartung-Makambi and improved Paule-Mandel estimators. The relative performance of estimators did not materially differ between making a predefined or data-driven choice. Our investigations confirmed that heterogeneity in such settings cannot be estimated reliably. Estimators whose performance depends strongly on the presence of heterogeneity should be avoided. The choice of estimator does not need to depend on whether or not zero cells are observed.
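All of the estimators compared here plug a between-study variance estimate tau^2 into an inverse-variance weighted mean. As a minimal illustration of that pipeline, the following sketch implements the classical DerSimonian-Laird estimator; the Sidik-Jonkman, Hartung-Makambi and Paule-Mandel variants studied in the paper replace only the tau^2 step.

```python
import numpy as np

def dersimonian_laird_tau2(y, v):
    """Classical DerSimonian-Laird estimate of between-study variance tau^2.

    y: per-study effect estimates (e.g. log odds ratios)
    v: per-study within-study variances
    """
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)    # fixed-effect pooled mean
    q = np.sum(w * (y - mu_fe) ** 2)     # Cochran's Q statistic
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (q - (k - 1)) / c)   # truncated at zero

def random_effects_mean(y, v, tau2):
    """Inverse-variance weighted overall effect under a random-effects model."""
    w = 1.0 / (v + tau2)
    mu = np.sum(w * y) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return mu, se
```

The truncation at zero is exactly why sparse-event settings are hard: with a few small trials, Q is noisy and tau^2 frequently collapses to the boundary, which is the behaviour the simulation study probes.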
Quantifying the measurement uncertainty of results from environmental analytical methods.
Moser, J; Wegscheider, W; Sperka-Gottlieb, C
2001-07-01
The Eurachem-CITAC Guide Quantifying Uncertainty in Analytical Measurement was put into practice in a public laboratory devoted to environmental analytical measurements. In doing so due regard was given to the provisions of ISO 17025 and an attempt was made to base the entire estimation of measurement uncertainty on available data from the literature or from previously performed validation studies. Most environmental analytical procedures laid down in national or international standards are the result of cooperative efforts and put into effect as part of a compromise between all parties involved, public and private, that also encompasses environmental standards and statutory limits. Central to many procedures is the focus on the measurement of environmental effects rather than on individual chemical species. In this situation it is particularly important to understand the measurement process well enough to produce a realistic uncertainty statement. Environmental analytical methods will be examined as far as necessary, but reference will also be made to analytical methods in general and to physical measurement methods where appropriate. This paper describes ways and means of quantifying uncertainty for frequently practised methods of environmental analysis. It will be shown that operationally defined measurands are no obstacle to the estimation process as described in the Eurachem/CITAC Guide if it is accepted that the dominating component of uncertainty comes from the actual practice of the method as a reproducibility standard deviation.
Generic Vehicle Speed Models Based On Traffic Simulation: Development and Application (Revision #1)
DOT National Transportation Integrated Search
1994-12-15
The findings of a research project to develop new methods of estimating speeds for inclusion in the Highway Performance Monitoring System (HPMS) Analytical Process are summarized. The paper focuses on the effects of traffic conditions excluding incid...
Analytic Methods for Adjusting Subjective Rating Schemes
1976-06-01
individual performance. The approach developed here is a variant of the classical linear regression model. Specifically, it is proposed that ... values of y and X. Moreover, this difference is generally independent of sample size, so that LS estimates are different from ML estimates at ... observations. However, as T → ∞ the limit (4.10) is satisfied, and LS and ML estimates are equivalent. A practical problem in applying
Werner, S.L.; Johnson, S.M.
1994-01-01
As part of its primary responsibility concerning water as a national resource, the U.S. Geological Survey collects and analyzes samples of ground water and surface water to determine water quality. This report describes the method used since June 1987 to determine selected total-recoverable carbamate pesticides present in water samples. High-performance liquid chromatography is used to separate N-methyl carbamates, N-methyl carbamoyloximes, and an N-phenyl carbamate which have been extracted from water and concentrated in dichloromethane. Analytes, surrogate compounds, and reference compounds are eluted from the analytical column within 25 minutes. Two modes of analyte detection are used: (1) a photodiode-array detector measures and records ultraviolet-absorbance profiles, and (2) a fluorescence detector measures and records fluorescence from an analyte derivative produced when analyte hydrolysis is combined with chemical derivatization. Analytes are identified and confirmed in a three-stage process by use of chromatographic retention time, ultraviolet (UV) spectral comparison, and derivatization/fluorescence detection. Quantitative results are based on the integration of single-wavelength UV-absorbance chromatograms and on comparison with calibration curves derived from external analyte standards that are run with samples as part of an instrumental analytical sequence. Estimated method detection limits vary for each analyte, depending on the sample matrix conditions, and range from 0.5 microgram per liter to as low as 0.01 microgram per liter. Reporting levels for all analytes have been set at 0.5 microgram per liter for this method. Corrections on the basis of percentage recoveries of analytes spiked into distilled water are not applied to values calculated for analyte concentration in samples. These values for analyte concentrations instead indicate the quantities recovered by the method from a particular sample matrix.
Aguirre-Urreta, Miguel I; Ellis, Michael E; Sun, Wenying
2012-03-01
This research investigates the performance of a proportion-based approach to meta-analytic moderator estimation through a series of Monte Carlo simulations. This approach is most useful when the moderating potential of a categorical variable has not been recognized in primary research and thus heterogeneous groups have been pooled together as a single sample. Alternative scenarios representing different distributions of group proportions are examined along with varying numbers of studies, subjects per study, and correlation combinations. Our results suggest that the approach is largely unbiased in its estimation of the magnitude of between-group differences and performs well with regard to statistical power and type I error. In particular, the average percentage bias of the estimated correlation for the reference group is positive and largely negligible, in the 0.5-1.8% range; the average percentage bias of the difference between correlations is also minimal, in the -0.1 to 1.2% range. Further analysis also suggests that both biases decrease as the magnitude of the underlying difference increases, as the number of subjects in each simulated primary study increases, and as the number of simulated studies in each meta-analysis increases. The bias was most evident when the number of subjects and the number of studies were the smallest (80 and 36, respectively). A sensitivity analysis that examines performance in scenarios down to 12 studies and 40 primary subjects is also included. This research is the first that thoroughly examines the adequacy of the proportion-based approach. Copyright © 2012 John Wiley & Sons, Ltd.
Analytical Description of the H/D Exchange Kinetic of Macromolecule.
Kostyukevich, Yury; Kononikhin, Alexey; Popov, Igor; Nikolaev, Eugene
2018-04-17
We present the accurate analytical solution obtained for the system of rate equations describing the isotope exchange process for molecules containing an arbitrary number of equivalent labile atoms. The exact solution was obtained using Mathematica 7.0 software and has the form of a time-dependent Gaussian distribution. For the case when the forward exchange considerably overlaps the back exchange, it is possible to estimate the activation energy of the reaction by obtaining a temperature dependence of the reaction degree. Using a previously developed approach for performing H/D exchange directly in the ESI source, we have estimated the activation energies for ions with different functional groups; they were found to be in the range 0.04-0.3 eV. Since the value of the activation energy depends on the type of functional group, the developed approach has potential analytical applications for determining the types of functional groups in complex mixtures, such as petroleum, humic substances, bio-oil, and so on.
Kochunov, Peter; Jahanshad, Neda; Sprooten, Emma; Nichols, Thomas E; Mandl, René C; Almasy, Laura; Booth, Tom; Brouwer, Rachel M; Curran, Joanne E; de Zubicaray, Greig I; Dimitrova, Rali; Duggirala, Ravi; Fox, Peter T; Hong, L Elliot; Landman, Bennett A; Lemaitre, Hervé; Lopez, Lorna M; Martin, Nicholas G; McMahon, Katie L; Mitchell, Braxton D; Olvera, Rene L; Peterson, Charles P; Starr, John M; Sussmann, Jessika E; Toga, Arthur W; Wardlaw, Joanna M; Wright, Margaret J; Wright, Susan N; Bastin, Mark E; McIntosh, Andrew M; Boomsma, Dorret I; Kahn, René S; den Braber, Anouk; de Geus, Eco J C; Deary, Ian J; Hulshoff Pol, Hilleke E; Williamson, Douglas E; Blangero, John; van 't Ent, Dennis; Thompson, Paul M; Glahn, David C
2014-07-15
Combining datasets across independent studies can boost statistical power by increasing the numbers of observations and can achieve more accurate estimates of effect sizes. This is especially important for genetic studies where a large number of observations are required to obtain sufficient power to detect and replicate genetic effects. There is a need to develop and evaluate methods for joint-analytical analyses of rich datasets collected in imaging genetics studies. The ENIGMA-DTI consortium is developing and evaluating approaches for obtaining pooled estimates of heritability through meta- and mega-genetic analytical approaches, to estimate the general additive genetic contributions to the intersubject variance in fractional anisotropy (FA) measured from diffusion tensor imaging (DTI). We used the ENIGMA-DTI data harmonization protocol for uniform processing of DTI data from multiple sites. We evaluated this protocol in five family-based cohorts providing data from a total of 2248 children and adults (ages: 9-85) collected with various imaging protocols. We used the imaging genetics analysis tool, SOLAR-Eclipse, to combine twin and family data from Dutch, Australian and Mexican-American cohorts into one large "mega-family". We showed that heritability estimates may vary from one cohort to another. We used two meta-analytical (the sample-size and standard-error weighted) approaches and a mega-genetic analysis to calculate heritability estimates across populations. We performed leave-one-out analysis of the joint estimates of heritability, removing a different cohort each time to understand the estimate variability. Overall, meta- and mega-genetic analyses of heritability produced robust estimates of heritability. Copyright © 2014 Elsevier Inc. All rights reserved.
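The sample-size and standard-error weighted meta-analytical approaches described above both reduce to a weighted average of per-cohort heritability estimates. A minimal sketch with illustrative numbers, not the ENIGMA-DTI cohort values:

```python
import numpy as np

def pool_heritability(h2, se=None, n=None):
    """Pool per-cohort heritability estimates.

    Standard-error weighting (inverse-variance) is used when SEs are
    available; otherwise cohorts are weighted by sample size.
    """
    h2 = np.asarray(h2, dtype=float)
    if se is not None:
        w = 1.0 / np.asarray(se, dtype=float) ** 2
    else:
        w = np.asarray(n, dtype=float)
    return float(np.sum(w * h2) / np.sum(w))

# Illustrative: two cohorts reporting FA heritability of 0.5 and 0.7.
h2_n  = pool_heritability([0.5, 0.7], n=[100, 300])   # sample-size weighted
h2_se = pool_heritability([0.5, 0.7], se=[0.1, 0.1])  # standard-error weighted
```

A mega-analysis differs in that raw phenotypes are pooled and heritability is estimated once on the combined sample, rather than combining per-cohort estimates.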
Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua
2018-01-01
Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel Test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method to published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinsky, Benjamin A.; Sahoo, Malaya K.; Sandlund, Johanna
2015-11-12
The recently developed Xpert® Ebola Assay is a novel nucleic acid amplification test for simplified detection of Ebola virus (EBOV) in whole blood and buccal swab samples. The assay targets sequences in two EBOV genes, lowering the risk for new variants to escape detection in the test. The objective of this report is to present analytical characteristics of the Xpert® Ebola Assay on whole blood samples. Our study evaluated the assay’s analytical sensitivity, analytical specificity, inclusivity and exclusivity performance in whole blood specimens. EBOV RNA, inactivated EBOV, and infectious EBOV were used as targets. The dynamic range of the assay, the inactivation of virus, and specimen stability were also evaluated. The lower limit of detection (LoD) for the assay using inactivated virus was estimated to be 73 copies/mL (95% CI: 51–97 copies/mL). The LoD for infectious virus was estimated to be 1 plaque-forming unit/mL, and for RNA to be 232 copies/mL (95% CI 163–302 copies/mL). The assay correctly identified five different Ebola viruses, Yambuku-Mayinga, Makona-C07, Yambuku-Ecran, Gabon-Ilembe, and Kikwit-956210, and correctly excluded all non-EBOV isolates tested. The conditions used by Xpert® Ebola for inactivation of infectious virus reduced EBOV titer by ≥6 logs. In conclusion, we found the Xpert® Ebola Assay to have high analytical sensitivity and specificity for the detection of EBOV in whole blood. It offers ease of use, fast turnaround time, and remote monitoring. The test has an efficient viral inactivation protocol, fulfills inclusivity and exclusivity criteria, and has specimen stability characteristics consistent with the need for decentralized testing. The simplicity of the assay should enable testing in a wide variety of laboratory settings, including remote laboratories that are not capable of performing highly complex nucleic acid amplification tests, and during outbreaks where time to detection is critical.
Empirically Optimized Flow Cytometric Immunoassay Validates Ambient Analyte Theory
Parpia, Zaheer A.; Kelso, David M.
2010-01-01
Ekins’ ambient analyte theory predicts, counterintuitively, that an immunoassay’s limit of detection can be improved by reducing the amount of capture antibody. In addition, it also anticipates that results should be insensitive to the volume of sample as well as to the amount of capture antibody added. The objective of this study is to empirically validate all of the performance characteristics predicted by Ekins’ theory. Flow cytometric analysis was used to detect binding between a fluorescent ligand and capture microparticles, since it can directly measure fractional occupancy, the primary response variable in ambient analyte theory. After experimentally determining ambient analyte conditions, comparisons were carried out between ambient and non-ambient assays in terms of their signal strengths, limits of detection, and their sensitivity to variations in reaction volume and number of particles. The critical number of binding sites required for an assay to be in the ambient analyte region was estimated to be 0.1·V·Kd. As predicted, such assays exhibited superior signal/noise levels and limits of detection, and were not affected by variations in sample volume and number of binding sites. When the signal detected measures fractional occupancy, ambient analyte theory is an excellent guide to developing assays with superior performance characteristics. PMID:20152793
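Ekins' central prediction can be reproduced with a one-equation equilibrium model: solving the bimolecular mass balance shows that once the binding-site concentration falls well below Kd, fractional occupancy collapses to a0/(a0 + Kd), independent of the amount of antibody and of sample volume. A sketch with hypothetical concentrations, not the study's actual reagents:

```python
import math

def fractional_occupancy(a0, s0, kd):
    """Equilibrium fractional occupancy of capture sites.

    a0: total analyte concentration, s0: total binding-site concentration,
    kd: dissociation constant (all in the same units, e.g. mol/L).
    Solves the mass-balance quadratic for the bound complex B:
        B^2 - (a0 + s0 + kd)*B + a0*s0 = 0
    """
    b = a0 + s0 + kd
    bound = (b - math.sqrt(b * b - 4.0 * a0 * s0)) / 2.0
    return bound / s0

kd, a0 = 1e-9, 1e-10
f_ambient = fractional_occupancy(a0, 1e-12, kd)  # sites << Kd: ambient region
f_excess  = fractional_occupancy(a0, 1e-7, kd)   # excess antibody depletes analyte
```

In the ambient regime f_ambient sits within a fraction of a percent of a0/(a0 + Kd); with a hundredfold excess of sites over Kd, per-site occupancy (and hence signal per particle) drops by roughly two orders of magnitude, which is the counterintuitive effect the study verifies.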
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Tabak, D.
1979-01-01
The study involves the bank of filters approach to analytical redundancy management since this is amenable to microelectronic implementation. Attention is given to a study of the UD factorized filter to determine if it gives more accurate estimates than the standard Kalman filter when data processing word size is reduced. It is reported that, as the word size is reduced, the effect of modeling error dominates the filter performance of the two filters. However, the UD filter is shown to maintain a slight advantage in tracking performance. It is concluded that because of the UD filter's stability in the serial processing mode, it remains the leading candidate for microelectronic implementation.
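For context, the UD filter propagates the covariance in factored form P = U D U^T, with U unit upper-triangular and D diagonal, which avoids the differencing of large numbers that degrades the standard Kalman covariance update at short word lengths. A sketch of the factorization itself (Bierman's construction; the filter's measurement and time updates, which operate directly on U and D, are omitted):

```python
import numpy as np

def ud_factorize(P):
    """Factor a symmetric positive-definite matrix as P = U @ D @ U.T,
    with U unit upper-triangular and D diagonal (the UD form used by
    Bierman-style square-root filters)."""
    n = P.shape[0]
    P = P.astype(float).copy()
    U = np.eye(n)
    d = np.zeros(n)
    for j in range(n - 1, 0, -1):
        d[j] = P[j, j]
        for k in range(j):
            beta = P[k, j]
            U[k, j] = beta / d[j]
            for i in range(k + 1):
                P[i, k] -= beta * U[i, j]   # Schur-complement update
    d[0] = P[0, 0]
    return U, np.diag(d)
```

Because D carries the magnitudes and U stays near the identity, the factors remain well-scaled as the data word size shrinks, consistent with the tracking advantage reported above.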
Pilot testing of SHRP 2 reliability data and analytical products: Washington. [supporting datasets
DOT National Transportation Integrated Search
2014-01-01
The Washington site used the reliability guide from Project L02, analysis tools for forecasting reliability and estimating impacts from Project L07, Project L08, and Project C11 as well as the guide on reliability performance measures from the Projec...
Performance monitoring can boost turboexpander efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
McIntire, R.
1982-07-05
Focuses on the turboexpander/refrigeration system's radial expander and radial compressor. Explains that radial expander efficiency depends on mass flow rate, inlet pressure, inlet temperature, discharge pressure, gas composition, and shaft speed. Discusses quantifying the performance of the separate components over a range of operating conditions; estimating the increase in performance associated with any hardware change; and developing an analytical (computer) model of the entire system by using the performance curves of individual components. Emphasizes antisurge control and modifying Q/N (flow rate/shaft speed).
Comparison of three methods for wind turbine capacity factor estimation.
Ditkovich, Y; Kuperman, A
2014-01-01
Three approaches to calculating the capacity factor of fixed speed wind turbines are reviewed and compared using a case study. The first "quasiexact" approach utilizes discrete raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. The second "analytic" approach employs a continuous probability distribution function fitted to the wind data, as well as a continuous turbine power curve resulting from double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while being an approximation, can be solved analytically, thus providing valuable insight into the aspects affecting the capacity factor. Moreover, several other figures of merit for wind turbine performance may be derived from the analytical approach. The third "approximate" approach, valid in the case of Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, requiring only the rated power and rotor diameter of the turbine. It is shown that the results obtained by the three approaches are very close, reinforcing the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation.
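The "quasiexact" route is easy to sketch: integrate the power curve against the wind-speed density and divide by rated power. A minimal version assuming Rayleigh winds and a hypothetical 2 MW power curve (cut-in 4 m/s, rated 12 m/s, cut-out 25 m/s; none of these values are from the paper's case study):

```python
import numpy as np

def p_curve(v):
    """Hypothetical power curve (kW): cubic rise between cut-in and rated."""
    if v < 4.0 or v > 25.0:
        return 0.0
    if v >= 12.0:
        return 2000.0
    return 2000.0 * (v**3 - 4.0**3) / (12.0**3 - 4.0**3)

def capacity_factor(power_curve, v_mean):
    """CF = E[P(v)] / P_rated with a Rayleigh wind-speed density
    f(v) = (pi*v / (2*v_mean^2)) * exp(-pi*v^2 / (4*v_mean^2))."""
    v = np.linspace(0.0, 40.0, 4001)
    dv = v[1] - v[0]
    f = (np.pi * v / (2.0 * v_mean**2)) * np.exp(-np.pi * v**2 / (4.0 * v_mean**2))
    p = np.array([power_curve(s) for s in v])
    return float(np.sum(p * f) * dv / p.max())

cf = capacity_factor(p_curve, 8.0)  # long-term mean wind speed of 8 m/s
```

The "analytic" approach in the paper replaces the numerical quadrature with a closed-form integral of the fitted polynomial power curve against the fitted distribution.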
Structure of the classical scrape-off layer of a tokamak
NASA Astrophysics Data System (ADS)
Rozhansky, V.; Kaveeva, E.; Senichenkov, I.; Vekshina, E.
2018-03-01
The structure of the scrape-off layer (SOL) of a tokamak with little or no turbulent transport is analyzed. The analytical estimates of the density and electron temperature fall-off lengths of the SOL are put forward. It is demonstrated that the SOL width could be of the order of the ion poloidal gyroradius, as suggested in Goldston (2012 Nuclear Fusion 52 013009). The analytical results are supported by the results of the 2D simulations of the edge plasma with reduced transport coefficients performed by SOLPS-ITER transport code.
Chen, Jun; Quan, Wenting; Cui, Tingwei
2015-01-01
In this study, two sample semi-analytical algorithms and one new unified multi-band semi-analytical algorithm (UMSA) for estimating chlorophyll-a (Chla) concentration were constructed by specifying optimal wavelengths. The three algorithms, namely the three-band semi-analytical algorithm (TSA), the four-band semi-analytical algorithm (FSA), and the UMSA algorithm, were calibrated and validated against a dataset collected in the Yellow River Estuary between September 1 and 10, 2009. Comparison of the accuracy of the TSA, FSA, and UMSA algorithms showed that the UMSA algorithm performed better than the other two. Using the UMSA algorithm to retrieve Chla concentration in the Yellow River Estuary decreased the NRMSE (normalized root mean square error) by 25.54% compared with the FSA algorithm, and by 29.66% compared with the TSA algorithm. These are significant improvements upon previous methods. Additionally, the study revealed that the TSA and FSA algorithms are merely specific forms of the UMSA algorithm. Owing to the special form of the UMSA algorithm, if the same bands were used for both the TSA and UMSA algorithms, or the FSA and UMSA algorithms, the UMSA algorithm would theoretically produce superior results. Thus, good results may also be produced if the UMSA algorithm were applied to predict Chla concentration for the datasets of Gitelson et al. (2008) and Le et al. (2009).
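For reference, the three-band architecture underlying the TSA is the widely used red/NIR form Chla ∝ [Rrs(λ1)^-1 - Rrs(λ2)^-1] · Rrs(λ3), with Chla then obtained from a linear calibration of the index against in-situ data. A sketch using conventional band positions (665, 708 and 753 nm); the study's optimal wavelengths for the Yellow River Estuary may differ:

```python
import numpy as np

def three_band_index(rrs, wl, l1=665.0, l2=708.0, l3=753.0):
    """Three-band index (1/Rrs(l1) - 1/Rrs(l2)) * Rrs(l3).

    rrs: remote-sensing reflectance spectrum; wl: matching wavelengths (nm).
    Band positions are conventional red/NIR choices, not the paper's
    optimized wavelengths.
    """
    r = lambda l: float(np.interp(l, wl, rrs))
    return (1.0 / r(l1) - 1.0 / r(l2)) * r(l3)
```

The four-band and unified forms add further reciprocal-reflectance terms; the paper's point is that TSA and FSA fall out of the UMSA form as special cases.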
Performance characterization of material identification systems
NASA Astrophysics Data System (ADS)
Brown, Christopher D.; Green, Robert L.
2006-10-01
In recent years a number of analytical devices have been proposed and marketed specifically to enable field-based material identification. Technologies reliant on mass, near- and mid-infrared, and Raman spectroscopies are available today, and other platforms are imminent. These systems tend to perform material recognition based on an on-board library of material signatures. While figures of merit for traditional quantitative analytical sensors are broadly established (e.g., SNR, selectivity, sensitivity, limit of detection/decision), measures of performance for material identification systems have not been systematically discussed. In this paper we present an approach to performance characterization similar in spirit to ROC curves, but including elements of precision-recall curves and specialized for the intended use of material identification systems. Important experimental considerations are discussed, including study design, sources of bias, uncertainty estimation, and cross-validation, and the approach as a whole is illustrated using a commercially available handheld Raman material identification system.
Statistical Power in Meta-Analysis
ERIC Educational Resources Information Center
Liu, Jin
2015-01-01
Statistical power is important in a meta-analysis study, although few studies have examined the performance of simulated power in meta-analysis. The purpose of this study is to inform researchers about statistical power estimation on two sample mean difference test under different situations: (1) the discrepancy between the analytical power and…
Johner, S A; Boeing, H; Thamm, M; Remer, T
2015-12-01
The assessment of urinary excretion of specific nutrients (e.g. iodine, sodium) is frequently used to monitor a population's nutrient status. However, when only spot urines are available, there is always a risk of hydration-status-dependent dilution effects and related misinterpretation. The aim of the present study was to establish mean values of 24-h creatinine excretion widely applicable for an appropriate estimation of 24-h excretion rates of analytes from spot urines in adults. Twenty-four-hour creatinine excretion from the formerly representative cross-sectional German VERA Study (n=1463, 20-79 years old) was analysed. Linear regression analysis was performed to identify the most important factors influencing creatinine excretion. In a subsample of the German DONALD Study (n=176, 20-29 years old), the applicability of the 24-h creatinine excretion values of VERA for the estimation of 24-h sodium and iodine excretion from urinary concentration measurements was tested. In the VERA Study, mean 24-h creatinine excretion was 15.4 mmol per day in men and 11.1 mmol per day in women, significantly dependent on sex, age, body weight and body mass index. Based on the established 24-h creatinine excretion values, mean 24-h iodine and sodium excretions could be estimated from the respective analyte/creatinine concentrations, with average deviations <10% compared with the actual 24-h means. The present mean values of 24-h creatinine excretion are suggested as a useful tool to derive realistic hydration-status-independent average 24-h excretion rates from urinary analyte/creatinine ratios. We propose to apply these creatinine reference means routinely in biomarker-based studies aiming at characterizing the nutrient or metabolite status of adult populations by simply measuring metabolite/creatinine ratios in spot urines.
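The proposed use of the reference means is a one-line calculation: scale the spot-urine analyte/creatinine ratio by the sex-specific mean 24-h creatinine excretion. A sketch using the VERA means quoted above (15.4 mmol/day for men, 11.1 mmol/day for women); the example concentrations are hypothetical:

```python
def estimate_24h_excretion(analyte_conc, creatinine_conc, sex):
    """Estimate 24-h analyte excretion from a single spot urine.

    analyte_conc:    analyte concentration in the spot urine (e.g. mmol/L)
    creatinine_conc: creatinine concentration in the same sample (mmol/L)
    sex:             "m" or "f"
    Returns analyte excretion in (analyte units)/day, assuming the
    population mean creatinine excretion applies to the individual.
    """
    mean_creatinine_24h = {"m": 15.4, "f": 11.1}  # mmol/day, VERA Study means
    return (analyte_conc / creatinine_conc) * mean_creatinine_24h[sex]

# Hypothetical spot urine: sodium 200 mmol/L, creatinine 10 mmol/L, male subject
sodium_24h = estimate_24h_excretion(200.0, 10.0, "m")  # mmol/day
```

Dividing by creatinine cancels the dilution of the individual sample, which is exactly the hydration-status independence the authors are after; the residual error comes from how far the individual's true creatinine excretion deviates from the population mean.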
NASA Astrophysics Data System (ADS)
Trauth, N.; Schmidt, C.; Munz, M.
2016-12-01
Heat as a natural tracer to quantify water fluxes between groundwater and surface water has evolved into a standard hydrological method. Typically, time series of temperatures in the surface water and in the sediment are observed and subsequently evaluated with a vertical 1D representation of heat transport by advection and dispersion. Several analytical solutions, as well as their implementations in user-friendly software, exist for estimating water fluxes from the observed temperatures. Analytical solutions are easy to implement, but assumptions about the boundary conditions have to be made a priori, e.g. a sinusoidal upper temperature boundary. Numerical models offer more flexibility and can handle temperature data characterized by irregular variations, such as storm-event induced temperature changes, which cannot readily be incorporated into analytical solutions. This also reduces the effort of data preprocessing, such as extracting the diurnal temperature variation. We developed FLUX-BOT, a software tool to estimate water FLUXes Based On Temperatures. FLUX-BOT is a numerical code written in MATLAB that calculates vertical water fluxes in saturated sediments, based on the inversion of measured temperature time series observed at multiple depths. It applies a cell-centered Crank-Nicolson implicit finite difference scheme to solve the one-dimensional heat advection-conduction equation. Besides its core inverse numerical routines, FLUX-BOT includes functions for visualizing the results and for performing uncertainty analysis. We provide applications of FLUX-BOT to generic as well as to measured temperature data to demonstrate its performance.
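The forward model FLUX-BOT inverts is the 1D heat advection-conduction equation; a single Crank-Nicolson step of that model can be sketched as follows (a Python illustration of the named scheme, not FLUX-BOT's MATLAB code; kappa_e denotes the effective thermal diffusivity and v the thermal front velocity tied to the water flux):

```python
import numpy as np

def crank_nicolson_step(T, dt, dz, kappa_e, v):
    """One Crank-Nicolson step of dT/dt = kappa_e*d2T/dz2 - v*dT/dz
    on a uniform grid, holding the two boundary temperatures fixed
    (Dirichlet conditions, e.g. the shallowest and deepest sensors)."""
    n = len(T)
    r = kappa_e * dt / dz**2   # diffusion number
    c = v * dt / (2.0 * dz)    # half the advective Courant number
    A = np.eye(n)              # implicit (new time level) side
    B = np.eye(n)              # explicit (old time level) side
    for i in range(1, n - 1):  # central differences, averaged over both levels
        A[i, i - 1], A[i, i], A[i, i + 1] = -0.5 * (r + c), 1.0 + r, -0.5 * (r - c)
        B[i, i - 1], B[i, i], B[i, i + 1] = 0.5 * (r + c), 1.0 - r, 0.5 * (r - c)
    return np.linalg.solve(A, B @ T)
```

An inversion in the spirit of FLUX-BOT would then adjust the flux-related velocity until temperatures modeled at the interior sensor depths match the measured time series; a banded solver would replace the dense solve in production code.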
Milky Way mass and potential recovery using tidal streams in a realistic halo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonaca, Ana; Geha, Marla; Küpper, Andreas H. W.
2014-11-01
We present a new method for determining the Galactic gravitational potential based on forward modeling of tidal stellar streams. We use this method to test the performance of smooth and static analytic potentials in representing realistic dark matter halos, which have substructure and are continually evolving by accretion. Our FAST-FORWARD method uses a Markov Chain Monte Carlo algorithm to compare, in six-dimensional phase space, an 'observed' stream to models created in trial analytic potentials. We analyze a large sample of streams that evolved in the Via Lactea II (VL2) simulation, which represents a realistic Galactic halo potential. The recovered potential parameters are in agreement with the best fit to the global, present-day VL2 potential. However, merely assuming an analytic potential limits the dark matter halo mass measurement to an accuracy of 5%-20%, depending on the choice of analytic parameterization. Collectively, the mass estimates using streams from our sample reach this fundamental limit, but individually they can be highly biased. Individual streams can both under- and overestimate the mass, and the bias is progressively worse for those with smaller perigalacticons, motivating the search for tidal streams at galactocentric distances larger than 70 kpc. We estimate that the assumption of a static and smooth dark matter potential in modeling of the GD-1- and Pal5-like streams introduces an error of up to 50% in the Milky Way mass estimates.
NASA Astrophysics Data System (ADS)
Nawar, Said; Buddenbaum, Henning; Hill, Joachim
2014-05-01
A rapid and inexpensive soil analytical technique is needed for soil quality assessment and accurate mapping. This study investigated a method for improved estimation of soil clay (SC) and organic matter (OM) using reflectance spectroscopy. Seventy soil samples were collected from the Sinai Peninsula in Egypt to relate soil clay and organic matter to the soil spectra. Soil samples were scanned with an Analytical Spectral Devices (ASD) spectrometer (350-2500 nm). Three spectral formats were used in the calibration models derived from the spectra and the soil properties: (1) original reflectance spectra (OR), (2) first-derivative spectra smoothed using the Savitzky-Golay technique (FD-SG) and (3) continuum-removed reflectance (CR). Partial least-squares regression (PLSR) models using the CR of the 400-2500 nm spectral region resulted in R2 = 0.76 and 0.57, and RPD = 2.1 and 1.5 for estimating SC and OM, respectively, indicating better performance than that obtained using OR and FD-SG. The multivariate adaptive regression splines (MARS) calibration model with the CR spectra resulted in an improved performance (R2 = 0.89 and 0.83, RPD = 3.1 and 2.4) for estimating SC and OM, respectively. The results show that the MARS models have great potential for estimating SC and OM compared with PLSR models. The results obtained in this study have potential value in the field of soil spectroscopy because they can be applied directly to the mapping of soil properties using remote sensing imagery under arid environmental conditions. Key Words: soil clay, organic matter, PLSR, MARS, reflectance spectroscopy.
Shi, Chenguang; Wang, Fei; Salous, Sana; Zhou, Jianjiang
2017-10-18
In this study, the modified Cramér-Rao lower bounds (MCRLBs) on the joint estimation of target position and velocity are investigated for a universal mobile telecommunication system (UMTS)-based passive multistatic radar system with antenna arrays. First, we analyze the log-likelihood function of the received signal for a complex Gaussian extended target. Then, due to the non-deterministic transmitted data symbols, analytical closed-form expressions of the MCRLBs on the Cartesian coordinates of target position and velocity are derived for a multistatic radar system with Nt UMTS-based transmit stations of Lt antenna elements and Nr receive stations of Lr antenna elements. With the aid of numerical simulations, it is shown that increasing the number of receiving elements in each receive station can reduce the estimation errors. In addition, it is demonstrated that the MCRLB is a function not only of the signal-to-noise ratio (SNR), the number of receiving antenna elements and the properties of the transmitted UMTS signals, but also of the relative geometric configuration between the target and the multistatic radar system. The analytical expressions for the MCRLB open up a new dimension for passive multistatic radar systems by aiding the optimal placement of receive stations to improve target parameter estimation performance.
Wang, Fei; Salous, Sana; Zhou, Jianjiang
2017-01-01
In this study, the modified Cramér-Rao lower bounds (MCRLBs) on the joint estimation of target position and velocity are investigated for a universal mobile telecommunication system (UMTS)-based passive multistatic radar system with antenna arrays. First, we analyze the log-likelihood function of the received signal for a complex Gaussian extended target. Then, due to the non-deterministic transmitted data symbols, analytical closed-form expressions of the MCRLBs on the Cartesian coordinates of target position and velocity are derived for a multistatic radar system with Nt UMTS-based transmit stations of Lt antenna elements and Nr receive stations of Lr antenna elements. With the aid of numerical simulations, it is shown that increasing the number of receiving elements in each receive station can reduce the estimation errors. In addition, it is demonstrated that the MCRLB is a function not only of the signal-to-noise ratio (SNR), the number of receiving antenna elements and the properties of the transmitted UMTS signals, but also of the relative geometric configuration between the target and the multistatic radar system. The analytical expressions for the MCRLB open up a new dimension for passive multistatic radar systems by aiding the optimal placement of receive stations to improve target parameter estimation performance. PMID:29057805
Development of PARMA: PHITS-based analytical radiation model in the atmosphere.
Sato, Tatsuhiko; Yasuda, Hiroshi; Niita, Koji; Endo, Akira; Sihver, Lembit
2008-08-01
Estimation of cosmic-ray spectra in the atmosphere has been essential for the evaluation of aviation doses. We therefore calculated these spectra by performing Monte Carlo simulation of cosmic-ray propagation in the atmosphere using the PHITS code. The accuracy of the simulation was well verified by experimental data taken under various conditions, even near sea level. Based on a comprehensive analysis of the simulation results, we proposed an analytical model for estimating the cosmic-ray spectra of neutrons, protons, helium ions, muons, electrons, positrons and photons applicable to any location in the atmosphere at altitudes below 20 km. Our model, named PARMA, enables us to calculate the cosmic radiation doses rapidly with a precision equivalent to that of the Monte Carlo simulation, which requires much more computational time. With these properties, PARMA is capable of improving the accuracy and efficiency of the cosmic-ray exposure dose estimations not only for aircrews but also for the public on the ground.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sato, Tatsuhiko; Satoh, Daiki; Endo, Akira
Estimation of cosmic-ray spectra in the atmosphere has been an essential issue in the evaluation of aircrew doses. We therefore developed an analytical model that can predict the terrestrial neutron, proton, He nucleus, muon, electron, positron and photon spectra at altitudes below 20 km, based on Monte Carlo simulations of cosmic-ray propagation in the atmosphere performed with the PHITS code. The model was designated PARMA. In order to examine the accuracy of PARMA in terms of neutron dose estimation, we measured the neutron dose rates at altitudes between 20 and 10,400 m, using our dose monitor DARWIN mounted on an aircraft. Excellent agreement was observed between the measured dose rates and the corresponding data calculated by PARMA coupled with the fluence-to-dose conversion coefficients, indicating that the model is applicable to route-dose calculations.
Estimation of the diagnostic threshold accounting for decision costs and sampling uncertainty.
Skaltsa, Konstantina; Jover, Lluís; Carrasco, Josep Lluís
2010-10-01
Medical diagnostic tests are used to classify subjects as non-diseased or diseased. The classification rule usually consists of classifying subjects using the values of a continuous marker that is dichotomised by means of a threshold. Here, the optimum threshold estimate is found by minimising a cost function that accounts for both decision costs and sampling uncertainty. The cost function is optimised either analytically in a normal-distribution setting or empirically in a distribution-free setting when the underlying probability distributions of diseased and non-diseased subjects are unknown. Inference on the threshold estimates is based on approximate analytical standard errors and bootstrap-based approaches. The performance of the proposed methodology is assessed by means of a simulation study, and the sample size required for a given confidence-interval precision and sample-size ratio is also calculated. Finally, a case example based on previously published data concerning the diagnosis of Alzheimer's patients is provided in order to illustrate the procedure.
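The normal-theory branch of such a cost minimisation can be sketched as follows (a generic illustration with our own parameterisation, not the authors' exact cost function): subjects with marker values above the threshold are called diseased, and the expected cost weighs false negatives and false positives by prevalence.

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def optimal_threshold(mu0, s0, mu1, s1, prev, c_fn=1.0, c_fp=1.0):
    """Threshold minimising expected decision cost for two normal populations.

    mu0, s0 describe the non-diseased marker distribution; mu1, s1 the
    diseased one; prev is the disease prevalence. A fine grid search keeps
    the sketch dependency-free.
    """
    grid = np.linspace(min(mu0, mu1) - 4 * max(s0, s1),
                       max(mu0, mu1) + 4 * max(s0, s1), 20001)
    fn = np.array([norm_cdf(t, mu1, s1) for t in grid])        # missed diseased
    fp = 1.0 - np.array([norm_cdf(t, mu0, s0) for t in grid])  # false alarms
    cost = c_fn * prev * fn + c_fp * (1.0 - prev) * fp
    return grid[np.argmin(cost)]
```

With equal costs, equal variances and 50% prevalence the optimum falls at the midpoint between the two means; raising the false-negative cost pushes the threshold down, as expected.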
Austin, Peter C.; van Klaveren, David; Vergouwe, Yvonne; Nieboer, Daan; Lee, Douglas S.; Steyerberg, Ewout W.
2017-01-01
Objective: Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting: We illustrated different analytic methods for validation using a sample of 14,857 patients hospitalized with heart failure at 90 hospitals in two distinct time periods. Bootstrap resampling was used to assess internal validity. Meta-analytic methods were used to assess geographic transportability. Each hospital was used once as a validation sample, with the remaining hospitals used for model derivation. Hospital-specific estimates of discrimination (c-statistic) and calibration (calibration intercepts and slopes) were pooled using random-effects meta-analysis methods. I2 statistics and prediction interval width quantified geographic transportability. Temporal transportability was assessed using patients from the earlier period for model derivation and patients from the later period for model validation. Results: Estimates of reproducibility, pooled hospital-specific performance, and temporal transportability were on average very similar, with c-statistics of 0.75. Between-hospital variation was moderate according to I2 statistics and prediction intervals for c-statistics. Conclusion: This study illustrates how performance of prediction models can be assessed in settings with multicenter data at different time periods. PMID:27262237
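The random-effects pooling of hospital-specific estimates can be illustrated with a standard DerSimonian-Laird computation (a generic sketch; the paper's exact meta-analytic choices may differ):

```python
import numpy as np

def dersimonian_laird(theta, var):
    """Random-effects pooling of per-hospital estimates (e.g. c-statistics).

    theta : per-hospital point estimates; var : their sampling variances.
    Returns the pooled estimate, between-hospital variance tau^2, and I^2 (%).
    """
    theta, var = np.asarray(theta, float), np.asarray(var, float)
    w = 1.0 / var
    theta_fixed = np.sum(w * theta) / np.sum(w)          # fixed-effect pool
    Q = np.sum(w * (theta - theta_fixed) ** 2)           # heterogeneity statistic
    k = len(theta)
    denom = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / denom)               # method-of-moments tau^2
    w_star = 1.0 / (var + tau2)
    pooled = np.sum(w_star * theta) / np.sum(w_star)
    I2 = max(0.0, (Q - (k - 1)) / Q) * 100.0 if Q > 0 else 0.0
    return pooled, tau2, I2
```

Homogeneous inputs give tau^2 = 0 and I^2 = 0; spreading the hospital estimates out produces a positive between-hospital variance.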
Analysis of high-aspect-ratio jet-flap wings of arbitrary geometry
NASA Technical Reports Server (NTRS)
Lissaman, P. B. S.
1973-01-01
An analytical technique to compute the performance of an arbitrary jet-flapped wing is developed. The solution technique is based on the method of Maskell and Spence in which the well-known lifting-line approach is coupled with an auxiliary equation providing the extra function needed in jet-flap theory. The present method is generalized to handle straight, uncambered wings of arbitrary planform, twist, and blowing (including unsymmetrical cases). An analytical procedure is developed for continuous variations in the above geometric data with special functions to exactly treat discontinuities in any of the geometric and blowing data. A rational theory for the effect of finite wing thickness is introduced as well as simplified concepts of effective aspect ratio for rapid estimation of performance.
Analytical and numerical performance models of a Heisenberg Vortex Tube
NASA Astrophysics Data System (ADS)
Bunge, C. D.; Cavender, K. A.; Matveev, K. I.; Leachman, J. W.
2017-12-01
Analytical and numerical investigations of a Heisenberg Vortex Tube (HVT) are performed to estimate the cooling potential with cryogenic hydrogen. The Ranque-Hilsch Vortex Tube (RHVT) is a device that tangentially injects a compressed fluid stream into a cylindrical geometry to promote enthalpy streaming and temperature separation between inner and outer flows. The HVT is the result of lining the inside of a RHVT with a hydrogen catalyst. This is the first concept to utilize the endothermic heat of para-orthohydrogen conversion to aid primary cooling. A review of 1st order vortex tube models available in the literature is presented and adapted to accommodate cryogenic hydrogen properties. These first order model predictions are compared with 2-D axisymmetric Computational Fluid Dynamics (CFD) simulations.
Sliding mode control of magnetic suspensions for precision pointing and tracking applications
NASA Technical Reports Server (NTRS)
Misovec, Kathleen M.; Flynn, Frederick J.; Johnson, Bruce G.; Hedrick, J. Karl
1991-01-01
A recently developed nonlinear control method, sliding mode control, is examined as a means of advancing the achievable performance of space-based precision pointing and tracking systems that use nonlinear magnetic actuators. Analytic results indicate that sliding mode control improves performance compared to linear control approaches. In order to realize these performance improvements, precise knowledge of the plant is required. Additionally, the interaction of an estimating scheme and the sliding mode controller has not been fully examined in the literature. Estimation schemes were designed for use with this sliding mode controller that do not seriously degrade system performance. The authors designed and built a laboratory testbed to determine the feasibility of utilizing sliding mode control in these types of applications. Using this testbed, experimental verification of the authors' analyses is ongoing.
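The flavour of sliding mode control can be conveyed with a toy double-integrator regulator (an illustrative sketch with arbitrary gains, not the magnetic-suspension controller of the paper):

```python
def simulate_smc(x0, v0, lam=2.0, K=5.0, dt=1e-3, t_end=5.0):
    """Sliding-mode regulation of a double integrator x'' = u toward the origin.

    Sliding surface s = v + lam*x; the control u = -lam*v - K*sign(s)
    forces s -> 0 in finite time, after which x decays as exp(-lam*t)
    along the surface. Gains are illustrative only.
    """
    x, v = x0, v0
    for _ in range(int(t_end / dt)):
        s = v + lam * x
        sgn = (s > 0) - (s < 0)   # sign(s) without extra imports
        u = -lam * v - K * sgn
        v += u * dt               # explicit Euler integration
        x += v * dt
    return x, v
```

The discontinuous sign term produces the well-known chattering in discrete time, which is one reason the interaction with estimation schemes, as studied in the paper, matters in practice.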
Simultaneous Mean and Covariance Correction Filter for Orbit Estimation.
Wang, Xiaoxu; Pan, Quan; Ding, Zhengtao; Ma, Zhengya
2018-05-05
This paper proposes a novel filter design, from an identification viewpoint rather than the conventional nonlinear estimation schemes (NESs), to improve the performance of orbit state estimation for a space target. First, a nonlinear perturbation is viewed or modeled as an unknown input (UI) coupled with the orbit state, to avoid the intractable nonlinear perturbation integral (INPI) required by NESs. Second, a simultaneous mean and covariance correction filter (SMCCF), based on a two-stage expectation maximization (EM) framework, is proposed to simply and analytically fit or identify the first two moments (FTM) of the perturbation (viewed as UI), instead of directly computing the INPI as in NESs. Orbit estimation performance is greatly improved by utilizing the fitted UI-FTM to simultaneously correct the state estimate and its covariance. Third, provided that enough information is mined, SMCCF should outperform existing NESs and standard identification algorithms (which view the UI as a constant independent of the state and utilize only the identified UI mean to correct the state estimate, regardless of its covariance), since it further incorporates the useful covariance information in addition to the mean of the UI. Finally, our simulations demonstrate the superior performance of SMCCF via an orbit estimation example.
Extending birthday paradox theory to estimate the number of tags in RFID systems.
Shakiba, Masoud; Singh, Mandeep Jit; Sundararajan, Elankovan; Zavvari, Azam; Islam, Mohammad Tariqul
2014-01-01
The main objective of Radio Frequency Identification systems is to provide fast identification for tagged objects. However, there is always a chance of collision, when tags transmit their data to the reader simultaneously. Collision is a time-consuming event that reduces the performance of RFID systems. Consequently, several anti-collision algorithms have been proposed in the literature. Dynamic Framed Slotted ALOHA (DFSA) is one of the most popular of these algorithms. DFSA dynamically modifies the frame size based on the number of tags. Since the real number of tags is unknown, it needs to be estimated. Therefore, an accurate tag estimation method has an important role in increasing the efficiency and overall performance of the tag identification process. In this paper, we propose a novel estimation technique for DFSA anti-collision algorithms that applies birthday paradox theory to estimate the number of tags accurately. The analytical discussion and simulation results prove that the proposed method increases the accuracy of tag estimation and, consequently, outperforms previous schemes.
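The estimation idea can be sketched independently of the paper's specific birthday-paradox derivation: with n tags and F slots, the expected numbers of empty, singleton and collided slots follow from the binomial occupancy model, and n can be recovered by matching the observed slot counts (function names are ours):

```python
def expected_slots(n, frame_size):
    """Expected empty, singleton and collided slot counts for n tags."""
    p_empty = (1.0 - 1.0 / frame_size) ** n
    p_single = n / frame_size * (1.0 - 1.0 / frame_size) ** (n - 1)
    return (frame_size * p_empty,
            frame_size * p_single,
            frame_size * (1.0 - p_empty - p_single))

def estimate_tags(empty, single, collided, n_max=4096):
    """Tag count whose expected slot counts best match the observed ones."""
    frame_size = empty + single + collided
    best_n, best_err = 1, float("inf")
    for n in range(1, n_max + 1):
        e, s, c = expected_slots(n, frame_size)
        err = (e - empty) ** 2 + (s - single) ** 2 + (c - collided) ** 2
        if err < best_err:
            best_n, best_err = n, err
    return best_n
```

Feeding the estimator the exact expected counts for a known tag population recovers that population, which is the consistency property any such estimator needs before DFSA can resize its frames.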
Extending Birthday Paradox Theory to Estimate the Number of Tags in RFID Systems
Shakiba, Masoud; Singh, Mandeep Jit; Sundararajan, Elankovan; Zavvari, Azam; Islam, Mohammad Tariqul
2014-01-01
The main objective of Radio Frequency Identification systems is to provide fast identification for tagged objects. However, there is always a chance of collision, when tags transmit their data to the reader simultaneously. Collision is a time-consuming event that reduces the performance of RFID systems. Consequently, several anti-collision algorithms have been proposed in the literature. Dynamic Framed Slotted ALOHA (DFSA) is one of the most popular of these algorithms. DFSA dynamically modifies the frame size based on the number of tags. Since the real number of tags is unknown, it needs to be estimated. Therefore, an accurate tag estimation method has an important role in increasing the efficiency and overall performance of the tag identification process. In this paper, we propose a novel estimation technique for DFSA anti-collision algorithms that applies birthday paradox theory to estimate the number of tags accurately. The analytical discussion and simulation results prove that the proposed method increases the accuracy of tag estimation and, consequently, outperforms previous schemes. PMID:24752285
Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà
2010-03-01
Correlation constrained multivariate curve resolution-alternating least-squares is shown to be a feasible method for processing first-order instrumental data and achieve analyte quantitation in the presence of unexpected interferences. Both for simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in calibration samples, the proposed multivariate calibration approach including the correlation constraint facilitates the achievement of the so-called second-order advantage for the analyte of interest, which is known to be present for more complex higher-order richer instrumental data. The proposed method is tested using a simulated data set and two experimental data systems, one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.
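The alternating least-squares core of multivariate curve resolution can be sketched as follows (non-negativity by clipping only; the correlation constraint that distinguishes the paper's method is omitted):

```python
import numpy as np

def mcr_als(D, S0, n_iter=20):
    """Bare-bones MCR-ALS: factor D ~= C @ S.T with non-negativity clipping.

    D  : data matrix (samples x wavelengths)
    S0 : initial spectral estimates (wavelengths x components)
    """
    S = S0.copy()
    for _ in range(n_iter):
        # Alternate least-squares updates of concentrations and spectra.
        C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0.0, None)
        S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0.0, None)
    return C, S
```

On noiseless low-rank data with a reasonable spectral initialisation, the alternation reconstructs the data matrix essentially exactly; the correlation constraint of the paper additionally ties one concentration profile to reference values, which is what yields quantitation.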
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ragan, Eric D; Goodall, John R
2014-01-01
Provenance tools can help capture and represent the history of analytic processes. In addition to supporting analytic performance, provenance tools can be used to support memory of the process and communication of the steps to others. Objective evaluation methods are needed to evaluate how well provenance tools support analyst s memory and communication of analytic processes. In this paper, we present several methods for the evaluation of process memory, and we discuss the advantages and limitations of each. We discuss methods for determining a baseline process for comparison, and we describe various methods that can be used to elicit processmore » recall, step ordering, and time estimations. Additionally, we discuss methods for conducting quantitative and qualitative analyses of process memory. By organizing possible memory evaluation methods and providing a meta-analysis of the potential benefits and drawbacks of different approaches, this paper can inform study design and encourage objective evaluation of process memory and communication.« less
Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel
2016-01-01
Ordinary differential equation models have become a widespread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization. PMID:27243005
Rosenblatt, Marcus; Timmer, Jens; Kaschek, Daniel
2016-01-01
Ordinary differential equation models have become a widespread approach to analyze dynamical systems and understand underlying mechanisms. Model parameters are often unknown and have to be estimated from experimental data, e.g., by maximum-likelihood estimation. In particular, models of biological systems contain a large number of parameters. To reduce the dimensionality of the parameter space, steady-state information is incorporated in the parameter estimation process. For non-linear models, analytical steady-state calculation typically leads to higher-order polynomial equations for which no closed-form solutions can be obtained. This can be circumvented by solving the steady-state equations for kinetic parameters, which results in a linear equation system with comparatively simple solutions. At the same time multiplicity of steady-state solutions is avoided, which otherwise is problematic for optimization. When solved for kinetic parameters, however, steady-state constraints tend to become negative for particular model specifications, thus generating new types of optimization problems. Here, we present an algorithm based on graph theory that derives non-negative, analytical steady-state expressions by stepwise removal of cyclic dependencies between dynamical variables. The algorithm avoids multiple steady-state solutions by construction. We show that our method is applicable to most common classes of biochemical reaction networks containing inhibition terms, mass-action and Hill-type kinetic equations. Comparing the performance of parameter estimation for different analytical and numerical methods of incorporating steady-state information, we show that our approach is especially well-tailored to guarantee a high success rate of optimization.
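The key trick, solving the steady-state equations for kinetic parameters so that they become linear, can be shown on a toy linear chain (our own minimal example, not the paper's graph-theoretic algorithm):

```python
def steady_state_rates(influx, steady_concs):
    """Kinetic rates for a linear chain 0 -> X1 -> X2 -> ... -> 0.

    At steady state every flux equals the influx, so each rate k_i solves
    the linear equation k_i * Xi_ss = influx; no polynomial root-finding
    in the concentrations is needed.
    """
    return [influx / x for x in steady_concs]

def chain_derivatives(influx, concs, rates):
    """Time derivatives of the chain species, to check the steady state."""
    flux_in = [influx] + [k * x for k, x in zip(rates[:-1], concs[:-1])]
    flux_out = [k * x for k, x in zip(rates, concs)]
    return [fi - fo for fi, fo in zip(flux_in, flux_out)]
```

Plugging the solved rates back into the ODE right-hand side gives identically zero derivatives, confirming the steady-state constraint is satisfied by construction.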
Relating Vegetation Aerodynamic Roughness Length to Interferometric SAR Measurements
NASA Technical Reports Server (NTRS)
Saatchi, Sassan; Rodriquez, Ernesto
1998-01-01
In this paper, we investigate the feasibility of estimating the aerodynamic roughness parameter from interferometric SAR (INSAR) measurements. The relation between the interferometric correlation and the rms height of the surface is presented analytically. Model simulations performed over realistic canopy parameters obtained from field measurements in a boreal forest environment demonstrate the capability of INSAR measurements for estimating and mapping surface roughness lengths over forests and/or other vegetation types. The procedure for estimating this parameter over boreal forests using INSAR data is discussed, and the possibility of extending the methodology to tropical forests is examined.
NASA Astrophysics Data System (ADS)
Kanoglu, U.; Wronna, M.; Baptista, M. A.; Miranda, J. M. A.
2017-12-01
The one-dimensional analytical runup theory in combination with near-shore synthetic waveforms is a promising tool for tsunami rapid early warning systems. Its application in realistic cases with complex bathymetry and initial wave conditions from inverse modelling has shown that maximum runup values can be estimated reasonably well. In this study we generate simplified bathymetric domains that resemble realistic near-shore features. We investigate the accuracy of the analytical runup formulae under variation of fault source parameters and near-shore bathymetric features. To do this we systematically vary the fault plane parameters to compute the initial tsunami wave condition. Subsequently, we use the initial conditions to run the numerical tsunami model on a coupled system of four nested grids and compare the results to the analytical estimates. Variation of the dip angle of the fault plane showed that analytical estimates differ by less than 10% for angles of 5-45 degrees in a simple bathymetric domain. These results show that the use of analytical formulae for fast runup estimates is a very promising approach in a simple bathymetric domain and might be implemented in hazard mapping and early warning.
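For concreteness, a classical example of such an analytical runup formula is Synolakis' solitary-wave runup law for a plane beach; whether the authors use this particular law is an assumption on our part:

```python
import math

def runup_solitary(H, d, beta_deg):
    """Analytical runup law R = 2.831 * d * sqrt(cot(beta)) * (H/d)**(5/4)
    (Synolakis 1987) for a solitary wave on a plane beach.

    H : offshore wave height, d : offshore depth, beta_deg : beach slope angle.
    """
    cot_beta = 1.0 / math.tan(math.radians(beta_deg))
    return 2.831 * d * math.sqrt(cot_beta) * (H / d) ** 1.25
```

The closed form makes the sensitivities immediate: runup grows as H^{5/4} at fixed depth and decreases on steeper beaches, which is the kind of fast evaluation an early-warning pipeline can afford.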
NASA Astrophysics Data System (ADS)
Bu, Xiangwei; Wu, Xiaoyan; Huang, Jiaqi; Wei, Daozhi
2016-11-01
This paper investigates the design of a novel estimation-free prescribed performance non-affine control strategy for the longitudinal dynamics of an air-breathing hypersonic vehicle (AHV) via back-stepping. The proposed control scheme is capable of guaranteeing tracking errors of velocity, altitude, flight-path angle, pitch angle and pitch rate with prescribed performance. By prescribed performance, we mean that the tracking error is limited to a predefined arbitrarily small residual set, with a convergence rate no less than a certain constant, exhibiting a maximum overshoot less than a given value. Unlike traditional back-stepping designs, there is no need for an affine model in this paper. Moreover, both the tedious analytic and numerical computations of time derivatives of virtual control laws are completely avoided. In contrast to estimation-based strategies, the presented estimation-free controller possesses much lower computational costs, while successfully eliminating the potential problem of parameter drifting. Owing to its independence from an accurate AHV model, the studied methodology exhibits excellent robustness against system uncertainties. Finally, simulation results from a fully nonlinear model clarify and verify the design.
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
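The core of such a probabilistic failure assessment, propagating parameter uncertainty through a failure model by Monte Carlo, can be sketched with a hypothetical stress/strength failure mode (distribution parameters invented for illustration):

```python
import math
import random

def failure_probability(n_samples=100_000, seed=1):
    """Monte Carlo estimate of P(load > strength) for one failure mode.

    Load and strength are lognormal with invented parameters; in a real PFA
    the distributions would encode test experience and modeling uncertainty.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        strength = math.exp(rng.gauss(1.0, 0.1))   # hypothetical strength model
        load = math.exp(rng.gauss(0.6, 0.2))       # hypothetical load model
        failures += load > strength
    return failures / n_samples
```

For these assumed parameters the analytic value is 1 - Phi(0.4 / sqrt(0.05)), roughly 0.037, so the sampled estimate should land near there; the PFA framework would further update such distributions with observed test and flight outcomes.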
NASA Technical Reports Server (NTRS)
Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.
1992-01-01
An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.
Optimization of a coaxial electron cyclotron resonance plasma thruster with an analytical model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cannat, F., E-mail: felix.cannat@onera.fr, E-mail: felix.cannat@gmail.com; Lafleur, T.; Laboratoire de Physique des Plasmas, CNRS, Sorbonne Universites, UPMC Univ Paris 06, Univ Paris-Sud, Ecole Polytechnique, 91128 Palaiseau
2015-05-15
A new cathodeless plasma thruster currently under development at Onera is presented and characterized experimentally and analytically. The coaxial thruster consists of a microwave antenna immersed in a magnetic field, which allows electron heating via cyclotron resonance. The magnetic field diverges at the thruster exit and forms a nozzle that accelerates the quasi-neutral plasma to generate a thrust. Different thruster configurations are tested, and in particular, the influence of the source diameter on the thruster performance is investigated. At microwave powers of about 30 W and a xenon flow rate of 0.1 mg/s (1 SCCM), a mass utilization of 60% and a thrust of 1 mN are estimated based on angular electrostatic probe measurements performed downstream of the thruster in the exhaust plume. Results are found to be in fair agreement with a recent analytical helicon thruster model that has been adapted for the coaxial geometry used here.
Mechanical behavior of regular open-cell porous biomaterials made of diamond lattice unit cells.
Ahmadi, S M; Campoli, G; Amin Yavari, S; Sajadi, B; Wauthle, R; Schrooten, J; Weinans, H; Zadpoor, A A
2014-06-01
Cellular structures with highly controlled micro-architectures are promising materials for orthopedic applications that require bone-substituting biomaterials or implants. The availability of additive manufacturing techniques has enabled manufacturing of biomaterials made of one or multiple types of unit cells. The diamond lattice unit cell is one of the relatively new types of unit cells that are used in manufacturing of regular porous biomaterials. As opposed to many other types of unit cells, there is currently no analytical solution that could be used for prediction of the mechanical properties of cellular structures made of the diamond lattice unit cells. In this paper, we present new analytical solutions and closed-form relationships for predicting the elastic modulus, Poisson׳s ratio, critical buckling load, and yield (plateau) stress of cellular structures made of the diamond lattice unit cell. The mechanical properties predicted using the analytical solutions are compared with those obtained using finite element models. A number of solid and porous titanium (Ti6Al4V) specimens were manufactured using selective laser melting. A series of experiments were then performed to determine the mechanical properties of the matrix material and cellular structures. The experimentally measured mechanical properties were compared with those obtained using analytical solutions and finite element (FE) models. It has been shown that, for small apparent density values, the mechanical properties obtained using analytical and numerical solutions are in agreement with each other and with experimental observations. The properties estimated using an analytical solution based on the Euler-Bernoulli theory markedly deviated from experimental results for large apparent density values. The mechanical properties estimated using FE models and another analytical solution based on the Timoshenko beam theory better matched the experimental observations. Copyright © 2014 Elsevier Ltd. 
All rights reserved.
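The gap the abstract reports between Euler-Bernoulli and Timoshenko predictions at large apparent density comes from shear deformation, which grows with strut thickness. A minimal sketch (illustrative material constants and strut dimensions, not the paper's lattice model) compares cantilever tip deflections under the two beam theories:

```python
import math

def euler_deflection(F, L, E, I):
    """Tip deflection of an end-loaded cantilever: bending only (Euler-Bernoulli)."""
    return F * L**3 / (3 * E * I)

def timoshenko_deflection(F, L, E, I, G, A, kappa=0.9):
    """Tip deflection including shear deformation (Timoshenko).
    kappa is the shear correction factor (~0.9 for a circular section)."""
    return F * L**3 / (3 * E * I) + F * L / (kappa * G * A)

# Hypothetical Ti6Al4V-like strut properties
E = 110e9                     # Young's modulus, Pa
nu = 0.34
G = E / (2 * (1 + nu))        # shear modulus, Pa
F, L = 10.0, 1e-3             # load (N) and strut length (m)
for r_over_L in (0.05, 0.2):  # slender vs stubby strut
    r = r_over_L * L
    A = math.pi * r**2
    I = math.pi * r**4 / 4
    d_eb = euler_deflection(F, L, E, I)
    d_ti = timoshenko_deflection(F, L, E, I, G, A)
    print(f"r/L={r_over_L}: shear adds {100 * (d_ti / d_eb - 1):.1f}% deflection")
```

For the slender strut the two theories nearly coincide; for the stubby strut (higher apparent density) the shear term contributes several percent, which is consistent with the trend the abstract describes.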
Determining the Carbon-Carbon Distance in an Organic Molecule with a Ruler
ERIC Educational Resources Information Center
Simoni, Jose A.; Tubino, Matthieu; Ricchi, Reinaldo Alberto, Jr.
2004-01-01
The procedure to estimate the carbon-carbon bond distance in the naphthalene molecule is described. The procedure is easily performed and can be done either at home or in the classroom, with the restriction that the mass of the naphthalene must be determined using an analytical or a precise balance.
COBRA ATD multispectral camera response model
NASA Astrophysics Data System (ADS)
Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.
2000-08-01
A new multispectral camera response model has been developed in support of the US Marine Corps (USMC) Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) Program. This analytical model accurately estimates the response of five Xybion intensified IMC 201 multispectral cameras used for COBRA ATD airborne minefield detection. The camera model design is based on a series of camera response curves generated through optical laboratory tests performed by the Naval Surface Warfare Center, Dahlgren Division, Coastal Systems Station (CSS). Data-fitting techniques were applied to these measured response curves to obtain nonlinear expressions that estimate digitized camera output as a function of irradiance, intensifier gain, and exposure. The COBRA camera response model proved to be very accurate, stable over a wide range of parameters, analytically invertible, and relatively simple. This practical camera model was subsequently incorporated into the COBRA sensor performance evaluation and computational tools for research analysis modeling toolbox in order to enhance COBRA modeling and simulation capabilities. Details of the camera model design and comparisons of modeled response to measured experimental data are presented.
Human performance modeling for system of systems analytics: combat performance-shaping factors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawton, Craig R.; Miller, Dwight Peter
The US military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives. To support this goal, Sandia National Laboratories (SNL) has undertaken a program of HPM as an integral augmentation to its system-of-systems (SoS) analytics capabilities. The previous effort, reported in SAND2005-6569, evaluated the effects of soldier cognitive fatigue on SoS performance. The current effort began with a very broad survey of performance-shaping factors (PSFs) that might affect soldiers' performance in combat situations. The work included consideration of three different approaches to cognition modeling and how appropriate they would be for application to SoS analytics. The bulk of this report categorizes 47 PSFs into three groups (internal, external, and task-related) and provides brief descriptions of how each affects combat performance, according to the literature. The PSFs were then assembled into a matrix with 22 representative military tasks and assigned one of four levels of estimated negative impact on task performance, based on the literature. Blank versions of the matrix were then sent to two ex-military subject-matter experts to be filled out based on their personal experiences. Data analysis was performed to identify the consensus most influential PSFs. Results indicate that combat-related injury, cognitive fatigue, inadequate training, physical fatigue, thirst, stress, poor perceptual processing, and presence of chemical agents are among the PSFs with the most negative impact on combat performance.
Development of a Risk-Based Comparison Methodology of Carbon Capture Technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engel, David W.; Dalton, Angela C.; Dale, Crystal
2014-06-01
Given the varying degrees of maturity among existing carbon capture (CC) technology alternatives, an understanding of the inherent technical and financial risk and uncertainty associated with these competing technologies is requisite to the success of carbon capture as a viable solution to the greenhouse gas emission challenge. The availability of tools and capabilities to conduct rigorous, risk-based technology comparisons is thus highly desirable for directing valuable resources toward the technology option(s) with a high return on investment, superior carbon capture performance, and minimum risk. To address this research need, we introduce a novel risk-based technology comparison method supported by an integrated multi-domain risk model set to estimate risks related to technological maturity, technical performance, and profitability. Through a comparison between solid sorbent and liquid solvent systems, we illustrate the feasibility of estimating risk and quantifying uncertainty in a single domain (modular analytical capability) as well as across multiple risk dimensions (coupled analytical capability) for comparison. This method brings technological maturity and performance to bear on profitability projections, and carries risk and uncertainty modeling across domains via inter-model sharing of parameters, distributions, and input/output. The integration of the models facilitates multidimensional technology comparisons within a common probabilistic risk analysis framework. This approach and model set can equip potential technology adopters with the necessary computational capabilities to make risk-informed decisions about CC technology investment. The method and modeling effort can also be extended to other industries where robust tools and analytical capabilities are currently lacking for evaluating nascent technologies.
Pinsky, Benjamin A.; Sahoo, Malaya K.; Sandlund, Johanna; Kleman, Marika; Kulkarni, Medha; Grufman, Per; Nygren, Malin; Kwiatkowski, Robert; Baron, Ellen Jo; Tenover, Fred; Denison, Blake; Higuchi, Russell; Van Atta, Reuel; Beer, Neil Reginald; Carrillo, Alda Celena; Naraghi-Arani, Pejman; Mire, Chad E.; Ranadheera, Charlene; Grolla, Allen; Lagerqvist, Nina; Persing, David H.
2015-01-01
Background The recently developed Xpert® Ebola Assay is a novel nucleic acid amplification test for simplified detection of Ebola virus (EBOV) in whole blood and buccal swab samples. The assay targets sequences in two EBOV genes, lowering the risk for new variants to escape detection in the test. The objective of this report is to present analytical characteristics of the Xpert® Ebola Assay on whole blood samples. Methods and Findings This study evaluated the assay’s analytical sensitivity, analytical specificity, inclusivity and exclusivity performance in whole blood specimens. EBOV RNA, inactivated EBOV, and infectious EBOV were used as targets. The dynamic range of the assay, the inactivation of virus, and specimen stability were also evaluated. The lower limit of detection (LoD) for the assay using inactivated virus was estimated to be 73 copies/mL (95% CI: 51–97 copies/mL). The LoD for infectious virus was estimated to be 1 plaque-forming unit/mL, and for RNA to be 232 copies/mL (95% CI 163–302 copies/mL). The assay correctly identified five different Ebola viruses, Yambuku-Mayinga, Makona-C07, Yambuku-Ecran, Gabon-Ilembe, and Kikwit-956210, and correctly excluded all non-EBOV isolates tested. The conditions used by Xpert® Ebola for inactivation of infectious virus reduced EBOV titer by ≥6 logs. Conclusion In summary, we found the Xpert® Ebola Assay to have high analytical sensitivity and specificity for the detection of EBOV in whole blood. It offers ease of use, fast turnaround time, and remote monitoring. The test has an efficient viral inactivation protocol, fulfills inclusivity and exclusivity criteria, and has specimen stability characteristics consistent with the need for decentralized testing. The simplicity of the assay should enable testing in a wide variety of laboratory settings, including remote laboratories that are not capable of performing highly complex nucleic acid amplification tests, and during outbreaks where time to detection is critical. 
PMID:26562786
NASA Astrophysics Data System (ADS)
Ege, Kerem; Roozen, N. B.; Leclère, Quentin; Rinaldi, Renaud G.
2018-07-01
In the context of aeronautics, automotive and construction applications, the design of light multilayer plates with optimized vibroacoustic damping and isolation performance remains a major industrial challenge and a hot topic of research. This paper focuses on the vibrational behavior of three-layered sandwich composite plates in a broad-band frequency range. Several aspects are studied through measurement techniques and analytical modelling of a steel/polymer/steel sandwich plate system. A contactless measurement of the velocity field of the plates using a scanning laser vibrometer is performed, from which the equivalent single-layer complex rigidity (apparent bending stiffness and apparent damping) in the mid/high frequency range is estimated. The results are combined with low/mid frequency estimations obtained with a high-resolution modal analysis method, so that the frequency-dependent equivalent Young's modulus and equivalent loss factor of the composite plate are identified for the whole [40 Hz-20 kHz] frequency band. The results are in very good agreement with an equivalent single-layer analytical model based on wave propagation analysis (model of Guyader). The comparison with this model allows identification of the frequency-dependent complex modulus of the polymer core layer through inverse resolution. Dynamic mechanical analysis measurements are also performed on the polymer layer alone and compared with the values obtained through the inverse method. Again, good agreement between these two estimations over the broad-band frequency range demonstrates the validity of the approach.
MRMPlus: an open source quality control and assessment tool for SRM/MRM assay development.
Aiyetan, Paul; Thomas, Stefani N; Zhang, Zhen; Zhang, Hui
2015-12-12
Selected and multiple reaction monitoring involves monitoring a multiplexed assay of proteotypic peptides and associated transitions in mass spectrometry runs. To establish peptides and associated transitions as stable, quantifiable, and reproducible representatives of proteins of interest, experimental and analytical validation is required. However, inadequate and disparate analytical tools and validation methods predispose assay performance measures to errors and inconsistencies. Implemented as a freely available, open-source tool in the platform-independent Java programming language, MRMPlus computes analytical measures as recommended recently by the Clinical Proteomics Tumor Analysis Consortium Assay Development Working Group for "Tier 2" assays - that is, non-clinical assays sufficient to measure changes due to both biological and experimental perturbations. Computed measures include: limit of detection, lower limit of quantification, linearity, carry-over, partial validation of specificity, and upper limit of quantification. MRMPlus streamlines the assay development analytical workflow and therefore minimizes error predisposition. MRMPlus may also be used for performance estimation for targeted assays not described by the Assay Development Working Group. MRMPlus' source code and compiled binaries can be freely downloaded from https://bitbucket.org/paiyetan/mrmplusgui and https://bitbucket.org/paiyetan/mrmplusgui/downloads, respectively.
Yelland, Lisa N; Salter, Amy B; Ryan, Philip
2011-10-15
Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
Analytical formulation of impulsive collision avoidance dynamics
NASA Astrophysics Data System (ADS)
Bombardelli, Claudio
2014-02-01
The paper deals with the problem of impulsive collision avoidance between two colliding objects in three dimensions and assuming elliptical Keplerian orbits. Closed-form analytical expressions are provided that accurately predict the relative dynamics of the two bodies in the encounter b-plane following an impulsive delta-V manoeuvre performed by one object at a given orbit location prior to the impact and with a generic three-dimensional orientation. After verifying the accuracy of the analytical expressions for different orbital eccentricities and encounter geometries the manoeuvre direction that maximises the miss distance is obtained numerically as a function of the arc length separation between the manoeuvre point and the predicted collision point. The provided formulas can be used for high-accuracy instantaneous estimation of the outcome of a generic impulsive collision avoidance manoeuvre and its optimisation.
Trujillo-Rodríguez, María J; Yu, Honglian; Cole, William T S; Ho, Tien D; Pino, Verónica; Anderson, Jared L; Afonso, Ana M
2014-04-01
The extraction performance of four polymeric ionic liquid (PIL)-based solid-phase microextraction (SPME) coatings has been studied and compared to that of commercial SPME coatings for the extraction of 16 volatile compounds in cheeses. The analytes include 2 free fatty acids, 2 aldehydes, 2 ketones, and 10 phenols, and were determined by headspace (HS)-SPME coupled to gas chromatography (GC) with flame-ionization detection (FID). The PIL-based coatings produced by UV co-polymerization were more efficient than PIL-based coatings produced by thermal AIBN polymerization. Partition coefficients of analytes between the sample and the coating (Kfs) were estimated for all PIL-based coatings and for the commercial SPME fiber showing the best performance among those tested: carboxen-polydimethylsiloxane (CAR-PDMS). For the PIL-based fibers, the highest Kfs value (1.96 ± 0.03) was obtained for eugenol. The normalized calibration slope, which takes into account the SPME coating thickness, was also used as a simpler approximate tool to compare the nature of the coatings, with results entirely comparable to those obtained with the estimated Kfs values. The PIL-based materials obtained by UV co-polymerization containing the 1-vinyl-3-hexylimidazolium chloride IL monomer and 1,12-di(3-vinylimidazolium)dodecane dibromide IL crosslinker exhibited the best performance in the extraction of the selected analytes from cheeses. Despite a coating thickness of only 7 µm, this copolymeric sorbent coating was capable of quantifying analytes by HS-SPME in a 30 to 2000 µg L(-1) concentration range, with correlation coefficient (R) values higher than 0.9938, inter-day precision values (as relative standard deviation in %) varying from 6.1 to 20%, and detection limits down to 1.6 µg L(-1). Copyright © 2013 Elsevier B.V. All rights reserved.
A physically based analytical model of flood frequency curves
NASA Astrophysics Data System (ADS)
Basso, S.; Schirmer, M.; Botter, G.
2016-09-01
Predicting magnitude and frequency of floods is a key issue in hydrology, with implications in many fields ranging from river science and geomorphology to the insurance industry. In this paper, a novel physically based approach is proposed to estimate the recurrence intervals of seasonal flow maxima. The method links the extremal distribution of streamflows to the stochastic dynamics of daily discharge, providing an analytical expression of the seasonal flood frequency curve. The parameters involved in the formulation embody climate and landscape attributes of the contributing catchment and can be estimated from daily rainfall and streamflow data. Only one parameter, which is linked to the antecedent wetness condition in the watershed, needs to be calibrated on the observed maxima. The performance of the method is discussed through a set of applications in four rivers featuring heterogeneous daily flow regimes. The model provides reliable estimates of seasonal maximum flows in different climatic settings and is able to capture diverse shapes of flood frequency curves emerging in erratic and persistent flow regimes. The proposed method exploits experimental information on the full range of discharges experienced by rivers. As a consequence, model performances do not deteriorate when the magnitude of events with return times longer than the available sample size is estimated. The approach provides a framework for the prediction of floods based on short data series of rainfall and daily streamflows that may be especially valuable in data scarce regions of the world.
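The paper's analytical flood frequency curve is not reproduced here, but the empirical counterpart it is calibrated and validated against is simple to sketch: seasonal block maxima of a daily discharge series with Weibull plotting-position return periods (a generic construction, with a synthetic series standing in for observed streamflow):

```python
import numpy as np

def empirical_flood_frequency(daily_q, days_per_season=365):
    """Empirical seasonal flood frequency curve from a daily discharge
    series: block maxima plus Weibull plotting positions.
    Returns (return_period_in_seasons, sorted_maxima)."""
    n_seasons = len(daily_q) // days_per_season
    blocks = daily_q[:n_seasons * days_per_season].reshape(n_seasons, -1)
    maxima = np.sort(blocks.max(axis=1))          # ascending block maxima
    ranks = np.arange(1, n_seasons + 1)
    exceed_prob = 1 - ranks / (n_seasons + 1)     # Weibull plotting position
    return 1.0 / exceed_prob, maxima

# Synthetic 30-year daily streamflow (hypothetical lognormal variability)
rng = np.random.default_rng(1)
q = rng.lognormal(mean=2.0, sigma=0.8, size=30 * 365)
T, qmax = empirical_flood_frequency(q)
for t, x in zip(T[-3:], qmax[-3:]):
    print(f"return period {t:5.1f} seasons -> discharge {x:7.1f}")
```

A key limitation this illustrates is the one the abstract addresses: the empirical curve cannot say anything about return periods longer than the record, whereas a physically based analytical curve can be extrapolated from climate and landscape parameters.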
Robust Vision-Based Pose Estimation Algorithm for AN Uav with Known Gravity Vector
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2016-06-01
Accurate estimation of camera external orientation with respect to a known object is one of the central problems in photogrammetry and computer vision. In recent years this problem has been gaining increasing attention in the field of UAV autonomous flight. Such applications require real-time performance and robustness of the external orientation estimation algorithm. The accuracy of the solution is strongly dependent on the number of reference points visible in the given image. The problem has an analytical solution only if 3 or more reference points are visible. However, in limited visibility conditions it is often necessary to perform external orientation with only 2 visible reference points. In such cases a solution can be found if the direction of the gravity vector in the camera coordinate system is known. A number of algorithms for external orientation estimation for the case of 2 known reference points and a gravity vector have been developed to date. Most of these algorithms provide an analytical solution in the form of a polynomial equation that is subject to large errors for complex reference point configurations. This paper is focused on the development of a new computationally effective and robust algorithm for external orientation based on the positions of 2 known reference points and a gravity vector. The algorithm's implementation for guidance of a Parrot AR.Drone 2.0 micro-UAV is discussed. The experimental evaluation of the algorithm proved its computational efficiency and robustness against errors in reference point positions and complex configurations.
Form of prior for constrained thermodynamic processes with uncertainty
NASA Astrophysics Data System (ADS)
Aneja, Preety; Johal, Ramandeep S.
2015-05-01
We consider quasi-static thermodynamic processes with constraints, but with additional uncertainty about the control parameters. Motivated by inductive reasoning, we assign a prior distribution that provides a rational guess about likely values of the uncertain parameters. The priors are derived explicitly for both the entropy-conserving and the energy-conserving processes. The proposed form is useful when the constraint equation cannot be treated analytically. The inference is performed using spin-1/2 systems as models for heat reservoirs. Analytical results are derived in the high-temperature limit. An agreement beyond linear response is found between the estimates of thermal quantities and their optimal values obtained from extremum principles. We also seek an intuitive interpretation for the prior and the estimated value of temperature obtained therefrom. We find that the prior over temperature becomes uniform over the quantity kept conserved in the process.
Shariat, Mohammad Hassan; Gazor, Saeed; Redfearn, Damian
2016-08-01
In this paper, we study the problem of cardiac conduction velocity (CCV) estimation for sequential intracardiac mapping. We assume that the intracardiac electrograms of several cardiac sites are sequentially recorded, their activation times (ATs) are extracted, and the corresponding wavefronts are specified. The locations of the mapping catheter's electrodes and the ATs of the wavefronts are used here for the CCV estimation. We assume that the extracted ATs include some estimation errors, which we model as zero-mean white Gaussian noise with known variances. Assuming stable planar wavefront propagation, we derive the maximum likelihood CCV estimator for the case in which the synchronization times between the various recording sites are unknown. We analytically evaluate the performance of the CCV estimator and provide its mean square estimation error. Our simulation results confirm the accuracy of the proposed method and the error analysis of the proposed CCV estimator.
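The paper's maximum likelihood estimator with unknown synchronization times is not reproduced here, but under the same planar-wavefront assumption a common simpler estimator fits a plane t = a·x + b·y + c to the activation times by least squares; the conduction velocity is the reciprocal of the slowness magnitude. A minimal sketch with synthetic electrode data:

```python
import numpy as np

def planar_cv(positions, activation_times):
    """Least-squares planar wavefront fit t = a*x + b*y + c.
    (a, b) is the slowness vector: CV = 1/||(a, b)|| and the
    propagation direction is (a, b)/||(a, b)||."""
    A = np.column_stack([positions, np.ones(len(positions))])
    (a, b, c), *_ = np.linalg.lstsq(A, activation_times, rcond=None)
    slowness = np.hypot(a, b)
    return 1.0 / slowness, np.array([a, b]) / slowness

# Synthetic electrode sites with a planar wave at 0.5 mm/ms along +x
rng = np.random.default_rng(2)
pos = rng.uniform(0, 10, size=(40, 2))        # electrode positions, mm
t_true = pos[:, 0] / 0.5                      # ideal ATs, ms
t_obs = t_true + rng.normal(0, 0.1, 40)       # AT extraction noise
cv, direction = planar_cv(pos, t_obs)
print(f"estimated CV: {cv:.3f} mm/ms, direction: {direction.round(3)}")
```

This single-fit version assumes all ATs share one time reference; the point of the paper's estimator is precisely to relax that assumption for sequentially acquired sites.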
Paoloni, Angela; Alunni, Sabrina; Pelliccia, Alessandro; Pecorelli, Ivan
2016-01-01
A simple and straightforward method for the simultaneous determination of residues of 13 pesticides in honey samples (acrinathrin, bifenthrin, bromopropylate, cyhalothrin-lambda, cypermethrin, chlorfenvinphos, chlorpyrifos, coumaphos, deltamethrin, fluvalinate-tau, malathion, permethrin, and tetradifon) from different pesticide classes has been developed and validated. The analytical method involves dissolution of the honey in water and extraction of pesticide residues with n-hexane, followed by clean-up on a Florisil SPE column. The extract was evaporated and taken up in a solution of an injection internal standard (I-IS), ethion, and finally analyzed by capillary gas chromatography with electron capture detection (GC-µECD). Identification for qualitative purposes was conducted by gas chromatography with a triple quadrupole mass spectrometer (GC-MS/MS). A matrix-matched calibration curve was used for quantitative purposes by plotting the area ratio (analyte/I-IS) against concentration using a GC-µECD instrument. According to document No. SANCO/12571/2013, the method was validated by testing the following parameters: linearity, matrix effect, specificity, precision, trueness (bias), and measurement uncertainty. The analytical process was validated by analyzing blank honey samples spiked at levels equal to and greater than 0.010 mg/kg (limit of quantification). All parameters compared satisfactorily with the values established by document No. SANCO/12571/2013. The analytical performance was verified by participation in eight multi-residue proficiency tests organized by BIPEA, obtaining satisfactory z-scores in all 70 determinations. Measurement uncertainty was estimated according to the top-down approaches described in Appendix C of the SANCO document, using the within-laboratory reproducibility relative standard deviation combined with laboratory bias from the proficiency test data.
Analytical Aspects Relating to the Estimation of Carbon Filter Performance for Military Applications
2013-07-01
(Full abstract unavailable; only table-of-contents fragments were extracted. The recoverable content concerns carbon filter sorbent materials having relatively low room-temperature vapor pressures and exhibiting Type I Brunauer-Emmett-Teller (BET) isotherms, breakthrough-time relationships, and sorption behavior at various flow rates.)
ERIC Educational Resources Information Center
Galbraith, Craig S.; Merrill, Gregory B.
2015-01-01
We examine the impact of university student burnout on academic achievement. With a longitudinal sample of working undergraduate university business and economics students, we use a two-step analytical process to estimate the efficient frontiers of student productivity given inputs of labour and capital and then analyse the potential determinants…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sasser, D.W.
1978-03-01
EASI (Estimate of Adversary Sequence Interruption) is an analytical technique for measuring the effectiveness of physical protection systems. EASI Graphics is a computer graphics extension of EASI which provides a capability for performing sensitivity and trade-off analyses of the parameters of a physical protection system. This document reports on the implementation of EASI Graphics and illustrates its application with some examples.
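EASI's central output is a probability of interrupting an adversary along a path through the protection system. A minimal sketch in the spirit of that calculation (a simplified reading of the EASI logic, not the documented Sandia formulation; all probabilities, delays, and the normal response-time model are hypothetical):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def interruption_probability(p_detect, delays_after, response_time,
                             sigma=5.0, p_comm=0.95):
    """Simplified EASI-style estimate. The adversary path is a sequence of
    steps; at each step, first detection occurs with probability
    (product of prior non-detections) * p_detect[i] * p_comm, and the
    response force interrupts if the adversary's remaining task delay
    exceeds the guard response time (modelled with a normal CDF)."""
    p_interrupt = 0.0
    p_no_detect_yet = 1.0
    for p_d, t_remaining in zip(p_detect, delays_after):
        p_first_detect = p_no_detect_yet * p_d * p_comm
        p_respond = phi((t_remaining - response_time) / sigma)
        p_interrupt += p_first_detect * p_respond
        p_no_detect_yet *= (1 - p_d)
    return p_interrupt

# Hypothetical 3-step intrusion path: detection probability at each step
# and the total adversary delay remaining after that step (seconds)
p = interruption_probability(p_detect=[0.5, 0.7, 0.9],
                             delays_after=[120, 60, 20],
                             response_time=45)
print(f"probability of interruption: {p:.3f}")
```

The sensitivity and trade-off analyses that EASI Graphics provides amount to sweeping inputs like these (detection probabilities, delays, response time) and plotting the resulting interruption probability.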
The Locus analytical framework for indoor localization and tracking applications
NASA Astrophysics Data System (ADS)
Segou, Olga E.; Thomopoulos, Stelios C. A.
2015-05-01
Obtaining location information can be of paramount importance in the context of pervasive and context-aware computing applications. Many systems have been proposed to date; GPS, for example, has been proven to offer satisfactory results in outdoor areas. The increased effect of large- and small-scale fading in indoor environments, however, makes localization a challenge. This is particularly reflected in the multitude of different systems that have been proposed in the context of indoor localization (e.g., RADAR and Cricket). The performance of such systems is often validated on vastly different test beds and conditions, making performance comparisons difficult and often irrelevant. The Locus analytical framework incorporates algorithms from multiple disciplines, such as channel modeling, non-uniform random number generation, computational geometry, localization, tracking, and probabilistic modeling, in order to provide: (a) fast and accurate signal propagation simulation, (b) fast experimentation with localization and tracking algorithms, and (c) an in-depth analysis methodology for estimating the performance limits of any Received Signal Strength localization system. Simulation results for the well-known fingerprinting and trilateration algorithms are herein presented and validated with experimental data collected in real conditions using IEEE 802.15.4 ZigBee modules. The analysis shows that the Locus framework accurately predicts the underlying distribution of the localization error and produces further estimates of the system's performance limitations (on a best-case/worst-case scenario basis).
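Of the two algorithms mentioned, trilateration is the easier to sketch: convert RSS to range with a log-distance path-loss model, then solve a linearized least-squares system. This is a generic textbook version (the path-loss constants and anchor layout are illustrative, not the Locus implementation):

```python
import numpy as np

def rss_to_distance(rss, rss0=-40.0, n=2.5):
    """Log-distance path loss: distance (m) from RSS (dBm).
    rss0 is the assumed RSS at 1 m; n is the path-loss exponent."""
    return 10 ** ((rss0 - rss) / (10 * n))

def trilaterate(anchors, distances):
    """Linearized least-squares trilateration: subtracting the first
    range equation from the rest yields a linear system in (x, y)."""
    x0, y0 = anchors[0]
    d0 = distances[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]
true_pos = np.array([3.0, 4.0])
dists = [np.hypot(*(true_pos - np.array(a))) for a in anchors]
print(trilaterate(anchors, dists))
```

With noiseless ranges the solver recovers the true position exactly; indoor fading perturbs the RSS-to-distance step, which is exactly the error source whose distribution a framework like Locus is built to predict.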
Kriston, Levente; Meister, Ramona
2014-03-01
Judging applicability (relevance) of meta-analytical findings to particular clinical decision-making situations remains challenging. We aimed to describe an evidence synthesis method that accounts for possible uncertainty regarding applicability of the evidence. We conceptualized uncertainty regarding applicability of the meta-analytical estimates to a decision-making situation as the result of uncertainty regarding applicability of the findings of the trials that were included in the meta-analysis. This trial-level applicability uncertainty can be directly assessed by the decision maker and allows for the definition of trial inclusion probabilities, which can be used to perform a probabilistic meta-analysis with unequal probability resampling of trials (adaptive meta-analysis). A case study with several fictitious decision-making scenarios was performed to demonstrate the method in practice. We present options to elicit trial inclusion probabilities and perform the calculations. The result of an adaptive meta-analysis is a frequency distribution of the estimated parameters from traditional meta-analysis that provides individually tailored information according to the specific needs and uncertainty of the decision maker. The proposed method offers a direct and formalized combination of research evidence with individual clinical expertise and may aid clinicians in specific decision-making situations. Copyright © 2014 Elsevier Inc. All rights reserved.
Sato, Tatsuhiko
2015-01-01
By extending our previously established model, here we present a new model called "PHITS-based Analytical Radiation Model in the Atmosphere (PARMA) version 3.0," which can instantaneously estimate terrestrial cosmic ray fluxes of neutrons, protons, ions with charge up to 28 (Ni), muons, electrons, positrons, and photons nearly anytime and anywhere in the Earth's atmosphere. The model comprises numerous analytical functions with parameters whose numerical values were fitted to reproduce the results of the extensive air shower (EAS) simulation performed by Particle and Heavy Ion Transport code System (PHITS). The accuracy of the EAS simulation was well verified using various experimental data, while that of PARMA3.0 was confirmed by the high R2 values of the fit. The models to be used for estimating radiation doses due to cosmic ray exposure, cosmic ray induced ionization rates, and count rates of neutron monitors were validated by investigating their capability to reproduce those quantities measured under various conditions. PARMA3.0 is available freely and is easy to use, as implemented in an open-access software program EXcel-based Program for Calculating Atmospheric Cosmic ray Spectrum (EXPACS). Because of these features, the new version of PARMA/EXPACS can be an important tool in various research fields such as geosciences, cosmic ray physics, and radiation research.
PMID:26674183
Lee, Yoojin; Callaghan, Martina F; Nagy, Zoltan
2017-01-01
In magnetic resonance imaging, precise measurement of the longitudinal relaxation time (T1) is crucial for acquiring information applicable to numerous clinical and neuroscience applications. In this work, we investigated the precision of the T1 relaxation time as measured using the variable flip angle method, with emphasis on the noise propagated from radiofrequency transmit field (B1) measurements. The analytical solution for T1 precision was derived by standard error propagation methods incorporating the noise from the three input sources: two spoiled gradient echo (SPGR) images and a B1 map. Repeated in vivo experiments were performed to estimate the total variance in T1 maps, and we compared these experimentally obtained values with the theoretical predictions to validate the established theoretical framework. Both the analytical and experimental results showed that variance in the B1 map propagated noise levels into the T1 maps comparable to either of the two SPGR images. Improving the precision of the B1 measurements significantly reduced the variance in the estimated T1 map. The variance estimated from the repeatedly measured in vivo T1 maps agreed well with the theoretically calculated variance in T1 estimates, thus validating the analytical framework for realistic in vivo experiments. We concluded that for T1 mapping experiments, the error propagated from the B1 map must be considered. Optimizing the SPGR signals while neglecting to improve the precision of the B1 map may result in grossly overestimating the precision of the estimated T1 values.
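The variable flip angle estimator the abstract analyzes is standard: the SPGR signal equation linearizes as S/sin(a) = E1 * S/tan(a) + M0*(1 - E1), so E1 (and hence T1 = -TR/ln E1) comes from a line fit, with nominal flip angles scaled by the measured B1 map. A minimal noiseless sketch (illustrative parameter values) showing why the B1 correction matters:

```python
import numpy as np

def vfa_t1(signals, flip_deg, tr_ms, b1=1.0):
    """Variable flip angle T1 estimate (ms) from SPGR signals.
    Uses the linearization S/sin(a) = E1 * S/tan(a) + M0*(1 - E1),
    with nominal flip angles scaled by the relative B1 value."""
    a = np.deg2rad(flip_deg) * b1           # B1-corrected flip angles
    y = signals / np.sin(a)
    x = signals / np.tan(a)
    slope, _ = np.polyfit(x, y, 1)          # slope estimates E1 = exp(-TR/T1)
    return -tr_ms / np.log(slope)

# Simulate SPGR signals: T1 = 1000 ms, TR = 20 ms, true relative B1 = 0.9
tr, t1_true, b1_true = 20.0, 1000.0, 0.9
e1 = np.exp(-tr / t1_true)
flips = np.array([3.0, 18.0])               # nominal flip angles (deg)
a_true = np.deg2rad(flips) * b1_true        # flip angles actually achieved
s = np.sin(a_true) * (1 - e1) / (1 - e1 * np.cos(a_true))

print(f"ignoring B1: T1 = {vfa_t1(s, flips, tr):.0f} ms")
print(f"with B1 map: T1 = {vfa_t1(s, flips, tr, b1=b1_true):.0f} ms")
```

With the correct B1 value the noiseless fit recovers T1 exactly; ignoring a 10% B1 deviation biases T1 substantially, and in real data noise in the B1 map propagates into T1 variance just as the abstract describes.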
NASA Technical Reports Server (NTRS)
Della-Corte, Christopher
2012-01-01
Foil gas bearings are a key technology in many commercial and emerging oil-free turbomachinery systems. These bearings are nonlinear and have been difficult to model analytically in terms of performance characteristics such as load capacity, power loss, stiffness, and damping. Previous investigations led to an empirically derived method to estimate load capacity, which has been a valuable tool in system development. The current work extends this tool concept to include rules for estimating stiffness and damping coefficients. It is expected that these rules will further accelerate the development and deployment of advanced oil-free machines operating on foil gas bearings.
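The empirically derived load-capacity rule that this work extends is commonly written as a product of projected area and surface speed times a load-capacity coefficient. A hedged sketch follows; the functional form and coefficient range reflect the published rule of thumb as commonly stated, but treat the specific numbers as assumptions:

```python
def foil_bearing_load_capacity(d_in, l_in, speed_krpm, coeff=1.0):
    """Rule-of-thumb foil bearing load capacity (lbf).

    coeff is the load-capacity coefficient: roughly 0.27 for early
    (Gen I) bearings up to about 1.0 for advanced (Gen III) designs.
    Diameter d_in and length l_in in inches, speed in krpm; the values
    here are assumptions for illustration.
    """
    return coeff * (l_in * d_in) * (d_in * speed_krpm)

# A hypothetical 1.5 in x 1.5 in Gen III bearing at 40 krpm:
print(foil_bearing_load_capacity(1.5, 1.5, 40.0))  # lbf
```

Analogous single-coefficient rules for stiffness and damping are what the paper proposes; the structure (coefficient times geometry times operating condition) is the same.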
Using Neural Networks for Sensor Validation
NASA Technical Reports Server (NTRS)
Mattern, Duane L.; Jaw, Link C.; Guo, Ten-Huei; Graham, Ronald; McCoy, William
1998-01-01
This paper presents the results of applying two different types of neural networks in two different approaches to the sensor validation problem. The first approach uses a functional approximation neural network as part of a nonlinear observer in a model-based approach to analytical redundancy. The second approach uses an auto-associative neural network to perform nonlinear principal component analysis on a set of redundant sensors to provide an estimate for a single failed sensor. The approaches are demonstrated using a nonlinear simulation of a turbofan engine. The fault detection and sensor estimation results are presented and the training of the auto-associative neural network to provide sensor estimates is discussed.
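The analytical-redundancy idea, estimating a failed sensor from its redundant partners, can be sketched with a linear least-squares stand-in for the auto-associative network (synthetic sensor data; a real application would train the nonlinear network described in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Five redundant sensors all driven by one underlying state (plus noise),
# mimicking the redundancy an auto-associative network exploits.
state = rng.uniform(0.0, 1.0, size=500)
gains = np.array([1.0, 0.8, 1.2, 0.9, 1.1])
readings = state[:, None] * gains + 0.01 * rng.standard_normal((500, 5))

# "Train": regress sensor 0 on the remaining sensors (least squares).
X = readings[:, 1:]
y = readings[:, 0]
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Failed sensor": estimate sensor 0 from its redundant partners.
estimate = X @ w
print(np.max(np.abs(estimate - y)))  # small residual -> usable replacement
```

The auto-associative network in the paper plays the same role but captures nonlinear relationships among the sensors (nonlinear principal component analysis).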
Bockman, Alexander; Fackler, Cameron; Xiang, Ning
2015-04-01
Acoustic performance for an interior requires an accurate description of the boundary materials' surface acoustic impedance. Analytical methods may be applied to a small class of test geometries, but inverse numerical methods provide greater flexibility. The parameter estimation problem requires minimizing the difference between predicted and observed acoustic field pressure. The Bayesian-network sampling approach presented here mitigates other methods' susceptibility to noise inherent in the experiment, model, and numerics. A geometry-agnostic method is developed here and its parameter estimation performance is demonstrated for an air-backed micro-perforated panel in an impedance tube. Good agreement is found with predictions from the ISO standard two-microphone impedance-tube method and a theoretical model for the material. Data by-products exclusive to a Bayesian approach are analyzed to assess the sensitivity of the method to nuisance parameters.
Comparative Kinetic Analysis of Closed-Ended and Open-Ended Porous Sensors
NASA Astrophysics Data System (ADS)
Zhao, Yiliang; Gaur, Girija; Mernaugh, Raymond L.; Laibinis, Paul E.; Weiss, Sharon M.
2016-09-01
Efficient mass transport through porous networks is essential for achieving rapid response times in sensing applications utilizing porous materials. In this work, we show that open-ended porous membranes can overcome diffusion challenges experienced by closed-ended porous materials in a microfluidic environment. A theoretical model including both transport and reaction kinetics is employed to study the influence of flow velocity, bulk analyte concentration, analyte diffusivity, and adsorption rate on the performance of open-ended and closed-ended porous sensors integrated with flow cells. The analysis shows that open-ended pores enable analyte flow through the pores and greatly reduce the response time and analyte consumption for detecting large molecules with slow diffusivities compared with closed-ended pores for which analytes largely flow over the pores. Experimental confirmation of the results was carried out with open- and closed-ended porous silicon (PSi) microcavities fabricated in flow-through and flow-over sensor configurations, respectively. The adsorption behavior of small analytes onto the inner surfaces of closed-ended and open-ended PSi membrane microcavities was similar. However, for large analytes, PSi membranes in a flow-through scheme showed significant improvement in response times due to more efficient convective transport of analytes. The experimental results and theoretical analysis provide quantitative estimates of the benefits offered by open-ended porous membranes for different analyte systems.
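The transport-versus-reaction tradeoff described here can be illustrated with a first-order Langmuir binding sketch. Assuming theta(t) = 1 - exp(-k_on * C_s * t), the characteristic response time scales inversely with the analyte concentration actually delivered to the pore surface; the rate constant, concentration, and depletion factor below are assumed values, not the paper's:

```python
def t63(k_on, c_surface):
    """Characteristic rise time of first-order Langmuir binding,
    theta(t) = 1 - exp(-k_on * C_s * t)  ->  t63 = 1 / (k_on * C_s)."""
    return 1.0 / (k_on * c_surface)

k_on = 1e5          # 1/(M s), assumed association rate constant
c_bulk = 1e-8       # M, bulk analyte concentration
depletion = 0.1     # flow-over pores see a depleted concentration (assumption)

print(t63(k_on, c_bulk))              # flow-through: pores see ~bulk analyte
print(t63(k_on, depletion * c_bulk))  # flow-over: diffusion-limited delivery
```

The flow-through configuration wins precisely because convection keeps C_s near the bulk value, which matters most for large, slowly diffusing analytes.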
Berry, Christopher M; Zhao, Peng
2015-01-01
Predictive bias studies have generally suggested that cognitive ability test scores overpredict job performance of African Americans, meaning these tests are not predictively biased against African Americans. However, at least 2 issues call into question existing over-/underprediction evidence: (a) a bias identified by Aguinis, Culpepper, and Pierce (2010) in the intercept test typically used to assess over-/underprediction and (b) a focus on the level of observed validity instead of operational validity. The present study developed and utilized a method of assessing over-/underprediction that draws on the math of subgroup regression intercept differences, does not rely on the biased intercept test, allows for analysis at the level of operational validity, and can use meta-analytic estimates as input values. Therefore, existing meta-analytic estimates of key parameters, corrected for relevant statistical artifacts, were used to determine whether African American job performance remains overpredicted at the level of operational validity. African American job performance was typically overpredicted by cognitive ability tests across levels of job complexity and across conditions wherein African American and White regression slopes did and did not differ. Because the present study does not rely on the biased intercept test and because appropriate statistical artifact corrections were carried out, the present study's results are not affected by the 2 issues mentioned above. The present study represents strong evidence that cognitive ability tests generally overpredict job performance of African Americans. (c) 2015 APA, all rights reserved.
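The intercept-based over/underprediction logic can be sketched as follows: a subgroup is overpredicted when its actual mean performance falls below the common regression line evaluated at its mean predictor score. All numbers are hypothetical standardized values, not the study's meta-analytic estimates:

```python
def overprediction(mean_x_sub, mean_y_sub, slope, intercept):
    """Predicted minus actual performance at the subgroup mean.

    A positive value means the common regression line overpredicts the
    subgroup's performance (the pattern the study reports).
    """
    return (intercept + slope * mean_x_sub) - mean_y_sub

# Hypothetical standardized values: subgroup scores 1 SD lower on the test,
# 0.8 SD lower on performance, operational validity (slope) of 0.5.
print(overprediction(mean_x_sub=-1.0, mean_y_sub=-0.8, slope=0.5, intercept=0.0))
```

The sign of this quantity is exactly what the study's math of subgroup regression intercept differences evaluates, with meta-analytic, artifact-corrected values substituted for the illustrative ones above.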
Performance criteria and quality indicators for the post-analytical phase.
Sciacovelli, Laura; Aita, Ada; Padoan, Andrea; Pelloso, Michela; Antonelli, Giorgia; Piva, Elisa; Chiozza, Maria Laura; Plebani, Mario
2016-07-01
Quality indicators (QIs) used as performance measurements are an effective tool for accurately estimating quality, identifying problems that may need to be addressed, and monitoring processes over time. In Laboratory Medicine, QIs should cover all steps of the testing process, as error studies have confirmed that most errors occur in the pre- and post-analytical phases of testing. The aim of the present study is to provide preliminary results on QIs and related performance criteria in the post-analytical phase. This work was conducted according to a previously described study design based on the voluntary participation of clinical laboratories in the project on QIs of the Working Group "Laboratory Errors and Patient Safety" (WG-LEPS) of the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC). Overall, the data collected highlighted an improvement or stability in performance over time for all reported indicators, demonstrating that the use of QIs is effective in a quality improvement strategy. Moreover, QI data are an important source for defining the state of the art concerning the error rate in the total testing process. The definition of performance specifications based on the state of the art, as suggested by consensus documents, is a valuable benchmark in evaluating the performance of each laboratory. Laboratory tests play a central role in monitoring and evaluating patient outcomes, assisting clinicians in decision-making. Laboratory performance evaluation is therefore crucial to providing patients with safe, effective and efficient care.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pool, K.H.; Evans, J.C.; Olsen, K.B.
1997-08-01
This report presents the results from analyses of samples taken from the headspace of waste storage tank 241-S-102 (Tank S-102) at the Hanford Site in Washington State. Tank headspace samples collected by SGN Eurisys Service Corporation (SESC) were analyzed by Pacific Northwest National Laboratory (PNNL) to determine headspace concentrations of selected non-radioactive analytes. Analyses were performed by the Vapor Analytical Laboratory (VAL) at PNNL. Vapor concentrations from sorbent trap samples are based on measured sample volumes provided by SESC. Ammonia was determined to be above the immediate notification limit of 150 ppm specified by the sampling and analysis plan (SAP). Hydrogen was the principal flammable constituent of the Tank S-102 headspace, determined to be present at approximately 2.410% of its lower flammability limit (LFL). Total headspace flammability was estimated to be <2.973% of the LFL. Average measured concentrations of targeted gases, inorganic vapors, and selected organic vapors are provided in Table S.1. A summary of experimental methods, including sampling methodology, analytical procedures, and quality assurance and control methods, is presented in Section 2.0. Detailed descriptions of the analytical results are provided in Section 3.0.
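Percent-of-LFL bookkeeping of this kind is conventionally done with Le Chatelier's mixing rule, summing each constituent's concentration as a fraction of its own LFL. The concentrations and LFL values below are assumptions for illustration, not the report's data:

```python
def percent_lfl(concentrations_ppm, lfl_ppm):
    """Total flammability as % of the LFL via Le Chatelier's mixing rule:
    100 * sum(concentration_i / LFL_i) over all flammable constituents."""
    return 100.0 * sum(c / l for c, l in zip(concentrations_ppm, lfl_ppm))

# Hypothetical headspace: hydrogen at 1000 ppm (LFL 40,000 ppm) and
# ammonia at 150 ppm (LFL 150,000 ppm); values assumed for illustration.
print(percent_lfl([1000.0, 150.0], [40000.0, 150000.0]))  # % of LFL
```

A single dominant constituent (hydrogen above) typically drives the total, which mirrors the report's finding that hydrogen was the principal flammable species.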
The rate of bubble growth in a superheated liquid in pool boiling
NASA Astrophysics Data System (ADS)
Abdollahi, Mohammad Reza; Jafarian, Mehdi; Jamialahmadi, Mohammad
2017-12-01
A semi-empirical model for estimating the rate of bubble growth in nucleate pool boiling is presented, incorporating a new equation for the temperature history of the bubble in the bulk liquid. The conservation equations of energy, mass, and momentum are first derived and solved analytically. The present analytical model predicts that the bubble radius grows as √t · erf(N√t), whereas previous studies have mainly correlated the growth rate to √t alone. In the next step, the analytical solutions were used to develop a new semi-empirical equation: the analytical solution was first non-dimensionalised, and experimental data available in the literature were then applied to tune the dimensionless coefficients appearing in the dimensionless equation. Finally, the reliability of the proposed semi-empirical model was assessed by comparing its predictions with experimental data from the literature that were not used in tuning the dimensionless parameters. Comparison with other models proposed in the literature was also performed. These comparisons show that the present model gives more accurate predictions than previously proposed models, with a deviation of less than 10% over a wide range of operating conditions.
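The model's growth law can be compared against the classical √t scaling directly. In this sketch the prefactor and N are placeholder constants; the paper tunes its dimensionless coefficients from experimental data:

```python
import math

def bubble_radius(t, a=1.0, n=2.0):
    """Growth law of the model: R(t) ~ sqrt(t) * erf(N * sqrt(t)).
    a and n are placeholder constants for illustration."""
    return a * math.sqrt(t) * math.erf(n * math.sqrt(t))

def classical_radius(t, a=1.0):
    """Classical sqrt(t) growth for comparison."""
    return a * math.sqrt(t)

for t in (0.01, 0.1, 1.0):
    print(t, bubble_radius(t), classical_radius(t))
```

Because erf saturates at 1, the two laws agree at late times; the erf factor suppresses early growth, which is where the model departs from the classical correlation.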
Benn, Peter A; Makowski, Gregory S; Egan, James F X; Wright, Dave
2006-11-01
Analytical error affects 2nd-trimester maternal serum screening for Down syndrome risk estimation. We analyzed the between-laboratory reproducibility of risk estimates from 2 laboratories. Laboratory 1 used Bayer ACS180 immunoassays for alpha-fetoprotein (AFP) and human chorionic gonadotropin (hCG), Diagnostic Systems Laboratories (DSL) RIA for unconjugated estriol (uE3), and DSL enzyme immunoassay for inhibin-A (INH-A). Laboratory 2 used Beckman immunoassays for AFP, hCG, and uE3, and DSL enzyme immunoassay for INH-A. Analyte medians were separately established for each laboratory. We used the same computational algorithm for all risk calculations, and we used Monte Carlo methods for computer modeling. For 462 samples tested, risk figures from the 2 laboratories differed >2-fold for 44.7%, >5-fold for 7.1%, and >10-fold for 1.7%. Between-laboratory differences in analytes were greatest for uE3 and INH-A. The screen-positive rates were 9.3% for laboratory 1 and 11.5% for laboratory 2, with a significant difference in the patients identified as screen-positive vs screen-negative (McNemar test, P<0.001). Computer modeling confirmed the large between-laboratory risk differences. Differences in performance of assays and laboratory procedures can have a large effect on patient-specific risks. Screening laboratories should minimize test imprecision and ensure that each assay performs in a manner similar to that assumed in the risk computational algorithm.
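Risk computation of this kind typically multiplies prior odds by a likelihood ratio of the analyte pattern under affected versus unaffected distributions, so small between-laboratory analyte shifts compound multiplicatively into large risk differences. A minimal independence sketch follows; the log-MoM means and SDs are invented for illustration, and real screening algorithms use correlated multivariate Gaussians:

```python
import math

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_risk(log_moms, mu_aff, mu_unaff, sigmas, prior):
    """Risk from log10(MoM) analyte values: prior odds times the likelihood
    ratio of affected vs unaffected, analytes treated as independent here."""
    lr = 1.0
    for x, ma, mu_u, s in zip(log_moms, mu_aff, mu_unaff, sigmas):
        lr *= gaussian(x, ma, s) / gaussian(x, mu_u, s)
    odds = prior * lr
    return odds / (1.0 + odds)  # as a probability

# Hypothetical parameters for AFP, hCG, uE3, INH-A (log10 MoM):
mu_aff   = [-0.12, 0.30, -0.15, 0.25]
mu_unaff = [0.0, 0.0, 0.0, 0.0]
sigmas   = [0.17, 0.25, 0.14, 0.21]
print(posterior_risk([-0.1, 0.2, -0.1, 0.2], mu_aff, mu_unaff, sigmas, prior=1/500))
```

Perturbing the log_moms inputs by laboratory-specific assay biases and recomputing is the essence of the Monte Carlo modeling the study describes.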
Ramírez, Juan Carlos; Cura, Carolina Inés; Moreira, Otacilio da Cruz; Lages-Silva, Eliane; Juiz, Natalia; Velázquez, Elsa; Ramírez, Juan David; Alberti, Anahí; Pavia, Paula; Flores-Chávez, María Delmans; Muñoz-Calderón, Arturo; Pérez-Morales, Deyanira; Santalla, José; Guedes, Paulo Marcos da Matta; Peneau, Julie; Marcet, Paula; Padilla, Carlos; Cruz-Robles, David; Valencia, Edward; Crisante, Gladys Elena; Greif, Gonzalo; Zulantay, Inés; Costales, Jaime Alfredo; Alvarez-Martínez, Miriam; Martínez, Norma Edith; Villarroel, Rodrigo; Villarroel, Sandro; Sánchez, Zunilda; Bisio, Margarita; Parrado, Rudy; Galvão, Lúcia Maria da Cunha; da Câmara, Antonia Cláudia Jácome; Espinoza, Bertha; de Noya, Belkisyole Alarcón; Puerta, Concepción; Riarte, Adelina; Diosque, Patricio; Sosa-Estani, Sergio; Guhl, Felipe; Ribeiro, Isabela; Aznar, Christine; Britto, Constança; Yadón, Zaida Estela; Schijman, Alejandro G.
2015-01-01
An international study was performed by 26 experienced PCR laboratories from 14 countries to assess the performance of duplex quantitative real-time PCR (qPCR) strategies on the basis of TaqMan probes for detection and quantification of parasitic loads in peripheral blood samples from Chagas disease patients. Two methods were studied: Satellite DNA (SatDNA) qPCR and kinetoplastid DNA (kDNA) qPCR. Both methods included an internal amplification control. Reportable range, analytical sensitivity, limits of detection and quantification, and precision were estimated according to international guidelines. In addition, inclusivity and exclusivity were estimated with DNA from stocks representing the different Trypanosoma cruzi discrete typing units and Trypanosoma rangeli and Leishmania spp. Both methods were challenged against 156 blood samples provided by the participant laboratories, including samples from acute and chronic patients with varied clinical findings, infected by oral route or vectorial transmission. kDNA qPCR showed better analytical sensitivity than SatDNA qPCR with limits of detection of 0.23 and 0.70 parasite equivalents/mL, respectively. Analyses of clinical samples revealed a high concordance in terms of sensitivity and parasitic loads determined by both SatDNA and kDNA qPCRs. This effort is a major step toward international validation of qPCR methods for the quantification of T. cruzi DNA in human blood samples, aiming to provide an accurate surrogate biomarker for diagnosis and treatment monitoring for patients with Chagas disease. PMID:26320872
Multiple Imputation of Cognitive Performance as a Repeatedly Measured Outcome
Rawlings, Andreea M.; Sang, Yingying; Sharrett, A. Richey; Coresh, Josef; Griswold, Michael; Kucharska-Newton, Anna M.; Palta, Priya; Wruck, Lisa M.; Gross, Alden L.; Deal, Jennifer A.; Power, Melinda C.; Bandeen-Roche, Karen
2016-01-01
Background: Longitudinal studies of cognitive performance are sensitive to dropout, as participants experiencing cognitive deficits are less likely to attend study visits, which may bias estimated associations between exposures of interest and cognitive decline. Multiple imputation is a powerful tool for handling missing data; however, its use for missing cognitive outcome measures in longitudinal analyses remains limited. Methods: We used multiple imputation by chained equations (MICE) to impute cognitive performance scores of participants who did not attend the 2011-2013 exam of the Atherosclerosis Risk in Communities Study. We examined the validity of imputed scores using observed and simulated data under varying assumptions, and examined differences in the estimated association between diabetes at baseline and 20-year cognitive decline with and without imputed values. Lastly, we discuss how different analytic methods (mixed models and models fit using generalized estimating equations) and the choice of whom to impute for result in different estimands. Results: Validation using observed data showed MICE produced unbiased imputations. Simulations showed a substantial reduction in the bias of the 20-year association between diabetes and cognitive decline comparing MICE (3-4% bias) to analyses of available data only (16-23% bias) in a scenario where missingness was strongly informative but realistic. Associations between diabetes and 20-year cognitive decline were substantially stronger with MICE than in available-case analyses. Conclusions: Our study suggests that when informative data are available for non-examined participants, MICE can be an effective tool for imputing cognitive performance and improving assessment of cognitive decline, though careful thought should be given to the target imputation population and the analytic model chosen, as they may yield different estimands. PMID:27619926
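One link of a MICE chain is a regression imputation step. This numpy-only sketch (invented data; full MICE additionally draws random residuals and cycles over several incomplete variables) shows how regression imputation reduces the upward bias of an available-case mean under informative dropout:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
baseline = rng.normal(25, 3, n)
followup = baseline - 3 + rng.normal(0, 1, n)   # true 20-year decline ~3 points

# Informative dropout: participants with the lowest follow-up scores miss the visit.
observed = followup >= np.quantile(followup, 0.3)

# One regression-imputation link of a MICE chain: fit followup ~ baseline on
# complete cases, then predict for the missing participants.
b, a = np.polyfit(baseline[observed], followup[observed], 1)
imputed = np.where(observed, followup, a + b * baseline)

print(followup.mean())            # truth
print(followup[observed].mean())  # available-case mean (biased upward)
print(imputed.mean())             # regression-imputed mean (closer to truth)
```

This is a deterministic single imputation; MICE repeats the draw-and-fit cycle across multiple imputed datasets so that between-imputation variability propagates into the standard errors.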
Cost-estimating relationships for space programs
NASA Technical Reports Server (NTRS)
Mandell, Humboldt C., Jr.
1992-01-01
Cost-estimating relationships (CERs) are defined and discussed as they relate to the estimation of theoretical costs for space programs. The paper primarily addresses CERs based on analogous relationships between physical and performance parameters to estimate future costs. Analytical estimation principles are reviewed examining the sources of errors in cost models, and the use of CERs is shown to be affected by organizational culture. Two paradigms for cost estimation are set forth: (1) the Rand paradigm for single-culture single-system methods; and (2) the Price paradigms that incorporate a set of cultural variables. For space programs that are potentially subject to even small cultural changes, the Price paradigms are argued to be more effective. The derivation and use of accurate CERs is important for developing effective cost models to analyze the potential of a given space program.
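CERs of the kind described are often simple power laws in a physical or performance parameter. A hedged sketch follows; the power-law form is typical of hardware CERs, but the coefficients here are invented, not calibrated to any program:

```python
def cer_cost(mass_kg, a=150.0, b=0.85):
    """Illustrative weight-based cost-estimating relationship (CER):
    cost = a * mass^b. Coefficients a and b are assumptions; in practice
    they are regressed from analogous historical programs."""
    return a * mass_kg ** b

# A hypothetical 500 kg spacecraft bus (cost in arbitrary units):
print(cer_cost(500.0))
```

With b < 1 the relationship exhibits economies of scale: doubling mass less than doubles estimated cost. The paper's point is that the regressed coefficients embed the originating organization's culture, so a CER transplanted across cultures can mislead.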
Pang, Susan; Cowen, Simon
2017-12-13
We describe a novel generic method to derive the unknown endogenous concentration of analyte within complex biological matrices (e.g., serum or plasma), based upon the relationship between the immunoassay signal response of a biological test sample spiked with known analyte concentrations and the log-transformed estimated total concentration. If the estimated total analyte concentration is correct, a portion of the sigmoid on a log-log plot is very close to linear, allowing the unknown endogenous concentration to be estimated using a numerical method. This approach obviates conventional relative quantification against an internal standard curve and the need for calibrant diluent, and takes into account the individual matrix interference on the immunoassay by spiking the test sample itself. The technique is based on the method of standard additions for chemical analytes. Unknown endogenous analyte concentrations within even 2-fold diluted human plasma may be determined reliably using as few as four reaction wells.
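The classical standard-additions calculation that this method generalizes can be sketched directly: spike the sample, fit signal versus added concentration, and read the endogenous concentration from the x-intercept. A linear response and all numerical values are assumed here; the paper's variant instead linearizes the sigmoidal immunoassay response on a log-log plot:

```python
import numpy as np

# Standard additions: spike the sample itself and extrapolate to zero signal.
# Assume a linear response: signal = k * (endogenous + spike).
spikes = np.array([0.0, 5.0, 10.0, 20.0])   # added analyte, ng/mL (four wells)
k, c_endog = 3.0, 7.5                        # assumed response slope and truth
signal = k * (c_endog + spikes)

slope, intercept = np.polyfit(spikes, signal, 1)
estimate = intercept / slope                 # magnitude of the x-intercept
print(estimate)                              # recovers the endogenous level
```

Because the calibration is performed in the test sample's own matrix, matrix interference affects slope and intercept together and cancels in the ratio, which is the core advantage the abstract claims.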
THE IMPACT OF POINT-SOURCE SUBTRACTION RESIDUALS ON 21 cm EPOCH OF REIONIZATION ESTIMATION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trott, Cathryn M.; Wayth, Randall B.; Tingay, Steven J., E-mail: cathryn.trott@curtin.edu.au
Precise subtraction of foreground sources is crucial for detecting and estimating 21 cm H I signals from the Epoch of Reionization (EoR). We quantify how imperfect point-source subtraction due to limitations of the measurement data set yields structured residual signal in the data set. We use the Cramer-Rao lower bound, as a metric for quantifying the precision with which a parameter may be measured, to estimate the residual signal in a visibility data set due to imperfect point-source subtraction. We then propagate these residuals into two metrics of interest for 21 cm EoR experiments, the angular power spectrum and the two-dimensional power spectrum, using a combination of full analytic covariant derivation, analytic variant derivation, and covariant Monte Carlo simulations. This methodology differs from previous work in two ways: (1) it uses information theory to set the point-source position error, rather than assuming a global rms error, and (2) it describes a method for propagating the errors analytically, thereby obtaining the full correlation structure of the power spectra. The methods are applied to two upcoming low-frequency instruments that are proposing to perform statistical EoR experiments: the Murchison Widefield Array and the Precision Array for Probing the Epoch of Reionization. In addition to the actual antenna configurations, we apply the methods to minimally redundant and maximally redundant configurations. We find that for peeling sources above 1 Jy, the amplitude of the residual signal, and its variance, will be smaller than the contribution from thermal noise for the observing parameters proposed for upcoming EoR experiments, and that optimal subtraction of bright point sources will not be a limiting factor for EoR parameter estimation. We then use the formalism to provide an ab initio analytic derivation motivating the 'wedge' feature in the two-dimensional power spectrum, complementing previous discussion in the literature.
Ding, Yuqi; Kawakita, Kento; Xu, Jiawei; Akiyama, Kazuhiko; Fujino, Tatsuya
2015-08-04
Smectite, a synthetic inorganic polymer with a saponite structure, was subjected to matrix-assisted laser desorption/ionization mass spectrometry (MALDI MS). Typical organic matrix molecules 2,4,6-trihydroxyacetophenone (THAP) and 2,5-dihydroxybenzoic acid (DHBA) were intercalated into the layer spacing of cation-exchanged smectite, and the complex was used as a new matrix for laser desorption/ionization mass spectrometry. Because of layer spacing limitations, only a small analyte that could enter the layer and bind to THAP or DHBA could be ionized. This was confirmed by examining different analyte/matrix preparation methods and by measuring saccharides with different molecular sizes. Because of the homogeneous distribution of THAP molecules in the smectite layer spacing, high reproducibility of the analyte peak intensity was achieved. By using isotope-labeled (13)C6-d-glucose as the internal standard, quantitative analysis of monosaccharides in pretreated human plasma sample was performed, and the value of 8.6 ± 0.3 μg/mg was estimated.
Dubský, Pavel; Ördögová, Magda; Malý, Michal; Riesová, Martina
2016-05-06
We introduce CEval software (downloadable for free at echmet.natur.cuni.cz), developed for quicker and easier electrophoregram evaluation and further data processing in (affinity) capillary electrophoresis. The software allows automatic peak detection and evaluation of common peak parameters, such as migration time, area, and width. Additionally, it includes a nonlinear regression engine that performs peak fitting with the Haarhoff-van der Linde (HVL) function, including an automated initial guess of the HVL function parameters. HVL is a fundamental peak-shape function in electrophoresis, from which the correct effective mobility of the analyte represented by the peak is evaluated. Effective mobilities of an analyte at various concentrations of a selector can be further stored and plotted in an affinity CE mode. Consequently, the mobility of the free analyte, μA, the mobility of the analyte-selector complex, μAS, and the apparent complexation constant, K′, are first guessed automatically from the linearized data plots and subsequently estimated by means of nonlinear regression. An option that allows two complexation dependencies to be fitted at once is especially convenient for enantioseparations. Statistical processing of these data is also included, which allowed us to: (i) express the 95% confidence intervals for the μA, μAS, and K′ least-squares estimates, and (ii) perform hypothesis testing on the estimated parameters for the first time. We demonstrate the benefits of the CEval software by inspecting complexation of tryptophan methyl ester with two cyclodextrins, neutral heptakis(2,6-di-O-methyl)-β-CD and charged heptakis(6-O-sulfo)-β-CD. Copyright © 2016 Elsevier B.V. All rights reserved.
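The complexation dependence that CEval fits can be reproduced with a generic nonlinear least-squares sketch: for 1:1 complexation the effective mobility follows mu_eff(c) = (mu_A + mu_AS*K*c) / (1 + K*c). The data below are synthetic, and scipy's curve_fit is a stand-in for CEval's regression engine:

```python
import numpy as np
from scipy.optimize import curve_fit

def mu_eff(c, mu_a, mu_as, k):
    """Effective mobility at selector concentration c (1:1 complexation)."""
    return (mu_a + mu_as * k * c) / (1.0 + k * c)

# Synthetic affinity-CE data (all values assumed for illustration):
c = np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0])          # selector conc., mM
true = dict(mu_a=20.0, mu_as=5.0, k=0.8)                # mobility units, 1/mM
mu = mu_eff(c, **true) + np.random.default_rng(1).normal(0, 0.05, c.size)

popt, pcov = curve_fit(mu_eff, c, mu, p0=(15.0, 2.0, 0.1))
perr = np.sqrt(np.diag(pcov))                           # 1-sigma errors
print(popt)                                             # estimates of mu_A, mu_AS, K
print(popt - 1.96 * perr, popt + 1.96 * perr)           # approx. 95% intervals
```

The covariance diagonal gives the approximate confidence intervals the abstract mentions; hypothesis tests on individual parameters follow from the same estimates and standard errors.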
Development of advanced methods for analysis of experimental data in diffusion
NASA Astrophysics Data System (ADS)
Jaques, Alonso V.
There are numerous experimental configurations and data analysis techniques for the characterization of diffusion phenomena. However, the mathematical methods for estimating diffusivities traditionally do not take into account the effects of experimental errors in the data, and often require smooth, noiseless data sets to perform the necessary analysis steps. The current methods used for data smoothing require strong assumptions which can introduce numerical "artifacts" into the data, affecting confidence in the estimated parameters. The Boltzmann-Matano method is used extensively in the determination of concentration-dependent diffusivities, D(C), in alloys. In the course of analyzing experimental data, numerical integrations and differentiations of the concentration profile are performed, and these methods require smoothing of the data prior to analysis. We present here an approach to the Boltzmann-Matano method that is based on a regularization method to estimate a differentiation operation on the data, i.e., to estimate the concentration gradient term, which is important in determining the diffusivity. This approach, therefore, has the potential to be less subjective, and in numerical simulations shows an increased accuracy in the estimated diffusion coefficients. We also present a regression approach to estimate linear multicomponent diffusion coefficients that eliminates the need to pre-treat or pre-condition the concentration profile. This approach fits the data to a functional form of the mathematical expression for the concentration profile, and allows us to determine the diffusivity matrix directly from the fitted parameters. Reformulation of the equation for the analytical solution is done in order to reduce the size of the problem and accelerate the convergence. The objective function for the regression can incorporate point estimates of the error in the concentration, improving the statistical confidence in the estimated diffusivity matrix.
Case studies are presented to demonstrate the reliability and stability of the method. To the best of our knowledge, there is no published analysis of the effects of experimental errors on the reliability of the estimates for the diffusivities. For the case of linear multicomponent diffusion, we analyze the effects of the instrument analytical spot size, positioning uncertainty, and concentration uncertainty on the resulting values of the diffusivities. These effects are studied using a Monte Carlo method on simulated experimental data. Several useful scaling relationships were identified which allow more rigorous and quantitative estimates of the errors in the measured data, and are valuable for experimental design. To further analyze anomalous diffusion processes, where traditional diffusional transport equations do not hold, we propose the use of fractional calculus to represent these processes analytically. We use the fractional calculus approach for anomalous diffusion occurring through a finite plane sheet with one face held at a fixed concentration, the other held at zero, and the initial concentration within the sheet equal to zero. This problem is related to cases in nature where diffusion is enhanced relative to the classical process and the governing equation is not necessarily of second order; that is, differentiation is of fractional order alpha, where 1 ≤ alpha < 2. For alpha = 2, the presented solutions reduce to the classical second-order diffusion solution for the conditions studied. The solution obtained allows the analysis of permeation experiments. Frequently, hydrogen diffusion is analyzed using electrochemical permeation methods and the traditional, Fickian-based theory. Experimental evidence shows the latter analytical approach is not always appropriate, because reported data show qualitative (and quantitative) deviations from its theoretical scaling predictions.
Preliminary analysis of data shows better agreement with fractional diffusion analysis than with traditional square-root scaling. Although there is a large amount of work on estimating the diffusivity from experimental data, reported studies typically present only the analytical description for the diffusivity, without accounting for scatter. Because these studies do not consider effects produced by instrument analysis, their direct applicability is limited. We propose alternatives to address these issues and evaluate their influence on the final diffusivity values.
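The Boltzmann-Matano analysis discussed in this dissertation can be sketched numerically. A synthetic constant-D erf profile makes the expected answer known; all numerical values below are assumptions for illustration:

```python
import numpy as np
from math import erf, sqrt

# Boltzmann-Matano sketch: recover D(C) from a concentration profile C(x)
# at a fixed diffusion time t, using a synthetic constant-D erf profile.
D_true, t = 1e-14, 3600.0                       # m^2/s, s (assumed values)
x = np.linspace(-5e-5, 5e-5, 4001)              # m; Matano plane at x = 0
c = np.array([0.5 * (1.0 - erf(xi / (2.0 * sqrt(D_true * t)))) for xi in x])

def matano_D(x, c, t, c_star):
    """D(C*) = -(1/(2t)) * (dx/dC)|_{C*} * integral_{C0}^{C*} x dC."""
    i = int(np.argmin(np.abs(c - c_star)))
    dxdc = (x[i + 1] - x[i - 1]) / (c[i + 1] - c[i - 1])
    # Trapezoidal integral of x dC from the C0 end (left) up to C*:
    integral = float(np.sum(0.5 * (x[1:i + 1] + x[:i]) * np.diff(c[:i + 1])))
    return -(1.0 / (2.0 * t)) * dxdc * integral

print(matano_D(x, c, t, 0.5))                   # close to D_true = 1e-14
```

With noisy experimental profiles, the dx/dC finite difference is the fragile step; the dissertation's regularized-differentiation approach targets exactly that term.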
NASA Astrophysics Data System (ADS)
Chen, G.; Chacón, L.
2013-08-01
We propose a 1D analytical particle mover for the recent charge- and energy-conserving electrostatic particle-in-cell (PIC) algorithm in Ref. [G. Chen, L. Chacón, D.C. Barnes, An energy- and charge-conserving, implicit, electrostatic particle-in-cell algorithm, Journal of Computational Physics 230 (2011) 7018-7036]. The approach computes particle orbits exactly for a given piece-wise linear electric field. The resulting PIC algorithm maintains the exact charge and energy conservation properties of the original algorithm, but with improved performance (both in efficiency and robustness against the number of particles and timestep). We demonstrate the advantageous properties of the scheme with a challenging multiscale numerical test case, the ion acoustic wave. Using the analytical mover as a reference, we demonstrate that the choice of error estimator in the Crank-Nicolson mover has significant impact on the overall performance of the implicit PIC algorithm. The generalization of the approach to the multi-dimensional case is outlined, based on a novel and simple charge conserving interpolation scheme.
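The energy-conservation property referenced here can be illustrated for a Crank-Nicolson push in a linear electric field E(x) = -kx (a harmonic oscillator). This is a minimal sketch of the trapezoidal update solved in closed form per step, not the paper's sub-stepped analytical mover:

```python
# Crank-Nicolson particle push in a linear field E(x) = -k*x; the implicit
# trapezoidal update is linear, so each step solves in closed form.
k, dt, steps = 1.0, 0.3, 200
x, v = 1.0, 0.0
for _ in range(steps):
    # CN: x_new = x + dt*(v + v_new)/2,  v_new = v - k*dt*(x + x_new)/2
    denom = 1.0 + (k * dt**2) / 4.0
    x_new = (x * (1.0 - k * dt**2 / 4.0) + dt * v) / denom
    v_new = (v * (1.0 - k * dt**2 / 4.0) - k * dt * x) / denom
    x, v = x_new, v_new

energy = 0.5 * v**2 + 0.5 * k * x**2
print(energy)   # remains at its initial value: CN conserves this quadratic energy
```

The exact conservation holds for linear fields regardless of timestep; the analytical mover in the paper pushes this further by integrating orbits exactly within each piecewise-linear field cell.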
An Analytical Framework for Fast Estimation of Capacity and Performance in Communication Networks
2012-01-25
Level 1 environmental assessment performance evaluation. Final report jun 77-oct 78
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estes, E.D.; Smith, F.; Wagoner, D.E.
1979-02-01
The report gives results of a two-phased evaluation of Level 1 environmental assessment procedures. Results from Phase I, a field evaluation of the Source Assessment Sampling System (SASS), showed that the SASS train performed well within the desired factor-of-3 Level 1 accuracy limit. Three sample runs were made with two SASS trains sampling simultaneously and from approximately the same sampling point in a horizontal duct. A Method-5 train was used to estimate the 'true' particulate loading. The sampling systems were upstream of the control devices to ensure collection of sufficient material for comparison of total particulate, particle size distribution, organic classes, and trace elements. Phase II consisted of providing each of three organizations with three types of control samples to challenge the spectrum of Level 1 analytical procedures: an artificial sample in methylene chloride, an artificial sample on a flyash matrix, and a real sample composed of the combined XAD-2 resin extracts from all Phase I runs. Phase II results showed that when the Level 1 analytical procedures are carefully applied, data of acceptable accuracy are obtained. Estimates of intralaboratory and interlaboratory precision are made.
Yu, Huapeng; Zhu, Hai; Gao, Dayuan; Yu, Meng; Wu, Wenqi
2015-01-01
The Kalman filter (KF) has always been used to improve north-finding performance under practical conditions. By analyzing the characteristics of the azimuth rotational inertial measurement unit (ARIMU) on a stationary base, a linear state equality constraint for the conventional KF used in the fine north-finding filtering phase is derived. Then, a constrained KF using the state equality constraint is proposed and studied in depth. Estimation behaviors of the navigation errors of concern when implementing the conventional KF scheme and the constrained KF scheme during stationary north-finding are investigated analytically by the stochastic observability approach, which can provide explicit formulations of the navigation errors with influencing variables. Finally, multiple practical experimental tests at a fixed position were performed on a prototype system to compare the stationary north-finding performance of the two filtering schemes. In conclusion, this study has successfully extended the utilization of the stochastic observability approach to analytic descriptions of the estimation behaviors of the navigation errors of concern, and the constrained KF scheme has demonstrated its superiority over the conventional KF scheme for ARIMU stationary north-finding both theoretically and practically. PMID:25688588
Long Term Evolution of Planetary Systems with a Terrestrial Planet and a Giant Planet
NASA Technical Reports Server (NTRS)
Georgakarakos, Nikolaos; Dobbs-Dixon, Ian; Way, Michael J.
2016-01-01
We study the long term orbital evolution of a terrestrial planet under the gravitational perturbations of a giant planet. In particular, we are interested in situations where the two planets are in the same plane and are relatively close. We examine both possible configurations: the giant planet orbit being either outside or inside the orbit of the smaller planet. The perturbing potential is expanded to high orders and an analytical solution of the terrestrial planetary orbit is derived. The analytical estimates are then compared against results from the numerical integration of the full equations of motion and we find that the analytical solution works reasonably well. An interesting finding is that the new analytical estimates improve greatly the predictions for the timescales of the orbital evolution of the terrestrial planet compared to an octupole order expansion. Finally, we briefly discuss possible applications of the analytical estimates in astrophysical problems.
Research on bathymetry estimation by Worldview-2 based with the semi-analytical model
NASA Astrophysics Data System (ADS)
Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.
2015-04-01
The South Sea Islands of China are far from the mainland; reefs make up more than 95% of the South Sea, and most reefs are scattered over disputed, sensitive areas of interest. Methods for accurately obtaining reef bathymetry are therefore urgently needed. Commonly used methods, including sonar, airborne laser, and remote sensing estimation, are limited by the long distance, large area, and sensitive location. Remote sensing data provide an effective way to estimate bathymetry over large areas without physical contact, through the relationship between spectral information and bathymetry. Aimed at the water quality of the South Sea of China, this paper develops a bathymetry estimation method that requires no measured water depth. First, a semi-analytical optimization version of the theoretical interpretation models was studied, using a genetic algorithm to optimize the model. Meanwhile, OpenMP parallel computing was introduced to greatly increase the speed of the semi-analytical optimization model. One island in the South Sea of China was selected as the study area, and measured water depths were used to evaluate the accuracy of bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm performs well in the study area, and the accuracy of the estimated bathymetry in the 0-20 m shallow-water zone is acceptable. The semi-analytical optimization model based on the genetic algorithm solves the problem of bathymetry estimation without water depth measurements. Overall, this paper provides a new bathymetry estimation method for sensitive reefs far from the mainland.
Methods for Estimating Uncertainty in Factor Analytic Solutions
The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DI...
Uncertainty in temperature-based determination of time of death
NASA Astrophysics Data System (ADS)
Weiser, Martin; Erdmann, Bodo; Schenkl, Sebastian; Muggenthaler, Holger; Hubig, Michael; Mall, Gita; Zachow, Stefan
2018-03-01
Temperature-based estimation of time of death (ToD) can be performed either with the help of simple phenomenological models of corpse cooling or with detailed mechanistic (thermodynamic) heat transfer models. The latter are much more complex, but allow higher accuracy of ToD estimation since, in principle, all relevant cooling mechanisms can be taken into account. The potentially higher accuracy depends on the accuracy of tissue and environmental parameters as well as on the geometric resolution. We investigate the impact of parameter variations and geometry representation on the estimated ToD. For this, numerical simulation of analytic heat transport models is performed on a highly detailed 3D corpse model that has been segmented and geometrically reconstructed from a computed tomography (CT) data set, differentiating various organs and tissue types. From that and the prior information available on thermal parameters and their variability, we identify the most crucial parameters to measure or estimate, and obtain an a priori uncertainty quantification for the ToD.
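At the simple phenomenological end of the spectrum mentioned above, ToD can be read off a single-exponential (Newtonian) cooling curve. The parameter values below are illustrative assumptions only, not the paper's calibrated mechanistic model; forensic practice uses more elaborate curves such as the Marshall-Hoare/Henssge model:

```python
import math

def time_since_death(T_body, T0=37.2, T_amb=18.0, k=0.063):
    """Invert Newton's law of cooling, T(t) = T_amb + (T0 - T_amb)*exp(-k*t),
    for the elapsed time t in hours. T0 is an assumed initial core
    temperature (deg C), T_amb the ambient temperature, k a cooling
    constant; all three are illustrative, not forensic reference values."""
    if not T_amb < T_body <= T0:
        raise ValueError("temperature outside the model's valid range")
    return -math.log((T_body - T_amb) / (T0 - T_amb)) / k
```

Uncertainty in any of T0, T_amb, or k propagates directly into the ToD estimate, which is precisely the kind of sensitivity the paper quantifies for the mechanistic model.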
Hyperspectral image reconstruction for x-ray fluorescence tomography
Gürsoy, Doǧa; Biçer, Tekin; Lanzirotti, Antonio; ...
2015-01-01
A penalized maximum-likelihood estimation is proposed to perform hyperspectral (spatio-spectral) image reconstruction for X-ray fluorescence tomography. The approach minimizes a Poisson-based negative log-likelihood of the observed photon counts and uses a penalty term that encourages local continuity of model parameter estimates in the spatial and spectral dimensions simultaneously. The performance of the reconstruction method is demonstrated with experimental data acquired from a seed of Arabidopsis thaliana collected at the 13-ID-E microprobe beamline at the Advanced Photon Source. The resulting element distribution estimates show significantly better reconstruction quality than conventional analytical inversion approaches and allow for a high data compression factor, which can reduce data acquisition times remarkably. In particular, this technique provides the capability to tomographically reconstruct full energy-dispersive spectra without introducing reconstruction artifacts that would impact the interpretation of results.
Conclusions on measurement uncertainty in microbiology.
Forster, Lynne I
2009-01-01
Since its first issue in 1999, testing laboratories wishing to comply with all the requirements of ISO/IEC 17025 have been collecting data for estimating uncertainty of measurement for quantitative determinations. In the microbiological field of testing, some debate has arisen as to whether uncertainty needs to be estimated for each method performed in the laboratory for each type of sample matrix tested. Queries also arise concerning the estimation of uncertainty when plate/membrane filter colony counts are below recommended method counting range limits. A selection of water samples (with low to high contamination) was tested in replicate with the associated uncertainty of measurement being estimated from the analytical results obtained. The analyses performed on the water samples included total coliforms, fecal coliforms, fecal streptococci by membrane filtration, and heterotrophic plate counts by the pour plate technique. For those samples where plate/membrane filter colony counts were ≥20, uncertainty estimates at a 95% confidence level were very similar for the methods, being estimated as 0.13, 0.14, 0.14, and 0.12, respectively. For those samples where plate/membrane filter colony counts were <20, estimated uncertainty values for each sample showed close agreement with published confidence limits established using a Poisson distribution approach.
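The kind of replicate-based uncertainty estimate discussed above can be sketched with the paired-duplicate formula on the log10 scale, in the spirit of ISO/TS 19036-style evaluations; the function names are ours, and a full uncertainty budget contains more components than this repeatability term:

```python
import math

def log10_repeatability_u(pairs):
    """Standard uncertainty on the log10 scale from duplicate colony counts:
    u = sqrt( sum_i (log10(a_i) - log10(b_i))**2 / (2*n) )."""
    n = len(pairs)
    ss = sum((math.log10(a) - math.log10(b)) ** 2 for a, b in pairs)
    return math.sqrt(ss / (2 * n))

def expanded_u(pairs, k=2.0):
    """Expanded uncertainty at roughly 95% confidence (coverage factor k=2)."""
    return k * log10_repeatability_u(pairs)
```

Values like the 0.13-0.14 reported above are of this kind: standard deviations on the log10 scale derived from replicate determinations.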
NASA Technical Reports Server (NTRS)
DellaCorte, Christopher
2010-01-01
Foil gas bearings are a key technology in many commercial and emerging Oil-Free turbomachinery systems. These bearings are non-linear and have been difficult to analytically model in terms of performance characteristics such as load capacity, power loss, stiffness and damping. Previous investigations led to an empirically derived method, a rule-of-thumb, to estimate load capacity. This method has been a valuable tool in system development. The current paper extends this tool concept to include rules for stiffness and damping coefficient estimation. It is expected that these rules will further accelerate the development and deployment of advanced Oil-Free machines operating on foil gas bearings.
NASA Astrophysics Data System (ADS)
Trombetti, Tomaso
This thesis presents an Experimental/Analytical approach to modeling and calibrating shaking tables for structural dynamic applications. This approach was successfully applied to the shaking table recently built in the structural laboratory of the Civil Engineering Department at Rice University. This shaking table is capable of reproducing model earthquake ground motions with a peak acceleration of 6 g's, a peak velocity of 40 inches per second, and a peak displacement of 3 inches, for a maximum payload of 1500 pounds. It has a frequency bandwidth of approximately 70 Hz and is designed to test structural specimens up to 1/5 scale. The rail/table system is mounted on a reaction mass of about 70,000 pounds consisting of three 12 ft x 12 ft x 1 ft reinforced concrete slabs, post-tensioned together and connected to the strong laboratory floor. The slip table is driven by a hydraulic actuator governed by a 407 MTS controller which employs a proportional-integral-derivative-feedforward-differential pressure algorithm to control the actuator displacement. Feedback signals are provided by two LVDT's (monitoring the slip table relative displacement and the servovalve main stage spool position) and by one differential pressure transducer (monitoring the actuator force). The dynamic actuator-foundation-specimen system is modeled and analyzed by combining linear control theory and linear structural dynamics. The analytical model developed accounts for the effects of actuator oil compressibility, oil leakage in the actuator, time delay in the response of the servovalve spool to a given electrical signal, foundation flexibility, and dynamic characteristics of multi-degree-of-freedom specimens. In order to study the actual dynamic behavior of the shaking table, the transfer functions between target and actual table accelerations were identified using experimental results and spectral estimation techniques.
The power spectral density of the system input and the cross power spectral density of the table input and output were estimated using Bartlett's spectral estimation method. The experimentally-estimated table acceleration transfer functions obtained for different working conditions are correlated with their analytical counterparts. As a result of this comprehensive correlation study, a thorough understanding of the shaking table dynamics and its sensitivities to control and payload parameters is obtained. Moreover, the correlation study leads to a calibrated analytical model of the shaking table of high predictive ability. It is concluded that, in its present conditions, the Rice shaking table is able to reproduce, with a high degree of accuracy, model earthquake acceleration time histories in the frequency bandwidth from 0 to 75 Hz. Furthermore, the exhaustive analysis performed indicates that the table transfer function is not significantly affected by the presence of a large (in terms of weight) payload with a fundamental frequency up to 20 Hz. Payloads having a higher fundamental frequency do significantly affect the shaking table performance and require a modification of the table control gain setting that can be easily obtained using the predictive analytical model of the shaking table. The complete description of a structural dynamic experiment performed using the Rice shaking table facility is also reported herein. The object of this experimentation was twofold: (1) to verify the testing capability of the shaking table and (2) to experimentally validate a simplified theory developed by the author, which predicts the maximum rotational response developed by seismic isolated building structures characterized by non-coincident centers of mass and rigidity, when subjected to strong earthquake ground motions.
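The cross-spectral identification step described above can be sketched in miniature. The toy version below uses a naive DFT and Bartlett (non-overlapping segment) averaging to form H(f) = S_xy(f)/S_xx(f); it is a pedagogical sketch, not the instrumentation actually used with the table:

```python
import cmath

def dft(x):
    """Naive O(n^2) DFT; adequate for short teaching-sized segments."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def transfer_estimate(x, y, nseg, seglen):
    """Estimate H(f) = S_xy(f)/S_xx(f) by averaging periodograms over
    nseg non-overlapping segments of length seglen (Bartlett's method)."""
    sxx = [0.0] * seglen
    sxy = [0j] * seglen
    for i in range(nseg):
        xs = dft(x[i * seglen:(i + 1) * seglen])
        ys = dft(y[i * seglen:(i + 1) * seglen])
        for k in range(seglen):
            sxx[k] += (xs[k].conjugate() * xs[k]).real
            sxy[k] += xs[k].conjugate() * ys[k]
    # Guard against unexcited bins with (numerically) zero input power.
    return [sxy[k] / sxx[k] if sxx[k] > 0.0 else 0j for k in range(seglen)]
```

For a memoryless system y = 0.5*x, the estimate returns 0.5 at every excited frequency bin; for the real table, commanded and measured accelerations take the place of x and y.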
Kuczynska, Paulina; Jemiola-Rzeminska, Malgorzata
2017-01-01
Two diatom-specific carotenoids are engaged in the diadinoxanthin cycle, an important mechanism which protects these organisms against photoinhibition caused by absorption of excessive light energy. A high-performance and economical procedure of isolation and purification of diadinoxanthin and diatoxanthin from the marine diatom Phaeodactylum tricornutum using a four-step procedure has been developed. It is based on the use of commonly available materials and does not require advanced technology. Extraction of pigments, saponification, separation by partition and then open column chromatography, which comprise the complete experimental procedure, can be performed within 2 days. This method allows HPLC grade diadinoxanthin and diatoxanthin of a purity of 99 % or more to be obtained, and the efficiency was estimated to be 63 % for diadinoxanthin and 73 % for diatoxanthin. Carefully selected diatom culture conditions as well as analytical ones ensure highly reproducible performance. A protocol can be used to isolate and purify the diadinoxanthin cycle pigments both on analytical and preparative scale.
Performance evaluation of the croissant production line with reparable machines
NASA Astrophysics Data System (ADS)
Tsarouhas, Panagiotis H.
2015-03-01
In this study, analytical probability models were developed for an automated, bufferless serial production system consisting of n machines in series with a common transfer mechanism and control system. Both time to failure and time to repair a failure are assumed to follow exponential distributions. Applying these models, the effect of system parameters on system performance in an actual croissant production line was studied. The production line consists of six workstations with different numbers of reparable machines in series. Mathematical models of the croissant production line were developed using a Markov process. The strength of this study lies in the classification of the whole system into states representing failures of different machines. Failure and repair data from the actual production environment were used to estimate reliability and maintainability for each machine, each workstation, and the entire line based on the analytical models. The analysis provides useful insight into the system's behaviour, helps to find inherent design faults, and suggests optimal modifications to upgrade the system and improve its performance.
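The simplest Markov availability model behind such an analysis, assuming independent two-state (up/down) machines with exponential failure and repair times, can be sketched as follows; the study's full model classifies joint machine states rather than assuming independence, so this is only the baseline approximation:

```python
def machine_availability(lam, mu):
    """Steady-state availability of a single reparable machine modeled as a
    two-state continuous-time Markov chain with exponential failure rate lam
    and repair rate mu. Balance of the chain, lam*P(up) = mu*P(down),
    gives A = mu / (lam + mu)."""
    return mu / (lam + mu)

def line_availability(rates):
    """A bufferless series line stops when any machine is down, so under
    independence the line availability is the product of the machine
    availabilities. rates is a list of (lam, mu) pairs."""
    a = 1.0
    for lam, mu in rates:
        a *= machine_availability(lam, mu)
    return a
```

With real failure/repair data, lam and mu per machine are the reciprocals of the observed mean time to failure and mean time to repair.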
Local Spatial Obesity Analysis and Estimation Using Online Social Network Sensors.
Sun, Qindong; Wang, Nan; Li, Shancang; Zhou, Hongyi
2018-03-15
Recently, online social networks (OSNs) have received considerable attention as a revolutionary platform offering massive social interaction that enables users to be more involved in their own healthcare. The OSNs have also prompted increasing interest in the generation of analytical data models in health informatics. This paper aims at developing an obesity identification, analysis, and estimation model in which each individual user is regarded as an online social network 'sensor' that can provide valuable health information. The OSN-based obesity analytic model requires each sensor node in an OSN to provide associated features, including dietary habit, physical activity, integral/incidental emotions, and self-consciousness. Based on detailed measurements of the correlation between obesity and the proposed features, the OSN obesity analytic model is able to estimate the obesity rate in certain urban areas, and the experimental results demonstrate a high estimation success rate. The measurement and estimation findings produced by the proposed obesity analytic model show that online social networks can be used to analyze local spatial obesity problems effectively.
Loss Factor Estimation Using the Impulse Response Decay Method on a Stiffened Structure
NASA Technical Reports Server (NTRS)
Cabell, Randolph; Schiller, Noah; Allen, Albert; Moeller, Mark
2009-01-01
High-frequency vibroacoustic modeling is typically performed using energy-based techniques such as Statistical Energy Analysis (SEA). Energy models require an estimate of the internal damping loss factor. Unfortunately, the loss factor is difficult to estimate analytically, and experimental methods such as the power injection method can require extensive measurements over the structure of interest. This paper discusses the implications of estimating damping loss factors using the impulse response decay method (IRDM) from a limited set of response measurements. An automated procedure for implementing IRDM is described and then evaluated using data from a finite element model of a stiffened, curved panel. Estimated loss factors are compared with loss factors computed using a power injection method and a manual curve fit. The paper discusses the sensitivity of the IRDM loss factor estimates to damping of connected subsystems and the number and location of points in the measurement ensemble.
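The core IRDM computation is a line fit to the decaying envelope level, with the decay rate DR (dB/s) converted to a loss factor through the standard relation eta = DR/(27.3*f), equivalent to eta = 2.2/(f*T60). A minimal sketch, assuming an ideal single-slope decay and using names of our choosing:

```python
def loss_factor_irdm(times, env_db, freq_hz):
    """Estimate the damping loss factor from an impulse-response decay:
    fit a straight line (ordinary least squares) to the envelope level in
    dB versus time, then convert the decay rate DR = -slope (dB/s) to a
    loss factor via eta = DR / (27.3 * f)."""
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(env_db) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, env_db))
             / sum((t - tbar) ** 2 for t in times))
    return -slope / (27.3 * freq_hz)
```

Real measurements require band-filtering around f and care in choosing the fit window, which is where sensitivity to connected subsystems and measurement locations enters.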
Aquatic concentrations of chemical analytes compared to ecotoxicity estimates
We describe screening level estimates of potential aquatic toxicity posed by 227 chemical analytes that were measured in 25 ambient water samples collected as part of a joint USGS/USEPA drinking water plant study. Measured concentrations were compared to biological effect concent...
DOT National Transportation Integrated Search
1975-05-01
The report describes an analytical approach to estimation of fuel consumption in rail transportation, and provides sample computer calculations suggesting the sensitivity of fuel usage to various parameters. The model used is based upon careful delin...
Potential benefits of propulsion and flight control integration for supersonic cruise vehicles
NASA Technical Reports Server (NTRS)
Berry, D. T.; Schweikhard, W. G.
1976-01-01
Typical airframe/propulsion interactions such as Mach/altitude excursions and inlet unstarts are reviewed. The improvements in airplane performance and flight control that can be achieved by improving the interfaces between propulsion and flight control are estimated. A research program to determine the feasibility of integrating propulsion and flight control is described. This program includes analytical studies and YF-12 flight tests.
On the Probability of Error and Stochastic Resonance in Discrete Memoryless Channels
2013-12-01
Information-Driven Doppler Shift Estimation and Compensation Methods for Underwater Wireless Sensor Networks”, which is to analyze and develop... underwater wireless sensor networks. We formulated an analytic relationship that relates the average probability of error to the system parameters, the...thesis, we studied the performance of Discrete Memoryless Channels (DMC), arising in the context of cooperative underwater wireless sensor networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bourdon, Christopher Jay; Olsen, Michael G.; Gorby, Allen D.
The analytical model for the depth of correlation (measurement depth) of a microscopic particle image velocimetry (micro-PIV) experiment derived by Olsen and Adrian (Exp. Fluids, 29, pp. S166-S174, 2000) has been modified to be applicable to experiments using high numerical aperture optics. A series of measurements are presented that experimentally quantify the depth of correlation of micro-PIV velocity measurements which employ high numerical aperture and magnification optics. These measurements demonstrate that the modified analytical model is quite accurate in estimating the depth of correlation in micro-PIV measurements using this class of optics. Additionally, it was found that the Gaussian particle approximation made in this model does not significantly affect the model's performance. It is also demonstrated that this modified analytical model easily predicts the depth of correlation when viewing into a medium of a different index of refraction than the immersion medium.
Thompson, Craig M.; Royle, J. Andrew; Garner, James D.
2012-01-01
Wildlife management often hinges upon an accurate assessment of population density. Although undeniably useful, many of the traditional approaches to density estimation such as visual counts, livetrapping, or mark–recapture suffer from a suite of methodological and analytical weaknesses. Rare, secretive, or highly mobile species exacerbate these problems through the reality of small sample sizes and movement on and off study sites. In response to these difficulties, there is growing interest in the use of non-invasive survey techniques, which provide the opportunity to collect larger samples with minimal increases in effort, as well as the application of analytical frameworks that are not reliant on large sample size arguments. One promising survey technique, the use of scat detecting dogs, offers a greatly enhanced probability of detection while at the same time generating new difficulties with respect to non-standard survey routes, variable search intensity, and the lack of a fixed survey point for characterizing non-detection. In order to account for these issues, we modified an existing spatially explicit, capture–recapture model for camera trap data to account for variable search intensity and the lack of fixed, georeferenced trap locations. We applied this modified model to a fisher (Martes pennanti) dataset from the Sierra National Forest, California, and compared the results (12.3 fishers/100 km2) to more traditional density estimates. We then evaluated model performance using simulations at 3 levels of population density. Simulation results indicated that estimates based on the posterior mode were relatively unbiased. We believe that this approach provides a flexible analytical framework for reconciling the inconsistencies between detector dog survey data and density estimation procedures.
Computational Fluid Dynamics Uncertainty Analysis Applied to Heat Transfer over a Flat Plate
NASA Technical Reports Server (NTRS)
Groves, Curtis Edward; Ilie, Marcel; Schallhorn, Paul A.
2013-01-01
There have been few discussions on using Computational Fluid Dynamics (CFD) without experimental validation. Pairing experimental data, uncertainty analysis, and analytical predictions provides a comprehensive approach to verification and is the current state of the art. With pressed budgets, collecting experimental data is rare or non-existent. This paper investigates and proposes a method to perform CFD uncertainty analysis only from computational data. The method uses current CFD uncertainty techniques coupled with the Student-t distribution to predict the heat transfer coefficient over a flat plate. The inputs to the CFD model are varied according to a specified tolerance or bias error, and the differences in the results are used to estimate the uncertainty. The variation in each input is ranked from least to greatest to determine the order of importance. The results are compared to heat transfer correlations and conclusions drawn about the feasibility of using CFD without experimental data. The results provide a tactic to analytically estimate the uncertainty in a CFD model when experimental data is unavailable.
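The Student-t step described above can be sketched as follows. The critical values are standard two-tailed 95% table entries; the function name and the use of the sample standard deviation over perturbed-input runs are our own framing of the approach:

```python
import math
import statistics

# Two-tailed 95% Student-t critical values for small samples (key = n, df = n-1).
T95 = {2: 12.706, 3: 4.303, 4: 3.182, 5: 2.776, 6: 2.571, 7: 2.447, 8: 2.365}

def cfd_uncertainty(samples):
    """95% uncertainty band for a quantity (e.g. a heat transfer coefficient)
    computed from n CFD runs with perturbed inputs, using the Student-t
    distribution: U = t_{95, n-1} * s / sqrt(n)."""
    n = len(samples)
    s = statistics.stdev(samples)
    return T95[n] * s / math.sqrt(n)
```

For n = 3 runs the band is wide (t = 4.303), reflecting how little statistical information a handful of CFD runs carries.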
Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.
Brezis, Noam; Bronfman, Zohar Z; Usher, Marius
2015-06-04
We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.
ANALYTICAL METHOD COMPARISONS BY ESTIMATES OF PRECISION AND LOWER DETECTION LIMIT
The paper describes the use of principal component analysis to estimate the operating precision of several different analytical instruments or methods simultaneously measuring a common sample of a material whose actual value is unknown. This approach is advantageous when none of ...
Characterizing nonconstant instrumental variance in emerging miniaturized analytical techniques.
Noblitt, Scott D; Berg, Kathleen E; Cate, David M; Henry, Charles S
2016-04-07
Measurement variance is a crucial aspect of quantitative chemical analysis. Variance directly affects important analytical figures of merit, including detection limit, quantitation limit, and confidence intervals. Most reported analyses for emerging analytical techniques implicitly assume constant variance (homoskedasticity) by using unweighted regression calibrations. Despite the assumption of constant variance, it is known that most instruments exhibit heteroskedasticity, where variance changes with signal intensity. Ignoring nonconstant variance results in suboptimal calibrations, invalid uncertainty estimates, and incorrect detection limits. Three techniques where homoskedasticity is often assumed were examined in this work to evaluate whether heteroskedasticity had a significant quantitative impact: naked-eye, distance-based detection using paper-based analytical devices (PADs); cathodic stripping voltammetry (CSV) with disposable carbon-ink electrode devices; and microchip electrophoresis (MCE) with conductivity detection. Despite these techniques representing a wide range of chemistries and precision, heteroskedastic behavior was confirmed for each. The general variance forms were analyzed, and recommendations for accounting for nonconstant variance are discussed. Monte Carlo simulations of instrument responses were performed to quantify the benefits of weighted regression, and the sensitivity to uncertainty in the variance function was tested. Results show that heteroskedasticity should be considered during development of new techniques; even moderate uncertainty (30%) in the variance function still results in weighted regression outperforming unweighted regression. We recommend the power model of variance because it is easy to apply, requires little additional experimentation, and produces higher-precision results and more reliable uncertainty estimates than assuming homoskedasticity.
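The benefit of weighting under a power-model variance can be reproduced with a small Monte Carlo in the spirit of the simulations described above; the regression is a textbook closed-form weighted least-squares line fit, and all parameter values here are illustrative assumptions:

```python
import random

def fit_line(xs, ys, ws=None):
    """Weighted least-squares fit of y = b0 + b1*x (closed form)."""
    if ws is None:
        ws = [1.0] * len(xs)
    sw = sum(ws)
    xbar = sum(w * x for w, x in zip(ws, xs)) / sw
    ybar = sum(w * y for w, y in zip(ws, ys)) / sw
    b1 = (sum(w * (x - xbar) * (y - ybar) for w, x, y in zip(ws, xs, ys))
          / sum(w * (x - xbar) ** 2 for w, x in zip(ws, xs)))
    return ybar - b1 * xbar, b1

def simulate(trials=2000, seed=7):
    """Heteroskedastic calibration noise following a power model,
    sigma(x) = 0.05 * x; compare the scatter of the slope estimate for
    unweighted (OLS) vs. weighted (WLS, w = 1/sigma^2) regression."""
    rng = random.Random(seed)
    xs = [1.0, 2.0, 5.0, 10.0, 20.0]
    ws = [1.0 / (0.05 * x) ** 2 for x in xs]
    ols, wls = [], []
    for _ in range(trials):
        ys = [2.0 + 3.0 * x + rng.gauss(0.0, 0.05 * x) for x in xs]
        ols.append(fit_line(xs, ys)[1])
        wls.append(fit_line(xs, ys, ws)[1])
    def spread(v):
        m = sum(v) / len(v)
        return sum((u - m) ** 2 for u in v) / len(v)
    return spread(ols), spread(wls)
```

With the true weights w = 1/sigma(x)^2, the Gauss-Markov theorem guarantees the weighted slope has the smaller sampling variance, which the simulation confirms.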
Estimating Aquifer Properties Using Sinusoidal Pumping Tests
NASA Astrophysics Data System (ADS)
Rasmussen, T. C.; Haborak, K. G.; Young, M. H.
2001-12-01
We develop the theoretical and applied framework for using sinusoidal pumping tests to estimate aquifer properties for confined, leaky, and partially penetrating conditions. The framework 1) derives analytical solutions for three boundary conditions suitable for many practical applications, 2) validates the analytical solutions against a finite element model, 3) establishes a protocol for conducting sinusoidal pumping tests, and 4) estimates aquifer hydraulic parameters based on the analytical solutions. The analytical solutions to sinusoidal stimuli in radial coordinates are derived for boundary value problems that are analogous to the Theis (1935) confined aquifer solution, the Hantush and Jacob (1955) leaky aquifer solution, and the Hantush (1964) partially penetrated confined aquifer solution. The analytical solutions compare favorably to a finite-element solution of a simulated flow domain, except in the region immediately adjacent to the pumping well where the implicit assumption of zero borehole radius is violated. The procedure is demonstrated in one unconfined and two confined aquifer units near the General Separations Area at the Savannah River Site, a federal nuclear facility located in South Carolina. Aquifer hydraulic parameters estimated using this framework provide independent confirmation of parameters obtained from conventional aquifer tests. The sinusoidal approach also resulted in the elimination of investigation-derived wastes.
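The idea of reading aquifer properties off a sinusoidal signal can be illustrated with a 1D plane-wave simplification, in which the head amplitude decays as exp(-r*sqrt(omega/(2*D))) with hydraulic diffusivity D = T/S. The radial solutions developed above are more involved, so this conveys only the scaling:

```python
import math

def diffusivity_from_attenuation(omega, dr, amp_ratio):
    """Hydraulic diffusivity from the amplitude attenuation of a sinusoidal
    head signal between two observation points a distance dr apart (1D
    simplification): A2/A1 = exp(-dr * sqrt(omega / (2*D))), so
    D = omega * dr**2 / (2 * ln(A1/A2)**2). omega is the angular frequency
    of the pumping signal, amp_ratio = A2/A1 < 1."""
    return omega * dr ** 2 / (2.0 * math.log(1.0 / amp_ratio) ** 2)
```

Given the pumping frequency and the amplitude ratio observed between two wells, D follows directly; the phase lag between the wells provides an independent second estimate, a useful consistency check in practice.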
A Unified Analysis of Structured Sonar-terrain Data using Bayesian Functional Mixed Models.
Zhu, Hongxiao; Caspers, Philip; Morris, Jeffrey S; Wu, Xiaowei; Müller, Rolf
2018-01-01
Sonar emits pulses of sound and uses the reflected echoes to gain information about target objects. It offers a low cost, complementary sensing modality for small robotic platforms. While existing analytical approaches often assume independence across echoes, real sonar data can have more complicated structures due to device setup or experimental design. In this paper, we consider sonar echo data collected from multiple terrain substrates with a dual-channel sonar head. Our goals are to identify the differential sonar responses to terrains and study the effectiveness of this dual-channel design in discriminating targets. We describe a unified analytical framework that achieves these goals rigorously, simultaneously, and automatically. The analysis was done by treating the echo envelope signals as functional responses and the terrain/channel information as covariates in a functional regression setting. We adopt functional mixed models that facilitate the estimation of terrain and channel effects while capturing the complex hierarchical structure in data. This unified analytical framework incorporates both Gaussian models and robust models. We fit the models using a full Bayesian approach, which enables us to perform multiple inferential tasks under the same modeling framework, including selecting models, estimating the effects of interest, identifying significant local regions, discriminating terrain types, and describing the discriminatory power of local regions. Our analysis of the sonar-terrain data identifies time regions that reflect differential sonar responses to terrains. The discriminant analysis suggests that a multi- or dual-channel design achieves target identification performance comparable with or better than a single-channel design.
Selecting Statistical Procedures for Quality Control Planning Based on Risk Management.
Yago, Martín; Alcover, Silvia
2016-07-01
According to the traditional approach to statistical QC planning, the performance of QC procedures is assessed in terms of the probability of rejecting an analytical run that contains critical-size errors (PEDC). Recently, the maximum expected increase in the number of unacceptable patient results reported during the presence of an undetected out-of-control error condition [Max E(NUF)] has been proposed as an alternative QC performance measure because it is more closely aligned with the current introduction of risk management concepts for QC planning in the clinical laboratory. We used a statistical model to investigate the relationship between PEDC and Max E(NUF) for simple QC procedures widely used in clinical laboratories and to construct charts relating Max E(NUF) to the capability of the analytical process, which allow for QC planning based on the risk of harm to a patient due to the report of erroneous results. A QC procedure shows nearly the same Max E(NUF) value when used for controlling analytical processes with the same capability, and there is a close relationship between PEDC and Max E(NUF) for simple QC procedures; therefore, the value of PEDC can be estimated from the value of Max E(NUF) and vice versa. QC procedures selected for their high PEDC value are also characterized by a low value of Max E(NUF). The PEDC value can be used for estimating the probability of patient harm, allowing for the selection of appropriate QC procedures in QC planning based on risk management. © 2016 American Association for Clinical Chemistry.
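The error-detection probability underlying PEDC is conventionally computed from the normal distribution of control results under a systematic shift. As a hedged sketch (the specific control rule, here the single 1_3s rule, and its parameters are illustrative, not the paper's full model):

```python
from scipy.stats import norm

def p_rejection_1_3s(delta_se, n_controls):
    """Probability that a run is rejected by the 1_3s control rule (any of
    n_controls results outside +/-3 SD) when the process has a systematic
    shift of delta_se SDs."""
    p_within = norm.cdf(3.0 - delta_se) - norm.cdf(-3.0 - delta_se)
    return 1.0 - p_within ** n_controls
```

Evaluating this at the critical systematic error for a given analytical capability gives the kind of PEDC value the abstract relates to Max E(NUF).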
Heath, Garvin A; O'Donoughue, Patrick; Arent, Douglas J; Bazilian, Morgan
2014-08-05
Recent technological advances in the recovery of unconventional natural gas, particularly shale gas, have served to dramatically increase domestic production and reserve estimates for the United States and internationally. This trend has led to lowered prices and increased scrutiny on production practices. Questions have been raised as to how greenhouse gas (GHG) emissions from the life cycle of shale gas production and use compares with that of conventionally produced natural gas or other fuel sources such as coal. Recent literature has come to different conclusions on this point, largely due to differing assumptions, comparison baselines, and system boundaries. Through a meta-analytical procedure we call harmonization, we develop robust, analytically consistent, and updated comparisons of estimates of life cycle GHG emissions for electricity produced from shale gas, conventionally produced natural gas, and coal. On a per-unit electrical output basis, harmonization reveals that median estimates of GHG emissions from shale gas-generated electricity are similar to those for conventional natural gas, with both approximately half that of the central tendency of coal. Sensitivity analysis on the harmonized estimates indicates that assumptions regarding liquids unloading and estimated ultimate recovery (EUR) of wells have the greatest influence on life cycle GHG emissions, whereby shale gas life cycle GHG emissions could approach the range of best-performing coal-fired generation under certain scenarios. Despite clarification of published estimates through harmonization, these initial assessments should be confirmed through methane emissions measurements at components and in the atmosphere and through better characterization of EUR and practices.
NASA Astrophysics Data System (ADS)
Sævik, P. N.; Nixon, C. W.
2017-11-01
We demonstrate how topology-based measures of connectivity can be used to improve analytical estimates of effective permeability in 2-D fracture networks, which is one of the key parameters necessary for fluid flow simulations at the reservoir scale. Existing methods in this field usually compute fracture connectivity using the average fracture length. This approach is valid for ideally shaped, randomly distributed fractures, but is not immediately applicable to natural fracture networks. In particular, natural networks tend to be more connected than randomly positioned fractures of comparable lengths, since natural fractures often terminate in each other. The proposed topological connectivity measure is based on the number of intersections and fracture terminations per sampling area, which for statistically stationary networks can be obtained directly from limited outcrop exposures. To evaluate the method, numerical permeability upscaling was performed on a large number of synthetic and natural fracture networks, with varying topology and geometry. The proposed method was seen to provide much more reliable permeability estimates than the length-based approach, across a wide range of fracture patterns. We summarize our results in a single, explicit formula for the effective permeability.
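Topological characterization of 2-D fracture networks is commonly done by counting node types in a sampling area: isolated tips (I), terminations of one fracture against another (Y), and crossing intersections (X). The sketch below uses these common node-counting conventions; it is a generic illustration, not the paper's specific permeability formula.

```python
def branch_connectivity(n_I, n_Y, n_X, area):
    """Topological connectivity of a 2-D fracture network from node counts.

    n_I: isolated fracture tips; n_Y: abutments (one fracture terminating
    in another); n_X: crossing intersections; area: sampling area.
    Returns (branch density per unit area, connections per branch).
    Each branch has two ends; an I node supplies 1 end, a Y node 3, an X node 4.
    """
    n_branches = (n_I + 3 * n_Y + 4 * n_X) / 2.0
    c_per_branch = (3 * n_Y + 4 * n_X) / n_branches if n_branches else 0.0
    return n_branches / area, c_per_branch
```

Connections per branch ranges from 0 (all fractures isolated) to 2 (fully connected), which is the sense in which natural networks dominated by Y and X nodes are better connected than length statistics alone would suggest.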
Hyltoft Petersen, Per; Klee, George G
2014-03-20
Diagnostic decisions based on decision limits according to medical guidelines are different from the majority of clinical decisions due to the strict dichotomization of patients into diseased and non-diseased. Consequently, the influence of analytical performance is more critical than for other diagnostic decisions where much other information is included. The aim of this opinion paper is to investigate the consequences of analytical quality and other circumstances for the outcome of "Guideline-Driven Medical Decision Limits". Effects of analytical bias and imprecision should be investigated separately, and analytical quality specifications should be estimated accordingly. Use of sharp decision limits does not account for biological variation, and the effects of this variation are closely connected with the effects of analytical performance. Such relationships are investigated for the guidelines for HbA1c in the diagnosis of diabetes and for risk of coronary heart disease based on serum cholesterol. A second sampling at diagnosis gives a dramatic reduction in the effects of analytical quality, showing minimal influence of imprecision up to 3 to 5% for two independent samplings, whereas the reduction in bias is more moderate, and a 2% increase in concentration doubles the percentage of false-positive diagnoses, both for HbA1c and cholesterol. An alternative approach comes from the current application of guidelines for follow-up laboratory tests according to clinical procedure orders, e.g. the frequency of parathyroid hormone requests as a function of serum calcium concentration. Here, the specifications for bias can be evaluated from the functional increase in requests with increasing serum calcium concentration. In consequence of the difficulties with biological variation, and given the concentration dependence of follow-up test frequency already in practical use, a kind of probability function for diagnosis as a function of the key analyte is proposed.
Copyright © 2013 Elsevier B.V. All rights reserved.
Predicting Operator Execution Times Using CogTool
NASA Technical Reports Server (NTRS)
Santiago-Espada, Yamira; Latorella, Kara A.
2013-01-01
Researchers and developers of NextGen systems can use predictive human performance modeling tools as an initial approach to obtain skilled user performance times analytically, before system testing with users. This paper describes CogTool models of a two-pilot crew executing two different types of datalink clearance acceptance tasks on two different simulation platforms. The CogTool time estimates for accepting and executing Required Time of Arrival and Interval Management clearances were compared to empirical data observed in videotapes and recorded in simulation files. Results indicate no statistically significant difference between the empirical data and the CogTool predictions. A population comparison test found no significant differences between the CogTool estimates and the empirical execution times for any of the four test conditions. We discuss modeling caveats and considerations for applying CogTool to crew performance modeling in advanced cockpit environments.
GLONASS orbit/clock combination in VNIIFTRI
NASA Astrophysics Data System (ADS)
Bezmenov, I.; Pasynok, S.
2015-08-01
An algorithm and a program for GLONASS satellite orbit/clock combination based on daily precise orbits submitted by several Analytic Centers were developed. Theoretical estimates for the RMS of the combined orbit positions were derived. It was shown that, provided the RMS values of the satellite orbits produced by the Analytic Centers over a long time interval are commensurable, the RMS of the combined orbit positions is no greater than the RMS of the satellite positions estimated by any single Analytic Center.
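The abstract does not spell out the combination algorithm, but the stated property (combined RMS no worse than any single center's) is characteristic of inverse-variance weighting. A minimal sketch under that assumption, for one coordinate of one satellite:

```python
import numpy as np

def combine_orbits(positions, rms):
    """Inverse-variance weighted combination of one satellite coordinate
    as estimated by several analysis centers.

    positions: per-center estimates; rms: their RMS errors.
    Returns the combined estimate and its formal RMS, which is never larger
    than the smallest individual RMS.
    """
    positions = np.asarray(positions, dtype=float)
    w = 1.0 / np.asarray(rms, dtype=float) ** 2
    combined = np.sum(w * positions) / np.sum(w)
    combined_rms = 1.0 / np.sqrt(np.sum(w))
    return combined, combined_rms
```
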
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ligotke, M.W.; Pool, K.H.; Lucke, R.B.
1995-10-01
This report describes inorganic and organic analyses results from in situ samples obtained from the headspace of the Hanford waste storage Tank 241-TY-104 (referred to as Tank TY-104). The results described here were obtained to support safety and toxicological evaluations. A summary of the results for inorganic and organic analytes is listed in Table 1. Detailed descriptions of the results appear in the text. Quantitative results were obtained for the inorganic compounds ammonia (NH{sub 3}), nitrogen dioxide (NO{sub 2}), nitric oxide (NO), and water (H{sub 2}O). Sampling for hydrogen cyanide (HCN) and sulfur oxides (SO{sub x}) was not performed. In addition, the authors looked for the 39 TO-14 compounds plus an additional 14 analytes. Of these, eight were observed above the 5-ppbv reporting cutoff. Twenty-four organic tentatively identified compounds (TICs) were observed above the reporting cutoff of (ca.) 10 ppbv and are reported with concentrations that are semiquantitative estimates based on internal standard response factors. The 10 organic analytes with the highest estimated concentrations are listed in Table 1 and account for approximately 86% of the total organic components in Tank TY-104. Tank TY-104 is on the Ferrocyanide Watch List.
Metallic Rotor Sizing and Performance Model for Flywheel Systems
NASA Technical Reports Server (NTRS)
Moore, Camille J.; Kraft, Thomas G.
2012-01-01
The NASA Glenn Research Center (GRC) is developing flywheel system requirements and designs for terrestrial and spacecraft applications. Several generations of flywheels have been designed and tested at GRC using in-house expertise in motors, magnetic bearings, controls, materials and power electronics. The maturation of a flywheel system from the concept phase to the preliminary design phase is accompanied by maturation of the Integrated Systems Performance model, where estimating relationships are replaced by physics based analytical techniques. The modeling can incorporate results from engineering model testing and emerging detail from the design process.
Cache-based error recovery for shared memory multiprocessor systems
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.
1989-01-01
A multiprocessor cache-based checkpointing and recovery scheme for recovering from transient processor errors in a shared-memory multiprocessor with private caches is presented. New implementation techniques that use checkpoint identifiers and recovery stacks to reduce performance degradation in processor utilization during normal execution are examined. This cache-based checkpointing technique prevents rollback propagation, provides for rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions that take error latency into account are presented.
NASA Astrophysics Data System (ADS)
Ghorbani, A.; Farahani, M. Mahmoodi; Rabbani, M.; Aflaki, F.; Waqifhosain, Syed
2008-01-01
In this paper we propose uncertainty estimation for the analytical results we obtained from determination of Ni, Pb and Al by solid-phase extraction and inductively coupled plasma optical emission spectrometry (SPE-ICP-OES). The procedure is based on the retention of analytes in the form of 8-hydroxyquinoline (8-HQ) complexes on a mini column of XAD-4 resin and subsequent elution with nitric acid. The influence of various analytical parameters, including the amount of solid phase, pH, elution factors (concentration and volume of eluting solution), volume of sample solution, and amount of ligand, on the extraction efficiency of analytes was investigated. To estimate the uncertainty of the analytical results obtained, we propose assessing trueness using spiked samples. Two types of bias are calculated in the assessment of trueness: a proportional bias and a constant bias. We applied a nested design for calculating the proportional bias and the Youden method to calculate the constant bias. The results we obtained for proportional bias are calculated from spiked samples. In this case, the concentration found is plotted against the concentration added, and the slope of the standard-addition curve is an estimate of the method recovery. The estimated average recovery of the method in Karaj river water is: (1.004±0.0085) for Ni, (0.999±0.010) for Pb and (0.987±0.008) for Al.
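The proportional-bias estimate described above is simply the slope of found-vs-added concentrations. A minimal sketch (the data values are made up for illustration):

```python
import numpy as np

def method_recovery(added, found):
    """Proportional bias via standard additions: least-squares slope of
    'found' concentration vs 'added' concentration. A slope of 1.0 means
    100% recovery. Returns (slope, intercept); a nonzero intercept hints
    at constant bias."""
    slope, intercept = np.polyfit(np.asarray(added, dtype=float),
                                  np.asarray(found, dtype=float), 1)
    return slope, intercept
```
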
Alberer, Martin; Hoefele, Julia; Benz, Marcus R; Bökenkamp, Arend; Weber, Lutz T
2017-01-01
Measurement of inulin clearance is considered to be the gold standard for determining kidney function in children, but this method is time consuming and expensive. The glomerular filtration rate (GFR) is on the other hand easier to calculate by using various creatinine- and/or cystatin C (Cys C)-based formulas. However, for the determination of serum creatinine (Scr) and Cys C, different and non-interchangeable analytical methods exist. Given the fact that different analytical methods for the determination of creatinine and Cys C were used in order to validate existing GFR formulas, clinicians should be aware of the type used in their local laboratory. In this study, we compared GFR results calculated on the basis of different GFR formulas and either used Scr and Cys C values as determined by the analytical method originally employed for validation or values obtained by an alternative analytical method to evaluate any possible effects on the performance. Cys C values determined by means of an immunoturbidimetric assay were used for calculating the GFR using equations in which this analytical method had originally been used for validation. Additionally, these same values were then used in other GFR formulas that had originally been validated using a nephelometric immunoassay for determining Cys C. The effect of using either the compatible or the possibly incompatible analytical method for determining Cys C in the calculation of GFR was assessed in comparison with the GFR measured by creatinine clearance (CrCl). Unexpectedly, using GFR equations that employed Cys C values derived from a possibly incompatible analytical method did not result in a significant difference concerning the classification of patients as having normal or reduced GFR compared to the classification obtained on the basis of CrCl. Sensitivity and specificity were adequate. 
On the other hand, formulas using Cys C values derived from a compatible analytical method partly showed insufficient performance when compared to CrCl. Although clinicians should be aware of applying a GFR formula that is compatible with the locally used analytical method for determining Cys C and creatinine, other factors might be more crucial for the calculation of correct GFR values.
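To make the formula-compatibility issue concrete, here is one of the creatinine-based equations of the kind the study compares, the bedside Schwartz formula for children, together with the conventional dichotomization into normal versus reduced GFR. The threshold of 90 mL/min/1.73 m² is the usual convention and an assumption here, not necessarily the study's cutoff; note that this formula was validated against enzymatic creatinine, which is exactly the method-compatibility caveat the abstract raises.

```python
def egfr_bedside_schwartz(height_cm, scr_mg_dl):
    """Bedside Schwartz estimate of GFR in children (mL/min/1.73 m^2):
    eGFR = 0.413 * height(cm) / Scr(mg/dL)."""
    return 0.413 * height_cm / scr_mg_dl

def classify_gfr(egfr, threshold=90.0):
    """Dichotomize as in the study's comparison: 'normal' vs 'reduced'."""
    return "normal" if egfr >= threshold else "reduced"
```
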
Prompt radiation, shielding and induced radioactivity in a high-power 160 MeV proton linac
NASA Astrophysics Data System (ADS)
Magistris, Matteo; Silari, Marco
2006-06-01
CERN is designing a 160 MeV proton linear accelerator, both for a future intensity upgrade of the LHC and as a possible first stage of a 2.2 GeV superconducting proton linac. A first estimate of the required shielding was obtained by means of a simple analytical model. The source terms and the attenuation lengths used in the present study were calculated with the Monte Carlo cascade code FLUKA. Detailed FLUKA simulations were performed to investigate the contribution of neutron skyshine and backscattering to the expected dose rate in the areas around the linac tunnel. An estimate of the induced radioactivity in the magnets, vacuum chamber, the cooling system and the concrete shield was performed. A preliminary thermal study of the beam dump is also discussed.
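Simple analytical shielding models of the kind mentioned combine a source term, geometric attenuation, and exponential attenuation in the shield. The generic form is sketched below; the actual source terms and attenuation lengths in the study come from FLUKA, and all parameter values here are placeholders.

```python
import numpy as np

def dose_rate_behind_shield(H0, r, d, attenuation_length):
    """Point-source line-of-sight shielding estimate:
        H = H0 * exp(-d / lambda) / r**2
    H0: source term (dose rate x m^2 at 1 m, unshielded, for the relevant
        emission angle and beam energy);
    r: distance from the loss point (m);
    d: shield thickness along the line of sight (m);
    attenuation_length: lambda of the shield material (m)."""
    return H0 * np.exp(-d / attenuation_length) / r**2
```
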
NASA Astrophysics Data System (ADS)
Banet, Matthias T.; Spencer, Mark F.
2017-09-01
Spatial-heterodyne interferometry is a robust solution for deep-turbulence wavefront sensing. With that said, this paper analyzes the focal-plane array sampling requirements for spatial-heterodyne systems operating in the off-axis pupil plane recording geometry. To assess spatial-heterodyne performance, we use a metric referred to as the field-estimated Strehl ratio. We first develop an analytical description of performance with respect to the number of focal-plane array pixels across the Fried coherence diameter and then verify our results with wave-optics simulations. The analysis indicates that at approximately 5 focal-plane array pixels across the Fried coherence diameter, the field-estimated Strehl ratios begin to exceed 0.9, which is indicative of largely diffraction-limited results.
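A common way to define a field-estimated Strehl ratio is the normalized overlap between the estimated and true complex fields; the paper's exact definition may differ, so treat this as an assumed, illustrative form:

```python
import numpy as np

def field_estimated_strehl(E_est, E_true):
    """Normalized overlap between an estimated and a true complex field:
        |<E_est, E_true>|^2 / (<E_est, E_est> * <E_true, E_true>).
    Equals 1.0 for a perfect estimate (up to overall piston and amplitude),
    and falls toward 0 as the estimated phase decorrelates from the truth."""
    num = np.abs(np.vdot(E_est, E_true)) ** 2
    den = np.vdot(E_est, E_est).real * np.vdot(E_true, E_true).real
    return num / den
```
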
Building pit dewatering: application of transient analytic elements.
Zaadnoordijk, Willem J
2006-01-01
Analytic elements are well suited for the design of building pit dewatering. Wells and drains can be modeled accurately by analytic elements, both nearby to determine the pumping level and at some distance to verify the targeted drawdown at the building site and to estimate the consequences in the vicinity. The ability to shift locations of wells or drains easily makes the design process very flexible. The temporary pumping has transient effects, for which transient analytic elements may be used. This is illustrated using the free, open-source, object-oriented analytic element simulator Tim(SL) for the design of a building pit dewatering near a canal. Steady calculations are complemented with transient calculations. Finally, the bandwidths of the results are estimated using linear variance analysis.
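The transient behavior underlying such a dewatering design can be sketched with the Theis well function and superposition of wells; this is the basic mechanism, not the TimSL implementation, and the aquifer parameters and well layout below are invented for illustration.

```python
import numpy as np
from scipy.special import exp1

def theis_drawdown(Q, T, S, r, t):
    """Transient drawdown of a single well (Theis solution):
    s = Q / (4*pi*T) * W(u), u = r^2 * S / (4*T*t),
    with the well function W equal to the exponential integral E1."""
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

def pit_drawdown(wells, T, S, x, y, t):
    """Superpose several dewatering wells; wells = [(xw, yw, Q), ...].
    Linearity of the governing equation lets individual analytic elements
    simply add, which is what makes shifting well locations so cheap."""
    s = 0.0
    for xw, yw, Q in wells:
        r = np.hypot(x - xw, y - yw)
        s += theis_drawdown(Q, T, S, r, t)
    return s
```
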
Evaluating Trends in Historical PM2.5 Element Concentrations by Reanalyzing a 15-Year Sample Archive
NASA Astrophysics Data System (ADS)
Hyslop, N. P.; White, W. H.; Trzepla, K.
2014-12-01
The IMPROVE (Interagency Monitoring of PROtected Visual Environments) network monitors aerosol concentrations at 170 remote sites throughout the United States. Twenty-four-hour filter samples of particulate matter are collected every third day and analyzed for chemical composition. About 30 of the sites have operated continuously since 1988, and the sustained data record (http://views.cira.colostate.edu/web/) offers a unique window on regional aerosol trends. All elemental analyses have been performed by Crocker Nuclear Laboratory at the University of California in Davis, and sample filters collected since 1995 are archived on campus. The suite of reported elements has remained constant, but the analytical methods employed for their determination have evolved. For example, the elements Na - Mn were determined by PIXE until November 2001, then by XRF analysis in a He-flushed atmosphere through 2004, and by XRF analysis in vacuum since January 2005. In addition to these fundamental changes, incompletely-documented operational factors such as detector performance and calibration details have introduced variations in the measurements. Because the past analytical methods were non-destructive, the archived filters can be re-analyzed with the current analytical systems and protocols. The 15-year sample archives from Great Smoky Mountains (GRSM), Mount Rainier (MORA), and Point Reyes National Parks (PORE) were selected for reanalysis. The agreement between the new analyses and original determinations varies with element and analytical era. The graph below compares the trend estimates for all the elements measured by IMPROVE based on the original and repeat analyses; the elements identified in color are measured above the detection limit more than 90% of the time. The trend estimates are sensitive to the treatment of non-detect data. The original and reanalysis trends are indistinguishable (have overlapping confidence intervals) for most of the well-detected elements.
Post-standardization of routine creatinine assays: are they suitable for clinical applications.
Jassam, Nuthar; Weykamp, Cas; Thomas, Annette; Secchiero, Sandra; Sciacovelli, Laura; Plebani, Mario; Thelen, Marc; Cobbaert, Christa; Perich, Carmen; Ricós, Carmen; Paula, Faria A; Barth, Julian H
2017-05-01
Introduction: Reliable serum creatinine measurements are of vital importance for the correct classification of chronic kidney disease and early identification of kidney injury. The National Kidney Disease Education Programme working group and other groups have defined clinically acceptable analytical limits for creatinine methods. The aim of this study was to re-evaluate the performance of routine creatinine methods in the light of these defined limits so as to assess their suitability for clinical practice. Method: In collaboration with the Dutch External Quality Assurance scheme, six frozen commutable samples, with creatinine concentrations ranging from 80 to 239 μmol/L and traceable to isotope dilution mass spectrometry, were circulated to 91 laboratories in four European countries for creatinine measurement and estimated glomerular filtration rate calculation. Two of the six samples were spiked with glucose to give high and low final glucose concentrations. Results: Data from 89 laboratories were analysed for bias and imprecision (%CV) for each creatinine assay and for total error of the estimated glomerular filtration rate. The participating laboratories used analytical instruments from four manufacturers: Abbott, Beckman, Roche and Siemens. All enzymatic methods in this study complied with the National Kidney Disease Education Programme working group recommended bias limit of 5% above a creatinine concentration of 100 μmol/L. They showed no evidence of interference from glucose and complied with the clinically recommended CV of ≤4% across the analytical range. In contrast, the Jaffe methods showed variable performance with regard to glucose interference and unsatisfactory bias and precision. Conclusion: Jaffe-based creatinine methods still exhibit considerable analytical variability in terms of bias, imprecision and lack of specificity, and this variability brings into question their clinical utility.
We believe that clinical laboratories and manufacturers should work together to phase out the use of relatively non-specific Jaffe methods and replace them with more specific methods that are enzyme based.
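The bias and imprecision checks described above are straightforward to compute from EQA replicate results against an IDMS-traceable target. A minimal sketch, with the 5% bias and 4% CV limits taken from the abstract and the data values invented:

```python
import numpy as np

def bias_and_cv(results, target):
    """Per-method performance from replicate EQA results against an
    IDMS-traceable target value: (percent bias, percent CV)."""
    results = np.asarray(results, dtype=float)
    mean = results.mean()
    bias_pct = 100.0 * (mean - target) / target
    cv_pct = 100.0 * results.std(ddof=1) / mean
    return bias_pct, cv_pct

def meets_limits(bias_pct, cv_pct, max_bias=5.0, max_cv=4.0):
    """Check against the limits cited in the study: |bias| <= 5%, CV <= 4%."""
    return abs(bias_pct) <= max_bias and cv_pct <= max_cv
```
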
An isotope-dilution standard GC/MS/MS method for steroid hormones in water
Foreman, William T.; Gray, James L.; ReVello, Rhiannon C.; Lindley, Chris E.; Losche, Scott A.
2013-01-01
An isotope-dilution quantification method was developed for 20 natural and synthetic steroid hormones and additional compounds in filtered and unfiltered water. Deuterium- or carbon-13-labeled isotope-dilution standards (IDSs) are added to the water sample, which is passed through an octadecylsilyl solid-phase extraction (SPE) disk. Following extract cleanup using Florisil SPE, method compounds are converted to trimethylsilyl derivatives and analyzed by gas chromatography with tandem mass spectrometry. Validation matrices included reagent water, wastewater-affected surface water, and primary (no biological treatment) and secondary wastewater effluent. Overall method recovery for all analytes in these matrices averaged 100%, with an overall relative standard deviation of 28%. Mean recoveries of the 20 individual analytes for spiked reagent-water samples prepared along with field samples analyzed in 2009–2010 ranged from 84–104%, with relative standard deviations of 6–36%. Detection levels estimated using ASTM International's D6091–07 procedure range from 0.4 to 4 ng/L for 17 analytes. Higher censoring levels of 100 ng/L for bisphenol A and 200 ng/L for cholesterol and 3-beta-coprostanol are used to prevent bias and false positives associated with the presence of these analytes in blanks. Absolute method recoveries of the IDSs provide sample-specific performance information and guide data reporting. Careful selection of labeled compounds for use as IDSs is important because both inexact IDS-analyte matches and deuterium label loss affect an IDS's ability to emulate analyte performance. Six IDS compounds initially tested and applied in this method exhibited deuterium loss and are not used in the final method.
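The reason losses during extraction and derivatization cancel in this design is visible in the generic isotope-dilution calculation: the analyte is quantified from its response ratio to the co-processed labeled standard. This is the textbook form of the calculation, not the method's exact calibration model:

```python
def isotope_dilution_conc(area_analyte, area_ids, conc_ids, rrf):
    """Generic isotope-dilution quantification: scale the analyte/IDS peak-area
    ratio by the spiked IDS concentration and the relative response factor
    (rrf) from calibration. Because analyte and IDS suffer the same losses
    through SPE and derivatization, those losses cancel in the ratio."""
    return (area_analyte / area_ids) * conc_ids / rrf
```
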
Olariu, Elena; Cadwell, Kevin K; Hancock, Elizabeth; Trueman, David; Chevrou-Severac, Helene
2017-01-01
Although Markov cohort models represent one of the most common forms of decision-analytic models used in health care decision-making, correct implementation of such models requires reliable estimation of transition probabilities. This study sought to identify consensus statements or guidelines that detail how such transition probability matrices should be estimated. A literature review was performed to identify relevant publications in the following databases: Medline, Embase, the Cochrane Library, and PubMed. Electronic searches were supplemented by manual searches of health technology assessment (HTA) websites in Australia, Belgium, Canada, France, Germany, Ireland, Norway, Portugal, Sweden, and the UK. One reviewer assessed studies for eligibility. Of the 1,931 citations identified in the electronic searches, no studies met the inclusion criteria for full-text review, and no guidelines on transition probabilities in Markov models were identified. Manual searching of the websites of HTA agencies identified ten guidelines on economic evaluations (Australia, Belgium, Canada, France, Germany, Ireland, Norway, Portugal, Sweden, and the UK). All identified guidelines provided general guidance on how to develop economic models, but none provided guidance on the calculation of transition probabilities. One relevant publication was identified following review of the reference lists of HTA agency guidelines: the International Society for Pharmacoeconomics and Outcomes Research taskforce guidance. This provided limited guidance on the use of rates and probabilities. There is limited formal guidance available on the estimation of transition probabilities for use in decision-analytic models. Given the increasing importance of cost-effectiveness analysis in the decision-making processes of HTA bodies and other medical decision-makers, there is a need for additional guidance to inform a more consistent approach to decision-analytic modeling.
Further research should be done to develop more detailed guidelines on the estimation of transition probabilities.
Estimation and Fusion for Tracking Over Long-Haul Links Using Artificial Neural Networks
Liu, Qiang; Brigham, Katharine; Rao, Nageswara S. V.
2017-02-01
In a long-haul sensor network, sensors are remotely deployed over a large geographical area to perform certain tasks, such as tracking and/or monitoring of one or more dynamic targets. A remote fusion center fuses the information provided by these sensors so that a final estimate of certain target characteristics, such as the position, is expected to possess much improved quality. In this paper, we pursue learning-based approaches for estimation and fusion of target states in long-haul sensor networks. In particular, we consider learning based on various implementations of artificial neural networks (ANNs). Finally, the joint effect of (i) imperfect communication conditions, namely, link-level loss and delay, and (ii) computation constraints, in the form of low-quality sensor estimates, on ANN-based estimation and fusion, is investigated by means of analytical and simulation studies.
Performance analysis of cross-layer design with average PER constraint over MIMO fading channels
NASA Astrophysics Data System (ADS)
Dang, Xiaoyu; Liu, Yan; Yu, Xiangbin
2015-12-01
In this article, a cross-layer design (CLD) scheme for a multiple-input multiple-output (MIMO) system with the dual constraints of imperfect feedback and average packet error rate (PER) is presented, based on the combination of adaptive modulation and automatic repeat request protocols. The design performance is also evaluated over wireless Rayleigh fading channels. With the constraint of target PER and average PER, the optimum switching thresholds (STs) for attaining maximum spectral efficiency (SE) are developed. An effective iterative algorithm for finding the optimal STs is proposed via Lagrange multiplier optimisation. With different thresholds available, analytical expressions for the average SE and PER are provided for performance evaluation. To avoid the performance loss caused by a conventional single estimate, a multiple outdated estimates (MOE) method, which utilises multiple previous channel estimates, is presented for CLD to improve system performance. Numerical simulations of average PER and SE are consistent with the theoretical analysis, and the developed CLD with average PER constraint meets the target PER requirement and shows better performance than the conventional CLD with instantaneous PER constraint. In particular, the CLD based on the MOE method clearly increases the system SE and greatly reduces the impact of feedback delay.
Saeidi, Iman; Barfi, Behruz; Payrovi, Moazameh; Feizy, Javid; Sheibani, Hojat A; Miri, Mina; Ghollasi Moud, Farahnaz
2015-01-01
Polyamide (PA) was used as an efficient sorbent for the solid-phase extraction (SPE) of Sudan dyes II, III and Red 7B from saffron and urine, followed by their determination by HPLC. The optimum conditions for SPE were achieved using 7 mL methanol/water (1:9, v/v, pH 7) as the washing solvent and 3 mL tetrahydrofuran for elution. Good clean-up and high (above 90%) recoveries were observed for all the analytes. The optimized mobile phase composition for HPLC analysis of these compounds was methanol-water (70:30, v/v). The SPE parameters, such as the maximum loading capacity and breakthrough volume, were also determined for each analyte. The limits of detection (LODs), limits of quantification (LOQs), linear ranges and recoveries for the analytes were 4.6-6.6 μg/L, 13.0-19.8 μg/L, 13.0-5000 μg/L (r² > 0.99) and 92.5%-113.4%, respectively. The precisions (RSDs) of the overall analytical procedure, estimated from five replicate measurements of Sudan II, III and Red 7B in saffron and urine samples, were 2.3%, 1.8% and 3.6%, respectively. The developed method is simple and was successfully applied to the determination of Sudan dyes in saffron and urine samples by HPLC with UV detection.
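Figures of merit like the LODs and LOQs quoted above are conventionally derived from a calibration line. A minimal sketch using ICH-style definitions (LOD = 3.3·s/slope, LOQ = 10·s/slope), with invented calibration data rather than the paper's measurements:

```python
import numpy as np

# Invented calibration data (concentration in ug/L vs. detector response).
conc = np.array([0.0, 10.0, 25.0, 50.0, 100.0, 250.0])
signal = np.array([0.5, 12.3, 30.1, 59.8, 121.0, 299.5])

# Ordinary least-squares calibration line.
slope, intercept = np.polyfit(conc, signal, 1)
resid = signal - (slope * conc + intercept)
s_res = np.sqrt(np.sum(resid ** 2) / (len(conc) - 2))  # residual std. dev.

# ICH-style detection and quantification limits.
lod = 3.3 * s_res / slope
loq = 10.0 * s_res / slope
```

With real data one would often use blank replicates or a low-concentration calibration for the noise estimate; the points above merely exercise the formulas.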
Estimating reliable paediatric reference intervals in clinical chemistry and haematology.
Ridefelt, Peter; Hellberg, Dan; Aldrimer, Mattias; Gustafsson, Jan
2014-01-01
Very few high-quality studies on paediatric reference intervals for general clinical chemistry and haematology analytes have been performed. Three recent prospective community-based projects utilising blood samples from healthy children in Sweden, Denmark and Canada have substantially improved the situation. The present review summarises current reference interval studies for common clinical chemistry and haematology analyses. ©2013 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
Switching transients in high-frequency high-power converters using power MOSFET's
NASA Technical Reports Server (NTRS)
Sloane, T. H.; Owen, H. A., Jr.; Wilson, T. G.
1979-01-01
The use of MOSFETs in a high-frequency high-power dc-to-dc converter is investigated. Consideration is given to the phenomena associated with the paralleling of MOSFETs and to the effect of stray circuit inductances on the converter circuit performance. Analytical relationships between various time constants during the turning-on and turning-off intervals are derived which provide estimates of plateau and peak levels during these intervals.
Comparison of Three Methods for Wind Turbine Capacity Factor Estimation
Ditkovich, Y.; Kuperman, A.
2014-01-01
Three approaches to calculating the capacity factor of fixed-speed wind turbines are reviewed and compared using a case study. The first, “quasiexact” approach utilizes discrete raw wind data (in histogram form) and the manufacturer-provided turbine power curve (also in discrete form) to numerically calculate the capacity factor. The second, “analytic” approach employs a continuous probability distribution function fitted to the wind data, together with a continuous turbine power curve resulting from double polynomial fitting of the manufacturer-provided power curve data. The latter approach, while an approximation, can be solved analytically, providing valuable insight into the aspects affecting the capacity factor; moreover, several other figures of merit of wind turbine performance may be derived from it. The third, “approximate” approach, valid for Rayleigh winds only, employs a nonlinear approximation of the capacity factor versus average wind speed curve, requiring only the rated power and rotor diameter of the turbine. It is shown that the results obtained with the three approaches are very close, supporting the validity of the analytically derived approximations, which may be used for wind turbine performance evaluation.
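The numerical capacity-factor calculation described above amounts to integrating a power curve against a wind-speed distribution. A sketch using a Rayleigh density and an idealized power curve; the turbine parameters below are invented for illustration, not from the paper:

```python
import numpy as np

def rayleigh_pdf(v, v_mean):
    """Rayleigh probability density for wind speed v with mean v_mean."""
    return (np.pi * v / (2.0 * v_mean**2)) * np.exp(-np.pi * v**2 / (4.0 * v_mean**2))

def power_curve(v, v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0, p_rated=2.0e6):
    """Idealized fixed-speed power curve: cubic ramp, then rated, then cut-out."""
    p = np.zeros_like(v)
    ramp = (v >= v_cut_in) & (v < v_rated)
    p[ramp] = p_rated * (v[ramp]**3 - v_cut_in**3) / (v_rated**3 - v_cut_in**3)
    p[(v >= v_rated) & (v <= v_cut_out)] = p_rated
    return p

def capacity_factor(v_mean, p_rated=2.0e6):
    """Mean power over rated power, by trapezoid integration of P(v)*pdf(v)."""
    v = np.linspace(0.0, 30.0, 3001)
    y = power_curve(v, p_rated=p_rated) * rayleigh_pdf(v, v_mean)
    mean_power = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(v))
    return mean_power / p_rated
```

In practice the histogram of measured wind speeds replaces the Rayleigh density, and the discrete manufacturer power curve replaces the idealized ramp.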
Stretchy binary classification.
Toh, Kar-Ann; Lin, Zhiping; Sun, Lei; Li, Zhengguo
2018-01-01
In this article, we introduce an analytic formulation for compressive binary classification. The formulation seeks to solve the least ℓp-norm of the parameter vector subject to a classification error constraint. An analytic and stretchable estimation is conjectured, where the estimation can be viewed as an extension of the pseudoinverse with left and right constructions. Our variance analysis indicates that the estimation based on the left pseudoinverse is unbiased and the estimation based on the right pseudoinverse is biased. Sparseness can be obtained for the biased estimation under certain mild conditions. The proposed estimation is investigated numerically using both synthetic and real-world data. Copyright © 2017 Elsevier Ltd. All rights reserved.
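The left and right pseudoinverse constructions mentioned above can be illustrated for the ordinary ℓ2 case (a sketch only; the paper's ℓp formulation and stretchable estimator are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Overdetermined case (more samples than features): left pseudoinverse,
# w = (X^T X)^{-1} X^T y, the usual least-squares solution.
X_tall = rng.standard_normal((50, 5))
y_tall = rng.standard_normal(50)
w_left = np.linalg.solve(X_tall.T @ X_tall, X_tall.T @ y_tall)

# Underdetermined case (more features than samples): right pseudoinverse,
# w = X^T (X X^T)^{-1} y, the minimum-norm interpolating solution.
X_wide = rng.standard_normal((5, 50))
y_wide = rng.standard_normal(5)
w_right = X_wide.T @ np.linalg.solve(X_wide @ X_wide.T, y_wide)
```

Both coincide with `np.linalg.pinv` applied to the respective matrices; the right construction exactly interpolates the targets while minimizing the parameter norm.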
Gamage, K A A; Joyce, M J
2011-10-01
A novel analytical approach is described that accounts for self-shielding of γ radiation in decommissioning scenarios. The approach is developed with plutonium-239, cobalt-60 and caesium-137 as examples; stainless steel and concrete have been chosen as the media for cobalt-60 and caesium-137, respectively. The analytical methods have been compared with MCNPX 2.6.0 simulations. A simple, linear correction factor relates the analytical results to the simulated estimates. This has the potential to greatly simplify the estimation of self-shielding effects in decommissioning activities. Copyright © 2011 Elsevier Ltd. All rights reserved.
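As a rough illustration of the kind of first-order self-shielding estimate discussed above: for a uniformly emitting slab viewed along its thickness, the unscattered escape fraction has a simple closed form. The geometry and the attenuation coefficient used below are invented for the sketch and are not the paper's configurations:

```python
import math

def slab_escape_fraction(mu_cm, thickness_cm):
    """Unscattered escape fraction for a uniformly emitting slab,
    f = (1 - exp(-mu*t)) / (mu*t), viewed along its thickness."""
    x = mu_cm * thickness_cm
    if x == 0.0:
        return 1.0
    return (1.0 - math.exp(-x)) / x
```

For example, with mu ≈ 0.4 cm⁻¹ (a round number of the right order for ~1 MeV gammas in steel), a 5 cm slab self-shields so that roughly 43% of emitted photons escape unscattered; a linear correction factor of the kind the paper describes would then map such analytical values onto Monte Carlo estimates.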
NASA Technical Reports Server (NTRS)
Holdeman, J. D.
1979-01-01
Three analytical problems in estimating the frequency at which commercial airline flights will encounter high cabin ozone levels are formulated and solved: estimating flight-segment mean levels, estimating maximum-per-flight levels, and estimating the maximum average level over a specified flight interval. For each problem, solution procedures are given for different levels of input information - from complete cabin ozone data, which provide a direct solution, to limited ozone information, such as ambient ozone means and standard deviations, with which several assumptions are necessary to obtain the required estimates. Each procedure is illustrated by an example calculation that uses simultaneous cabin and ambient ozone data obtained by the NASA Global Atmospheric Sampling Program. Critical assumptions are discussed and evaluated, and the several solutions for each problem are compared. Example calculations are also performed to illustrate how variations in latitude, altitude, season, retention ratio, flight duration, and cabin ozone limits affect the estimated probabilities.
Fasoula, S; Zisi, Ch; Gika, H; Pappa-Louisi, A; Nikitas, P
2015-05-22
A package of Excel VBA macros has been developed for modeling multilinear gradient retention data obtained in single or double gradient elution mode by changing organic modifier(s) content and/or eluent pH. For this purpose, ten chromatographic models were used and four methods were adopted for their application. The methods were based on (a) the analytical expression of the retention time, provided that this expression is available, (b) the retention times estimated using the Nikitas-Pappa approach, (c) the stepwise approximation, and (d) a simple numerical approximation involving the trapezoid rule for integration of the fundamental equation for gradient elution. For all these methods, Excel VBA macros have been written and implemented using two different platforms: the fitting platform and the optimization platform. The fitting platform calculates not only the adjustable parameters of the chromatographic models, but also the significance of these parameters, and furthermore predicts the analyte elution times. The optimization platform determines the gradient conditions that lead to the optimum separation of a mixture of analytes by using the Solver evolutionary mode, provided that proper constraints are set in order to obtain the optimum gradient profile in the minimum gradient time. The performance of the two platforms was tested using experimental and artificial data. It was found that using the proposed spreadsheets, fitting, prediction, and optimization can be performed easily and effectively under all conditions. Overall, the best performance is exhibited by the analytical and Nikitas-Pappa methods, although the former cannot be used under all circumstances. Copyright © 2015 Elsevier B.V. All rights reserved.
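Method (d) above, trapezoid-rule integration of the fundamental equation for gradient elution, can be sketched as follows. The linear-solvent-strength retention model and the linear gradient profile are illustrative assumptions, not models taken from the paper:

```python
import numpy as np

def retention_factor(phi, log_kw=3.0, S=4.0):
    """Linear-solvent-strength model: log10 k = log10 kw - S * phi."""
    return 10.0 ** (log_kw - S * phi)

def gradient_phi(t, phi0=0.1, slope=0.02):
    """Linear gradient in organic-modifier fraction, capped at 1."""
    return min(phi0 + slope * t, 1.0)

def gradient_retention_time(slope=0.02, t0=1.0, dt=0.001, t_max=500.0):
    """Accumulate the integral of dt / (t0 * k(phi(t))) by the trapezoid
    rule until it reaches 1, which defines the gradient retention time."""
    integral, t = 0.0, 0.0
    f_prev = 1.0 / (t0 * retention_factor(gradient_phi(0.0, slope=slope)))
    while integral < 1.0 and t < t_max:
        t += dt
        f = 1.0 / (t0 * retention_factor(gradient_phi(t, slope=slope)))
        integral += 0.5 * (f_prev + f) * dt
        f_prev = f
    return t + t0  # add the column dead time back

tr = gradient_retention_time()
```

A steeper gradient raises the eluent strength faster, so retention times shrink, which the sketch reproduces.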
TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuemann, J; Grassberger, C; Paganetti, H
2014-06-15
Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung, and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated and the root mean square differences (RMSD), average range difference (ARD) and average distal dose degradation (ADD), the distance between the distal positions of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1-2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend treatment plan verification using Monte Carlo simulations for patients with complex geometries.
Danezis, G P; Anagnostopoulos, C J; Liapis, K; Koupparis, M A
2016-10-26
One of the recent trends in analytical chemistry is the development of economical, quick and easy hyphenated methods to be used in a field that includes analytes of different classes and physicochemical properties. In this work a multi-residue method was developed for the simultaneous determination of 28 xenobiotics (polar and hydrophilic) using the hydrophilic interaction liquid chromatography technique (HILIC) coupled with triple quadrupole mass spectrometry (LC-MS/MS). The scope of the method includes plant growth regulators (chlormequat, daminozide, diquat, maleic hydrazide, mepiquat, paraquat), pesticides (cyromazine, amitrole, and PTU (propylenethiourea), the metabolite of the fungicide propineb), various multiclass antibiotics (tetracyclines, sulfonamides, quinolones, kasugamycin) and mycotoxins (aflatoxin B1, B2, fumonisin B1 and ochratoxin A). Isolation of the analytes from the matrix was achieved with a fast and effective technique. Validation of the multi-residue method was performed at two levels, 10 μg/kg and 100 μg/kg, in the following representative substrates: fruits and vegetables (apples, apricots, lettuce and onions), cereals and pulses (flour and chickpeas), animal products (milk and meat) and cereal-based baby foods. The method was validated taking into consideration EU guidelines and showed acceptable linearity (r ≥ 0.99), accuracy with recoveries between 70 and 120% and precision with RSD ≤ 20% for the majority of the analytes studied. For the analytes with accuracy and precision values outside the acceptable limits, the method can still serve as a semi-quantitative method. The matrix effect and the limits of detection and quantification were also estimated and compared with the current EU MRLs (Maximum Residue Levels) and FAO/WHO MLs (Maximum Levels) or CXLs (Codex Maximum Residue Limits). The combined and expanded uncertainty of the method for each analyte per substrate was also estimated. Copyright © 2016 Elsevier B.V. All rights reserved.
A comparison of two indices for the intraclass correlation coefficient.
Shieh, Gwowen
2012-12-01
In the present study, we examined the behavior of two indices for measuring the intraclass correlation in the one-way random effects model: the prevailing ICC(1) (Fisher, 1938) and the corrected eta-squared (Bliese & Halverson, 1998). These two procedures differ both in their methods of estimating the variance components that define the intraclass correlation coefficient and in their bias and mean squared error in estimating the intraclass correlation coefficient. In contrast with the natural unbiased principle used to construct ICC(1), in the present study it was analytically shown that the corrected eta-squared estimator is identical to the maximum likelihood estimator and the pairwise estimator under equal group sizes. Moreover, the empirical results obtained from the present Monte Carlo simulation study across various group structures revealed the mutual dominance relationship between their truncated versions for negative values. The corrected eta-squared estimator performs better than the ICC(1) estimator when the underlying population intraclass correlation coefficient is small. Conversely, ICC(1) has a clear advantage over the corrected eta-squared for medium and large magnitudes of the population intraclass correlation coefficient. The conceptual description and numerical investigation provide guidelines to help researchers choose between the two indices for more accurate reliability analysis in multilevel research.
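For reference, the prevailing ICC(1) estimator discussed above has a standard closed form from the balanced one-way ANOVA decomposition; a minimal sketch (the corrected eta-squared variant of Bliese & Halverson is not reproduced here):

```python
import numpy as np

def icc1(data):
    """ICC(1) for a balanced one-way random effects design.
    data: 2-D array-like, rows = groups, columns = replicate ratings.
    ICC(1) = (MSB - MSW) / (MSB + (n - 1) * MSW)."""
    data = np.asarray(data, dtype=float)
    k, n = data.shape
    grand = data.mean()
    group_means = data.mean(axis=1)
    msb = n * np.sum((group_means - grand) ** 2) / (k - 1)           # between
    msw = np.sum((data - group_means[:, None]) ** 2) / (k * (n - 1))  # within
    return (msb - msw) / (msb + (n - 1) * msw)
```

Note the estimator can go negative when the within-group variance dominates, which is exactly the regime where the truncated versions compared in the abstract come into play.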
On the reconciliation of missing heritability for genome-wide association studies
Chen, Guo-Bo
2016-01-01
The definition of heritability has been unique and clear, but its estimation and estimates vary across studies. Linear mixed model (LMM) and Haseman-Elston (HE) regression analyses are commonly used for estimating heritability from genome-wide association data. This study provides an analytical resolution that can be used to reconcile the differences between LMM and HE in the estimation of heritability given the genetic architecture, which is responsible for these differences. The genetic architecture was classified into three forms via thought experiments: (i) a coupling genetic architecture, in which the quantitative trait loci (QTLs) in linkage disequilibrium (LD) have a positive covariance; (ii) a repulsion genetic architecture, in which the QTLs in LD have a negative covariance; and (iii) a neutral genetic architecture, in which the QTL covariances in LD sum to zero. The neutral genetic architecture is so far the most embraced, whereas the coupling and repulsion genetic architectures have not been well investigated. For a quantitative trait under the coupling genetic architecture, HE overestimated the heritability and LMM underestimated it; under the repulsion genetic architecture, HE underestimated but LMM overestimated the heritability. The two methods gave identical results under the neutral genetic architecture. A general analytical result for the statistic estimated under HE is given regardless of genetic architecture. In contrast, the performance of LMM remained elusive: it further depended on the ratio between the sample size and the number of markers, but LMM converged to HE with increased sample size.
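A minimal sketch of the Haseman-Elston regression idea referenced above, simulating unlinked genotypes (i.e. a trivially neutral architecture). All parameters are invented, and a no-intercept variant on standardized phenotypes is used for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, h2_true = 500, 1000, 0.5

# Standardized genotype matrix and an additive polygenic phenotype.
Z = rng.standard_normal((n, m))
Z = (Z - Z.mean(0)) / Z.std(0)
beta = rng.standard_normal(m) * np.sqrt(h2_true / m)
y = Z @ beta + rng.standard_normal(n) * np.sqrt(1.0 - h2_true)
y = (y - y.mean()) / y.std()

# HE regression: regress pairwise phenotype products y_i*y_j on the
# genomic relatedness G_ij; the slope estimates h^2.
G = Z @ Z.T / m
iu = np.triu_indices(n, k=1)
x, prod = G[iu], (y[:, None] * y[None, :])[iu]
h2_hat = np.sum(x * prod) / np.sum(x * x)  # no-intercept slope
```

At this sample size the sampling error of the slope is large (on the order of ±0.1), which is one reason the LMM-versus-HE comparison in the abstract is usually carried out analytically or with much larger n.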
Analytical estimation shows low depth-independent water loss due to vapor flux from deep aquifers
NASA Astrophysics Data System (ADS)
Selker, John S.
2017-06-01
Recent articles have provided estimates of evaporative flux from water tables in deserts that span 5 orders of magnitude. In this paper, we present an analytical calculation indicating that aquifer vapor flux is limited to 0.01 mm/yr for sites where there is negligible recharge and the water table is well over 20 m below the surface. This value arises from the geothermal gradient and is therefore nearly independent of the actual depth of the aquifer. The value is in agreement with several numerical studies, but is 500 times lower than recently reported experimental values, and 100 times larger than an earlier analytical estimate.
Wickenberg-Bolin, Ulrika; Göransson, Hanna; Fryknäs, Mårten; Gustafsson, Mats G; Isaksson, Anders
2006-03-13
Supervised learning for classification of cancer employs a set of design examples to learn how to discriminate between tumors. In practice it is crucial to confirm that the classifier is robust with good generalization performance to new examples, or at least that it performs better than random guessing. A suggested alternative is to obtain a confidence interval of the error rate using repeated design and test sets selected from available examples. However, it is known that even in the ideal situation of repeated designs and tests with completely novel samples in each cycle, a small test set size leads to a large bias in the estimate of the true variance between design sets. Therefore different methods for small sample performance estimation, such as the recently proposed procedure called Repeated Random Sampling (RRS), are also expected to result in heavily biased estimates, which in turn translate into biased confidence intervals. Here we explore such biases and develop a refined algorithm called Repeated Independent Design and Test (RIDT). Our simulations reveal that repeated designs and tests based on resampling in a fixed bag of samples yield a biased variance estimate. We also demonstrate that it is possible to obtain an improved variance estimate by means of a procedure that explicitly models how this bias depends on the number of samples used for testing. For the special case of repeated designs and tests using new samples for each design and test, we present an exact analytical expression for how the expected value of the bias decreases with the size of the test set. We show that via modeling and subsequent reduction of the small sample bias, it is possible to obtain an improved estimate of the variance of classifier performance between design sets. However, the uncertainty of the variance estimate is large in the simulations performed, indicating that the method in its present form cannot be directly applied to small data sets.
NASA Astrophysics Data System (ADS)
Hobeichi, Sanaa; Abramowitz, Gab; Evans, Jason; Ukkola, Anna
2018-02-01
Accurate global gridded estimates of evapotranspiration (ET) are key to understanding water and energy budgets, in addition to being required for model evaluation. Several gridded ET products have already been developed which differ in their data requirements, the approaches used to derive them and their estimates, yet it is not clear which provides the most reliable estimates. This paper presents a new global ET dataset and associated uncertainty with monthly temporal resolution for 2000-2009. Six existing gridded ET products are combined using a weighting approach trained by observational datasets from 159 FLUXNET sites. The weighting method is based on a technique that provides an analytically optimal linear combination of ET products compared to site data and accounts for both the performance differences and error covariance between the participating ET products. We examine the performance of the weighting approach in several in-sample and out-of-sample tests that confirm that point-based estimates of flux towers provide information on the grid scale of these products. We also provide evidence that the weighted product performs better than its six constituent ET product members in four common metrics. Uncertainty in the ET estimate is derived by rescaling the spread of participating ET products so that their spread reflects the ability of the weighted mean estimate to match flux tower data. While issues in observational data and any common biases in participating ET datasets are limitations to the success of this approach, future datasets can easily be incorporated and enhance the derived product.
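The analytically optimal linear combination described above (minimum-variance weights constrained to sum to one, accounting for error covariance between the products) can be sketched as follows; the synthetic "product errors" stand in for product-minus-flux-tower residuals and are invented for illustration:

```python
import numpy as np

def optimal_weights(errors):
    """errors: (n_samples, n_products) array of product-minus-observation
    errors (after bias correction). Returns the minimum-variance weights
    w = S^{-1} 1 / (1^T S^{-1} 1), with S the sample error covariance."""
    S = np.cov(errors, rowvar=False)
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)
    return w / (ones @ w)

# Demo with three synthetic products: the first has much smaller error
# variance, so it should receive the largest weight.
rng = np.random.default_rng(2)
errs = rng.standard_normal((2000, 3)) * np.array([0.3, 1.0, 1.0])
w = optimal_weights(errs)
```

Because the covariance matrix enters through its inverse, strongly correlated products share weight rather than being double-counted, which is the main advantage over a simple inverse-variance average.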
Numerical Modeling of Pulse Detonation Rocket Engine Gasdynamics and Performance
NASA Technical Reports Server (NTRS)
Morris, C. I.
2003-01-01
Pulse detonation engines (PDB) have generated considerable research interest in recent years as a chemical propulsion system potentially offering improved performance and reduced complexity compared to conventional gas turbines and rocket engines. The detonative mode of combustion employed by these devices offers a theoretical thermodynamic advantage over the constant-pressure deflagrative combustion mode used in conventional engines. However, the unsteady blowdown process intrinsic to all pulse detonation devices has made realistic estimates of the actual propulsive performance of PDES problematic. The recent review article by Kailasanath highlights some of the progress that has been made in comparing the available experimental measurements with analytical and numerical models.
Dynamic remapping of parallel computations with varying resource demands
NASA Technical Reports Server (NTRS)
Nicol, D. M.; Saltz, J. H.
1986-01-01
A large class of computational problems is characterized by frequent synchronization and computational requirements that change as a function of time. When such a problem must be solved on a message-passing multiprocessor machine, the combination of these characteristics leads to system performance that decreases in time. Performance can be improved with periodic redistribution of computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggest that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
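The trade-off captured by the statistic W(n) can be illustrated with a toy cost model: if degradation grows linearly after each remap and a remap costs C, the average cost per step over an interval of n steps has a single interior minimum. The linear-degradation model is an assumption for illustration, not the paper's stochastic models:

```python
# Toy model: per-step degradation grows as d*i since the last remap,
# and a remap costs C steps' worth of work. Average cost per step over
# a remapping interval of n steps:
#   W(n) = (C + d*n*(n-1)/2) / n,  minimized near n* = sqrt(2C/d).

def W(n, C, d):
    return (C + d * n * (n - 1) / 2.0) / n

def best_interval(C, d, n_max=1000):
    """Brute-force search for the fixed remapping interval minimizing W."""
    return min(range(1, n_max), key=lambda n: W(n, C, d))
```

For C = 100 and d = 0.5 the minimizer is n* = sqrt(2·100/0.5) = 20, matching the brute-force search; the unique-minimum property mirrors the result proved in the paper for its stochastic models.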
Robust electroencephalogram phase estimation with applications in brain-computer interface systems.
Seraj, Esmaeil; Sameni, Reza
2017-03-01
In this study, a robust method is developed for frequency-specific electroencephalogram (EEG) phase extraction using the analytic representation of the EEG. Based on recent theoretical findings in this area, it is shown that some of the phase variations, previously attributed to the brain response, are systematic side effects of the methods used for EEG phase calculation, especially during low-analytical-amplitude segments of the EEG. With this insight, the proposed method generates randomized ensembles of the EEG phase using minor perturbations in the zero-pole loci of narrow-band filters, followed by phase estimation using the signal's analytical form and ensemble averaging over the randomized ensembles to obtain a robust EEG phase and frequency. This Monte Carlo estimation method is shown to be very robust to noise and minor changes of the filter parameters, and it reduces the effect of spurious EEG phase jumps, which do not have a cerebral origin. As proof of concept, the proposed method is used for extracting EEG phase features for a brain-computer interface (BCI) application. The results show significant improvement in classification rates using rather simple phase-related features and standard K-nearest-neighbor and random-forest classifiers on a standard BCI dataset. The average performance improved by 4-7% (in the absence of additive noise) and 8-12% (in the presence of additive noise). The significance of these improvements was statistically confirmed by a paired-sample t-test, with p-values of 0.01 and 0.03, respectively. The proposed method for EEG phase calculation is very generic and may be applied to other EEG phase-based studies.
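Phase extraction from the analytic representation, the starting point of the method above, can be sketched with a frequency-domain Hilbert construction on a toy narrow-band signal (the paper's randomized zero-pole filter ensemble is not reproduced here):

```python
import numpy as np

def analytic_signal(x):
    """Discrete analytic signal via the frequency-domain Hilbert
    construction: zero out negative frequencies, double positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

fs = 250.0                          # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10.0 * t)    # toy 10 Hz "alpha-band" signal
z = analytic_signal(x)
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency
```

On real EEG, the signal is band-pass filtered first, and it is during segments where |z| is small that the phase becomes ill-conditioned, which is the problem the ensemble-averaging scheme in the abstract addresses.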
NASA Astrophysics Data System (ADS)
Friedrich, Oliver; Eifler, Tim
2018-01-01
Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multiprobe) analyses of the large-scale structure of the Universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best-fitting values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the precision matrix around a covariance model and show how to estimate the leading order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A+B, where A is well understood analytically and can be turned off in simulations (e.g. shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require >10^5 simulations to reach a similar precision. We extend our analysis to a DES multiprobe case, finding a similar performance.
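The expansion of the precision matrix around a covariance model can be sketched with the Neumann-series form (A+B)^{-1} = A^{-1} - A^{-1}BA^{-1} + A^{-1}BA^{-1}BA^{-1} - ..., truncated at a chosen order. This is a generic illustration of the expansion itself, not the paper's simulation-based estimator of its terms:

```python
import numpy as np

def precision_expansion(A, B, order=2):
    """Approximate (A + B)^{-1} by the truncated Neumann series
    sum_{k=0..order} (-A^{-1} B)^k A^{-1}; valid when the spectral
    radius of A^{-1} B is below one (B a small correction to A)."""
    Ainv = np.linalg.inv(A)
    term = Ainv.copy()
    total = Ainv.copy()
    for _ in range(order):
        term = -Ainv @ B @ term
        total = total + term
    return total
```

The point of the paper's split is that A⁻¹ is noise-free while only the low-order terms involving B must be estimated from simulations, which is why far fewer realizations are needed than for the full sample covariance.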
A simple method for estimating frequency response corrections for eddy covariance systems
W. J. Massman
2000-01-01
A simple analytical formula is developed for estimating the frequency attenuation of eddy covariance fluxes due to sensor response, path-length averaging, sensor separation, signal processing, and flux averaging periods. Although it is an approximation based on flat terrain cospectra, this analytical formula should have broader applicability than just flat-terrain...
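The flux attenuation estimate described above amounts to a ratio of cospectral integrals with and without the measurement system's transfer function. A sketch with a first-order sensor response and a generic Lorentzian model cospectrum (both illustrative assumptions, not Massman's exact formula):

```python
import numpy as np

def _trapz(y, x):
    """Trapezoid integration for possibly non-uniform grids."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def flux_correction_factor(tau, fm=0.1):
    """Ratio of the true to the filtered cospectral integral for a
    first-order sensor with time constant tau (s); fm is the frequency
    (Hz) where the model cospectrum rolls off."""
    f = np.logspace(-4, 1, 4000)                     # frequency grid, Hz
    cospectrum = 1.0 / (1.0 + (f / fm) ** 2)         # generic model shape
    gain = 1.0 / (1.0 + (2.0 * np.pi * f * tau) ** 2)  # first-order response
    return _trapz(cospectrum, f) / _trapz(cospectrum * gain, f)
```

The correction factor equals one for an ideal (instantaneous) sensor and grows with the time constant; in a full treatment, transfer functions for path averaging, sensor separation, and block averaging would be multiplied into the gain term.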
Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A
2017-04-01
In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences, by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine is a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods for many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
Choosing the best index for the average score intraclass correlation coefficient.
Shieh, Gwowen
2016-09-01
The intraclass correlation coefficient (ICC)(2) index from a one-way random effects model is widely used to describe the reliability of mean ratings in behavioral, educational, and psychological research. Despite its apparent utility, the essential property of ICC(2) as a point estimator of the average score intraclass correlation coefficient is seldom mentioned. This article considers several potential measures and compares their performance with ICC(2). Analytical derivations and numerical examinations are presented to assess the bias and mean square error of the alternative estimators. The results suggest that more advantageous indices can be recommended over ICC(2) for their theoretical implication and computational ease.
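For context, the ICC(2) point estimator discussed above is computed from the one-way random-effects mean squares as (MSB - MSW)/MSB; a minimal pure-Python sketch (the example ratings are made up):

```python
def icc2_average_score(ratings):
    # ICC(2): reliability of mean ratings under a one-way random-effects
    # model, computed as (MSB - MSW) / MSB. `ratings` is a list of lists,
    # one inner list of k ratings per target.
    n = len(ratings)       # number of targets
    k = len(ratings[0])    # ratings per target
    grand = sum(sum(r) for r in ratings) / (n * k)
    means = [sum(r) / k for r in ratings]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2 for r, m in zip(ratings, means) for x in r) / (n * (k - 1))
    return (msb - msw) / msb

print(icc2_average_score([[1, 2], [3, 4], [5, 6]]))  # → 0.9375
```

When raters agree perfectly within each target (MSW = 0), the index reaches 1.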
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loon, W.M.G.M. van; Hermens, J.L.M.
1994-12-31
A large part of all aquatic pollutants can be classified as narcosis-type (baseline toxicity) chemicals. Many chemicals contribute to a joint baseline aquatic toxicity even at trace concentrations. A novel surrogate parameter, which simulates bioconcentration of hydrophobic substances from water and estimates internal molar concentrations, has been explored by Verhaar et al. These estimated biological concentrations can be used to predict narcosis-type toxic effects, using the Lethal Body Burden (LBB) concept. The authors applied this toxicological-analytical concept to river water, and some recent technological developments and field results are pointed out. The simulation of bioconcentration is performed by extracting water samples with Empore™ disks. The authors developed two extraction procedures, i.e., laboratory extraction and field extraction. Molar concentration measurements are performed using vapor pressure osmometry, GC-FID and GC-MS. Results on the molar concentrations of hydrophobic compounds which can be bioaccumulated from several Dutch river systems will be presented.
Enhancement of low-temperature thermometry by strong coupling
NASA Astrophysics Data System (ADS)
Correa, Luis A.; Perarnau-Llobet, Martí; Hovhannisyan, Karen V.; Hernández-Santana, Senaida; Mehboudi, Mohammad; Sanpera, Anna
2017-12-01
We consider the problem of estimating the temperature T of a very cold equilibrium sample. The temperature estimates are drawn from measurements performed on a quantum Brownian probe strongly coupled to it. We model this scenario by resorting to the canonical Caldeira-Leggett Hamiltonian and find analytically the exact stationary state of the probe for arbitrary coupling strength. In general, the probe does not reach thermal equilibrium with the sample, due to their nonperturbative interaction. We argue that this is advantageous for low-temperature thermometry, as we show in our model that (i) the thermometric precision at low T can be significantly enhanced by strengthening the probe-sample coupling, (ii) the variance of a suitable quadrature of our Brownian thermometer can yield temperature estimates with nearly minimal statistical uncertainty, and (iii) the spectral density of the probe-sample coupling may be engineered to further improve thermometric performance. These observations may find applications in practical nanoscale thermometry at low temperatures—a regime which is particularly relevant to quantum technologies.
Yadav, Nand K; Raghuvanshi, Ashish; Sharma, Gajanand; Beg, Sarwar; Katare, Om P; Nanda, Sanju
2016-03-01
The current studies entail systematic quality by design (QbD)-based development of a simple, precise, cost-effective and stability-indicating high-performance liquid chromatography method for estimation of ketoprofen. The analytical target profile was defined and critical analytical attributes (CAAs) were selected. Chromatographic separation was accomplished with isocratic, reversed-phase chromatography using a C-18 column, pH 6.8 phosphate buffer-methanol (50:50, v/v) as the mobile phase at a flow rate of 1.0 mL/min and UV detection at 258 nm. Systematic optimization of the chromatographic method was performed using a central composite design, evaluating theoretical plates and peak tailing as the CAAs. The method was validated as per International Conference on Harmonization guidelines, demonstrating high sensitivity and specificity, with linearity ranging between 0.05 and 250 µg/mL, a detection limit of 0.025 µg/mL and a quantification limit of 0.05 µg/mL. Precision was demonstrated by a relative standard deviation of 1.21%. Stress degradation studies performed using acid, base, peroxide, thermal and photolytic methods helped in identifying the degradation products in the proniosome delivery systems. The results successfully demonstrated the utility of QbD for optimizing the chromatographic conditions for developing a highly sensitive liquid chromatographic method for ketoprofen. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Development of a plasma sprayed ceramic gas path seal for high pressure turbine applications
NASA Technical Reports Server (NTRS)
Shiembob, L. T.
1977-01-01
The plasma sprayed graded, layered yttria-stabilized zirconia (ZrO2)/metal (CoCrAlY) seal system for gas turbine blade tip applications up to 1589 K (2400 F) seal temperatures was studied. Abradability, erosion, and thermal fatigue characteristics of the graded, layered system were evaluated by rig tests. Satisfactory abradability and erosion resistance were demonstrated, and encouraging thermal fatigue tolerance was shown. Initial properties for the plasma sprayed materials in the graded, layered seal system were obtained, and thermal stress analyses were performed. Sprayed residual stresses were determined. Thermal stability of the sprayed layer materials was evaluated at the estimated maximum operating temperature in each layer. Anisotropic behavior in the layer thickness direction was demonstrated by all layers. Residual stresses and thermal stability effects were not included in the analyses. Analytical results correlated reasonably well with results of the thermal fatigue tests. Analytical application of the seal system to a typical gas turbine engine predicted performance similar to rig specimen thermal fatigue performance. A model for predicting crack propagation in the sprayed ZrO2/CoCrAlY seal system was proposed, and recommendations for improving thermal fatigue resistance were made. Seal system layer thicknesses were analytically optimized to minimize thermal stresses in the abradability specimen during thermal fatigue testing. Rig tests on the optimized seal configuration demonstrated some improvement in thermal fatigue characteristics.
Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.
Jain, Ram B
2016-08-01
Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed analyte concentration divided by the observed urinary creatinine concentration (UCR). This ratio-based method is flawed since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors like age, gender, and race/ethnicity that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of the ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group (for example, males) in the numerator of this ratio, these ratios were higher for the model-based method, for example, the male to female ratio of GMs. When estimated UCRs were lower for the group (for example, NHW) in the numerator of this ratio, these ratios were higher for the ratio-based method, for example, the NHW to NHB ratio of GMs. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
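The contrast between the two corrections can be sketched in a few lines (an illustrative simplification: a single log-creatinine covariate stands in for the fuller regression models with age, gender, and race/ethnicity; all data are made up):

```python
import math

def ratio_corrected(analyte, creatinine):
    # Ratio-based correction: observed analyte / observed creatinine.
    return [a / c for a, c in zip(analyte, creatinine)]

def model_corrected(analyte, creatinine):
    # Model-based correction (sketch): regress log(analyte) on
    # log(creatinine) and keep the residuals, so creatinine enters as a
    # fitted covariate rather than a fixed 1:1 divisor.
    x = [math.log(c) for c in creatinine]
    y = [math.log(a) for a in analyte]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    # Residuals on the log scale, kept at the overall mean level.
    return [yi - slope * (xi - mx) for xi, yi in zip(x, y)]
```

When the analyte really is proportional to creatinine the two agree (both return constants); they diverge whenever the fitted slope differs from 1, which is exactly the case the ratio method cannot accommodate.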
SU-E-T-422: Fast Analytical Beamlet Optimization for Volumetric Intensity-Modulated Arc Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, Kenny S K; Lee, Louis K Y; Xing, L
2015-06-15
Purpose: To implement a fast optimization algorithm on a CPU/GPU heterogeneous computing platform and to obtain an optimal fluence for a given target dose distribution from the pre-calculated beamlets in an analytical approach. Methods: The 2D target dose distribution was modeled as an n-dimensional vector and estimated by a linear combination of independent basis vectors. The basis set was composed of the pre-calculated beamlet dose distributions at every 6 degrees of gantry angle, and the cost function was set as the magnitude square of the vector difference between the target and the estimated dose distribution. The optimal weighting of the basis, which corresponds to the optimal fluence, was obtained analytically by the least square method. Those basis vectors with a positive weighting were selected for entering into the next level of optimization. In total, 7 levels of optimization were implemented in the study. Ten head-and-neck and ten prostate carcinoma cases were selected for the study and mapped to a round water phantom with a diameter of 20 cm. The Matlab computation was performed in a heterogeneous programming environment with an Intel i7 CPU and an NVIDIA GeForce 840M GPU. Results: In all selected cases, the estimated dose distribution was in good agreement with the given target dose distribution and their correlation coefficients were found to be in the range of 0.9992 to 0.9997. Their root-mean-square error was monotonically decreasing and converging after 7 cycles of optimization. The computation took only about 10 seconds and the optimal fluence maps at each gantry angle throughout an arc were quickly obtained. Conclusion: An analytical approach is derived for finding the optimal fluence for a given target dose distribution, and a fast optimization algorithm implemented on the CPU/GPU heterogeneous computing environment greatly reduces the optimization time.
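A toy version of the analytical step above, solving the least-squares normal equations and retaining only positive-weight basis vectors for the next level (the tiny dose vectors and the termination rule are illustrative assumptions, not the authors' implementation):

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for A x = b.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def optimize_fluence(beamlets, target, levels=7):
    # beamlets: list of beamlet dose vectors; target: target dose vector.
    # At each level, solve the normal equations (Gram matrix system) and
    # keep only the beamlets that received a positive weight.
    active = list(range(len(beamlets)))
    weights = {}
    for _ in range(levels):
        B = [beamlets[j] for j in active]
        G = [[sum(bi[k] * bj[k] for k in range(len(target))) for bj in B] for bi in B]
        rhs = [sum(bi[k] * target[k] for k in range(len(target))) for bi in B]
        weights = dict(zip(active, solve(G, rhs)))
        keep = [j for j, wj in weights.items() if wj > 0]
        if not keep or len(keep) == len(active):
            break
        active = keep
    return weights
```

A beamlet whose least-squares weight comes out negative is dropped, and the reduced system is re-solved at the next level, mirroring the multi-level scheme in the abstract.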
Lin, Chen-Yen; Halabi, Susan
2017-01-01
We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for the Cox’s proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description for the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method can better approximate the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer. PMID:29326496
NASA Astrophysics Data System (ADS)
Tiwari, Vaibhav
2018-07-01
The population analysis and estimation of merger rates of compact binaries is one of the important topics in gravitational wave astronomy. The primary ingredient in these analyses is the population-averaged sensitive volume. Typically, the sensitive volume of a given search to a given simulated source population is estimated by drawing signals from the population model and adding them to the detector data as injections. Subsequently the injections, which are simulated gravitational waveforms, are searched for by the search pipelines and their signal-to-noise ratio (SNR) is determined. The sensitive volume is estimated, by using Monte-Carlo (MC) integration, from the total number of injections added to the data, the number of injections that cross a chosen threshold on SNR and the astrophysical volume in which the injections are placed. So far, only fixed population models have been used in the estimation of binary black hole (BBH) merger rates. However, as the scope of population analysis broadens in terms of the methodologies and source properties considered, due to an increase in the number of observed gravitational wave (GW) signals, the procedure will need to be repeated multiple times at a large computational cost. In this letter we address the problem by performing a weighted MC integration. We show how a single set of generic injections can be weighted to estimate the sensitive volume for multiple population models, thereby greatly reducing the computational cost. The weights in this MC integral are the ratios of the target probabilities, determined by the population model and standard cosmology, to the injection probabilities, determined by the distribution function of the generic injections. Unlike analytical/semi-analytical methods, which usually estimate the sensitive volume using single detector sensitivity, the method is accurate within statistical errors, comes at no added cost and requires minimal computational resources.
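The weighted MC integral described above reduces to reweighting the found injections by the density ratio; a minimal sketch (here `p_pop` and `p_inj` are the population and injection probability densities over source parameters, and the one-dimensional parameter is an illustrative stand-in for the full mass/spin/redshift space):

```python
def sensitive_volume(n_injections, found, p_pop, p_inj, v_total):
    # Weighted Monte Carlo estimate: instead of counting found injections,
    # weight each found injection by the ratio of the target population
    # density to the density the generic injections were drawn from.
    return v_total * sum(p_pop(x) / p_inj(x) for x in found) / n_injections

uniform = lambda x: 1.0  # generic injections drawn uniformly on [0, 1]

# With p_pop == p_inj this reduces to the usual found/total counting estimate:
print(sensitive_volume(10, [0.05, 0.15, 0.25, 0.35, 0.45], uniform, uniform, 100.0))
```

The same injection set can then be re-used for any new population model by swapping `p_pop`, which is the computational saving the letter claims.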
NASA Astrophysics Data System (ADS)
Aghakhani, Amirreza; Basdogan, Ipek; Erturk, Alper
2016-04-01
Plate-like components are widely used in numerous automotive, marine, and aerospace applications where they can be employed as host structures for vibration based energy harvesting. Piezoelectric patch harvesters can be easily attached to these structures to convert the vibrational energy to the electrical energy. Power output investigations of these harvesters require accurate models for energy harvesting performance evaluation and optimization. Equivalent circuit modeling of the cantilever-based vibration energy harvesters for estimation of electrical response has been proposed in recent years. However, equivalent circuit formulation and analytical modeling of multiple piezo-patch energy harvesters integrated to thin plates including nonlinear circuits has not been studied. In this study, equivalent circuit model of multiple parallel piezoelectric patch harvesters together with a resistive load is built in electronic circuit simulation software SPICE and voltage frequency response functions (FRFs) are validated using the analytical distributed-parameter model. Analytical formulation of the piezoelectric patches in parallel configuration for the DC voltage output is derived while the patches are connected to a standard AC-DC circuit. The analytic model is based on the equivalent load impedance approach for piezoelectric capacitance and AC-DC circuit elements. The analytic results are validated numerically via SPICE simulations. Finally, DC power outputs of the harvesters are computed and compared with the peak power amplitudes in the AC output case.
Kim, Dalho; Han, Jungho; Choi, Yongwook
2013-01-01
A method using on-line solid-phase microextraction (SPME) on a carbowax-templated fiber followed by liquid chromatography (LC) with ultraviolet (UV) detection was developed for the determination of triclosan (TCS) in environmental water samples. Along with triclosan, other selected phenolic compounds, bisphenol A, and acidic pharmaceuticals were studied. Previous SPME/LC or stir-bar sorptive extraction/LC-UV methods for polar analytes showed a lack of sensitivity. In this study, the calculated octanol-water distribution coefficient (log D) values of the target analytes at different pH values were used to estimate the polarity of the analytes. The lack of sensitivity observed in earlier studies is identified as a lack of desorption caused by strong polar-polar interactions between analyte and solid phase. Calculated log D values were useful to understand or predict the interaction between analyte and solid phase. Under the optimized conditions, the method detection limits of selected analytes by the on-line SPME-LC-UV method ranged from 5 to 33 ng L(-1), except for the very polar 3-chlorophenol and 2,4-dichlorophenol, which were obscured in wastewater samples by an interfering substance. This level of detection represented a remarkable improvement over conventional existing methods. The on-line SPME-LC-UV method, which did not require derivatization of analytes, was applied to the determination of TCS, including phenolic compounds and acidic pharmaceuticals, in tap water, river water and municipal wastewater samples.
Using GIS-based methods and lidar data to estimate rooftop solar technical potential in US cities
NASA Astrophysics Data System (ADS)
Margolis, Robert; Gagnon, Pieter; Melius, Jennifer; Phillips, Caleb; Elmore, Ryan
2017-07-01
We estimate the technical potential of rooftop solar photovoltaics (PV) for select US cities by combining light detection and ranging (lidar) data, a validated analytical method for determining rooftop PV suitability employing geographic information systems, and modeling of PV electricity generation. We find that rooftop PV’s ability to meet estimated city electricity consumption varies widely—from meeting 16% of annual consumption (in Washington, DC) to meeting 88% (in Mission Viejo, CA). Important drivers include average rooftop suitability, household footprint/per-capita roof space, the quality of the solar resource, and the city’s estimated electricity consumption. In addition to city-wide results, we also estimate the ability of aggregations of households to offset their electricity consumption with PV. In a companion article, we will use statistical modeling to extend our results and estimate national rooftop PV technical potential. In addition, our publicly available data and methods may help policy makers, utilities, researchers, and others perform customized analyses to meet their specific needs.
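The headline quantity here reduces to a simple product of suitable area, module efficiency, insolation, and system losses; a hedged sketch (all parameter names and values below are generic illustrations, not the study's actual inputs):

```python
def annual_generation_kwh(suitable_area_m2, efficiency, insolation_kwh_m2_yr,
                          performance_ratio=0.85):
    # Technical-potential sketch: suitable roof area x module efficiency
    # x annual insolation x a system performance ratio (losses).
    return suitable_area_m2 * efficiency * insolation_kwh_m2_yr * performance_ratio

def consumption_offset(generation_kwh, consumption_kwh):
    # Fraction of a city's annual electricity consumption met by rooftop PV.
    return generation_kwh / consumption_kwh

g = annual_generation_kwh(1e6, 0.16, 1600.0)  # 1 km^2 of suitable roof area
print(consumption_offset(g, 1.36e9))
```

The spread the abstract reports (16% to 88%) then falls out of city-to-city differences in these same inputs.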
NASA Technical Reports Server (NTRS)
Parsons, C. L. (Editor)
1989-01-01
The Multimode Airborne Radar Altimeter (MARA), a flexible airborne radar remote sensing facility developed by NASA's Goddard Space Flight Center, is discussed. This volume describes the scientific justification for the development of the instrument and the translation of these scientific requirements into instrument design goals. Values for key instrument parameters are derived to accommodate these goals, and simulations and analytical models are used to estimate the developed system's performance.
Performance-driven Multimodality Sensor Fusion
2012-01-23
in IEEE Intl. Conf. on Acoust., Speech, Signal Processing (Dallas), Mar. 2010. [10] K. Sricharan, R. Raich, and A. Hero III, "Boundary compensated kNN ..." Using k-nearest neighbor (kNN) plug-in estimators, we have developed a generally applicable theory that gives analytical closed-form expressions for asymptotic performance. This work was performed by Co-PIs Raich and Hero and was published in the IEEE Proc. of the 2011 Intl. Conf. on Acoustics, Speech, and Signal Processing. 2.4 Dimension estimation in
NASA Astrophysics Data System (ADS)
Zhang, Ling; Cai, Yunlong; Li, Chunguang; de Lamare, Rodrigo C.
2017-12-01
In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses in terms of mean and mean square performance for the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve superior performance to the existing DRLS algorithm with a fixed forgetting factor when applied to scenarios of distributed parameter and spectrum estimation. The simulation results also demonstrate a good match with our proposed analytical expressions.
Evaluating Principal Surrogate Markers in Vaccine Trials in the Presence of Multiphase Sampling
Huang, Ying
2017-01-01
This paper focuses on the evaluation of vaccine-induced immune responses as principal surrogate markers for predicting a given vaccine’s effect on the clinical endpoint of interest. To address the problem of missing potential outcomes under the principal surrogate framework, we can utilize baseline predictors of the immune biomarker(s) or vaccinate uninfected placebo recipients at the end of the trial and measure their immune biomarkers. Examples of good baseline predictors are baseline immune responses when subjects enrolled in the trial have been previously exposed to the same antigen, as in our motivating application of the Zostavax Efficacy and Safety Trial (ZEST). However, laboratory assays of these baseline predictors are expensive and therefore their subsampling among participants is commonly performed. In this paper we develop a methodology for estimating principal surrogate values in the presence of baseline predictor subsampling. Under a multiphase sampling framework, we propose a semiparametric pseudo-score estimator based on conditional likelihood and also develop several alternative semiparametric pseudo-score or estimated likelihood estimators. We derive corresponding asymptotic theories and analytic variance formulas for these estimators. Through extensive numeric studies, we demonstrate good finite sample performance of these estimators and the efficiency advantage of the proposed pseudo-score estimator in various sampling schemes. We illustrate the application of our proposed estimators using data from an immune biomarker study nested within the ZEST trial. PMID:28653408
van der Fels-Klerx, H J; Tromp, S; Rijgersberg, H; van Asselt, E D
2008-11-30
The aim of the present study was to demonstrate how Performance Objectives (POs) for Salmonella at various points in the broiler supply chain can be estimated, starting from pre-set levels of the PO in finished products. The estimations were performed using an analytical transmission model, based on prevalence data collected throughout the chain in The Netherlands. In the baseline (current) situation, the end PO was set at 2.5% of the finished products (at the end of processing) being contaminated with Salmonella. Scenario analyses were performed by reducing this baseline end PO to 1.5% and 0.5%. The results showed the end PO could be reduced by spreading the POs over the various stages of the broiler supply chain. Sensitivity analyses were performed by changing the values of the model parameters. Results indicated that, in general, decreasing Salmonella contamination between points in the chain is more effective in reducing the baseline PO than increasing the reduction of the pathogen, implying that contamination should be prevented rather than treated. Applying both approaches at the same time was shown to be most effective in reducing the end PO, especially at the abattoir and during processing. The modelling approach of this study proved useful to estimate the implications for preceding stages of the chain of setting a PO at the end of the chain, as well as to evaluate the effectiveness of potential interventions in reducing the end PO. The model estimations may support policy-makers in their decision-making process with regard to microbiological food safety.
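A transmission model of this kind can be sketched as prevalence propagated stage by stage through the chain; a minimal illustration (the two-parameter stage structure and all numbers are assumptions for exposition, not the authors' fitted model):

```python
def end_prevalence(p0, stages):
    # Propagate Salmonella prevalence through supply-chain stages.
    # Each stage is (introduction, reduction): the fraction of negative
    # units newly contaminated between points, and the fraction of
    # positive units rendered negative by an intervention.
    p = p0
    for introduction, reduction in stages:
        p = p * (1 - reduction) + (1 - p) * introduction
    return p

# Start at 10% prevalence; an abattoir step halves it, then processing
# re-introduces contamination in 5% of negative units:
print(end_prevalence(0.10, [(0.0, 0.5), (0.05, 0.0)]))
```

Setting `introduction` to zero at a stage (preventing contamination) lowers the end PO more reliably than raising `reduction`, which mirrors the sensitivity-analysis finding above.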
Numerical modeling and analytical evaluation of light absorption by gold nanostars
NASA Astrophysics Data System (ADS)
Zarkov, Sergey; Akchurin, Georgy; Yakunin, Alexander; Avetisyan, Yuri; Akchurin, Garif; Tuchin, Valery
2018-04-01
In this paper, the regularity of local light absorption by gold nanostars (AuNSts) model is studied by method of numerical simulation. The mutual diffraction influence of individual geometric fragments of AuNSts is analyzed. A comparison is made with an approximate analytical approach for estimating the average bulk density of absorbed power and total absorbed power by individual geometric fragments of AuNSts. It is shown that the results of the approximate analytical estimate are in qualitative agreement with the numerical calculations of the light absorption by AuNSts.
Prediction of true test scores from observed item scores and ancillary data.
Haberman, Shelby J; Yao, Lili; Sinharay, Sandip
2015-05-01
In many educational tests which involve constructed responses, a traditional test score is obtained by adding together item scores obtained through holistic scoring by trained human raters. For example, this practice was used until 2008 in the case of GRE(®) General Analytical Writing and until 2009 in the case of TOEFL(®) iBT Writing. With use of natural language processing, it is possible to obtain additional information concerning item responses from computer programs such as e-rater(®). In addition, available information relevant to examinee performance may include scores on related tests. We suggest application of standard results from classical test theory to the available data to obtain best linear predictors of true traditional test scores. In performing such analysis, we require estimation of variances and covariances of measurement errors, a task which can be quite difficult in the case of tests with limited numbers of items and with multiple measurements per item. As a consequence, a new estimation method is suggested based on samples of examinees who have taken an assessment more than once. Such samples are typically not random samples of the general population of examinees, so that we apply statistical adjustment methods to obtain the needed estimated variances and covariances of measurement errors. To examine practical implications of the suggested methods of analysis, applications are made to GRE General Analytical Writing and TOEFL iBT Writing. Results obtained indicate that substantial improvements are possible both in terms of reliability of scoring and in terms of assessment reliability. © 2015 The British Psychological Society.
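When a single observed score and its reliability are all that is available, the classical-test-theory best linear predictor the authors build on reduces to Kelley's formula; a one-line sketch (the numbers are illustrative):

```python
def kelley_true_score(observed, group_mean, reliability):
    # Kelley's formula: the best linear predictor of the true score shrinks
    # the observed score toward the group mean in proportion to reliability.
    return reliability * observed + (1 - reliability) * group_mean

# A score of 80 on a test with reliability 0.9, in a group averaging 70,
# is shrunk slightly toward the mean:
print(kelley_true_score(80.0, 70.0, 0.9))
```

The paper's extension adds further predictors (e-rater features, related test scores), which requires the estimated error variances and covariances discussed in the abstract.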
The Theory and Practice of Estimating the Accuracy of Dynamic Flight-Determined Coefficients
NASA Technical Reports Server (NTRS)
Maine, R. E.; Iliff, K. W.
1981-01-01
Means of assessing the accuracy of maximum likelihood parameter estimates obtained from dynamic flight data are discussed. The most commonly used analytical predictors of accuracy are derived and compared from both statistical and simplified geometric standpoints. The accuracy predictions are evaluated with real and simulated data, with an emphasis on practical considerations, such as modeling error. Improved computations of the Cramer-Rao bound to correct large discrepancies due to colored noise and modeling error are presented. The corrected Cramer-Rao bound is shown to be the best available analytical predictor of accuracy, and several practical examples of the use of the Cramer-Rao bound are given. Engineering judgement, aided by such analytical tools, is the final arbiter of accuracy estimation.
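As a concrete instance of the Cramer-Rao bound invoked above, the bound for an unbiased estimator of the mean of a Gaussian from n samples is sigma^2/n, which the sample mean attains; a quick self-check (the Gaussian example and sample sizes are illustrative, not drawn from the flight-data analysis):

```python
import random
import statistics

def crlb_gaussian_mean(sigma, n):
    # Cramer-Rao lower bound on the variance of an unbiased estimator of
    # the mean of N(mu, sigma^2) from n i.i.d. samples: sigma^2 / n.
    return sigma ** 2 / n

# The sample mean attains the bound; check empirically:
random.seed(0)
estimates = [statistics.fmean(random.gauss(0.0, 2.0) for _ in range(50))
             for _ in range(2000)]
print(statistics.pvariance(estimates), crlb_gaussian_mean(2.0, 50))
```

The paper's point is that with colored noise and modeling error the naive bound can be badly optimistic, which is why the corrected computation is needed.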
A new frequency approach for light flicker evaluation in electric power systems
NASA Astrophysics Data System (ADS)
Feola, Luigi; Langella, Roberto; Testa, Alfredo
2015-12-01
In this paper, a new analytical estimator for light flicker in the frequency domain is proposed, which is able to take into account also the frequency components neglected by the classical methods proposed in the literature. The analytical solutions proposed apply to any generic stationary signal affected by interharmonic distortion. The proposed light flicker analytical estimator is applied to numerous numerical case studies with the goal of showing i) the correctness and the improvements of the proposed analytical approach with respect to the other methods proposed in the literature and ii) the accuracy of the results compared to those obtained by means of the classical International Electrotechnical Commission (IEC) flickermeter. The usefulness of the proposed analytical approach is that it can be included in signal processing tools for interharmonic penetration studies for the integration of renewable energy sources in future smart grids.
Personality and job performance: the Big Five revisited.
Hurtz, G M; Donovan, J J
2000-12-01
Prior meta-analyses investigating the relation between the Big 5 personality dimensions and job performance have all contained a threat to construct validity, in that much of the data included within these analyses was not derived from actual Big 5 measures. In addition, these reviews did not address the relations between the Big 5 and contextual performance. Therefore, the present study sought to provide a meta-analytic estimate of the criterion-related validity of explicit Big 5 measures for predicting job performance and contextual performance. The results for job performance closely paralleled 2 of the previous meta-analyses, whereas analyses with contextual performance showed more complex relations among the Big 5 and performance. A more critical interpretation of the Big 5-performance relationship is presented, and suggestions for future research aimed at enhancing the validity of personality predictors are provided.
Analytic score distributions for a spatially continuous tridirectional Monte Carlo transport problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booth, T.E.
1996-01-01
The interpretation of the statistical error estimates produced by Monte Carlo transport codes is still somewhat of an art. Empirically, there are variance reduction techniques whose error estimates are almost always reliable, and there are variance reduction techniques whose error estimates are often unreliable. Unreliable error estimates usually result from inadequate large-score sampling from the score distribution's tail. Statisticians believe that more accurate confidence interval statements are possible if the general nature of the score distribution can be characterized. Here, the analytic score distribution for the exponential transform applied to a simple, spatially continuous Monte Carlo transport problem is provided. Anisotropic scattering and implicit capture are included in the theory. In large part, the analytic score distributions that are derived provide the basis for the ten new statistical quality checks in MCNP.
Thermodynamic aspects of an LNG tank in fire and experimental validation
NASA Astrophysics Data System (ADS)
Hulsbosch-Dam, Corina; Atli-Veltin, Bilim; Kamperveen, Jerry; Velthuis, Han; Reinders, Johan; Spruijt, Mark; Vredeveldt, Lex
The mechanical behaviour of a Liquefied Natural Gas (LNG) tank and the thermodynamic behaviour of its containment under extreme heat load (for instance, when subjected to an external fire source, as might occur during an accident) are extremely important when addressing safety concerns. In a scenario where an external fire is present and a consequent release of LNG from pressure relief valves (PRV) has occurred, escalation of the fire might make it difficult for fire response teams to approach the tank or to secure the perimeter. If the duration of the tank's exposure to fire is known, the PRV opening time can be estimated from thermodynamic calculations. In this paper, such an accidental scenario is considered, and the relevant thermodynamic equations are derived and presented. Moreover, an experiment is performed with liquid nitrogen and the results are compared to the analytical ones. The analytical results match the experimental observations very well. The resulting analytical models are suitable for application to other cryogenic liquids.
Design and Analysis of a Low Latency Deterministic Network MAC for Wireless Sensor Networks
Sahoo, Prasan Kumar; Pattanaik, Sudhir Ranjan; Wu, Shih-Lin
2017-01-01
The IEEE 802.15.4e standard has four different superframe structures for different applications. Use of a low latency deterministic network (LLDN) superframe for the wireless sensor network is one of them, which can operate in a star topology. In this paper, a new channel access mechanism for IEEE 802.15.4e-based LLDN shared slots is proposed, and analytical models are designed based on this channel access mechanism. A prediction model is designed to estimate the possible number of retransmission slots based on the number of failed transmissions. Performance analyses in terms of data transmission reliability, delay, throughput and energy consumption are provided based on our proposed designs. Our designs are validated against both simulation and analytical results, and the simulation results match the analytical ones well. Besides, our designs are compared with the IEEE 802.15.4 MAC mechanism and shown to outperform it in terms of throughput, energy consumption, delay and reliability. PMID:28937632
Doses for post-Chernobyl epidemiological studies: are they reliable?
Drozdovitch, Vladimir; Chumak, Vadim; Kesminiene, Ausrele; Ostroumova, Evgenia; Bouville, André
2016-09-01
On 26 April 2016, thirty years will have elapsed since the occurrence of the Chernobyl accident, which has so far been the most severe in the history of the nuclear reactor industry. Numerous epidemiological studies have been conducted to evaluate the possible health consequences of the accident. Since the credibility of the association between radiation exposure and health outcome is highly dependent on the adequacy of the dosimetric quantities used in these studies, this paper reviews the methods used to estimate individual doses and the associated uncertainties in the main analytical epidemiological studies (i.e. cohort or case-control) related to the Chernobyl accident. Based on a thorough analysis and comparison with other radiation studies, the authors conclude that individual doses for the Chernobyl analytical epidemiological studies have been calculated with a relatively high degree of reliability and well-characterized uncertainties, and that they compare favorably with many other non-Chernobyl studies. The major strengths of the Chernobyl studies are: (1) they are grounded on a large number of measurements, either performed on humans or made in the environment; and (2) extensive effort has been invested to evaluate the uncertainties associated with the dose estimates. Nevertheless, gaps in the methodology are identified and suggestions for possible improvement of the current dose estimates are made.
Cargnin, Sarah; Jommi, Claudio; Canonico, Pier Luigi; Genazzani, Armando A; Terrazzino, Salvatore
2014-05-01
To determine the diagnostic accuracy of HLA-B*57:01 testing for prediction of abacavir-induced hypersensitivity and to quantify the clinical benefit of pretreatment screening through a meta-analytic review of published studies. A comprehensive search was performed up to June 2013. The methodological quality of relevant studies was assessed by the QUADAS-2 tool. The pooled diagnostic estimates were calculated using a random-effects model. Despite the presence of heterogeneity in sensitivity or specificity estimates, the pooled diagnostic odds ratio to detect abacavir-induced hypersensitivity on the basis of clinical criteria was 33.07 (95% CI: 22.33-48.97, I²: 13.9%), while the diagnostic odds ratio for detection of immunologically confirmed abacavir hypersensitivity was 1141 (95% CI: 409-3181, I²: 0%). Pooled analysis of risk ratio showed that prospective HLA-B*57:01 testing significantly reduced the incidence of abacavir-induced hypersensitivity. This meta-analysis demonstrates an excellent diagnostic accuracy of HLA-B*57:01 testing to detect immunologically confirmed abacavir hypersensitivity and corroborates existing recommendations.
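The pooling step in such a meta-analysis can be sketched in a few lines: each study contributes a diagnostic odds ratio (DOR) and a log-scale variance, and the study estimates are combined by inverse-variance weighting. The 2x2 counts below are hypothetical, and a fixed-effect weighting stands in for the random-effects model used in the study above:

```python
import numpy as np

def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP*TN)/(FP*FN); log-scale variance via the standard 1/n sum."""
    dor = (tp * tn) / (fp * fn)
    var_log = 1 / tp + 1 / fp + 1 / fn + 1 / tn
    return dor, var_log

def pool_log_dor(dors, variances):
    """Inverse-variance (fixed-effect) pooling on the log scale."""
    w = 1 / np.asarray(variances)
    log_pooled = np.sum(w * np.log(dors)) / np.sum(w)
    return np.exp(log_pooled)

# Hypothetical 2x2 tables from three studies: (TP, FP, FN, TN)
studies = [(45, 5, 3, 120), (30, 8, 2, 95), (60, 10, 4, 200)]
dors, variances = zip(*(diagnostic_odds_ratio(*s) for s in studies))
pooled = pool_log_dor(dors, variances)
```

The pooled value is a weighted geometric mean of the per-study DORs, so it always lies between the smallest and largest study estimate.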
Kwon, S; Perera, S; Pahor, M; Katula, J A; King, A C; Groessl, E J; Studenski, S A
2009-06-01
Performance measures provide important information, but the meaning of change in these measures is not well known. The purpose of this research is to 1) examine the effect of treatment assignment on the relationship between self-report and performance; 2) estimate the magnitude of meaningful change in 400-meter walk time (400MWT), 4-meter gait speed (4MGS), and Short Physical Performance Battery (SPPB); and 3) evaluate the effect of direction of change on estimates of magnitude. This is a secondary analysis of data from the LIFE-P study, a single-blinded randomized clinical trial. Using change over one year, we applied distribution-based and anchor-based methods for self-reported mobility to estimate minimally important and substantial change in 400MWT, 4MGS and SPPB. Setting: four university-based clinical research sites. Participants: sedentary adults aged 70-89 whose SPPB scores were less than 10 and who were able to complete a 400MW at baseline (n=424). Intervention: a structured exercise program versus health education. Measurements: 400MWT, 4MGS, SPPB. Relationships between self-report and performance measures were consistent between treatment arms. Minimally important change estimates were 400MWT: 20-30 seconds, 4MGS: 0.03-0.05 m/s and SPPB: 0.3-0.8 points. Substantial changes were 400MWT: 50-60 seconds, 4MGS: 0.08 m/s, SPPB: 0.4-1.5 points. Magnitudes of change for improvement and decline were not significantly different. The magnitude of clinically important change in physical performance measures is reasonably consistent using several analytic techniques and appears to be achievable in clinical trials of exercise. Due to limited power, the effect of direction of change on estimates of magnitude remains uncertain.
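The distribution-based side of such an analysis can be sketched briefly: common benchmarks are half a baseline standard deviation (moderate change) and one standard error of measurement (SEM, a minimal-change benchmark). The walk-time data and the test-retest reliability below are hypothetical illustrations, not the LIFE-P values:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical baseline 400 m walk times (seconds) for a sedentary cohort
baseline_400mwt = rng.normal(480.0, 60.0, size=424)

sd = baseline_400mwt.std(ddof=1)
half_sd = 0.5 * sd                       # common benchmark for moderate change
reliability = 0.90                       # assumed test-retest reliability
sem = sd * np.sqrt(1.0 - reliability)    # 1 SEM: minimal-change benchmark
```

With reliability above 0.75, one SEM is always smaller than half an SD, which is why the two benchmarks tend to bracket "minimal" versus "moderate" change.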
Phillips, Jeffrey D.
2002-01-01
In 1997, the U.S. Geological Survey (USGS) contracted with Sial Geosciences Inc. for a detailed aeromagnetic survey of the Santa Cruz basin and Patagonia Mountains area of south-central Arizona. The contractor's Operational Report is included as an Appendix in this report. This section describes the data processing performed by the USGS on the digital aeromagnetic data received from the contractor. This processing was required in order to remove flight line noise, estimate the depths to the magnetic sources, and estimate the locations of the magnetic contacts. Three methods were used for estimating source depths and contact locations: the horizontal gradient method, the analytic signal method, and the local wavenumber method. The depth estimates resulting from each method are compared, and the contact locations are combined into an interpretative map showing the dip direction for some contacts.
A modified Leslie-Gower predator-prey interaction model and parameter identifiability
NASA Astrophysics Data System (ADS)
Tripathi, Jai Prakash; Meghwani, Suraj S.; Thakur, Manoj; Abbas, Syed
2018-01-01
In this work, bifurcation and a systematic approach for estimation of identifiable parameters of a modified Leslie-Gower predator-prey system with Crowley-Martin functional response and prey refuge is discussed. Global asymptotic stability is discussed by applying fluctuation lemma. The system undergoes into Hopf bifurcation with respect to parameters intrinsic growth rate of predators (s) and prey reserve (m). The stability of Hopf bifurcation is also discussed by calculating Lyapunov number. The sensitivity analysis of the considered model system with respect to all variables is performed which also supports our theoretical study. To estimate the unknown parameter from the data, an optimization procedure (pseudo-random search algorithm) is adopted. System responses and phase plots for estimated parameters are also compared with true noise free data. It is found that the system dynamics with true set of parametric values is similar to the estimated parametric values. Numerical simulations are presented to substantiate the analytical findings.
Coates, James; Jeyaseelan, Asha K; Ybarra, Norma; David, Marc; Faria, Sergio; Souhami, Luis; Cury, Fabio; Duclos, Marie; El Naqa, Issam
2015-04-01
We explore analytical and data-driven approaches to investigate the integration of genetic variations (single nucleotide polymorphisms [SNPs] and copy number variations [CNVs]) with dosimetric and clinical variables in modeling radiation-induced rectal bleeding (RB) and erectile dysfunction (ED) in prostate cancer patients. Sixty-two patients who underwent curative hypofractionated radiotherapy (66 Gy in 22 fractions) between 2002 and 2010 were retrospectively genotyped for CNV and SNP rs5489 in the xrcc1 DNA repair gene. Fifty-four patients had full dosimetric profiles. Two parallel modeling approaches were compared to assess the risk of severe RB (Grade ≥ 3) and ED (Grade ≥ 1): maximum-likelihood-estimated generalized Lyman-Kutcher-Burman (LKB) modeling and logistic regression. Statistical resampling based on cross-validation was used to evaluate model predictive power and generalizability to unseen data. Integration of the biological variables xrcc1 CNV and SNP improved the fit of the RB and ED analytical and data-driven models. Cross-validation of the generalized LKB models yielded increases in classification performance of 27.4% for RB and 14.6% for ED when xrcc1 CNV and SNP were included, respectively. Biological variables added to logistic regression modeling improved classification performance over standard dosimetric models by 33.5% for RB and 21.2% for ED models. As a proof of concept, we demonstrated that the combination of genetic and dosimetric variables can provide significant improvement in NTCP prediction using analytical and data-driven approaches. The improvement in prediction performance was more pronounced in the data-driven approaches. Moreover, we have shown that CNVs, in addition to SNPs, may be useful structural genetic variants in predicting radiation toxicities. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
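The LKB model mentioned above has a compact closed form: the dose-volume histogram is collapsed to a generalized equivalent uniform dose (gEUD), which is mapped to a complication probability through a probit function. A minimal sketch with an illustrative DVH; the TD50, m, and n values below are placeholders, not the fitted parameters from this study:

```python
import numpy as np
from math import erf, sqrt

def geud(doses, volumes, n):
    """Generalized equivalent uniform dose from a (dose, fractional-volume) DVH."""
    return np.sum(volumes * doses ** (1.0 / n)) ** n

def lkb_ntcp(doses, volumes, td50, m, n):
    """Lyman-Kutcher-Burman NTCP: probit of (gEUD - TD50) / (m * TD50)."""
    t = (geud(doses, volumes, n) - td50) / (m * td50)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

# Hypothetical rectal DVH: dose bins (Gy) and fractional volumes summing to 1
doses = np.array([20.0, 40.0, 60.0])
volumes = np.array([0.5, 0.3, 0.2])
p = lkb_ntcp(doses, volumes, td50=80.0, m=0.15, n=0.1)
```

For n = 1 the gEUD reduces to the mean dose; small n weights the model toward the hottest part of the DVH, which is the usual choice for serial organs such as the rectum.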
Mangas-Sanjuan, Victor; Navarro-Fontestad, Carmen; García-Arieta, Alfredo; Trocóniz, Iñaki F; Bermejo, Marival
2018-05-30
A semi-physiological two-compartment pharmacokinetic model with two active metabolites (a primary metabolite (PM) and a secondary metabolite (SM)), with saturable and non-saturable pre-systemic efflux transport and intestinal and hepatic metabolism, has been developed. The aim of this work is to explore in several scenarios which analyte (parent drug or any of the metabolites) is the most sensitive to changes in drug product performance (i.e. differences in in vivo dissolution) and to make recommendations based on the simulation outcome. A total of 128 scenarios (2 Biopharmaceutics Classification System (BCS) drug types, 2 levels of the P-gp Michaelis constant (K_M), in 4 metabolic scenarios at 2 dose levels in 4 quality levels of the drug product) were simulated for BCS class II and IV drugs. Monte Carlo simulations of all bioequivalence studies were performed in NONMEM 7.3. Results showed the parent drug (PD) was the most sensitive analyte for bioequivalence trials in all the studied scenarios. PM and SM revealed less than or the same sensitivity to detect differences in pharmaceutical quality as the PD. Another relevant result is that the mean point estimate of the Cmax and AUC methodology from Monte Carlo simulations allows the most sensitive analyte to be selected more accurately than the criterion based on the percentage of failed or successful BE studies, even for metabolites, which frequently show greater variability than the PD. Copyright © 2018 Elsevier B.V. All rights reserved.
Quantum State Tomography via Linear Regression Estimation
Qi, Bo; Hou, Zhibo; Li, Li; Dong, Daoyi; Xiang, Guoyong; Guo, Guangcan
2013-01-01
A simple yet efficient state reconstruction algorithm of linear regression estimation (LRE) is presented for quantum state tomography. In this method, quantum state reconstruction is converted into a parameter estimation problem of a linear regression model and the least-squares method is employed to estimate the unknown parameters. An asymptotic mean squared error (MSE) upper bound for all possible states to be estimated is given analytically, which depends explicitly upon the involved measurement bases. This analytical MSE upper bound can guide one to choose optimal measurement sets. The computational complexity of LRE is O(d⁴), where d is the dimension of the quantum state. Numerical examples show that LRE is much faster than maximum-likelihood estimation for quantum state tomography. PMID:24336519
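For a single qubit the idea reduces to a small linear model: measurement expectations are linear in the Bloch-vector parameters, so ordinary least squares recovers the state. This noise-free sketch uses an aligned Pauli measurement basis, which makes the design matrix the identity; the full LRE method handles general measurement bases and finite-sample frequencies:

```python
import numpy as np

# Pauli basis for a single qubit
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

# True state: density matrix from a Bloch vector (x, y, z)
bloch_true = np.array([0.3, -0.2, 0.8])
rho = 0.5 * (I2 + sum(b * s for b, s in zip(bloch_true, paulis)))

# "Measured" expectation values <sigma_i> = Tr(rho sigma_i); noise-free here,
# in practice these come from finite-sample measurement frequencies.
y = np.array([np.trace(rho @ s).real for s in paulis])

# Linear regression model y = A @ bloch, solved by ordinary least squares
A = np.eye(3)
bloch_est, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Because the parameter estimate is a single least-squares solve, the cost scales polynomially with dimension, which is the source of the speed advantage over iterative maximum-likelihood fitting.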
Bigus, Paulina; Tsakovski, Stefan; Simeonov, Vasil; Namieśnik, Jacek; Tobiszewski, Marek
2016-05-01
This study presents an application of the Hasse diagram technique (HDT) as an assessment tool to select the most appropriate analytical procedures according to their greenness or the best analytical performance. The dataset consists of analytical procedures for benzo[a]pyrene determination in sediment samples, described by 11 variables concerning their greenness and analytical performance. Two analyses with the HDT were performed: the first with metrological variables and the second with "green" variables as input data. The two HDT analyses ranked different analytical procedures as the most valuable, suggesting that green analytical chemistry is not in accordance with metrology when benzo[a]pyrene in sediment samples is determined. The HDT can be used as a good decision support tool to choose the proper analytical procedure with respect to green analytical chemistry principles and analytical performance merits.
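The core of the HDT is a simple partial-order comparison: procedure a dominates procedure b when it is at least as good on every variable, and incomparable pairs stay unordered. A minimal sketch with hypothetical greenness scores (higher = greener; the four procedures and three criteria are illustrative, not the 11-variable dataset above):

```python
import numpy as np

# Hypothetical scores of four procedures on three "green" criteria (rows = procedures)
scores = np.array([
    [3, 2, 3],
    [2, 2, 2],
    [1, 3, 1],
    [1, 1, 1],
])

# Hasse dominance: a dominates-or-equals b when a >= b on every criterion
n = len(scores)
dominates = np.array([[np.all(scores[a] >= scores[b]) for b in range(n)]
                      for a in range(n)])

# Strict dominance, then the maximal (non-dominated) procedures
strictly = dominates & ~dominates.T
maximal = [i for i in range(n) if not any(strictly[j, i] for j in range(n))]
```

Procedures 0 and 2 are incomparable (each is better on some criterion), so both appear at the top of the diagram; that is exactly the situation in which greenness and metrological rankings can disagree.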
NASA Astrophysics Data System (ADS)
Nguyen, Duc Anh; Cat Vu, Minh; Willems, Patrick; Monbaliu, Jaak
2017-04-01
Salt intrusion is the most acute problem for irrigation water quality in coastal regions during dry seasons. The use of numerical hydrodynamic models is widespread and has become the prevailing approach to simulate the salinity distribution in an estuary. Despite its power to estimate both spatial and temporal salinity variations along the estuary, this approach also has its drawbacks. The high computational cost and the need for detailed hydrological, bathymetric and tidal datasets limit its usability in particular case studies. In poor data environments, analytical salt intrusion models are more widely used, as they require less data and further reduce the computational effort. There are, however, few studies in which a more comprehensive comparison is made between the performance of a numerical hydrodynamic and an analytical model. In this research the multi-channel Ma Estuary in Vietnam is considered as a case study. Both the analytical and the hydrodynamic simulation approaches have been applied and were found capable of mimicking the longitudinal salt distribution along the estuary. The data to construct the MIKE11 model include observations provided by a network of fixed hydrological stations and cross-section measurements along the estuary. The analytical model is developed in parallel but based on information obtained from the hydrological network only (typical of a poor data environment). Note that the two convergence length parameters of this simplified model are usually extracted from topography data, including cross-sectional area and width along the estuary. Furthermore, freshwater discharge data are needed, but these are gauged further upstream outside of the tidal region and unable to reflect the individual flows entering the multi-channel estuary. In order to tackle the poor data environment limitations, a new approach was needed to calibrate the two estuary geometry parameters of the parsimonious salt intrusion model.
The calibrated cross-sectional convergence length values agree very closely with values based on a field survey of the estuary. By assuming a linear relation between the inverses of the individual flows entering the estuary and the inverses of the sum of the flows gauged further upstream, the individual flows can be assessed. Evaluation of the modeling approaches at high water slack shows that the two approaches give similar results. They explain the salinity distribution along the Ma Estuary reasonably well, with Nash-Sutcliffe efficiency values at gauging stations along the estuary of 0.50 or higher. These performances demonstrate the predictive power of the simplified salt intrusion model and of the proposed parameter/input estimation approach, even with such limited data.
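The Nash-Sutcliffe efficiency used in the evaluation above compares model residuals against the variance of the observations: NSE = 1 is a perfect fit, while NSE ≤ 0 means the model is no better than simply predicting the observed mean. A short sketch with hypothetical salinity values along an estuary:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    residual_ss = np.sum((observed - simulated) ** 2)
    variance_ss = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - residual_ss / variance_ss

# Hypothetical salinity (ppt) at stations from the mouth landward, and model output
obs = np.array([28.0, 21.0, 15.0, 9.0, 4.0, 1.5])
sim = np.array([27.2, 22.1, 14.3, 9.8, 4.6, 1.1])
nse = nash_sutcliffe(obs, sim)
```

The 0.50 threshold quoted above is a conventional cut-off for "satisfactory" hydrological model performance.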
Reliability analysis of composite structures
NASA Technical Reports Server (NTRS)
Kan, Han-Pin
1992-01-01
A probabilistic static stress analysis methodology has been developed to estimate the reliability of a composite structure. Closed-form stress analysis methods are the primary analytical tools used in this methodology. These structural mechanics methods are used to identify independent variables whose variations significantly affect the performance of the structure. Once these variables are identified, scatter in their values is evaluated and statistically characterized. The scatter in applied loads and the structural parameters is then fitted to appropriate probabilistic distribution functions. Numerical integration techniques are applied to compute the structural reliability. The predicted reliability accounts for scatter due to variability in material strength, applied load, and fabrication and assembly processes. The influence of structural geometry and failure mode is also considered in the evaluation. Example problems are given to illustrate various levels of analytical complexity.
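The reliability computation can be illustrated with a simple Monte Carlo stress-strength sketch: reliability is the probability that strength exceeds the applied load, with both treated as random variables fitted to distributions. The normal distributions and their parameters below are illustrative assumptions, not values from the report:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Hypothetical scatter (illustrative units): material strength and applied load
strength = rng.normal(100.0, 10.0, n)   # e.g., laminate strength
load = rng.normal(60.0, 12.0, n)        # e.g., applied stress

# Reliability = P(strength > load), estimated by Monte Carlo integration
reliability = float(np.mean(strength > load))
```

For independent normal variables this has a closed-form check: the margin strength - load is N(40, sqrt(10² + 12²)), so the true reliability is about 0.995, which the sample estimate should approach.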
Simon, L
2007-10-01
The integral transform technique was implemented to solve a mathematical model developed for percutaneous drug absorption. The model included repeated application and removal of a patch from the skin. Fick's second law of diffusion was used to study the transport of a medicinal agent through the vehicle and subsequent penetration into the stratum corneum. Eigenmodes and eigenvalues were computed and introduced into an inversion formula to estimate the delivery rate and the amount of drug in the vehicle and the skin. A dynamic programming algorithm calculated the optimal doses necessary to achieve a desired transdermal flux. The analytical method predicted profiles that were in close agreement with published numerical solutions and provided an automated strategy to perform therapeutic drug monitoring and control.
High-frequency phase shift measurement greatly enhances the sensitivity of QCM immunosensors.
March, Carmen; García, José V; Sánchez, Ángel; Arnau, Antonio; Jiménez, Yolanda; García, Pablo; Manclús, Juan J; Montoya, Ángel
2015-03-15
In spite of being widely used for in-liquid biosensing applications, sensitivity improvement of conventional (5-20 MHz) quartz crystal microbalance (QCM) sensors remains an unsolved, challenging task. With the help of a new electronic characterization approach based on phase change measurements at a constant fixed frequency, a highly sensitive and versatile high fundamental frequency (HFF) QCM immunosensor has successfully been developed and tested for use in pesticide (carbaryl and thiabendazole) analysis. The analytical performance of several immunosensors was compared in competitive immunoassays taking the carbaryl insecticide as the model analyte. The highest sensitivity was exhibited by the 100 MHz HFF-QCM carbaryl immunosensor. When results were compared with those reported for 9 MHz QCM, the analytical parameters clearly showed an improvement of one order of magnitude for sensitivity (estimated as the I50 value) and two orders of magnitude for the limit of detection (LOD): 30 μg L⁻¹ vs 0.66 μg L⁻¹ for the I50 value, and 11 μg L⁻¹ vs 0.14 μg L⁻¹ for the LOD, for 9 and 100 MHz, respectively. For the fungicide thiabendazole, the I50 value was roughly the same as that previously reported for SPR under the same biochemical conditions, whereas the LOD improved by a factor of 2. The analytical performance achieved by high-frequency QCM immunosensors surpassed that of conventional QCM and SPR, closely approaching the most sensitive ELISAs. The developed 100 MHz QCM immunosensor strongly improves sensitivity in biosensing, and can therefore be considered a very promising new analytical tool for in-liquid applications where highly sensitive detection is required. Copyright © 2014 Elsevier B.V. All rights reserved.
How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?
West, Brady T; Sakshaug, Joseph W; Aurelien, Guy Alain S
2016-01-01
Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data.
Research in digital adaptive flight controllers
NASA Technical Reports Server (NTRS)
Kaufman, H.
1976-01-01
A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Both explicit controllers, which directly utilize parameter identification, and implicit controllers, which do not require identification, were considered. Extensive analytical and simulation efforts resulted in the recommendation of two explicit digital adaptive flight controllers. Both interface weighted least squares estimation procedures with control logic developed using either optimal regulator theory or single-stage performance indices.
Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A
2016-09-06
Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, enable minimizing the simplified assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) which enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variations during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimation of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined using direct scanning of the parameter space as the optimization approach. It has been shown that both approaches yield comparable results. Copyright © 2016 Elsevier Ltd. All rights reserved.
Least Square Regression Method for Estimating Gas Concentration in an Electronic Nose System
Khalaf, Walaa; Pace, Calogero; Gaudioso, Manlio
2009-01-01
We describe an Electronic Nose (ENose) system which is able to identify the type of analyte and to estimate its concentration. The system consists of seven sensors, five of them being gas sensors (supplied with different heater voltage values), the remainder being a temperature and a humidity sensor, respectively. To identify a new analyte sample and then to estimate its concentration, we use both some machine learning techniques and the least square regression principle. In fact, we apply two different training models; the first one is based on the Support Vector Machine (SVM) approach and is aimed at teaching the system how to discriminate among different gases, while the second one uses the least squares regression approach to predict the concentration of each type of analyte. PMID:22573980
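The concentration-estimation step described above can be sketched as a linear least-squares regression from the array of sensor readings to the known training concentrations, followed by prediction on a new sample. The five sensor gains, the noise level, and the ppm scale below are hypothetical, and the classification (SVM) stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical training set: rows are samples, columns are the 5 gas-sensor readings
concentrations = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])   # known ppm
gains = np.array([1.2, 0.8, 1.5, 0.5, 2.0])                   # assumed sensor gains
readings = np.outer(concentrations, gains) + rng.normal(0, 0.02, (6, 5))

# Least-squares regression: weights w such that readings @ w ~ concentration
w, *_ = np.linalg.lstsq(readings, concentrations, rcond=None)

# Predict the concentration of a new (noise-free, for illustration) sample
new_reading = 5.0 * gains
conc_est = float(new_reading @ w)
```

In the actual system one regression model per analyte class would be fitted, with the SVM stage first deciding which model to apply.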
Using iMCFA to Perform the CFA, Multilevel CFA, and Maximum Model for Analyzing Complex Survey Data.
Wu, Jiun-Yu; Lee, Yuan-Hsuan; Lin, John J H
2018-01-01
To construct CFA, MCFA, and maximum MCFA with LISREL v.8 and below, we provide iMCFA (integrated Multilevel Confirmatory Factor Analysis) to examine the potential multilevel factorial structure in complex survey data. Modeling multilevel structure for complex survey data is complicated because building a multilevel model is not an infallible statistical strategy unless the hypothesized model is close to the real data structure. Methodologists have suggested using different modeling techniques to investigate the potential multilevel structure of survey data. Using iMCFA, researchers can visually set the between- and within-level factorial structure to fit MCFA, CFA and/or MAX MCFA models for complex survey data. iMCFA can then yield between- and within-level variance-covariance matrices, calculate intraclass correlations, perform the analyses and generate the outputs for the respective models. The summary of the analytical outputs from LISREL is gathered and tabulated for further model comparison and interpretation. iMCFA also provides LISREL syntax of the different models for researchers' future use. An empirical and a simulated multilevel dataset with complex and simple structures in the within or between level were used to illustrate the usability and the effectiveness of the iMCFA procedure for analyzing complex survey data. The analytic results of iMCFA using Muthén's limited information estimator were compared with those of Mplus using Full Information Maximum Likelihood regarding the effectiveness of different estimation methods.
Police, Anitha; Shankar, Vijay Kumar; Narasimha Murthy, S
2018-02-15
Vigabatrin is used as a first-line drug in the treatment of infantile spasms because its potential benefit outweighs the risk of causing permanent peripheral visual field defects and retinal damage. Chronic administration of vigabatrin in rats has demonstrated that these ocular events are a result of GABA accumulation and depletion of taurine levels in retinal tissues. In vigabatrin clinical studies, the taurine plasma level is considered a biomarker for studying the structure and function of the retina. An analytical method is therefore essential to monitor taurine levels along with vigabatrin and GABA. An RP-HPLC method has been developed and validated for the simultaneous estimation of vigabatrin, GABA, and taurine using a surrogate matrix. Analytes were extracted from human plasma, rat plasma, retina, and brain by a simple protein precipitation method and derivatized with naphthalene-2,3-dicarboxaldehyde to produce stable, fluorescently active isoindole derivatives. Chromatographic analysis was performed on a Zorbax Eclipse AAA column using a gradient elution profile, and the eluent was monitored with a fluorescence detector. Calibration curves were linear over concentration ranges of 64.6 to 6458, 51.5 to 5150, and 62.5 to 6258 ng/mL for vigabatrin, GABA, and taurine, respectively, with r² ≥ 0.997 for all analytes. The method was successfully applied to estimate levels of vigabatrin and its modulatory effect on GABA and taurine levels in rat plasma, brain, and retinal tissue. This RP-HPLC method can be applied in clinical and preclinical studies to explore the effect of taurine deficiency and to investigate novel approaches for alleviating vigabatrin-induced ocular toxicity. Copyright © 2018. Published by Elsevier B.V.
Lee, L.; Helsel, D.
2007-01-01
Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data, perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of the data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis", where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and computation of the related confidence limits. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation or interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
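K-M estimation for left-censored data is usually implemented with the "flipping" trick: reflect the values about a constant larger than the data so that left-censored observations become right-censored, fit the ordinary K-M survival curve, and reflect back to an empirical CDF. A minimal sketch with hypothetical concentrations (this is the general technique, not the S-language routines described above):

```python
# Kaplan-Meier for left-censored data via flipping. Hypothetical data;
# censored=True means the value is "< detection limit".

def kaplan_meier(times, events):
    # events[i] True = observed event, False = right-censored.
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, s, curve = len(times), 1.0, []
    for i in order:
        if events[i]:
            s *= 1.0 - 1.0 / at_risk   # product-limit update
            curve.append((times[i], s))
        at_risk -= 1                   # censored obs leave the risk set
    return curve

values   = [0.5, 1.2, 0.8, 2.0, 1.5]            # concentrations
censored = [True, False, False, False, False]   # 0.5 is "<0.5"

flip   = max(values) + 1.0                # reflection constant > all data
times  = [flip - v for v in values]       # left-censored -> right-censored
events = [not c for c in censored]

# Reflect the survival curve back: P(X < x) = S_flipped(flip - x).
cdf = [(round(flip - t, 6), round(s, 6)) for t, s in kaplan_meier(times, events)]
print(cdf)  # [(2.0, 0.8), (1.5, 0.6), (1.2, 0.4), (0.8, 0.2)]
```

Note that the censored observation contributes probability mass only below its detection limit, which is exactly the behavior a left-censored estimator should have.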
A simple, analytical, axisymmetric microburst model for downdraft estimation
NASA Technical Reports Server (NTRS)
Vicroy, Dan D.
1991-01-01
A simple analytical microburst model was developed for use in estimating vertical winds from horizontal wind measurements. It is an axisymmetric, steady-state model that uses shaping functions to satisfy the mass continuity equation and simulate boundary-layer effects. The model is defined through four model variables: the radius and altitude of the maximum horizontal wind, a shaping-function variable, and a scale factor. The model agrees closely with a high-fidelity analytical model and with measured data, particularly in the radial direction and at lower altitudes. At higher altitudes, the model tends to overestimate the wind magnitude relative to the measured data.
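The underlying idea, recovering vertical wind from the horizontal wind through mass continuity, can be sketched numerically. The radial wind profile below is hypothetical and is not the paper's shaping-function model; it merely illustrates integrating the axisymmetric continuity equation w(r,z) = -∫₀ᶻ (1/r) ∂(r·u)/∂r dz′ upward from the ground.

```python
# Vertical wind from a horizontal (radial) wind field via axisymmetric,
# incompressible continuity: (1/r) d(r*u)/dr + dw/dz = 0.
# The radial profile is hypothetical (peaks at r_max, decays with altitude).

import math

def u_radial(r, z, u_max=20.0, r_max=1000.0, z_scale=300.0):
    shape = (r / r_max) * math.exp(1.0 - r / r_max)
    return u_max * shape * math.exp(-z / z_scale)

def w_vertical(r, z, dr=1.0, dz=5.0):
    # Integrate -horizontal divergence upward from the ground (w = 0 at z = 0).
    w, zp = 0.0, 0.0
    while zp < z:
        f = lambda rr: rr * u_radial(rr, zp)
        div_h = (f(r + dr) - f(r - dr)) / (2.0 * dr * r)  # (1/r) d(r*u)/dr
        w -= div_h * dz
        zp += dz
    return w

# Inside the outflow peak the flow diverges, so air must descend (w < 0).
w_core = w_vertical(r=200.0, z=300.0)
print(round(w_core, 2))
```

The sign check is the useful part: where the measured horizontal outflow strengthens with radius, continuity forces a downdraft, which is exactly what the model is meant to estimate from horizontal measurements alone.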
Estimation of real-time runway surface contamination using flight data recorder parameters
NASA Astrophysics Data System (ADS)
Curry, Donovan
Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise, the longitudinal, lateral, and normal forces due to landing are calculated, along with the individual deceleration components present when an aircraft comes to rest during ground roll. In order to validate this hypothesis, a six-degree-of-freedom aircraft model was created and landing tests were simulated on different surfaces. The simulated aircraft model includes a high-fidelity aerodynamic model, thrust model, landing gear model, friction model, and antiskid model. Three main surfaces were defined in the friction model: dry, wet, and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model; all others are estimated or known a priori. The estimation of the unknown parameters is also presented in this research effort. With all needed parameters, a comparison and validation between simulated and estimated data, under different runway conditions, is performed. Finally, this report presents the results of a sensitivity analysis that provides a measure of the reliability of the analytic estimation process. Linear and non-linear sensitivity analyses were performed to quantify the level of uncertainty implicit in modeling estimated parameters and how it can affect the calculation of the instantaneous coefficient of friction. Using the approach of force and moment equilibrium about the CG at landing to reconstruct the instantaneous coefficient of friction yields a reasonably accurate estimate when compared to the simulated friction coefficient. This also holds when the FDR and estimated parameters are subjected to white noise and when crosswind is introduced into the simulation.
The linear analysis shows that the minimum sampling frequency at which the algorithm still provides moderately accurate data is 2 Hz. In addition, the linear analysis shows that, with estimated parameters increased and decreased by up to 25% at random, high-priority parameters must be accurate to within at least +/-5% to keep the change in the average coefficient of friction below 1%. Non-linear analysis results show that the algorithm can be considered reasonably accurate for all simulated cases when inaccuracies in the estimated parameters vary randomly and simultaneously by up to +/-27%. In the worst case, the maximum percentage change in the average coefficient of friction is less than 10% for all surfaces.
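The core of the estimation, a quasi-static force balance along and normal to the runway, reduces to a one-line friction coefficient once the individual forces are known. A minimal sketch with hypothetical numbers; the full method above also balances moments about the CG and distributes load across individual gears.

```python
# Instantaneous friction coefficient from a longitudinal/normal force balance
# during ground roll. All numbers are hypothetical.
#   Longitudinal: m*a_x = -(F_brake + drag + thrust_reverse)   (decelerating)
#   Normal:       N = m*g - lift

def friction_coefficient(mass, a_x, drag, thrust_reverse, lift, g=9.81):
    braking_force = -mass * a_x - drag - thrust_reverse  # friction at the tires
    normal_force = mass * g - lift                       # weight on wheels
    return braking_force / normal_force

mu = friction_coefficient(
    mass=60000.0,            # kg
    a_x=-2.5,                # m/s^2 (deceleration)
    drag=20000.0,            # N
    thrust_reverse=40000.0,  # N
    lift=100000.0,           # N (residual lift reduces wheel load)
)
print(round(mu, 3))  # → 0.184
```

The sensitivity result quoted above is intuitive from this expression: errors in high-priority inputs such as mass or deceleration propagate almost linearly into the numerator, so they must be known tightly.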
Theoretical performance model for single image depth from defocus.
Trouvé-Peloux, Pauline; Champagnat, Frédéric; Le Besnerais, Guy; Idier, Jérôme
2014-12-01
In this paper we present a performance model for depth estimation using single image depth from defocus (SIDFD). Our model is based on an original expression of the Cramér-Rao bound (CRB) in this context. We show that this model is consistent with the expected behavior of SIDFD. We then study the influence of the optical parameters of a conventional camera, such as the focal length, the aperture, and the position of the in-focus plane (IFP), on performance. We derive an approximate analytical expression of the CRB away from the IFP and propose an interpretation of SIDFD performance in this domain. Finally, we illustrate the predictive capacity of our performance model on experimental data comparing several settings of a consumer camera.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batcheller, Thomas Aquinas; Taylor, Dean Dalton
Idaho Nuclear Technology and Engineering Center 300,000-gallon vessel WM-189 was filled in late 2001 with concentrated sodium bearing waste (SBW). Three airlifted liquid samples and a steam-jetted slurry sample were obtained for quantitative analysis and characterization of WM-189 liquid-phase SBW and tank heel sludge. Uncertainty estimates were provided for most of the reported data values, based on the greater of (a) the analytical uncertainty and (b) the variation of analytical results between nominally similar samples. A consistency check on the data was performed by comparing the total mass of dissolved solids in the liquid, as measured gravimetrically from a dried sample, with the corresponding value obtained by summing the masses of cations and anions in the liquid, based on the reported analytical data. After reasonable adjustments to the nitrate and oxygen concentrations, satisfactory consistency between the two results was obtained. A similar consistency check was performed on the reported compositional data for sludge solids from the steam-jetted sample. In addition to the compositional data, various other analyses were performed: the particle size distribution was measured for the sludge solids, sludge settling tests were performed, and viscosity measurements were made. The WM-189 characterization results were compared with those for WM-180 and with other Tank Farm Facility tank characterization data. A 2-liter batch of WM-189 simulant was prepared and a clear, stable solution was obtained, based on a general procedure for mixing SBW simulant developed by Dr. Jerry Christian. This WM-189 SBW simulant is considered suitable for laboratory testing for process development.
Estimate of Cosmic Muon Background for Shallow Underground Neutrino Detectors
NASA Astrophysics Data System (ADS)
Casimiro, E.; Simão, F. R. A.; Anjos, J. C.
One of the severe limitations in detecting neutrino signals from nuclear reactors is that the copious cosmic-ray background imposes the use of a time veto upon the passage of muons, to reduce the number of fake signals due to muon-induced spallation neutrons. For this reason, neutrino detectors are usually located underground, with a large overburden. However, practical limitations prevent locating the detectors at great depths. In order to decide the depth at which the Neutrino Angra Detector (currently in preparation) should be installed, an estimate of the cosmogenic background in the detector as a function of depth is required. We report here a simple analytical estimation of the muon rates in the detector volume for different plausible depths, assuming a simple flat-overburden geometry. We extend the calculation to the San Onofre neutrino detector and to the Double Chooz neutrino detector, where other estimates or measurements have been performed; our estimated rates are consistent with them.
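The starting point of such analytical estimates is the sea-level muon angular distribution. Below is a sketch of the zero-overburden rate through a horizontal surface using the textbook I(θ) = I₀ cos²θ law with I₀ ≈ 70 m⁻² s⁻¹ sr⁻¹ (an approximate standard value, not a number from this paper); for a real overburden, a depth-dependent muon survival factor would multiply the integrand.

```python
# Sea-level muon rate through a horizontal detector surface, from the
# standard angular distribution I(theta) = I0 * cos^2(theta).
# I0 ~ 70 muons / m^2 / s / sr (approximate vertical intensity).

import math

def horizontal_rate(i0=70.0):
    # Integrate I0 cos^2(theta) * cos(theta) dOmega over the upper hemisphere;
    # the extra cos(theta) projects the flux onto the horizontal surface:
    #   2*pi * integral_0^{pi/2} cos^3(theta) sin(theta) d(theta) = pi/2.
    return i0 * math.pi / 2.0

rate = horizontal_rate()   # muons per m^2 per second at zero depth
print(round(rate, 1))      # → 110.0
```

This order-of-magnitude surface rate is what the overburden must suppress, which is why even a shallow depth combined with a muon veto can make a reactor-neutrino measurement feasible.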
Neural Net Gains Estimation Based on an Equivalent Model
Aguilar Cruz, Karen Alicia; Medel Juárez, José de Jesús; Fernández Muñoz, José Luis; Esmeralda Vigueras Velázquez, Midory
2016-01-01
A model of an Equivalent Artificial Neural Net (EANN) describes the set of gains, viewed as parameters in a layer, and this construction is a reproducible process, applicable to any neuron in a neural net (NN). The EANN helps to estimate the NN gains or parameters, so we propose two methods to determine them. The first considers a fuzzy inference combined with the traditional Kalman filter, obtaining the equivalent model and estimating, in a fuzzy sense, the gains matrix A and the proper gain K in the traditional filter identification. The second develops a direct estimation in state space, describing an EANN using the expected value and a recursive description of the gains estimation. Finally, a comparison of both descriptions is performed, highlighting that the analytical method describes the neural net coefficients in a direct form, whereas the other technique requires selecting from the Knowledge Base (KB) the factors based on the functional error and on the reference signal built from the past information of the system. PMID:27366146
Sangnawakij, Patarawan; Böhning, Dankmar; Adams, Stephen; Stanton, Michael; Holling, Heinz
2017-04-30
Statistical inference for analyzing the results from several independent studies on the same quantity of interest has been investigated frequently in recent decades. Typically, any meta-analytic inference requires that the quantity of interest be available from each study together with an estimate of its variability. The current work is motivated by a meta-analysis comparing two treatments (thoracoscopic and open) of congenital lung malformations in young children. Quantities of interest include continuous end-points such as length of operation or number of chest tube days. As studies only report mean values (and no standard errors or confidence intervals), the question arises how meta-analytic inference can be developed. We suggest two methods to estimate study-specific variances in such a meta-analysis, where only sample means and sample sizes are available in the treatment arms. A general likelihood ratio test is derived for testing equality of variances in two groups. By means of simulation studies, the bias and estimated standard error of the overall mean difference from both methodologies are evaluated and compared with two existing approaches: complete study analysis only and partial variance information. The performance of the test is evaluated in terms of type I error. Additionally, we illustrate these methods in the meta-analysis comparing thoracoscopic and open surgery for congenital lung malformations and in a meta-analysis on the change in renal function after kidney donation. Copyright © 2017 John Wiley & Sons, Ltd.
Habchi, Baninia; Alves, Sandra; Jouan-Rimbaud Bouveresse, Delphine; Appenzeller, Brice; Paris, Alain; Rutledge, Douglas N; Rathahao-Paris, Estelle
2018-01-01
Due to the presence of pollutants in the environment and food, assessment of human exposure is required. This necessitates high-throughput approaches enabling large-scale analysis and, as a consequence, the use of high-performance analytical instruments to obtain highly informative metabolomic profiles. In this study, direct introduction mass spectrometry (DIMS) was performed using a Fourier transform ion cyclotron resonance (FT-ICR) instrument equipped with a dynamically harmonized cell. Data quality was evaluated based on mass resolving power (RP), mass measurement accuracy, and ion intensity drifts from repeated injections of a quality control sample (QC) along the analytical process. The large DIMS data size entails the use of bioinformatic tools for the automatic selection of common ions found in all QC injections and for robustness assessment and correction of eventual technical drifts. RP values greater than 10⁶ and mass measurement accuracies better than 1 ppm were obtained using broadband mode, resulting in the detection of the isotopic fine structure. Hence, a very accurate relative isotopic mass defect (RΔm) value was calculated. This significantly reduces the number of elemental composition (EC) candidates and greatly improves compound annotation. A very satisfactory estimate of the repeatability of both peak intensity and mass measurement was demonstrated. Although a non-negligible ion intensity drift was observed for negative ion mode data, a normalization procedure was easily applied to correct this phenomenon. This study illustrates the performance and robustness of the dynamically harmonized FT-ICR cell for performing large-scale high-throughput metabolomic analyses in routine conditions. Graphical abstract: Analytical performance of an FT-ICR instrument equipped with a dynamically harmonized cell.
Study of LH2-fueled topping cycle engine for aircraft propulsion
NASA Technical Reports Server (NTRS)
Turney, G. E.; Fishbach, L. H.
1983-01-01
An analytical investigation was made of a topping cycle aircraft engine system which uses a cryogenic fuel. This system consists of a main turboshaft engine which is mechanically coupled (by cross-shafting) to a topping loop which augments the shaft power output of the system. The thermodynamic performance of the topping cycle engine was analyzed and compared with that of a reference (conventional-type) turboshaft engine. For the cycle operating conditions selected, the performance of the topping cycle engine in terms of brake specific fuel consumption (bsfc) was determined to be about 12 percent better than that of the reference turboshaft engine. Engine weights were estimated for both the topping cycle engine and the reference turboshaft engine. These estimates were based on a common shaft power output for each engine. Results indicate that the weight of the topping cycle engine is comparable to that of the reference turboshaft engine.
Inertia-gravity wave radiation from the elliptical vortex in the f-plane shallow water system
NASA Astrophysics Data System (ADS)
Sugimoto, Norihiko
2017-04-01
Inertia-gravity wave (IGW) radiation from an elliptical vortex is investigated in the f-plane shallow water system. The far field of the IGW is analytically derived for the case of an almost circular Kirchhoff vortex with a small aspect ratio. Cyclone-anticyclone asymmetry appears at finite values of the Rossby number (Ro), caused by the source originating in the Coriolis acceleration. While the intensity of IGWs from the cyclone monotonically decreases as f increases, that from the anticyclone increases with f for relatively small f and has a local maximum at intermediate f. A numerical experiment is conducted with a model using a spectral method in an unbounded domain. The numerical results agree quite well with the analytical ones for elliptical vortices with small aspect ratios, implying that the derived analytical forms are useful for the verification of the numerical model. For elliptical vortices with larger aspect ratios, however, significant deviation from the analytical estimates appears: the intensity of IGWs radiated in the numerical simulation is larger than that estimated analytically. The reason is that the source of IGWs is amplified during the time evolution, because the shape of the vortex changes from an ideal ellipse to an elongated one with filaments. Nevertheless, a cyclone-anticyclone asymmetry similar to the analytical estimate appears over the whole range of aspect ratios, suggesting that this asymmetry is a robust feature.
Estimation of the limit of detection using information theory measures.
Fonollosa, Jordi; Vergara, Alexander; Huerta, Ramón; Marco, Santiago
2014-01-31
Definitions of the limit of detection (LOD) based on the probability of false positive and/or false negative errors have been proposed over the past years. Although such definitions are straightforward and valid for any kind of analytical system, the proposed methodologies to estimate the LOD are usually restricted to signals with Gaussian noise. Additionally, there is a general misconception that two systems with the same LOD provide the same amount of information on the source, regardless of the prior probability of presenting a blank/analyte sample. Based upon an analogy between an analytical system and a binary communication channel, in this paper we show that the amount of information that can be extracted from an analytical system depends on the probability of presenting the two different possible states. We propose a new definition of the LOD, utilizing information theory tools, that deals with noise of any kind and allows prior knowledge to be introduced easily. Unlike most traditional LOD estimation approaches, the proposed definition is based on the amount of information that the chemical instrumentation system provides on the chemical information source. Our findings indicate that benchmarking analytical systems by their ability to provide information about the presence/absence of the analyte (our proposed approach) is a more general and proper framework, while converging to the usual values when dealing with Gaussian noise. Copyright © 2013 Elsevier B.V. All rights reserved.
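The channel analogy can be made concrete: treat blank/analyte as the channel input and negative/positive response as the output, then compute the mutual information between them. The error rates and priors below are hypothetical; a definition along the lines proposed above would select the concentration at which this information crosses a chosen threshold.

```python
# Mutual information of an analytical system modeled as a binary channel.
# Inputs: prior P(analyte), false-positive and false-negative rates.
# All numeric values below are hypothetical.

import math

def h2(p):
    # Binary entropy in bits.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def mutual_information(prior_analyte, p_false_pos, p_false_neg):
    p1 = prior_analyte
    p_pos = p1 * (1 - p_false_neg) + (1 - p1) * p_false_pos  # P(positive)
    # I(X;Y) = H(Y) - H(Y|X)
    return h2(p_pos) - (p1 * h2(p_false_neg) + (1 - p1) * h2(p_false_pos))

# A perfect detector at even priors carries exactly 1 bit per sample;
# detection errors reduce the extractable information.
print(round(mutual_information(0.5, 0.0, 0.0), 3))   # 1.0
print(round(mutual_information(0.5, 0.05, 0.10), 3))
```

Note how the same error rates yield different information at different priors, which is precisely the point made in the abstract: two systems with equal classical LOD need not be equally informative.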
The effect of air entrapment on the performance of squeeze film dampers: Experiments and analysis
NASA Astrophysics Data System (ADS)
Diaz Briceno, Sergio Enrique
Squeeze film dampers (SFDs) are an effective means to introduce the required damping in rotor-bearing systems. They are a standard application in jet engines and are commonly used in industrial compressors. Yet, lack of understanding of their operation has confined the design of SFDs to a costly trial-and-error process based on prior experience. The main factor deterring the success of analytical models for the prediction of SFD performance lies in the modeling of the dynamic film rupture. Usually, the cavitation models developed for journal bearings are applied to SFDs. Yet, the characteristic motion of the SFD results in the entrapment of air into the oil film, producing a bubbly mixture that cannot be represented by these models. In this work, an extensive experimental study establishes qualitatively and, for the first time, quantitatively the differences between operation with vapor cavitation and with air entrainment. The experiments show that most operating conditions lead to air entrainment and demonstrate the paramount effect it has on the performance of SFDs, evidencing the limitation of currently available models. Further experiments address the operation of SFDs with controlled bubbly mixtures. These experiments bolster the possibility of modeling air entrapment by representing the lubricant as a homogeneous mixture of air and oil, and they provide a reliable database for benchmarking such a model. An analytical model is developed based on a homogeneous mixture assumption, in which the bubbles are described by the Rayleigh-Plesset equation. Good agreement is obtained between this model and the measurements performed on the SFD operating with controlled mixtures. A complementary analytical model is devised to estimate the amount of air entrained from the balance of axial flows in the film. A combination of the analytical models for prediction of the air volume fraction and of the hydrodynamic pressures renders promising results for prediction of the performance of SFDs with freely entrained air. The results of this work are of immediate engineering applicability. Furthermore, they represent a firm step toward advancing the understanding of the effects of air entrapment on the performance of SFDs.
NASA Astrophysics Data System (ADS)
Vjačeslavov, N. S.
1980-02-01
In this paper estimates are found for L_p R_n(f), the least deviation in the L_p metric, 0 < p ≤ ∞, of a piecewise analytic function f from the rational functions of degree at most n. It is shown that these estimates are sharp in a well-defined sense. Bibliography: 12 titles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chhiber, R; Usmanov, AV; Matthaeus, WH
Simple estimates of the number of Coulomb collisions experienced by the interplanetary plasma up to the point of observation, i.e., the “collisional age”, can be usefully employed in the study of non-thermal features of the solar wind. Usually these estimates are based on local plasma properties at the point of observation. Here we improve the method of estimating the collisional age by employing solutions obtained from global three-dimensional magnetohydrodynamics simulations. This enables evaluation of the complete analytical expression for the collisional age without using approximations. The improved estimate of the collisional timescale is compared with turbulence and expansion timescales to assess the relative importance of collisions. The collisional age computed using the approximate formula employed in previous work is compared with the improved simulation-based calculations to examine the validity of the simplified formula. We also develop an analytical expression for the evaluation of the collisional age, and we find good agreement between the numerical and analytical results. Finally, we briefly discuss the implications for an improved estimation of collisionality along spacecraft trajectories, including Solar Probe Plus.
NASA Astrophysics Data System (ADS)
Bi, Yiming; Tang, Liang; Shan, Peng; Xie, Qiong; Hu, Yong; Peng, Silong; Tan, Jie; Li, Changwen
2014-08-01
Interference such as baseline drift and light scattering can degrade model predictability in multivariate analysis of near-infrared (NIR) spectra. Such interference can usually be represented by an additive and a multiplicative factor. In order to eliminate these interferences, correction parameters need to be estimated from the spectra. However, the spectra are often a mixture of physical light-scattering effects and chemical light-absorbance effects, which makes parameter estimation difficult. Herein, a novel algorithm was proposed to automatically find a spectral region in which the chemical absorbance of interest and the noise are low, that is, an interference-dominant region (IDR). Based on the definition of the IDR, a two-step method was proposed to find the optimal IDR and the corresponding correction parameters estimated from it. Finally, the correction was applied to the full spectral range using the previously obtained parameters, for the calibration set and the test set, respectively. The method can be applied to multi-target systems, with one IDR suitable for all targeted analytes. Tested on two benchmark data sets of near-infrared spectra, the proposed method provided considerable improvement over full-spectrum estimation methods and performance comparable to other state-of-the-art methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yan; Mohanty, Soumya D.; Jenet, Fredrick A., E-mail: ywang12@hust.edu.cn
2015-12-20
Supermassive black hole binaries are one of the primary targets of gravitational wave (GW) searches using pulsar timing arrays (PTAs). GW signals from such systems are well represented by parameterized models, allowing the standard Generalized Likelihood Ratio Test (GLRT) to be used for their detection and estimation. However, there is a dichotomy in how the GLRT can be implemented for PTAs: there are two possible ways in which the set of signal parameters can be split for semi-analytical and numerical extremization. The straightforward extension of the method used for continuous signals in ground-based GW searches, where the so-called pulsar phase parameters are maximized numerically, was addressed in an earlier paper. In this paper, we report the first study of the performance of the second approach, in which the pulsar phases are maximized semi-analytically. This approach is scalable, since the number of parameters left over for numerical optimization does not depend on the size of the PTA. Our results show that for the same array size (9 pulsars), the new method performs somewhat worse in parameter estimation, but not in detection, than the previous method where the pulsar phases were maximized numerically. The origin of the performance discrepancy is likely to be the ill-posedness that is intrinsic to any network analysis method. However, the scalability of the new method allows the ill-posedness to be mitigated by simply adding more pulsars to the array. This is shown explicitly by taking a larger array of pulsars.
Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods
NASA Astrophysics Data System (ADS)
Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.
2011-12-01
Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential of mitigating the curse of dimensionality by reducing the total number of unknowns while still describing the complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performance of three different but closely related model reduction approaches: (1) PCA with geometric sampling (referred to as Method 1), (2) PCA with MCMC sampling (referred to as Method 2), and (3) PCA with MCMC sampling and inclusion of random effects (referred to as Method 3). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing its inversion results with the corresponding analytical solutions. We generated synthetic data with noise added and inverted them under two different situations: (1) the noisy data and the covariance matrix for the PCA analysis are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient. In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects. However, in the biased case, only Method 3 correctly estimates all the unknown parameters, while Methods 1 and 2 provide wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix for the PCA analysis is inconsistent with the true models, the PCA methods with geometric or MCMC sampling will provide incorrect estimates.
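The flavor of Method 2, PCA-based reduction followed by MCMC over the retained components only, can be sketched on a toy linear problem. Everything below is synthetic, and the forward model is a generic linear operator rather than the convolution model of the study.

```python
# Model reduction sketch: parameterize the unknown model vector by the leading
# principal components of a prior ensemble, then run Metropolis MCMC over the
# few PC weights instead of the full parameter vector. All data are synthetic.

import numpy as np

rng = np.random.default_rng(0)

# Prior ensemble of 200 model vectors (dimension 20) spanning 3 true modes.
basis_true = rng.normal(size=(3, 20))
ensemble = rng.normal(size=(200, 3)) @ basis_true

# PCA of the ensemble: keep the k leading right singular vectors.
mean = ensemble.mean(axis=0)
_, _, vt = np.linalg.svd(ensemble - mean, full_matrices=False)
k = 3
pcs = vt[:k]                            # (k, 20) reduced basis

# Synthetic observation from a known model, through a linear forward operator.
m_true = mean + np.array([1.0, -0.5, 0.25]) @ pcs
G = rng.normal(size=(15, 20))
d_obs = G @ m_true + 0.01 * rng.normal(size=15)

def log_post(w, sigma=0.05):
    # Gaussian likelihood in data space plus a standard-normal prior on weights.
    r = d_obs - G @ (mean + w @ pcs)
    return -0.5 * np.dot(r, r) / sigma**2 - 0.5 * np.dot(w, w)

# Metropolis random walk over the k = 3 PC weights (not the 20 parameters).
w, lp, samples = np.zeros(k), log_post(np.zeros(k)), []
for _ in range(4000):
    prop = w + 0.02 * rng.normal(size=k)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        w, lp = prop, lp_prop
    samples.append(w.copy())

w_hat = np.mean(samples[2000:], axis=0)  # posterior mean after burn-in
print(np.round(w_hat, 2))
```

Because the chain explores only 3 weights instead of 20 raw parameters, this is the dimensionality reduction the study exploits; the biased case discussed above corresponds to building `pcs` from an ensemble inconsistent with `m_true`.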
Veeraraghavan, Sridhar; Viswanadha, Srikant; Thappali, Satheeshmanikandan; Govindarajulu, Babu; Vakkalanka, Swaroopkumar; Rangasamy, Manivannan
2015-03-25
Efficacy assessments using a combination of ibrutinib and lenalidomide necessitate the development of an analytical method for the precise determination of both drugs in plasma. A high-performance liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was developed for the simultaneous determination of lenalidomide, ibrutinib, and its active metabolite PCI-45227 in rat plasma. Extraction of lenalidomide, ibrutinib, PCI-45227, and tolbutamide (internal standard; IS) from 50 μl of rat plasma was carried out by liquid-liquid extraction with ethyl acetate:dichloromethane (90:10). Chromatographic separation of the analytes was performed on a YMC Pack ODS AM (150 mm × 4.6 mm, 5 μm) column under gradient conditions with acetonitrile and 0.1% formic acid buffer as the mobile phases at a flow rate of 1 ml/min. Precursor ion and product ion transitions for the analytes and IS were monitored on a triple quadrupole mass spectrometer, operated in selected reaction monitoring with positive ionization mode. The method was validated over a concentration range of 0.72-183.20 ng/ml for ibrutinib, 0.76-194.33 ng/ml for PCI-45227, and 1.87-479.16 ng/ml for lenalidomide. Mean extraction recoveries for ibrutinib, PCI-45227, lenalidomide, and IS of 75.2%, 84.5%, 97.3%, and 92.3%, respectively, were consistent across low, medium, and high QC levels. Precision and accuracy at low, medium, and high quality control levels were within 15% for all analytes. Bench-top, wet, freeze-thaw, and long-term stability were evaluated for all the analytes. The analytical method was applied to support a pharmacokinetic study with simultaneous estimation of lenalidomide, ibrutinib, and its active metabolite PCI-45227 in Wistar rats. Assay reproducibility was demonstrated by re-analysis of 18 incurred samples. Copyright © 2014 Elsevier B.V. All rights reserved.
Projected 1981 exposure estimates using iterative proportional fitting
DOT National Transportation Integrated Search
1985-10-01
1981 VMT estimates categorized by eight driver, vehicle, and environmental variables are produced. These 1981 estimates are produced using analytical methods developed in a previous report. The estimates are based on 1977 NPTS data (the latest ...
MSE-impact of PPP-RTK ZTD estimation strategies
NASA Astrophysics Data System (ADS)
Wang, K.; Khodabandeh, A.; Teunissen, P. J. G.
2018-06-01
In PPP-RTK network processing, the wet component of the zenith tropospheric delay (ZTD) cannot be precisely modelled and thus remains unknown in the observation equations. For small networks, the tropospheric mapping functions of different stations to a given satellite are almost equal to each other, causing a near rank-deficiency between the ZTDs and the satellite clocks. This near rank-deficiency can be resolved by estimating the wet ZTD components relative to that of the reference receiver, whose own wet ZTD component is constrained to zero. However, as the network scale and the humidity around the reference receiver increase, the enlarged mismodelled effects could bias the network and user solutions. To account for the influences of both noise and biases, the mean-squared errors (MSEs) of different network and user parameters are studied analytically for both ZTD estimation strategies. We conclude that for a certain set of parameters, the difference in their MSE structures between the two strategies is driven only by the square of the reference wet ZTD component and the formal variance of its solution. Depending on the network scale and the humidity conditions around the reference receiver, the ZTD estimation strategy that delivers more accurate solutions can differ. Simulations are performed to illustrate the conclusions of the analytical studies. We find that estimating the ZTDs relatively in large networks and humid regions (for the reference receiver) could significantly degrade the network ambiguity success rates. Using ambiguity-fixed network-derived PPP-RTK corrections, for networks with inter-station distances within 100 km, the choice of ZTD estimation strategy is not crucial for single-epoch ambiguity-fixed user positioning.
Using ambiguity-float network corrections, for networks with inter-station distances of 100, 300 and 500 km in humid regions (for the reference receiver), the root-mean-squared errors (RMSEs) of the estimated user coordinates using relative ZTD estimation could be higher than those under the absolute case with differences up to millimetres, centimetres and decimetres, respectively.
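The MSE comparison above rests on the standard decomposition MSE = variance + bias², with the unmodelled reference wet ZTD acting as a bias term under the relative strategy and its formal variance entering under the absolute strategy. The toy numbers below are illustrative assumptions, not values from the paper:

```python
# Toy comparison of the two ZTD estimation strategies in MSE terms.
# For an estimator, MSE = variance + bias^2. In the relative strategy the
# unmodelled reference wet ZTD acts as a bias on the affected parameters,
# while the absolute strategy carries its formal variance instead.

def mse(variance, bias):
    return variance + bias ** 2

ref_wet_ztd = 0.05          # metres; hypothetical reference wet ZTD
formal_var = 0.0004         # m^2; hypothetical formal variance of its solution

mse_relative = mse(variance=0.0009, bias=ref_wet_ztd)       # biased by ref ZTD
mse_absolute = mse(variance=0.0009 + formal_var, bias=0.0)  # extra variance

# In humid regions (large ref_wet_ztd) the relative strategy loses;
# for a dry reference site the ordering can flip.
```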
An Automated Directed Spectral Search Methodology for Small Target Detection
NASA Astrophysics Data System (ADS)
Grossman, Stanley I.
Much of the current effort in remote sensing tackles macro-level problems such as determining the extent of wheat in a field, the general health of vegetation, or the extent of mineral deposits in an area. However, many of the remaining remote sensing challenges, such as border protection, drug smuggling, treaty verification, and the war on terror, involve targets that are very small in nature - a vehicle or even a person. While in typical macro-level problems the vegetation of interest is known to be in the scene, in small target detection problems it is usually not known whether the desired target even exists in the scene, never mind finding it in abundance. The ability to find specific small targets, such as vehicles, typifies this problem. Compounding the difficulty, the growing number of available sensors is generating mountains of imagery that outstrip analysts' ability to peruse them visually. This work presents the important factors influencing spectral exploitation using multispectral data and suggests a different approach to small target detection. The methodology of directed search is presented, including the use of scene-modeled spectral libraries, various search algorithms, and traditional statistical and ROC curve analysis. The work proposes a new metric to calibrate analysis, termed the analytic sweet spot, as well as an estimation method for identifying the sweet-spot threshold for an image. It also introduces a new visualization aid, called nearest neighbor inflation (NNI), for highlighting the target in its entirety. Together, these additions to the target detection arena allow for the construction of a fully automated target detection scheme. This dissertation next details experiments to support the hypothesis that the optimum detection threshold is the analytic sweet spot and that the estimation method adequately predicts it.
Experimental results and analysis are presented for the proposed directed search techniques of spectral-image-based small target detection. The results offer evidence of the functionality of the NNI visualization and show that the increased spectral dimensionality of the 8-band WorldView-2 datasets provides noteworthy improvement over traditional 4-band multispectral datasets. The final experiment presents results from a prototype fully automated target detection scheme in support of the overarching premise. This work establishes the analytic sweet spot as the optimum threshold, defined as the point where the error rate curves -- false detections vs. missed detections -- cross. At this point the errors are minimized while the detection rate is maximized. It then demonstrates that taking the first moment of the histogram of calculated target detection values, from a detection search with the test threshold set arbitrarily high, estimates the analytic sweet spot for that image. It also demonstrates that directed search techniques -- when utilized with appropriate scene-specific modeled signatures and atmospheric compensations -- perform at least as well as in-scene search techniques 88% of the time and grossly under-perform only 11% of the time, whereas the in-scene search performs as well or better only 50% of the time. It further demonstrates the clear advantage increased multispectral dimensionality brings to detection searches, improving performance in 50% of the cases while performing at least as well 72% of the time. Lastly, it presents evidence that a fully automated prototype performs as anticipated, laying the groundwork for further research into fully automated processes for small target detection.
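The crossing-point definition of the analytic sweet spot can be sketched numerically: sweep a detection threshold, trace the false-detection and missed-detection rate curves, and take the threshold where they intersect. The synthetic score distributions below are illustrative assumptions, not the dissertation's data:

```python
import numpy as np

# Synthetic detection scores: any detection statistic would do here.
rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, 10000)   # non-target pixel scores
targets = rng.normal(2.5, 1.0, 10000)      # target pixel scores

thresholds = np.linspace(-2.0, 5.0, 701)
false_det = np.array([(background > t).mean() for t in thresholds])
missed_det = np.array([(targets <= t).mean() for t in thresholds])

# Analytic sweet spot: threshold where the two error-rate curves cross,
# minimizing errors while keeping the detection rate high.
sweet_idx = np.argmin(np.abs(false_det - missed_det))
sweet_spot = thresholds[sweet_idx]
```

For these equal-variance Gaussians the crossing lies midway between the class means, near 1.25.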
NASA Astrophysics Data System (ADS)
Reynerson, Charles Martin
This research has been performed to create concept design and economic feasibility data for space business parks. A space business park is a commercially run multi-use space station facility designed for use by a wide variety of customers. Both space hardware and crew are considered as revenue producing payloads. Examples of commercial markets may include biological and materials research, processing, and production, space tourism habitats, and satellite maintenance and resupply depots. This research develops a design methodology and an analytical tool to create feasible preliminary design information for space business parks. The design tool is validated against a number of real facility designs. Appropriate model variables are adjusted to ensure that statistical approximations are valid for subsequent analyses. The tool is used to analyze the effect of various payload requirements on the size, weight and power of the facility. The approach for the analytical tool was to input potential payloads as simple requirements, such as volume, weight, power, crew size, and endurance. In creating the theory, basic principles are used and combined with parametric estimation of data when necessary. Key system parameters are identified for overall system design. Typical ranges for these key parameters are identified based on real human spaceflight systems. To connect the economics to design, a life-cycle cost model is created based upon facility mass. This rough cost model estimates potential return on investments, initial investment requirements and number of years to return on the initial investment. Example cases are analyzed for both performance and cost driven requirements for space hotels, microgravity processing facilities, and multi-use facilities. In combining both engineering and economic models, a design-to-cost methodology is created for more accurately estimating the commercial viability for multiple space business park markets.
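The design-to-cost idea of a mass-based life-cycle cost model can be sketched as a payback calculation: facility cost scales with mass, and years to return on investment follow from net annual revenue. All coefficients below are illustrative placeholders, not values from the study:

```python
# Hypothetical mass-based life-cycle cost sketch: facility cost folded into
# a single $/kg figure (development plus launch), payback in years.

def initial_investment(facility_mass_kg, cost_per_kg=50_000.0):
    """Rough facility cost: development + launch, folded into $/kg."""
    return facility_mass_kg * cost_per_kg

def years_to_roi(facility_mass_kg, annual_revenue, annual_operating_cost):
    """Years until cumulative net revenue repays the initial investment."""
    net = annual_revenue - annual_operating_cost
    if net <= 0:
        return float("inf")   # never recovers the investment
    return initial_investment(facility_mass_kg) / net

# Hypothetical 100-tonne facility with $900M revenue and $400M operating cost.
payback = years_to_roi(100_000, annual_revenue=900e6, annual_operating_cost=400e6)
```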
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passos de Figueiredo, Leandro, E-mail: leandrop.fgr@gmail.com; Grana, Dario; Santos, Marcio
We propose a Bayesian approach for seismic inversion to estimate acoustic impedance, porosity and lithofacies within the reservoir conditioned to post-stack seismic and well data. The link between elastic and petrophysical properties is given by a joint prior distribution for the logarithm of impedance and porosity, based on a rock-physics model. The well conditioning is performed through a background model obtained by well log interpolation. Two different approaches are presented: in the first approach, the prior is defined by a single Gaussian distribution, whereas in the second approach it is defined by a Gaussian mixture to represent the multimodal distribution of the well data and link the Gaussian components to different geological lithofacies. The forward model is based on a linearized convolutional model. For the single Gaussian case, we obtain an analytical expression for the posterior distribution, resulting in a fast algorithm to compute the solution of the inverse problem, i.e. the posterior distribution of acoustic impedance and porosity as well as the facies probability given the observed data. For the Gaussian mixture prior, it is not possible to obtain the distributions analytically, hence we propose a Gibbs algorithm to perform the posterior sampling and obtain several reservoir model realizations, allowing an uncertainty analysis of the estimated properties and lithofacies. Both methodologies are applied to a real seismic dataset with three wells to obtain 3D models of acoustic impedance, porosity and lithofacies. The methodologies are validated through a blind well test and compared to a standard Bayesian inversion approach. Using the probability of the reservoir lithofacies, we also compute a 3D isosurface probability model of the main oil reservoir in the studied field.
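The reason the single-Gaussian case admits a fast algorithm is the standard linear-Gaussian result: with a linear forward model d = G m + noise and a Gaussian prior on m, the posterior is Gaussian with closed-form mean and covariance. The sketch below uses a generic random operator as a stand-in for the paper's linearized convolutional model:

```python
import numpy as np

# Minimal linear-Gaussian analogue of the single-Gaussian case.
rng = np.random.default_rng(1)

n_model, n_data = 4, 6
G = rng.normal(size=(n_data, n_model))        # stand-in linear forward operator
m_true = rng.normal(size=n_model)
sigma_d = 0.1
d = G @ m_true + rng.normal(0.0, sigma_d, n_data)

C_prior = np.eye(n_model)                      # prior covariance
C_noise = sigma_d**2 * np.eye(n_data)
m_prior = np.zeros(n_model)

# Posterior covariance and mean (standard Bayesian linear inverse formulas):
#   C_post = (C_prior^-1 + G^T C_noise^-1 G)^-1
#   m_post = C_post (C_prior^-1 m_prior + G^T C_noise^-1 d)
C_post = np.linalg.inv(np.linalg.inv(C_prior) + G.T @ np.linalg.inv(C_noise) @ G)
m_post = C_post @ (np.linalg.inv(C_prior) @ m_prior
                   + G.T @ np.linalg.inv(C_noise) @ d)
```

The Gaussian-mixture prior breaks this conjugacy, which is why the paper resorts to Gibbs sampling there.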
A rotor technology assessment of the advancing blade concept
NASA Technical Reports Server (NTRS)
Pleasants, W. A.
1983-01-01
A rotor technology assessment of the Advancing Blade Concept (ABC) was conducted in support of a preliminary design study. The analytical methodology modifications and inputs, the correlation, and the results of the assessment are documented. The primary emphasis was on the high-speed forward-flight performance of the rotor. The correlation database included both wind-tunnel and flight-test results. An advanced ABC rotor design was examined; the suitability of the ABC for a particular mission was not considered. The objective of this technology assessment was to provide estimates of the performance potential of an advanced ABC rotor designed for high-speed forward flight.
NASA Technical Reports Server (NTRS)
Bowyer, J. M.
1984-01-01
The potential of a suitably designed and economically manufactured Stirling engine as the energy conversion subsystem of a paraboloidal dish-Stirling solar thermal power module was estimated. Results obtained by elementary cycle analyses were shown to match the performance characteristics of an advanced kinematic Stirling engine, the United Stirling P-40, quite well, as established by current prototypes of the engine and by a more sophisticated analytic model of its advanced derivative. In addition to performance, brief consideration was given to other Stirling engine criteria such as durability, reliability, and serviceability. Production costs were not considered here.
Current and future technology in radial and axial gas turbines
NASA Technical Reports Server (NTRS)
Rohlik, H. E.
1983-01-01
Design approaches and flow analysis techniques currently employed by aircraft engine manufacturers are assessed. Studies were performed to define the characteristics of aircraft and engines for civil missions of the 1990's and beyond. These studies, coupled with experience in recent years, identified the critical technologies needed to meet long range goals in fuel economy and other operating costs. Study results, recent and current research and development programs, and an estimate of future design and analytic capabilities are discussed.
The uncertainty of nitrous oxide emissions from grazed grasslands: A New Zealand case study
NASA Astrophysics Data System (ADS)
Kelliher, Francis M.; Henderson, Harold V.; Cox, Neil R.
2017-01-01
Agricultural soils emit nitrous oxide (N2O), a greenhouse gas and the primary source of the nitrogen oxides that deplete stratospheric ozone. Agriculture has been estimated to be the largest anthropogenic N2O source. In New Zealand (NZ), pastoral agriculture uses half the land area. To estimate the annual N2O emissions from NZ's agricultural soils, the nitrogen (N) inputs have been determined and multiplied by an emission factor (EF), the mass fraction of N inputs emitted as N2O-N. To estimate the associated uncertainty, we developed an analytical method. For comparison, another estimate was determined by Monte Carlo numerical simulation. For both methods, expert judgement was used to estimate the N input uncertainty. The EF uncertainty was estimated by meta-analysis of the results from 185 NZ field trials. For the analytical method, assuming a normal distribution and independence of the terms used to calculate the emissions (correlation = 0), the estimated 95% confidence limit was ±57%. When there was a normal distribution and an estimated correlation of 0.4 between N input and EF, the latter inferred from experimental data involving six NZ soils, the analytical method estimated a 95% confidence limit of ±61%. The EF data from the 185 NZ field trials had a logarithmic normal distribution. For the Monte Carlo method, assuming a logarithmic normal distribution for EF, a normal distribution for the other terms and independence of all terms, the estimated 95% confidence limits were -32% and +88%, or ±60% on average. With the same distribution assumptions and a correlation of 0.4 between N input and EF, the Monte Carlo method's estimated 95% confidence limits were -34% and +94%, or ±64% on average. For the analytical and Monte Carlo methods, EF uncertainty accounted for 95% and 83% of the emissions uncertainty when the correlation between N input and EF was 0 and 0.4, respectively.
As the first uncertainty analysis of an agricultural soils N2O emissions inventory using "country-specific" field trials to estimate EF uncertainty, this can be a potentially informative case study for the international scientific community.
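The Monte Carlo side of this calculation is straightforward to sketch: draw the N input from a normal distribution and the EF from a lognormal one, multiply, and read off asymmetric percentile limits. The distribution parameters below are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

# Monte Carlo sketch of the emissions uncertainty: emissions = N input x EF,
# with a normally distributed N input and a lognormally distributed EF.
rng = np.random.default_rng(42)
n = 200_000

n_input = rng.normal(loc=1.0, scale=0.05, size=n)          # relative N input
ef = rng.lognormal(mean=np.log(0.01), sigma=0.3, size=n)   # emission factor

emissions = n_input * ef
lo, hi = np.percentile(emissions, [2.5, 97.5])
central = np.median(emissions)

# Asymmetric confidence limits, as in the paper's Monte Carlo results
# (the lognormal EF stretches the upper tail more than the lower).
limit_lo = 100.0 * (lo / central - 1.0)   # negative percentage
limit_hi = 100.0 * (hi / central - 1.0)   # positive percentage
```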
Corrected Four-Sphere Head Model for EEG Signals.
Næss, Solveig; Chintaluri, Chaitanya; Ness, Torbjørn V; Dale, Anders M; Einevoll, Gaute T; Wójcik, Daniel K
2017-01-01
The EEG signal is generated by electrical brain cell activity, often described in terms of current dipoles. By applying EEG forward models we can compute the contribution from such dipoles to the electrical potential recorded by EEG electrodes. Forward models are key both for generating understanding and intuition about the neural origin of EEG signals as well as inverse modeling, i.e., the estimation of the underlying dipole sources from recorded EEG signals. Different models of varying complexity and biological detail are used in the field. One such analytical model is the four-sphere model which assumes a four-layered spherical head where the layers represent brain tissue, cerebrospinal fluid (CSF), skull, and scalp, respectively. While conceptually clear, the mathematical expression for the electric potentials in the four-sphere model is cumbersome, and we observed that the formulas presented in the literature contain errors. Here, we derive and present the correct analytical formulas with a detailed derivation. A useful application of the analytical four-sphere model is that it can serve as ground truth to test the accuracy of numerical schemes such as the Finite Element Method (FEM). We performed FEM simulations of the four-sphere head model and showed that they were consistent with the corrected analytical formulas. For future reference we provide scripts for computing EEG potentials with the four-sphere model, both by means of the correct analytical formulas and numerical FEM simulations.
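The four-sphere expressions themselves are lengthy, but a useful limiting case for sanity-checking any head-model implementation is the potential of a current dipole in an infinite homogeneous medium, V(r) = p · r̂ / (4πσ|r − r_p|²). The conductivity value below is a typical brain-tissue assumption, not a parameter from the paper:

```python
import numpy as np

# Current-dipole potential in an infinite homogeneous medium: the simplest
# forward model, which spherical head models should approach far from any
# conductivity boundary.
def dipole_potential(r, r_dipole, p, sigma=0.3):
    """Potential (V) at point r from dipole moment p (A*m) at r_dipole;
    sigma is the medium conductivity in S/m (0.3 is a typical brain value)."""
    rel = np.asarray(r, float) - np.asarray(r_dipole, float)
    dist = np.linalg.norm(rel)
    return (p @ rel) / (4.0 * np.pi * sigma * dist**3)

# Radial dipole at the origin, measured on the z-axis: falls off as 1/r^2.
p = np.array([0.0, 0.0, 1e-8])
v1 = dipole_potential([0, 0, 0.05], [0, 0, 0], p)
v2 = dipole_potential([0, 0, 0.10], [0, 0, 0], p)
```

Doubling the distance quarters the potential, a quick check that the 1/r² law is implemented correctly.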
Models for estimating photosynthesis parameters from in situ production profiles
NASA Astrophysics Data System (ADS)
Kovač, Žarko; Platt, Trevor; Sathyendranath, Shubha; Antunović, Suzana
2017-12-01
The rate of carbon assimilation in phytoplankton primary production models is mathematically prescribed with photosynthesis irradiance functions, which convert a light flux (energy) into a material flux (carbon). Information on this rate is contained in photosynthesis parameters: the initial slope and the assimilation number. The exactness of parameter values is crucial for precise calculation of primary production. Here we use a model of the daily production profile based on a suite of photosynthesis irradiance functions and extract photosynthesis parameters from in situ measured daily production profiles at the Hawaii Ocean Time-series station Aloha. For each function we recover parameter values, establish parameter distributions and quantify model skill. We observe that the choice of the photosynthesis irradiance function to estimate the photosynthesis parameters affects the magnitudes of parameter values as recovered from in situ profiles. We also tackle the problem of parameter exchange amongst the models and the effect it has on model performance. All models displayed little or no bias prior to parameter exchange, but significant bias following parameter exchange. The best model performance resulted from using optimal parameter values. Model formulation was extended further by accounting for spectral effects and deriving a spectral analytical solution for the daily production profile. The daily production profile was also formulated with time dependent growing biomass governed by a growth equation. The work on parameter recovery was further extended by exploring how to extract photosynthesis parameters from information on watercolumn production. It was demonstrated how to estimate parameter values based on a linearization of the full analytical solution for normalized watercolumn production and from the solution itself, without linearization. 
The paper complements previous works on photosynthesis irradiance models by analysing the skill and consistency of photosynthesis irradiance functions and parameters for modeling in situ production profiles. In light of the results obtained in this work, we argue that the choice of the primary production model should reflect the available data, and that these models should be data-driven with regard to parameter estimation.
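Parameter recovery of the kind described above can be sketched with one common photosynthesis-irradiance function, P(I) = P_m tanh(αI/P_m), fit to a production profile by least squares. The data below are synthetic stand-ins for the Station ALOHA profiles, and the brute-force grid search is a simple stand-in for a proper nonlinear fit:

```python
import numpy as np

# Recover the two photosynthesis parameters (initial slope alpha,
# assimilation number P_m) from a synthetic production-irradiance profile.
def pi_curve(I, alpha, Pm):
    return Pm * np.tanh(alpha * I / Pm)

I = np.linspace(5, 1500, 60)                 # irradiance levels
alpha_true, Pm_true = 0.08, 5.0
P_obs = pi_curve(I, alpha_true, Pm_true)     # noiseless synthetic profile

# Brute-force least squares over a parameter grid.
alphas = np.linspace(0.01, 0.2, 96)
Pms = np.linspace(1.0, 10.0, 91)
A, Pm = np.meshgrid(alphas, Pms)
sse = ((pi_curve(I[:, None, None], A, Pm) - P_obs[:, None, None]) ** 2).sum(axis=0)
i, j = np.unravel_index(np.argmin(sse), sse.shape)
alpha_hat, Pm_hat = A[i, j], Pm[i, j]
```

With noiseless data and the true values on the grid, the fit recovers both parameters exactly; with real profiles, the recovered values depend on the chosen P-I function, which is the paper's central point.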
Durstewitz, Daniel
2017-06-01
The computational and cognitive properties of neural systems are often thought to be implemented in terms of their (stochastic) network dynamics. Hence, recovering the system dynamics from experimentally observed neuronal time series, like multiple single-unit recordings or neuroimaging data, is an important step toward understanding its computations. Ideally, one would not only seek a (lower-dimensional) state space representation of the dynamics, but would wish to have access to its statistical properties and their generative equations for in-depth analysis. Recurrent neural networks (RNNs) are a computationally powerful and dynamically universal formal framework which has been extensively studied from both the computational and the dynamical systems perspective. Here we develop a semi-analytical maximum-likelihood estimation scheme for piecewise-linear RNNs (PLRNNs) within the statistical framework of state space models, which accounts for noise in both the underlying latent dynamics and the observation process. The Expectation-Maximization algorithm is used to infer the latent state distribution, through a global Laplace approximation, and the PLRNN parameters iteratively. After validating the procedure on toy examples, and using inference through particle filters for comparison, the approach is applied to multiple single-unit recordings from the rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory task, delayed alternation. Models estimated from kernel-smoothed spike time data were able to capture the essential computational dynamics underlying task performance, including stimulus-selective delay activity. The estimated models were rarely multi-stable, however, but rather were tuned to exhibit slow dynamics in the vicinity of a bifurcation point. 
In summary, the present work advances a semi-analytical (and thus reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable recovery of relevant aspects of the nonlinear dynamics underlying observed neuronal time series, and directly link these to computational properties.
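The generative side of a PLRNN state space model can be sketched as a latent recursion z_t = A z_{t-1} + W φ(z_{t-1}) + h + noise with φ = ReLU, plus a linear-Gaussian observation model. The parameter values below are arbitrary illustrations; in the paper they are estimated from data by EM:

```python
import numpy as np

# Minimal forward simulation of a piecewise-linear RNN (PLRNN) state space
# model: latent dynamics z_t = A z_{t-1} + W*relu(z_{t-1}) + h + noise,
# with linear-Gaussian observations x_t = B z_t + obs noise.
rng = np.random.default_rng(7)
dim_z, dim_x, T = 3, 5, 200

A = np.diag([0.9, 0.8, 0.7])                 # diagonal linear part (stable)
W = 0.1 * rng.normal(size=(dim_z, dim_z))    # off-diagonal nonlinear coupling
np.fill_diagonal(W, 0.0)
h = np.array([0.1, -0.05, 0.02])             # bias term
B = rng.normal(size=(dim_x, dim_z))          # observation matrix

z = np.zeros((T, dim_z))
x = np.zeros((T, dim_x))
for t in range(1, T):
    z[t] = (A @ z[t-1] + W @ np.maximum(z[t-1], 0.0) + h
            + 0.01 * rng.normal(size=dim_z))
    x[t] = B @ z[t] + 0.05 * rng.normal(size=dim_x)
```

The EM scheme in the paper inverts this generative process: given only `x`, it infers the latent states and the parameters A, W, h, B.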
ERIC Educational Resources Information Center
Myers, Nicholas D.; Ahn, Soyeon; Jin, Ying
2011-01-01
Monte Carlo methods can be used in data analytic situations (e.g., validity studies) to make decisions about sample size and to estimate power. The purpose of using Monte Carlo methods in a validity study is to improve the methodological approach within a study where the primary focus is on construct validity issues and not on advancing…
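Monte Carlo power estimation of the kind the abstract describes amounts to simulating many datasets under an assumed effect, running the test on each, and reporting the rejection rate. The sketch below uses a known-variance two-sample z-test to stay dependency-free; the effect size and sample sizes are illustrative:

```python
import numpy as np

# Monte Carlo power estimation for a two-sample comparison.
rng = np.random.default_rng(0)

def power(n_per_group, effect_size, n_sims=20_000, alpha=0.05):
    """Estimate power of a two-sided z-test (sigma = 1 known) by simulation."""
    crit = 1.959963984540054            # two-sided 5% normal critical value
    a = rng.normal(0.0, 1.0, size=(n_sims, n_per_group))
    b = rng.normal(effect_size, 1.0, size=(n_sims, n_per_group))
    se = np.sqrt(2.0 / n_per_group)     # SD of the difference in means
    z = (b.mean(axis=1) - a.mean(axis=1)) / se
    return (np.abs(z) > crit).mean()

# Power rises with sample size; a study could pick the smallest n meeting
# a target such as 0.80.
p30 = power(30, effect_size=0.5)
p64 = power(64, effect_size=0.5)
```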
NASA Astrophysics Data System (ADS)
Lolli, Simone; Di Girolamo, Paolo; Demoz, Belay; Li, Xiaowen; Welton, Ellsworth J.
2018-04-01
Rain evaporation significantly contributes to moisture and heat cloud budgets. In this paper, we illustrate an approach to estimate the median volume raindrop diameter and the rain evaporation rate profiles from dual-wavelength lidar measurements. These observational results are compared with those provided by a model analytical solution. We made use of measurements from the multi-wavelength Raman lidar BASIL.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pant, Nidhi; Das, Santanu; Mitra, Sanjit
Mild, unavoidable deviations from circular symmetry of instrumental beams, along with the scan strategy, can give rise to measurable Statistical Isotropy (SI) violation in Cosmic Microwave Background (CMB) experiments. If not accounted for properly, this spurious signal can complicate the extraction of other SI violation signals (if any) in the data. However, estimation of this effect through exact numerical simulation is computationally intensive and time-consuming. A generalized analytical formalism not only provides a quick way of estimating this signal, but also gives a detailed understanding connecting the leading beam anisotropy components to a measurable BipoSH characterisation of SI violation. In this paper, we provide an approximate generic analytical method for estimating the SI violation generated due to a non-circular (NC) beam and arbitrary scan strategy, in terms of the Bipolar Spherical Harmonic (BipoSH) spectra. Our analytical method can predict almost all the features introduced by a NC beam in a complex scan and thus reduces the need for extensive numerical simulation, worth tens of thousands of CPU hours, to minutes-long calculations. As an illustrative example, we use WMAP beams and scanning strategy to demonstrate the usability and efficiency of our method. We test all our analytical results against those from exact numerical simulations.
Faggion, Clovis Mariano; Wu, Yun-Chun; Scheidgen, Moritz; Tu, Yu-Kang
2015-01-01
Risk of bias (ROB) may threaten the internal validity of a clinical trial by distorting the magnitude of treatment effect estimates, although some conflicting information on this assumption exists. The objective of this study was to evaluate the effect of ROB on the magnitude of treatment effect estimates in randomized controlled trials (RCTs) in periodontology and implant dentistry. A search for Cochrane systematic reviews (SRs) including meta-analyses of RCTs published in the periodontology and implant dentistry fields was performed in the Cochrane Library in September 2014. Random-effects meta-analyses were performed by grouping RCTs with different levels of ROB in three domains (sequence generation, allocation concealment, and blinding of outcome assessment). To increase power and precision, only SRs with meta-analyses including at least 10 RCTs were included. Meta-regression was performed to investigate the association between ROB characteristics and the magnitudes of intervention effects in the meta-analyses. Of the 24 initially screened SRs, 21 were excluded because they did not include at least 10 RCTs in their meta-analyses. Three SRs (two from the periodontology field) generated information for conducting 27 meta-analyses. Meta-regression did not reveal significant differences in the relationship of ROB level with the size of treatment effect estimates, although a trend toward inflated estimates was observed in domains with unclear ROB. In this sample of RCTs, high and (mainly) unclear risks of selection and detection bias did not seem to influence the size of treatment effect estimates, although several confounders might have influenced the strength of the association.
Estimated Benefits of Variable-Geometry Wing Camber Control for Transport Aircraft
NASA Technical Reports Server (NTRS)
Bolonkin, Alexander; Gilyard, Glenn B.
1999-01-01
Analytical benefits of variable-camber capability on subsonic transport aircraft are explored. Using aerodynamic performance models, including drag as a function of deflection angle for the control surfaces of interest, optimal performance benefits of variable camber are calculated. Results demonstrate that if all wing trailing-edge surfaces are available for optimization, drag can be significantly reduced at most points within the flight envelope. The optimization approach developed and illustrated for flight uses variable camber to optimize aerodynamic efficiency (maximizing the lift-to-drag ratio). Most transport aircraft have significant latent capability in this area. Wing camber control that can affect performance optimization for transport aircraft includes symmetric use of ailerons and flaps. In this paper, drag characteristics for aileron and flap deflections are computed from analytical and wind-tunnel data. All calculations are based on predictions for the subject aircraft, and the optimal surface deflection for given conditions is obtained by simple interpolation. An algorithm is also presented for computing the optimal surface deflection for given conditions. Benefits of variable camber for a transport configuration using a simple trailing-edge control surface system can exceed 10 percent, especially for nonstandard flight conditions. In the cruise regime, the benefit is 1-3 percent.
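The interpolation step can be sketched as follows: given tabulated drag increments versus trailing-edge deflection for one flight condition, interpolate onto a fine grid and pick the deflection that maximizes lift-to-drag ratio. The table and baseline coefficients below are illustrative assumptions, not data for the subject aircraft:

```python
import numpy as np

# Illustrative drag-increment table vs. symmetric trailing-edge deflection,
# with a simple linear lift increment and baseline cruise coefficients.
deflection_deg = np.array([-4.0, -2.0, 0.0, 2.0, 4.0, 6.0])
delta_cd = np.array([0.0012, 0.0003, 0.0, 0.0002, 0.0009, 0.0021])
delta_cl = 0.01 * deflection_deg          # assumed lift increment per degree

cl0, cd0 = 0.50, 0.030                    # hypothetical baseline coefficients

# Interpolate onto a fine grid and maximize L/D over deflection.
fine = np.linspace(-4.0, 6.0, 1001)
cl = cl0 + np.interp(fine, deflection_deg, delta_cl)
cd = cd0 + np.interp(fine, deflection_deg, delta_cd)
l_over_d = cl / cd

best = fine[np.argmax(l_over_d)]          # optimal deflection, this condition
```

Repeating this lookup across flight conditions is what turns the tabulated data into a camber-optimization schedule.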
Using GIS-based methods and lidar data to estimate rooftop solar technical potential in US cities
Margolis, Robert; Gagnon, Pieter; Melius, Jennifer; ...
2017-07-06
Here, we estimate the technical potential of rooftop solar photovoltaics (PV) for select US cities by combining light detection and ranging (lidar) data, a validated analytical method for determining rooftop PV suitability employing geographic information systems, and modeling of PV electricity generation. We find that rooftop PV's ability to meet estimated city electricity consumption varies widely - from meeting 16% of annual consumption (in Washington, DC) to meeting 88% (in Mission Viejo, CA). Important drivers include average rooftop suitability, household footprint/per-capita roof space, the quality of the solar resource, and the city's estimated electricity consumption. In addition to city-wide results, we also estimate the ability of aggregations of households to offset their electricity consumption with PV. In a companion article, we will use statistical modeling to extend our results and estimate national rooftop PV technical potential. In addition, our publicly available data and methods may help policy makers, utilities, researchers, and others perform customized analyses to meet their specific needs.
Brooks, M.H.; Schroder, L.J.; Malo, B.A.
1985-01-01
Four laboratories were evaluated in their analysis of identical natural and simulated precipitation water samples. Interlaboratory comparability was evaluated using analysis of variance coupled with Duncan's multiple range test, and linear-regression models describing the relations between individual laboratory analytical results for natural precipitation samples. Results of the statistical analyses indicate that certain pairs of laboratories produce different results when analyzing identical samples. Analyte bias for each laboratory was examined using analysis of variance coupled with Duncan's multiple range test on data produced by the laboratories from the analysis of identical simulated precipitation samples. Bias for a given analyte produced by a single laboratory was indicated when the laboratory mean for that analyte was significantly different from the mean for the most-probable analyte concentrations in the simulated precipitation samples. Ion-chromatographic methods for the determination of chloride, nitrate, and sulfate were compared with the colorimetric methods also in use during the study period. Comparisons were made using analysis of variance coupled with Duncan's multiple range test for the means produced by the two methods. Analyte precision for each laboratory was estimated by calculating a pooled variance for each analyte. Estimated analyte precisions were compared using F-tests, and differences in analyte precision for laboratory pairs are reported. (USGS)
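The precision estimate described above pools per-sample variances of replicate determinations into a single analyte variance per laboratory, and an F-test then compares two laboratories via the ratio of their pooled variances. The replicate values below are invented for illustration:

```python
import numpy as np

# Pool variances across groups of replicate measurements, then form the
# F statistic (variance ratio) for a two-laboratory comparison.
def pooled_variance(groups):
    """Degrees-of-freedom-weighted pooled variance across replicate groups."""
    ss = sum((len(g) - 1) * np.var(g, ddof=1) for g in groups)
    df = sum(len(g) - 1 for g in groups)
    return ss / df

# Hypothetical triplicate determinations of one analyte at two levels.
lab_a = [np.array([4.1, 4.3, 4.2]), np.array([7.0, 7.2, 7.1])]
lab_b = [np.array([4.0, 4.6, 4.4]), np.array([6.8, 7.5, 7.0])]

var_a = pooled_variance(lab_a)
var_b = pooled_variance(lab_b)
f_statistic = max(var_a, var_b) / min(var_a, var_b)
```

Comparing `f_statistic` against the critical F value for the pooled degrees of freedom decides whether the two laboratories' precisions differ significantly.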
Analytical Modeling of Groundwater Seepages to St. Lucie Estuary
NASA Astrophysics Data System (ADS)
Lee, J.; Yeh, G.; Hu, G.
2008-12-01
In this paper, six analytical models describing the hydraulic interaction of stream-aquifer systems were applied to the St. Lucie Estuary (SLE). These are analytical solutions for: (1) flow from a finite aquifer to a canal, (2) flow from an infinite aquifer to a canal, (3) the linearized Laplace system in a seepage surface, (4) wave propagation in the aquifer, (5) potential flow through stratified unconfined aquifers, and (6) flow through stratified confined aquifers. Input data for the analytical solutions were obtained from monitoring wells and river stages at seepage-meter sites. Four transects in the study area are available: Club Med, Harbour Ridge, Lutz/MacMillan, and Pendarvis Cove, located in the St. Lucie River. The analytical models were first calibrated with seepage-meter measurements and then used to estimate groundwater discharges into the St. Lucie River. From this process, analytical relationships between the seepage rate and river stages and/or groundwater tables were established to predict the seasonal and monthly variation in groundwater seepage into the SLE. It was found that the seepage-rate estimates from the analytical models agreed well with measured data in some cases but only fairly in others. This is not unexpected, because analytical solutions rest on inherently simplifying assumptions that are more valid in some cases than in others. From analytical calculations, it is possible to predict approximate seepage rates in the study domain when the assumptions underlying these analytical models hold. The finite- and infinite-aquifer models and the linearized Laplace method perform well for the Pendarvis Cove and Lutz/MacMillan sites, but only fairly for the other two sites. The wave propagation model gave very good agreement in phase but only fair agreement in magnitude at all four sites. The stratified unconfined and confined aquifer models gave similarly good agreement with measurements at three sites but poor agreement at the Club Med site.
None of the analytical models presented here can fit the data at this site. To obtain better estimates at all sites, numerical models that couple river hydraulics and groundwater flow, with fewer simplifications and assumptions about the system, may have to be adopted.
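As a minimal illustration of the kind of closed-form seepage estimate such models provide, the steady Dupuit solution for unconfined flow from an aquifer to a canal can be coded directly. The parameter values below are hypothetical, and the actual SLE models are considerably more elaborate.

```python
def dupuit_seepage(K, h_aquifer, h_canal, L):
    """Steady unconfined discharge per unit canal length (Dupuit
    approximation): q = K * (h1^2 - h2^2) / (2 * L), where K is
    hydraulic conductivity, h1/h2 are water-table and canal heads
    above the aquifer base, and L is the distance to the divide."""
    return K * (h_aquifer**2 - h_canal**2) / (2.0 * L)

# Illustrative values: K in m/day, heads in m, distance in m
q = dupuit_seepage(K=5.0, h_aquifer=4.0, h_canal=3.0, L=100.0)
print(f"seepage = {q:.3f} m^2/day per m of canal")  # 0.175
```

A relationship of this form, calibrated against seepage-meter data, is what lets river stage and water-table records be turned into seasonal seepage predictions.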
Samsudin, Hayati; Auras, Rafael; Burgess, Gary; Dolan, Kirk; Soto-Valdez, Herlinda
2018-03-01
A two-step solution based on the boundary conditions of Crank's equations for mass transfer in a film was developed. Three driving factors, the diffusion coefficient (D), the partition coefficient (K_p,f), and the convective mass transfer coefficient (h), govern the sorption and/or desorption kinetics of migrants from polymer films. These three parameters were simultaneously estimated. They provide in-depth insight into the physics of a migration process. The first step was used to find the combination of D, K_p,f, and h that minimized the sum of squared errors (SSE) between the predicted and actual results. In step 2, an ordinary least squares (OLS) estimation was performed by using the proposed analytical solution containing D, K_p,f, and h. Three selected migration studies of PLA/antioxidant-based films were used to demonstrate the use of this two-step solution. Additional parameter estimation approaches, such as sequential and bootstrap estimation, were also performed to acquire better knowledge of the kinetics of migration. The proposed model successfully provided the initial guesses for D, K_p,f, and h. The h value was determined without performing a specific experiment for it. By determining h together with D, under- or overestimation issues pertaining to a migration process can be avoided, since these two parameters are correlated. Copyright © 2017 Elsevier Ltd. All rights reserved.
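A sketch of the two-step procedure under simplifying assumptions: only D and h are estimated (the partition coefficient K_p,f is omitted for brevity), the forward model is Crank's plane-sheet solution with a surface mass-transfer resistance, and all numerical values are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq, least_squares

def betas(L, n_roots=20):
    """Positive roots of beta * tan(beta) = L (Crank's plane-sheet
    solution with surface resistance); root n lies in (n*pi, n*pi + pi/2)."""
    eps = 1e-9
    return np.array([brentq(lambda b: b * np.tan(b) - L,
                            n * np.pi + eps, n * np.pi + np.pi / 2 - eps)
                     for n in range(n_roots)])

def release(t, D, h, l):
    """Fractional mass released M_t / M_inf from a sheet of half-thickness
    l with diffusivity D and surface mass-transfer coefficient h."""
    L = h * l / D
    b = betas(L)
    terms = (2 * L**2 * np.exp(-np.outer(t, b**2) * D / l**2)
             / (b**2 * (b**2 + L**2 + L)))
    return 1.0 - terms.sum(axis=1)

l = 5e-5                          # film half-thickness, m (illustrative)
t = np.linspace(0.0, 2e5, 40)     # time, s
true_D, true_h = 1e-13, 2e-9      # "unknown" truth used to make the data
data = release(t, true_D, true_h, l)

# Step 1: coarse grid search for initial guesses (minimize SSE)
Ds, hs = np.logspace(-14, -12, 9), np.logspace(-10, -8, 9)
_, D0, h0 = min((np.sum((release(t, D, h, l) - data)**2), D, h)
                for D in Ds for h in hs)

# Step 2: least-squares refinement, fitted in log10 space for scaling
fit = least_squares(lambda p: release(t, 10**p[0], 10**p[1], l) - data,
                    x0=[np.log10(D0), np.log10(h0)])
D_est, h_est = 10**fit.x[0], 10**fit.x[1]
```

The grid step supplies the initial guesses that the refinement step needs, mirroring the role of step 1 in the paper.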
Kampel, Milton; Lorenzzetti, João A; Bentz, Cristina M; Nunes, Raul A; Paranhos, Rodolfo; Rudorff, Frederico M; Politano, Alexandre T
2009-01-01
Comparisons between in situ measurements of surface chlorophyll-a concentration (CHL) and ocean color remote sensing estimates were conducted during an oceanographic cruise on the Brazilian Southeastern continental shelf and slope, Southwestern South Atlantic. In situ values were based on fluorometry, above-water radiometry, and a lidar fluorosensor. Three empirical algorithms were used to estimate CHL from radiometric measurements: Ocean Chlorophyll 3 bands (OC3M(RAD)), Ocean Chlorophyll 4 bands (OC4v4(RAD)), and Ocean Chlorophyll 2 bands (OC2v4(RAD)). The satellite estimates of CHL were derived from data collected by the MODerate-resolution Imaging Spectroradiometer (MODIS) with a nominal 1.1 km resolution at nadir. Three algorithms were used to estimate chlorophyll concentrations from MODIS data: one empirical, OC3M(SAT), and two semi-analytical, Garver-Siegel-Maritorena version 01 (GSM01(SAT)) and Carder(SAT). In the present work, MODIS, lidar, and in situ above-water radiometry and fluorometry are briefly described and the chlorophyll values retrieved by these techniques are compared. The chlorophyll concentration in the study area was in the range 0.01 to 0.2 mg/m³. In general, the empirical algorithms applied to the in situ radiometric and satellite data showed a tendency to overestimate CHL, with a mean difference between estimated and measured values of as much as 0.17 mg/m³ (OC2v4(RAD)). The semi-analytical GSM01 algorithm applied to MODIS data performed better (rmse 0.28, rmse-L 0.08, mean diff. -0.01 mg/m³) than the Carder and the empirical OC3M algorithms (rmse 1.14 and 0.36, rmse-L 0.34 and 0.11, mean diff. 0.17 and 0.02 mg/m³, respectively). We find that rmsd values between MODIS and the in situ radiometric measurements are < 26%, and that there is a trend towards overestimation of Rrs by MODIS for the stations considered in this work.
Other authors have already reported over and under estimation of MODIS remotely sensed reflectance due to several errors in the bio-optical algorithm performance, in the satellite sensor calibration, and in the atmospheric-correction algorithm.
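The empirical band-ratio algorithms compared above share a simple functional form: a fourth-order polynomial in the log of a blue-to-green reflectance ratio. A minimal sketch follows; the coefficients shown are the commonly cited OC3M values, but treat them as illustrative and verify against the operational algorithm documentation before any real use.

```python
import math

# Commonly cited OC3M coefficients (verify against the operational
# algorithm documentation before use)
A = [0.2424, -2.7423, 1.8017, 0.0015, -1.2280]

def oc3m_chl(rrs443, rrs488, rrs547):
    """Empirical band-ratio chlorophyll-a estimate (mg/m^3):
    CHL = 10 ** sum(a_i * R**i), with
    R = log10(max(Rrs443, Rrs488) / Rrs547)."""
    r = math.log10(max(rrs443, rrs488) / rrs547)
    return 10 ** sum(a * r**i for i, a in enumerate(A))

# Illustrative clear-water reflectances (sr^-1): strong blue-to-green ratio
chl = oc3m_chl(0.008, 0.006, 0.002)
print(f"CHL = {chl:.3f} mg/m^3")
```

For the clear oligotrophic waters of this study (0.01 to 0.2 mg/m³), the maximum-band-ratio form is the regime where such polynomials were tuned; the semi-analytical GSM01 and Carder models invert a radiative-transfer parameterization instead.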
Estimate of Joule Heating in a Flat Dechirper
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bane, Karl; Stupakov, Gennady; Gjonaj, Erion
2017-02-10
We have performed Joule power loss calculations for a flat dechirper. We have considered the configurations of the beam on-axis between the two plates—for chirp control—and of the beam especially close to one plate—for use as a fast kicker. Our calculations use a surface impedance approach, one that is valid when the corrugation parameters are small compared to the aperture (the perturbative parameter regime). In our model we ignore effects of field reflections at the sides of the dechirper plates, and thus expect the results to underestimate the Joule losses. The analytical results were also tested by numerical, time-domain simulations. We find that most of the wake power lost by the beam is radiated out to the sides of the plates. For the case of the beam passing by a single plate, we derive an analytical expression for the broad-band impedance, and—in Appendix B—numerically confirm recently developed analytical formulas for the short-range wakes. While our theory can be applied to the LCLS-II dechirper with large gaps, for the nominal apertures we are not in the perturbative regime and the reflection contribution to Joule losses is not negligible. With input from computer simulations, we estimate the Joule power loss (assuming a bunch charge of 300 pC and repetition rate of 100 kHz) to be 21 W/m for the case of two plates, and 24 W/m for the case of a single plate.
Holistic rubric vs. analytic rubric for measuring clinical performance levels in medical students.
Yune, So Jung; Lee, Sang Yeoup; Im, Sun Ju; Kam, Bee Sung; Baek, Sun Yong
2018-06-05
Task-specific checklists, holistic rubrics, and analytic rubrics are often used for performance assessments. We examined what factors evaluators consider important in holistic scoring of clinical performance assessment, and compared the usefulness of holistic and analytic rubrics applied alone, and of analytic rubrics used in addition to task-specific checklists based on traditional standards. We compared the usefulness of a holistic rubric versus an analytic rubric in effectively measuring the clinical skill performances of 126 third-year medical students who participated in a clinical performance assessment conducted by Pusan National University School of Medicine. We conducted a questionnaire survey of 37 evaluators who used all three evaluation methods (holistic rubric, analytic rubric, and task-specific checklist) for each student. The relationships among the scores from the three evaluation methods were analyzed using Pearson's correlation. Inter-rater agreement was analyzed with the kappa index. The effect of holistic and analytic rubric scores on the task-specific checklist score was analyzed using multiple regression analysis. Evaluators perceived accuracy and proficiency to be major factors in objective structured clinical examination evaluation, and history taking and physical examination to be major factors in clinical performance examination evaluation. Holistic rubric scores were highly related to the scores of the task-specific checklist and analytic rubric. Relatively low agreement was found in clinical performance examinations compared to objective structured clinical examinations. Meanwhile, the holistic and analytic rubric scores explained 59.1% of the task-specific checklist score in objective structured clinical examinations and 51.6% in clinical performance examinations.
The results show the usefulness of holistic and analytic rubrics in clinical performance assessment, which can be used in conjunction with task-specific checklists for more efficient evaluation.
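The two agreement statistics used in the study above are straightforward to compute. A minimal sketch with hypothetical evaluator scores (a 0-3 holistic scale is assumed):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two score vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

def cohen_kappa(a, b, n_levels):
    """Cohen's kappa: observed agreement corrected for chance agreement
    implied by the two raters' marginal score distributions."""
    cm = np.zeros((n_levels, n_levels))
    for i, j in zip(a, b):
        cm[i, j] += 1
    cm /= cm.sum()
    po = np.trace(cm)                      # observed agreement
    pe = cm.sum(axis=1) @ cm.sum(axis=0)   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical holistic-rubric scores from two evaluators
r1 = [3, 2, 2, 1, 0, 3, 2, 1, 1, 2]
r2 = [3, 2, 1, 1, 0, 3, 2, 2, 1, 2]
print(f"r = {pearson_r(r1, r2):.2f}, kappa = {cohen_kappa(r1, r2, 4):.2f}")
```

High correlation with only moderate kappa, as in the clinical performance examinations above, indicates raters who rank students similarly but disagree on exact score levels.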
Performance monitoring can boost turboexpander efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
McIntire, R.
1982-07-05
This paper discusses ways of improving the productivity of the turboexpander/refrigeration system's radial expander and radial compressor through systematic review of component performance. It reviews several techniques for determining the performance of an expander and compressor. It suggests that any performance improvement program requires quantifying the performance of separate components over a range of operating conditions; estimating the increase in performance associated with any hardware change; and developing an analytical (computer) model of the entire system by using the performance curves of individual components. The model is used to quantify the economic benefits of any change in the system, either a change in operating procedures or a hardware modification. Topics include proper ways of using antisurge control valves and modifying flow rate/shaft speed (Q/N). It is noted that compressor efficiency depends on the incidence angle of the blade at the rotor leading edge and the angle of the incoming gas stream.
Kounali, Daphne Z; Button, Katherine S; Lewis, Glyn; Ades, Anthony E
2016-09-01
We present a meta-analytic method that combines information on treatment effects from different instruments from a network of randomized trials to estimate instrument relative responsiveness. Five depression-test instruments [Beck Depression Inventory (BDI I/II), Patient Health Questionnaire (PHQ9), Hamilton Rating for Depression 17 and 24 items, Montgomery-Asberg Depression Rating] and three generic quality of life measures [EuroQoL (EQ-5D), SF36 mental component summary (SF36 MCS), and physical component summary (SF36 PCS)] were compared. Randomized trials of treatments for depression reporting outcomes on any two or more of these instruments were identified. Information on the within-trial ratios of standardized treatment effects was pooled across the studies to estimate relative responsiveness. The between-instrument ratios of standardized treatment effects vary across trials, with a coefficient of variation of 13% (95% credible interval: 6%, 25%). There were important differences between the depression measures, with PHQ9 being the most responsive instrument and BDI the least. Responsiveness of the EQ-5D and SF36 PCS was poor. SF36 MCS performed similarly to depression instruments. Information on relative responsiveness of several test instruments can be pooled across networks of trials reporting at least two outcomes, allowing comparison and ranking of test instruments that may never have been compared directly. Copyright © 2016 Elsevier Inc. All rights reserved.
Application of Raman microscopy to biodegradable double-walled microspheres.
Widjaja, Effendi; Lee, Wei Li; Loo, Say Chye Joachim
2010-02-15
Raman mapping measurements were performed on the cross section of a ternary-phase biodegradable double-walled microsphere (DWMS) of poly(D,L-lactide-co-glycolide) (50:50) (PLGA), poly(L-lactide) (PLLA), and poly(epsilon-caprolactone) (PCL), which was fabricated by a one-step solvent evaporation method. The collected Raman spectra were subjected to a band-target entropy minimization (BTEM) algorithm in order to reconstruct the pure component spectra of the species observed in this sample. Seven pure component spectral estimates were recovered, and their spatial distributions within the DWMS were determined. The first three spectral estimates were identified as PLLA, PLGA 50:50, and PCL, the main components of the DWMS. The remaining four spectral estimates were identified as semicrystalline polyglycolic acid (PGA), dichloromethane (DCM), copper-phthalocyanine blue, and calcite, minor components of the DWMS. PGA was the decomposition product of PLGA. DCM was the solvent used in DWMS fabrication. Copper-phthalocyanine blue and calcite were unexpected contaminants. The results show that combined Raman microscopy and BTEM analysis can provide a sensitive characterization tool for DWMS, as it gives more specific information on the chemical species present as well as their spatial distributions. This novel analytical method for microsphere characterization can serve as a complementary tool to other, more established analytical techniques such as scanning electron microscopy and optical microscopy.
NASA Astrophysics Data System (ADS)
Bagolini, Alvise; Picciotto, Antonino; Crivellari, Michele; Conci, Paolo; Bellutti, Pierluigi
2016-02-01
An analysis of the mechanical properties of plasma-enhanced chemical vapor deposition (PECVD) silicon nitrides is presented, using microfabricated silicon nitride membranes under point-load deflection. The membranes are made of PECVD silicon-rich nitride and low-stress nitride films. The mechanical performance of the deflected membranes is examined both with analytical models and finite element simulation in order to extract the elastic modulus and residual stress values. The elastic modulus of low-stress silicon nitride is calculated using stress-free analytical models, while for silicon-rich silicon nitride and annealed low-stress silicon nitride it is estimated with a pre-stressed model of point-load deflection. The effect of annealing in both nitrogen and hydrogen atmospheres is evaluated in terms of residual stress, refractive index, and thickness variation. It is demonstrated that a hydrogen-rich annealing atmosphere induces very little change in low-stress silicon nitride. Nitrogen annealing effects are measured and shown to be much stronger in silicon-rich nitride than in low-stress silicon nitride. The elastic modulus of PECVD silicon-rich nitride is estimated to be in the range of 240 to 320 GPa for as-deposited samples and 390 GPa for samples annealed in a nitrogen atmosphere. The elastic modulus of PECVD low-stress silicon nitride is estimated to be 88 GPa as deposited and 320 GPa after nitrogen annealing.
Jackson, Brian A; Faith, Kay Sullivan
2013-02-01
Although significant progress has been made in measuring public health emergency preparedness, system-level performance measures are lacking. This report examines a potential approach to such measures for Strategic National Stockpile (SNS) operations. We adapted an engineering analytic technique used to assess the reliability of technological systems, failure mode and effects analysis, to assess preparedness. That technique, which includes systematic mapping of the response system and identification of possible breakdowns that affect performance, provides a path to use data from existing SNS assessment tools to estimate the likely future performance of the system overall. Systems models of SNS operations were constructed and failure mode analyses were performed for each component. Linking data from existing assessments, including the technical assistance review and functional drills, to reliability assessment was demonstrated using publicly available information. The use of failure mode and effects estimates to assess overall response-system reliability was demonstrated with a simple simulation example. Reliability analysis appears to be an attractive way to integrate information from the substantial investment in detailed assessments of stockpile delivery and dispensing to provide a view of likely future response performance.
Manipulation of the polarization of intense laser beams via optical wave mixing in plasmas
NASA Astrophysics Data System (ADS)
Michel, Pierre; Divol, Laurent; Turnbull, David; Moody, John
2014-10-01
When intense laser beams overlap in plasmas, the refractive index modulation created by the beat wave via the ponderomotive force can lead to optical wave mixing phenomena reminiscent of those used in crystals and photorefractive materials. Using a vector analysis, we present a full analytical description of the modification of the polarization state of laser beams crossing at arbitrary angles in a plasma. We show that plasmas can be used to provide full control of the polarization state of a laser beam, and give simple analytical estimates and practical considerations for the design of novel photonics devices such as plasma polarizers and plasma waveplates. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344.
Long, H. Keith; Daddow, Richard L.; Farrar, Jerry W.
1998-01-01
Since 1962, the U.S. Geological Survey (USGS) has operated the Standard Reference Sample Project to evaluate the performance of USGS, cooperator, and contractor analytical laboratories that analyze chemical constituents of environmental samples. The laboratories are evaluated by using performance evaluation samples, called Standard Reference Samples (SRSs). SRSs are submitted to laboratories semiannually for round-robin laboratory performance comparisons. Currently, approximately 100 laboratories are evaluated for their analytical performance on six SRSs for inorganic and nutrient constituents. As part of the SRS Project, a surplus of homogeneous, stable SRSs is maintained for purchase by USGS offices and participating laboratories for use in continuing quality-assurance and quality-control activities. Statistical evaluation of the laboratories' results provides information to compare the analytical performance of the laboratories and to determine possible analytical deficiencies and problems. SRS results also provide information on the bias and variability of different analytical methods used in the SRS analyses.
Development of 1-m primary mirror for a spaceborne camera
NASA Astrophysics Data System (ADS)
Kihm, Hagyong; Yang, Ho-Soon; Rhee, Hyug-Gyo; Lee, Yun-Woo
2015-09-01
We present the development of a 1-m lightweight mirror system for a spaceborne electro-optical camera. The mirror design was optimized to satisfy the performance requirements under launch loads and the space environment. The mirror, made of Zerodur®, has pockets at the back surface and three square bosses at the rim. Metallic bipod flexures support the mirror at the bosses and adjust the mirror's surface distortion due to gravity. We also show an analytical formulation of the bipod flexure, in which the compliance and stiffness matrices of the flexure are derived to estimate theoretical performance and provide initial design guidelines. Optomechanical performance, such as surface distortion due to gravity, is explained. Environmental verification of the mirror is achieved by vibration tests.
On the Application of Euler Deconvolution to the Analytic Signal
NASA Astrophysics Data System (ADS)
Fedi, M.; Florio, G.; Pasteka, R.
2005-05-01
In recent years, papers on Euler deconvolution (ED) have used formulations that account for the unknown background field, allowing the structural index (N) to be treated as an unknown to be solved for together with the source coordinates. Among them, Hsu (2002) and Fedi and Florio (2002) independently pointed out that using an adequate m-order derivative of the field, instead of the field itself, allows solving for both N and source position. For the same reason, Keating and Pilkington (2004) proposed ED of the analytic signal. A function analyzed by ED must be homogeneous but also harmonic, because it must be possible to compute its vertical derivative, as is well known from potential field theory. Huang et al. (1995) demonstrated that the analytic signal is a homogeneous function, but, for instance, it is rather obvious that the magnetic field modulus (corresponding to the analytic signal of a gravity field) is not a harmonic function (e.g., Grant & West, 1965). Thus, a straightforward application of ED to the analytic signal is not possible, because a vertical derivation of this function is not correct using standard potential-field analysis tools. In this note we theoretically and empirically check what kind of errors this incorrect assumption about the harmonicity of the analytic signal causes in ED. We discuss results on profile and map synthetic data, and use a simple method to compute the vertical derivative of non-harmonic functions measured on a horizontal plane. Our main conclusions are: 1. To approximate a correct evaluation of the vertical derivative of a non-harmonic function, it is useful to compute it by finite differences, using upward continuation. 2. We found that the errors in a vertical derivative computed as if the analytic signal were harmonic propagate mainly into the structural index estimate; these errors can mislead an interpretation even though the depth estimates are almost correct. 3.
Consistent estimates of depth and S.I. are instead obtained by using a finite-difference vertical derivative of the analytic signal. 4. Analysis of a case history confirms the strong error in the estimation of the structural index if the analytic signal is treated as a harmonic function.
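Conclusion 1, a finite-difference vertical derivative via upward continuation, can be sketched in a few lines. The wavenumber-domain continuation operator is standard for data on a horizontal plane; the synthetic field and grid parameters below are illustrative.

```python
import numpy as np

def upward_continue(field, dz, dx):
    """Upward-continue a gridded field by dz using the wavenumber-domain
    operator exp(-dz * |k|) (data assumed on a horizontal plane)."""
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k = np.sqrt(kx[None, :]**2 + ky[:, None]**2)
    return np.real(np.fft.ifft2(np.fft.fft2(field) * np.exp(-dz * k)))

def vertical_derivative_fd(field, dz, dx):
    """Finite-difference vertical derivative via upward continuation,
    usable for non-harmonic quantities such as the analytic signal
    amplitude (z positive down, so dF/dz = (F - F_up) / dz)."""
    return (field - upward_continue(field, dz, dx)) / dz

# Synthetic check: F = h / (x^2 + y^2 + h^2)^(3/2), a point-source-type
# field with source depth h, whose continuation just replaces h by h + dz
dx, h = 1.0, 10.0
x = np.arange(-64, 64) * dx
X, Y = np.meshgrid(x, x)
F = h / (X**2 + Y**2 + h**2)**1.5
dFdz = vertical_derivative_fd(F, dz=0.5, dx=dx)
```

For this harmonic test field the analytic central derivative is 2/h³ = 0.002, so the finite-difference value can be checked directly; for a non-harmonic input like the analytic signal, the same routine applies while a spectral i|k| derivative would not.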
Long term evolution of planetary systems with a terrestrial planet and a giant planet.
NASA Astrophysics Data System (ADS)
Georgakarakos, Nikolaos; Dobbs-Dixon, Ian; Way, Michael J.
2017-06-01
We study the long term orbital evolution of a terrestrial planet under the gravitational perturbations of a giant planet. In particular, we are interested in situations where the two planets are in the same plane and are relatively close. We examine both possible configurations: the giant planet orbit being either outside or inside the orbit of the smaller planet. The perturbing potential is expanded to high orders and an analytical solution of the terrestrial planetary orbit is derived. The analytical estimates are then compared against results from the numerical integration of the full equations of motion, and we find that the analytical solution works reasonably well. An interesting finding is that the new analytical estimates greatly improve the predictions for the timescales of the orbital evolution of the terrestrial planet compared to an octupole-order expansion.
Evaluation of selected methods for determining streamflow during periods of ice effect
Melcher, N.B.; Walker, J.F.
1990-01-01
The methods are classified into two general categories, subjective and analytical, depending on whether individual judgement is necessary for method application. On the basis of results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods, and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used for streamflow-gaging stations where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice adjustment factor) may be appropriate for use for stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge ratio and multiple regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.
Gilliom, Robert J.; Helsel, Dennis R.
1986-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
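A simplified sketch of the log-probability regression method on synthetic lognormal data follows. Real implementations (e.g., full regression on order statistics with censoring-adjusted plotting positions) are more careful; the detection limit, sample size, and seed here are illustrative.

```python
import numpy as np
from scipy import stats

def ros_estimates(obs, censored):
    """Simplified log-probability regression (regression on order
    statistics): regress log(concentration) on normal quantiles of the
    uncensored observations, impute censored values from the fitted
    lognormal line, then compute moments of the completed sample."""
    obs = np.asarray(obs, float)
    censored = np.asarray(censored, bool)
    n = len(obs)
    order = np.argsort(obs)                           # censored sort lowest
    pp = (np.arange(1, n + 1) - 0.375) / (n + 0.25)   # Blom positions
    ranks = np.empty(n, dtype=int)
    ranks[order] = np.arange(n)
    z = stats.norm.ppf(pp)[ranks]                     # z score per sample
    unc = ~censored
    fit = stats.linregress(z[unc], np.log(obs[unc]))
    filled = np.where(unc, obs, np.exp(fit.intercept + fit.slope * z))
    return filled.mean(), filled.std(ddof=1)

rng = np.random.default_rng(1)
true = rng.lognormal(mean=0.0, sigma=1.0, size=200)
dl = 0.5                                  # detection limit (illustrative)
censored = true < dl
obs = np.where(censored, dl, true)        # censored values reported at DL
mean_est, sd_est = ros_estimates(obs, censored)
```

Because the censored observations are replaced by values from the zero-to-censoring-level portion of the fitted lognormal rather than by zero or the detection limit, the mean and standard deviation estimates avoid the systematic bias of simple substitution.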
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilliom, R.J.; Helsel, D.R.
1986-02-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification.
Zhang, Yan; Wang, Ping; Guo, Lixin; Wang, Wei; Tian, Hongxin
2017-08-21
The average bit error rate (ABER) performance of an orbital angular momentum (OAM) multiplexing-based free-space optical (FSO) system with multiple-input multiple-output (MIMO) architecture has been investigated over atmospheric turbulence, considering channel estimation and space-time coding. The impact of different types of space-time coding, modulation orders, turbulence strengths, and receive antenna numbers on the transmission performance of this OAM-FSO system is also taken into account. On the basis of the proposed system model, the analytical expressions of the received signals carried by the k-th OAM mode of the n-th receive antenna are derived for the Vertical Bell Laboratories Layered Space-Time (V-BLAST) and space-time block code (STBC) schemes, respectively. With the help of a channel estimator based on the least squares (LS) algorithm, the zero-forcing equalizer with ordered successive interference cancellation (ZF-OSIC) of the V-BLAST scheme and the Alamouti decoder of the STBC scheme are adopted to mitigate the performance degradation induced by the atmospheric turbulence. The results show that the ABERs obtained by channel estimation are in excellent agreement with those of turbulence phase-screen simulations. The ABERs of this OAM multiplexing-based MIMO system deteriorate with increasing turbulence strength. Both V-BLAST and STBC schemes can significantly improve the system performance by mitigating the distortions of atmospheric turbulence as well as additive white Gaussian noise (AWGN). In addition, the ABER performance of both space-time coding schemes can be further enhanced by increasing the number of receive antennas for diversity gain, and STBC outperforms V-BLAST in this system for data recovery. This work is beneficial to OAM FSO system design.
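The Alamouti scheme referred to above admits a compact noiseless sketch (one OAM mode, one receive antenna, flat channel assumed): thanks to the orthogonal code structure, linear combining recovers each symbol scaled by |h1|² + |h2|².

```python
import numpy as np

def alamouti_encode(s1, s2):
    """2x1 Alamouti block: antenna 1 sends [s1, -conj(s2)] and
    antenna 2 sends [s2, conj(s1)] over two symbol periods."""
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

def alamouti_decode(r1, r2, h1, h2):
    """Linear combining; each estimate comes out scaled by
    |h1|^2 + |h2|^2, so divide it back out."""
    g = abs(h1)**2 + abs(h2)**2
    s1 = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
    s2 = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
    return s1, s2

rng = np.random.default_rng(0)
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)   # QPSK symbols
tx = alamouti_encode(s1, s2)
r1 = h1 * tx[0, 0] + h2 * tx[1, 0]   # received in symbol period 1
r2 = h1 * tx[0, 1] + h2 * tx[1, 1]   # received in symbol period 2
est1, est2 = alamouti_decode(r1, r2, h1, h2)
```

With noise and estimated rather than exact channels, the same combiner yields the diversity gain the abstract describes; V-BLAST instead transmits independent streams and relies on ZF-OSIC to separate them.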
Dong, Ying; Gao, Wei; Zhou, Qin; Zheng, Yi; You, Zheng
2010-06-25
The gas sensors based on polymer-coated resonant microcantilevers for volatile organic compounds (VOCs) detection are investigated. A method to characterize the gas sensors through sensor calibration is proposed. The expressions for the estimation of the characteristic parameters are derived. The effect of the polymer coating location on the sensor's sensitivity is investigated and the formula to calculate the polymer-analyte partition coefficient without knowing the polymer coating features is presented for the first time. Three polymers: polyethyleneoxide (PEO), polyethylenevinylacetate (PEVA) and polyvinylalcohol (PVA) are used to perform the experiments. Six organic solvents: toluene, benzene, ethanol, acetone, hexane and octane are used as analytes. The response time, reversibility, hydrophilicity, sensitivity and selectivity of the polymer layers are discussed. According to the results, highly sensitive sensors for each of the analytes are proposed. Based on the characterization method, a convenient and flexible way to the construction of electric nose system by the polymer-coated resonant microcantilevers can be achieved. Copyright 2010 Elsevier B.V. All rights reserved.
Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study
NASA Astrophysics Data System (ADS)
Troudi, Molka; Alimi, Adel M.; Saoudi, Samir
2008-12-01
The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon a functional of the second-order derivative of the pdf. Because we introduce an analytical approximation of this functional, the pdf is estimated only once, at the end of the iterations. The two kinds of algorithm are tested on different random variables whose distributions are known to be difficult to estimate. Finally, they are applied to genetic data in order to provide a better characterisation of the neutrality of Tunisian Berber populations.
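To make the plug-in quantities concrete, the sketch below evaluates the AMISE-optimal bandwidth for a Gaussian kernel in the normal-reference special case, where the second-derivative functional R(f'') is evaluated under a Gaussian assumption. This is the simplest, non-iterative plug-in rule, not the faster analytical algorithm the paper proposes.

```python
import numpy as np

def plugin_bandwidth(x):
    # AMISE-optimal bandwidth h = [R(K) / (n * mu2(K)^2 * R(f''))]^(1/5)
    # with a Gaussian kernel and a normal-reference estimate of R(f''),
    # which reduces to h = sigma * (4 / (3 n))^(1/5) (Silverman's rule).
    n = x.size
    sigma = x.std(ddof=1)
    return sigma * (4.0 / (3.0 * n)) ** 0.2

def kde(x, grid, h):
    # Gaussian kernel density estimate evaluated on a grid of points.
    u = (grid[:, None] - x[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (x.size * h * np.sqrt(2 * np.pi))
```

An iterative plug-in replaces the normal-reference step by re-estimating R(f'') from a pilot density; the abstract's point is that an analytical approximation of that functional avoids re-estimating the pdf at every iteration.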
Derivation of an analytic expression for the error associated with the noise reduction rating
NASA Astrophysics Data System (ADS)
Murphy, William J.
2005-04-01
Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects has a large variance, it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing-protector rating error term. Comparison of the analytic expression for the error with the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
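The propagation-of-errors idea can be sketched for a simplified single-band rating R = mean − 2·SD; the actual NRR combines several octave bands and reference spectra, so the rating form and parameter values below are illustrative assumptions. The sketch mirrors the abstract's comparison: an analytic standard error versus a Monte Carlo estimate.

```python
import numpy as np

def rating_se_analytic(sigma, n):
    # Propagation of errors for the simplified rating R = mean - 2*std:
    # Var(mean) = sigma^2/n, Var(std) ~ sigma^2/(2(n-1)) for normal data,
    # and the two are independent, so
    # Var(R) = sigma^2/n + 4 * sigma^2/(2(n-1)).
    return sigma * np.sqrt(1.0 / n + 2.0 / (n - 1))

def rating_se_montecarlo(sigma, n, trials=20000, seed=0):
    # Empirical standard deviation of the rating over repeated panels
    # of n subjects, for comparison with the analytic expression.
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma, size=(trials, n))
    r = x.mean(axis=1) - 2.0 * x.std(axis=1, ddof=1)
    return r.std()
```

For typical panel sizes (n of order 10) the asymptotic variance of the standard deviation is slightly pessimistic, so the two estimates agree to within a few percent rather than exactly.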
Simulation of fatigue crack growth under large scale yielding conditions
NASA Astrophysics Data System (ADS)
Schweizer, Christoph; Seifert, Thomas; Riedel, Hermann
2010-07-01
A simple mechanism-based model for fatigue crack growth assumes a linear correlation between the cyclic crack-tip opening displacement (ΔCTOD) and the crack growth increment (da/dN). The objective of this work is to compare analytical estimates of ΔCTOD with results of numerical calculations under large-scale yielding conditions and to verify the physical basis of the model by comparing the predicted and the measured evolution of the crack length in a 10%-chromium steel. The material is described by a rate-independent cyclic plasticity model with power-law hardening and Masing behavior. During the tension-going part of the cycle, nodes at the crack tip are released such that the crack growth increment corresponds approximately to the crack-tip opening. The finite element analysis, performed in ABAQUS, is continued until a stabilized value of ΔCTOD is reached. The analytical model contains an interpolation formula for the J-integral, which is generalized to account for cyclic loading and crack closure. The simulated and estimated values of ΔCTOD are reasonably consistent. The predicted crack length evolution is found to be in good agreement with the behavior of microcracks observed in a 10%-chromium steel.
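The model's central relation, da/dN = β·ΔCTOD, can be integrated cycle by cycle once ΔCTOD is known as a function of crack length. The sketch below does this with a forward-Euler loop and an assumed proportionality ΔCTOD = c·a, chosen purely so the result has a closed form, a_N = a0·(1+βc)^N, to check against; β and c are illustrative, not the paper's fitted values.

```python
def crack_growth(a0, cycles, beta, dctod):
    # dctod(a): cyclic crack-tip opening displacement as a function of
    # crack length a; da/dN = beta * dctod(a), integrated cycle by cycle.
    a = a0
    history = [a]
    for _ in range(cycles):
        a += beta * dctod(a)
        history.append(a)
    return history
```

In the paper, ΔCTOD instead comes from a cyclic generalization of a J-integral interpolation formula with crack closure, so the growth law is not a simple exponential; the loop structure, however, is the same.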
Estimation of aneurysm wall stresses created by treatment with a shape memory polymer foam device
Hwang, Wonjun; Volk, Brent L.; Akberali, Farida; Singhal, Pooja; Criscione, John C.
2012-01-01
In this study, compliant latex thin-walled aneurysm models are fabricated to investigate the effects of expansion of shape memory polymer foam. A simplified cylindrical geometry is selected for the in-vitro aneurysm model, a simplification of a real, saccular aneurysm. The studies are performed by crimping shape memory polymer foams, originally 6 and 8 mm in diameter, and monitoring the resulting deformation when they are deployed into 4-mm-diameter thin-walled latex tubes. The deformations of the latex tubes are used as inputs to physical, analytical, and computational models to estimate the circumferential stresses. Using the results of the stress analysis in the latex aneurysm model, a computational model of the human aneurysm is developed by changing the geometry and material properties. The model is then used to predict the stresses that would develop in a human aneurysm. The experimental, simulation, and analytical results suggest that shape memory polymer foams have the potential to be a safe treatment for intracranial saccular aneurysms. In particular, this work suggests that oversized shape memory foams may be used to better fill the entire aneurysm cavity while generating stresses below the aneurysm wall breaking stresses. PMID:21901546
Intensity correction for multichannel hyperpolarized 13C imaging of the heart.
Dominguez-Viqueira, William; Geraghty, Benjamin J; Lau, Justin Y C; Robb, Fraser J; Chen, Albert P; Cunningham, Charles H
2016-02-01
To develop and test an analytic method for correcting the signal intensity variation caused by the inhomogeneous reception profile of an eight-channel phased array for hyperpolarized (13)C imaging. Fiducial markers visible in anatomical images were attached to the individual coils to provide three-dimensional localization of the receive hardware with respect to the image frame of reference. The coil locations and dimensions were used to numerically model the reception profile using the Biot-Savart law. The accuracy of the coil sensitivity estimation was validated with images derived from a homogeneous (13)C phantom. Numerical coil sensitivity estimates were used to perform intensity correction of in vivo hyperpolarized (13)C cardiac images in pigs. In comparison to the conventional sum-of-squares reconstruction, improved signal uniformity was observed in the corrected images. The analytical intensity correction scheme was shown to improve the uniformity of multichannel image reconstruction in hyperpolarized [1-(13)C]pyruvate and (13)C-bicarbonate cardiac MRI. The method is independent of the pulse sequence used for (13)C data acquisition, simple to implement, and does not require additional scan time, making it an attractive technique for multichannel hyperpolarized (13)C MRI. © 2015 Wiley Periodicals, Inc.
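The core of the correction is a modeled sensitivity profile divided out of the image. A minimal one-dimensional sketch under simplifying assumptions (a single circular surface coil, on-axis Biot-Savart field standing in for the reception sensitivity, and a sensitivity floor to avoid amplifying noise far from the coil):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T m / A

def loop_sensitivity(radius, z):
    # On-axis magnetic field of a circular current loop (Biot-Savart),
    # per unit current: B(z) = mu0 * a^2 / (2 * (a^2 + z^2)^(3/2)).
    return MU0 * radius ** 2 / (2.0 * (radius ** 2 + z ** 2) ** 1.5)

def intensity_correct(image, sens, floor=0.05):
    # Divide by the normalized sensitivity, leaving low-sensitivity
    # regions untouched so noise far from the coil is not amplified.
    s = sens / sens.max()
    return np.where(s > floor, image / np.maximum(s, floor), image)
```

In the paper the profile is computed in 3D from the fiducial-derived positions of all eight coils; the division step is the same in spirit.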
A Gaussian beam method for ultrasonic non-destructive evaluation modeling
NASA Astrophysics Data System (ADS)
Jacquet, O.; Leymarie, N.; Cassereau, D.
2018-05-01
The propagation of high-frequency ultrasonic body waves can be efficiently estimated with a semi-analytic Dynamic Ray Tracing approach using the paraxial approximation. Although this asymptotic field estimation avoids the computational cost of numerical methods, it has several limitations in reproducing highly interferential features. Nevertheless, some of these can be managed by allowing the paraxial quantities to be complex-valued. This gives rise to localized solutions known as paraxial Gaussian beams. Whereas their propagation and transmission/reflection laws are well defined, the adopted complexification introduces additional initial conditions. While these are usually chosen according to strategies tailored to specific applications, a Gabor frame method has been implemented here to initialize a reasonable number of paraxial Gaussian beams in an application-independent way. Since this method can be applied to a usefully wide range of ultrasonic transducers, the typical case of the time-harmonic piston radiator is investigated. Compared to the commonly used Multi-Gaussian Beam model [1], better agreement is obtained throughout the radiated field between the results of numerical integration (or the analytical on-axis solution) and the resulting Gaussian beam superposition. The sparsity of the proposed solution is also discussed.
On sequential data assimilation for scalar macroscopic traffic flow models
NASA Astrophysics Data System (ADS)
Blandin, Sébastien; Couque, Adrien; Bayen, Alexandre; Work, Daniel
2012-09-01
We consider the problem of sequential data assimilation for transportation networks using optimal filtering with a scalar macroscopic traffic flow model. Properties of the distribution of the uncertainty on the true state, related to the specific nonlinearity and non-differentiability inherent to macroscopic traffic flow models, are investigated, derived analytically, and analyzed. We show that the nonlinear dynamics, by creating discontinuities in the traffic state, affect the performance of classical filters, and in particular that the distribution of the uncertainty on the traffic state at shock waves is a mixture distribution. The non-differentiability of the traffic dynamics around stationary shock waves is also proved, and the resulting optimality loss of the estimates is quantified numerically. The properties of the estimates are explicitly studied for the Godunov scheme (and thus the Cell-Transmission Model), leading to specific conclusions about their use in the context of filtering, which is a significant contribution of this article. Analytical proofs and numerical tests are introduced to support the results presented. A Java implementation of the classical filters used in this work is available online at http://traffic.berkeley.edu to facilitate further efforts on this topic and foster reproducible research.
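The forward model underlying the filtering discussion, a Godunov discretization of a scalar conservation law, can be sketched compactly. The snippet below (in Python rather than the authors' Java) advances a density profile one step using the demand/supply form of the Godunov flux, as in the Cell-Transmission Model, with a Greenshields fundamental diagram on a periodic domain; the diagram and parameter values are illustrative assumptions.

```python
import numpy as np

def greenshields(rho, v_max=1.0, rho_max=1.0):
    # Greenshields flux: f(rho) = v_max * rho * (1 - rho / rho_max).
    return v_max * rho * (1.0 - rho / rho_max)

def godunov_step(rho, dt, dx, v_max=1.0, rho_max=1.0):
    # Cell-Transmission-Model form of the Godunov flux: sending (demand)
    # and receiving (supply) functions split at the critical density.
    rho_c = rho_max / 2.0
    demand = greenshields(np.minimum(rho, rho_c), v_max, rho_max)
    supply = greenshields(np.maximum(rho, rho_c), v_max, rho_max)
    # Periodic domain: flux[i] is the flow across the interface i -> i+1,
    # min of upstream demand and downstream supply.
    flux = np.minimum(demand, np.roll(supply, -1))
    return rho - dt / dx * (flux - np.roll(flux, 1))
```

Because the scheme is conservative and monotone under the CFL condition, total mass is preserved and densities stay in [0, rho_max]; the shock discontinuities it produces are exactly what makes the filtering problem in the article nontrivial.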
Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.
2014-01-01
Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
USDA-ARS?s Scientific Manuscript database
Rill detachment is an important process in rill erosion. The rill detachment rate is the fundamental basis for determination of the parameters of a rill erosion model. In this paper, an analytical method was proposed to estimate the rill detachment rate. The method is based on the exact analytical s...
NASA Technical Reports Server (NTRS)
Joshi, S. M.
1984-01-01
Closed-loop stability is investigated for multivariable linear time-invariant systems controlled by optimal full state feedback linear quadratic (LQ) regulators, with nonlinear gains present in the feedback channels. Estimates are obtained for the region of attraction when the nonlinearities escape the (0.5, infinity) sector in regions away from the origin and for the region of ultimate boundedness when the nonlinearities escape the sector near the origin. The expressions for these regions also provide methods for selecting the performance function parameters in order to obtain LQ designs with better tolerance for nonlinearities. The analytical results are illustrated by applying them to the problem of controlling the rigid-body pitch angle and elastic motion of a large, flexible space antenna.
NASA Astrophysics Data System (ADS)
Zahari, Zakirah Mohd; Zubaidah Adnan, Siti; Kanthasamy, Ramesh; Saleh, Suriyati; Samad, Noor Asma Fazli Abdul
2018-03-01
The specification of a crystal product is usually given in terms of the crystal size distribution (CSD), and an optimal cooling strategy is necessary to achieve the target CSD. Direct design control involving an analytical CSD estimator is one approach that can be used to generate the set-point. However, the effects of temperature on the crystal growth rate are neglected in that estimator, so the temperature dependence of the growth rate needs to be considered in order to provide an accurate set-point. The objective of this work is to extend the analytical CSD estimator with an Arrhenius expression that covers the effect of temperature on the growth rate. The application of this work is demonstrated through a potassium sulphate crystallisation process. Based on a specified target CSD, the extended estimator is capable of generating the required set-point, and a proposed controller successfully maintained the operation at the set-point to achieve the target CSD. Compared with other cooling strategies, a reduction of up to 18.2% in the total number of undesirable crystals generated from secondary nucleation under a linear cooling strategy is achieved.
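The Arrhenius extension amounts to multiplying a power-law growth kinetic by a temperature factor, G = k0·exp(−Ea/(R·T))·(ΔC)^g. A minimal sketch, with illustrative (not fitted) constants k0, Ea and g:

```python
import numpy as np

R_GAS = 8.314  # gas constant, J mol^-1 K^-1

def growth_rate(T, supersat, k0=1.0e-2, Ea=4.0e4, g=1.5):
    # Power-law growth kinetics with an Arrhenius temperature factor:
    #   G = k0 * exp(-Ea / (R * T)) * (delta C)^g
    # T in kelvin, supersat is the supersaturation delta C.
    # k0, Ea and g are illustrative values, not fitted parameters.
    return k0 * np.exp(-Ea / (R_GAS * T)) * supersat ** g
```

Making the growth rate an explicit function of temperature is exactly what lets the extended estimator translate a target CSD into a temperature-versus-time set-point rather than assuming growth at a constant rate.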
Analytical methods in multivariate highway safety exposure data estimation
DOT National Transportation Integrated Search
1984-01-01
Three general analytical techniques which may be of use in extending, enhancing, and combining highway accident exposure data are discussed. The techniques are log-linear modelling, iterative proportional fitting and the expectation maximizati...
An analytic performance model of disk arrays and its application
NASA Technical Reports Server (NTRS)
Lee, Edward K.; Katz, Randy H.
1991-01-01
As disk arrays become widely used, tools for understanding and analyzing their performance become increasingly important. In particular, performance models can be invaluable in both configuring and designing disk arrays. Accurate analytic performance models are desirable over other types of models because they can be quickly evaluated, are applicable under a wide range of system and workload parameters, and can be manipulated by a range of mathematical techniques. Unfortunately, analytical performance models of disk arrays are difficult to formulate due to the presence of queuing and fork-join synchronization; a disk array request is broken up into independent disk requests which must all complete to satisfy the original request. We develop, validate, and apply an analytic performance model for disk arrays. We derive simple equations for approximating their utilization, response time, and throughput. We then validate the analytic model via simulation and investigate the accuracy of each approximation used in deriving the analytical model. Finally, we apply the analytical model to derive an equation for the optimal unit of data striping in disk arrays.
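The difficulty the authors identify, fork-join synchronization on top of per-disk queueing, is commonly handled with approximations. The sketch below is one such first-order approximation, not the paper's model: each disk is treated as an independent M/M/1 queue, and the fork-join response is scaled by the k-th harmonic number (the mean of the maximum of k i.i.d. exponentials).

```python
import math

def harmonic(k):
    # k-th harmonic number H_k = 1 + 1/2 + ... + 1/k.
    return sum(1.0 / i for i in range(1, k + 1))

def disk_array_response(arrival_rate, service_time, k):
    # Each of the k disks modeled as an independent M/M/1 queue; the
    # fork-join response time is approximated by scaling the single-disk
    # response by H_k -- a crude first-order approximation that ignores
    # the correlation between sibling subrequests.
    rho = arrival_rate * service_time          # per-disk utilization
    if rho >= 1.0:
        raise ValueError("queue is saturated (utilization >= 1)")
    single = service_time / (1.0 - rho)        # M/M/1 mean response time
    return harmonic(k) * single
```

The approximation captures the qualitative behavior validated in the paper, response time growing with both load and stripe width and diverging at saturation, while the paper's equations are calibrated against simulation for much better accuracy.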
Brooks, Myron H.; Schroder, LeRoy J.; Willoughby, Timothy C.
1987-01-01
Four laboratories involved in the routine analysis of wet-deposition samples participated in an interlaboratory comparison program managed by the U.S. Geological Survey. The four participants were: the Illinois State Water Survey central analytical laboratory in Champaign, Illinois; the U.S. Geological Survey national water-quality laboratories in Atlanta, Georgia, and Denver, Colorado; and the Inland Waters Directorate national water-quality laboratory in Burlington, Ontario, Canada. Analyses of interlaboratory samples performed by the four laboratories from October 1983 through December 1984 were compared. Participating laboratories analyzed three types of interlaboratory samples--natural wet deposition, simulated wet deposition, and deionized water--for pH and specific conductance, and for dissolved calcium, magnesium, sodium, potassium, chloride, sulfate, nitrate, ammonium, and orthophosphate. Natural wet-deposition samples were aliquots of actual wet-deposition samples. Analyses of these samples by the four laboratories were compared using analysis of variance. Test results indicated that pH, calcium, nitrate, and ammonium results were not directly comparable among the four laboratories. Statistically significant differences between laboratory results were probably meaningful only for analyses of dissolved calcium. Simulated wet-deposition samples with known analyte concentrations were used to test each laboratory for analyte bias. Laboratory analyses of calcium, magnesium, sodium, potassium, chloride, sulfate, and nitrate were not significantly different from the known concentrations of these analytes when tested using analysis of variance. Deionized-water samples were used to test each laboratory for reporting of false positive values. The Illinois State Water Survey laboratory reported the smallest percentage of false positive values for most analytes. Analyte precision was estimated for each laboratory from results of replicate measurements.
In general, the Illinois State Water Survey laboratory achieved the greatest precision, whereas the U.S. Geological Survey laboratories achieved the least precision.
Sajnóg, Adam; Hanć, Anetta; Koczorowski, Ryszard; Barałkiewicz, Danuta
2017-12-01
A new procedure for the determination of elements derived from titanium implants, together with physiological elements, in soft tissues by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) is presented. The analytical procedure involved the preparation of in-house matrix-matched solid standards with analyte addition, based on the certified reference material (CRM) MODAS-4 Cormorant Tissue. The addition of gelatin, serving as a binding agent, substantially improved the physical properties of the standards. The performance of the analytical method was assayed and validated by calculating parameters such as precision, detection limits, trueness, and recovery of analyte addition using an additional CRM, ERM-BB184 Bovine Muscle. Analyte addition was further confirmed by microwave digestion of the solid standards and analysis by solution-nebulization ICP-MS. The detection limits range from 1.8 μg g-1 to 450 μg g-1 for Mn and Ca, respectively. The precision values range from 7.3% to 42% for Al and Zn, respectively. The estimated recoveries of analyte addition lie within the range of 83%-153% for Mn and Cu, respectively. Oral mucosa samples taken from patients treated with titanium dental implants were examined using the developed analytical method. Standards and tissue samples were cryocut into 30 µm sections. LA-ICP-MS made it possible to obtain two-dimensional maps of the distribution of elements in the tested samples, which revealed a high content of Ti and Al derived from the implants. Optical microscope images showed numerous micrometre-sized particles in the oral mucosa samples, which suggests that they are residues from the implantation procedure. Copyright © 2017 Elsevier B.V. All rights reserved.
EDXRF as an alternative method for multielement analysis of tropical soils and sediments.
Fernández, Zahily Herrero; Dos Santos Júnior, José Araújo; Dos Santos Amaral, Romilton; Alvarez, Juan Reinaldo Estevez; da Silva, Edvane Borges; De França, Elvis Joacir; Menezes, Rômulo Simões Cezar; de Farias, Emerson Emiliano Gualberto; do Nascimento Santos, Josineide Marques
2017-08-10
The quality assessment of tropical soils and sediments is still under discussion, with efforts being made on the part of governmental agencies to establish reference values. Energy dispersive X-ray fluorescence (EDXRF) is a potential analytical technique for quantifying diverse chemical elements in geological material without chemical treatment, primarily when it is performed at an appropriate metrological level. In this work, analytical curves were obtained by means of the analysis of geological reference materials (RMs), which allowed the researchers to draw a comparison among the sources of analytical uncertainty. After the quality assurance of the analytical procedure had been established, the EDXRF method was applied to determine chemical elements in soils from the state of Pernambuco, Brazil. The regression coefficients of the analytical curves used to determine Al, Ca, Fe, K, Mg, Mn, Ni, Pb, Si, Sr, Ti, and Zn were higher than 0.99. The quality of the analytical procedure was demonstrated at a 95% confidence level, in which the estimated analytical uncertainties agreed with those from the RMs' certificates of analysis. The analysis of diverse geological samples from Pernambuco indicated higher concentrations of Ni and Zn in sugarcane areas, with maximum values of 41 mg kg-1 and 118 mg kg-1, respectively, and in agricultural areas (41 mg kg-1 and 127 mg kg-1, respectively). The trace element Sr was mainly enriched in urban soils, with values of 400 mg kg-1. According to the results, the EDXRF method was successfully implemented, providing some chemical tracers for the quality assessment of tropical soils and sediments.
Theoretical characterization of a model of aragonite crystal orientation in red abalone nacre
NASA Astrophysics Data System (ADS)
Coppersmith, S N; Gilbert, P U P A; Metzler, R A
2009-03-01
Nacre, commonly known as mother-of-pearl, is a remarkable biomineral that in red abalone consists of layers of 400 nm thick aragonite crystalline tablets confined by organic matrix sheets, with the [0 0 1] crystal axes of the aragonite tablets oriented to within ±12° from the normal to the layer planes. Recent experiments demonstrate that greater orientational order develops over a distance of tens of layers from the prismatic boundary at which nacre formation begins. Our previous simulations of a model in which the order develops because of differential tablet growth rates (oriented tablets growing faster than misoriented ones) yield patterns of tablets that agree qualitatively and quantitatively with the experimental measurements. This paper presents an analytical treatment of this model, focusing on how the dynamical development and eventual degree of order depend on model parameters. Dynamical equations for the probability distributions governing tablet orientations are introduced whose form can be determined from symmetry considerations and for which substantial analytic progress can be made. Numerical simulations are performed to relate the parameters used in the analytic theory to those in the microscopic growth model. The analytic theory demonstrates that the dynamical mechanism is able to achieve a much higher degree of order than naive estimates would indicate.
NASA Astrophysics Data System (ADS)
Melnikov, A. A.; Kostishin, V. G.; Alenkov, V. V.
2017-05-01
In real operation, a thermoelectric cooling device works in the presence of thermal resistances between the thermoelectric material and the heat medium or the object being cooled. These resistances limit the performance of the device and should be considered when modeling it. Here we propose a dimensionless mathematical steady-state model that takes them into account. Analytical equations for the dimensionless cooling capacity, voltage, and coefficient of performance (COP) as functions of dimensionless current are given. For improved accuracy, a device can be modeled using numerical or combined analytical-numerical methods. The results of the modeling are in acceptable accordance with experimental results. The case of zero temperature difference between the hot and cold heat mediums, at which the maximum cooling capacity mode appears, is considered in detail. Optimal device parameters for maximal cooling capacity, such as the fraction of thermal conductance on the cold side y and the fraction of current relative to the maximum j', are estimated to lie in the ranges 0.38-0.44 and 0.48-0.95, respectively, for dimensionless conductance K' = 5-100. A method for determining the thermal resistances of a thermoelectric cooling system is also proposed.
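For orientation, the underlying dimensional model (before the paper's nondimensionalization and without the contact resistances it adds) is the standard module equation Qc = αTcI − I²R/2 − KΔT, whose maximum-cooling current follows from dQc/dI = 0. A sketch with illustrative parameter values:

```python
def cooling_capacity(I, alpha, Tc, R, K, dT):
    # Standard ideal thermoelectric module model: Peltier pumping minus
    # half the Joule heat minus back-conduction across the module.
    #   Qc = alpha * Tc * I - R * I^2 / 2 - K * dT
    return alpha * Tc * I - 0.5 * R * I ** 2 - K * dT

def optimal_current(alpha, Tc, R):
    # dQc/dI = alpha * Tc - R * I = 0  =>  I_opt = alpha * Tc / R.
    return alpha * Tc / R
```

The paper's contribution is to repeat this kind of optimization with the cold- and hot-side thermal resistances included, which shifts the optimal operating point (hence the reported ranges for y and j').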
NASA Astrophysics Data System (ADS)
Sansone, Giuseppe; Ferretti, Andrea; Maschio, Lorenzo
2017-09-01
Within the semiclassical Boltzmann transport theory in the constant relaxation-time approximation, we perform an ab initio study of the transport properties of selected systems, including crystalline solids and nanostructures. A local (Gaussian) basis set is adopted and exploited to analytically evaluate band velocities as well as to access full and range-separated hybrid functionals (such as B3LYP, PBE0, or HSE06) at a moderate computational cost. As a consequence of the analytical derivative, our approach is computationally efficient and does not suffer from problems related to band crossings. We investigate and compare the performance of a variety of hybrid functionals in evaluating the Boltzmann conductivity. Demonstrative examples include silicon and aluminum bulk crystals as well as two thermoelectric materials (CoSb3, Bi2Te3). We observe that hybrid functionals, besides providing more realistic bandgaps as expected, lead to larger bandwidths and hence allow for a better estimate of transport properties, also in metallic systems. As a nanostructure prototype, we also investigate conductivity in boron-nitride (BN) substituted graphene, in which graphene nanoribbons (nanoroads) alternate with BN ones.
NASA Astrophysics Data System (ADS)
Barbarossa, S.; Farina, A.
A novel scheme for detecting moving targets with synthetic aperture radar (SAR) is presented. The proposed approach is based on the use of the Wigner-Ville distribution (WVD) for simultaneously detecting moving targets and estimating their kinematic motion parameters. The estimation plays a key role in focusing the target and correctly locating it with respect to the stationary background. The method has a number of advantages: (i) the detection is efficiently performed on the samples in the time-frequency domain provided by the WVD, without resorting to a bank of filters, each matched to possible values of the unknown target motion parameters; (ii) the estimation of the target motion parameters can be done in the same time-frequency domain by locating the line along which the maximum energy of the WVD is concentrated. A validation of the approach is given by both analytical and simulation means. In addition, the estimation of the target kinematic parameters and the corresponding image focusing are demonstrated.
Tamura, Koichiro; Tao, Qiqing; Kumar, Sudhir
2018-01-01
RelTime estimates divergence times by relaxing the assumption of a strict molecular clock in a phylogeny. It shows excellent performance in estimating divergence times for both simulated and empirical molecular sequence data sets in which evolutionary rates varied extensively throughout the tree. RelTime is computationally efficient and scales well with increasing size of data sets. Until now, however, RelTime has not had a formal mathematical foundation. Here, we show that the basis of the RelTime approach is a relative rate framework (RRF) that combines comparisons of evolutionary rates in sister lineages with the principle of minimum rate change between evolutionary lineages and their respective descendants. We present analytical solutions for estimating relative lineage rates and divergence times under RRF. We also discuss the relationship of RRF with other approaches, including the Bayesian framework. We conclude that RelTime will be useful for phylogenies with branch lengths derived not only from molecular data, but also from morphological and biochemical traits. PMID:29893954
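The sister-lineage comparison at the heart of RRF can be illustrated in its simplest case, a two-tip cherry: relative rates are assigned so that both lineages imply the same relative node time and the mean rate is one. This sketch covers only that base case, not the recursive treatment of deeper nodes in the analytical solutions the paper derives.

```python
def cherry_relative_rates(l1, l2):
    # Two sister lineages with branch lengths l1, l2 (substitutions/site).
    # Choose relative rates r1, r2 with mean 1 such that both lineages
    # imply the same relative divergence time: l1/r1 == l2/r2.
    r1 = 2.0 * l1 / (l1 + l2)
    r2 = 2.0 * l2 / (l1 + l2)
    t = (l1 + l2) / 2.0   # relative node time implied by either lineage
    return r1, r2, t
```

The longer branch is assigned the proportionally higher rate, so unequal branch lengths are absorbed into rate variation rather than into conflicting node ages, which is the essence of relaxing the strict clock.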
Pla-Tolós, J; Serra-Mora, P; Hakobyan, L; Molins-Legua, C; Moliner-Martinez, Y; Campins-Falcó, P
2016-11-01
In this work, in-tube solid-phase microextraction (in-tube SPME) coupled to capillary LC (CapLC) with diode array detection is reported for the on-line extraction and enrichment of the booster biocides irgarol-1051 and diuron, included in the Water Framework Directive 2013/39/EU (WFD). The analytical performance has been successfully demonstrated. Furthermore, the environmental friendliness of the procedure has been quantified by calculating the carbon footprint of the analytical procedure and comparing it with other previously reported methodologies. Under the optimum conditions, the method presents good linearity over the range assayed, 0.05-10 μg/L for irgarol-1051 and 0.7-10 μg/L for diuron. The LODs were 0.015 μg/L and 0.2 μg/L for irgarol-1051 and diuron, respectively. Precision was also satisfactory (relative standard deviation, RSD < 3.5%). The proposed methodology was applied to monitor water samples, taking into account the EQS standards for these compounds. The carbon footprint values for the proposed procedure consolidate the operational efficiency (analytical and environmental performance) of in-tube SPME-CapLC-DAD in general, and in particular for determining irgarol-1051 and diuron in water samples. Copyright © 2016 Elsevier B.V. All rights reserved.
An Analysis Model for Water Cone Subsidence in Bottom Water Drive Reservoirs
NASA Astrophysics Data System (ADS)
Wang, Jianjun; Xu, Hui; Wu, Shucheng; Yang, Chao; Kong, lingxiao; Zeng, Baoquan; Xu, Haixia; Qu, Tailai
2017-12-01
Water coning in bottom water drive reservoirs, which results in earlier water breakthrough, a rapid increase in water cut, and a low recovery level, has drawn tremendous attention in the petroleum engineering field. As a simple and effective method to inhibit bottom water coning, shut-in coning control is usually preferred in the oilfield to control the water cone and thereby enhance economic performance. However, most water-coning research has investigated the behavior of the cone as it grows; reported studies of water cone subsidence are very scarce. The goal of this work is to present an analytical model for the subsidence of the water cone when the well is shut in. Based on the Dupuit critical oil production rate formula, an analytical model is developed to estimate the initial water cone shape at the point of critical drawdown. Then, with the initial water cone shape equation, we propose an analysis model for water cone subsidence in bottom water drive reservoirs. Model analysis and several sensitivity studies are conducted. This work presents an accurate and fast analytical model for water cone subsidence in bottom water drive reservoirs. Given the recent interest in the development of bottom water drive reservoirs, our approach provides a promising technique for better understanding the subsidence of the water cone.
Analytical Model for Mean Flow and Fluxes of Momentum and Energy in Very Large Wind Farms
NASA Astrophysics Data System (ADS)
Markfort, Corey D.; Zhang, Wei; Porté-Agel, Fernando
2018-01-01
As wind-turbine arrays continue to be installed and array sizes continue to grow, there is an increasing need to represent very large wind-turbine arrays in numerical weather prediction models, for wind-farm optimization, and for environmental assessment. We propose a simple analytical model for boundary-layer flow in fully-developed wind-turbine arrays, based on the concept of sparsely-obstructed shear flows. In describing the vertical distribution of the mean wind speed and shear stress within wind farms, our model estimates the mean kinetic energy harvested from the atmospheric boundary layer and determines the partitioning between the wind power captured by the wind turbines and that absorbed by the underlying land or water. A length scale based on the turbine geometry, spacing, and performance characteristics is able to estimate the asymptotic limit for the fully-developed flow through wind-turbine arrays, and thereby determine whether the wind-farm flow is fully developed for very large turbine arrays. Our model is validated using data collected in controlled wind-tunnel experiments, and its usefulness for the prediction of wind-farm performance and the optimization of turbine-array spacing is described. Our model may also be useful for assessing the extent to which the extraction of wind power affects the land-atmosphere coupling or air-water exchange of momentum, with implications for the transport of heat, moisture, and trace gases such as carbon dioxide, methane, and nitrous oxide, and for ecologically important oxygen.
Tabassum, Rana; Gupta, Banshi D
2016-12-15
We report an approach for the simultaneous estimation of vitamin K1 (VK1) and heparin via a cascaded-channel multianalyte sensing probe employing the fiber optic surface plasmon resonance technique. Cladding is removed from two well-separated portions of the fiber, which are then coated with thin films of silver (channel 1) and copper (channel 2), respectively. A nanohybrid of multiwalled carbon nanotubes in chitosan is fabricated over the silver layer for sensing VK1, whereas a core-shell nanostructure of polybrene@ZnO is coated over the copper layer for sensing heparin. The spectral interrogation method is used for the characterization of the sensor. Analyte selectivity of both channels is verified by carrying out experiments using independent solutions of VK1 and heparin. Experiments performed on a mixture of VK1 and heparin show red shifts in both channels on changing the concentration of either analyte in the mixture. The operating range for both VK1 and heparin is from 0 to 10⁻³ g/l. The limits of detection of the sensor are 2.66×10⁻⁴ µg/l and 2.88×10⁻⁴ µg/l for VK1 and heparin, respectively, which are lower than previously reported values. Additional advantages of the present sensor are low cost and the possibility of online monitoring and remote sensing.
Likelihood-Based Random-Effect Meta-Analysis of Binary Events.
Amatya, Anup; Bhaumik, Dulal K; Normand, Sharon-Lise; Greenhouse, Joel; Kaizar, Eloise; Neelon, Brian; Gibbons, Robert D
2015-01-01
Meta-analysis has been used extensively for evaluation of efficacy and safety of medical interventions. Its advantages and utilities are well known. However, recent studies have raised questions about the accuracy of the commonly used moment-based meta-analytic methods in general and for rare binary outcomes in particular. The issue is further complicated for studies with heterogeneous effect sizes. Likelihood-based mixed-effects modeling provides an alternative to moment-based methods such as inverse-variance weighted fixed- and random-effects estimators. In this article, we compare and contrast different mixed-effect modeling strategies in the context of meta-analysis. Their performance in estimating and testing the overall effect and heterogeneity is evaluated when combining results from studies with a binary outcome. Models that allow heterogeneity in both baseline rate and treatment effect across studies have low type I and type II error rates, and their estimates are the least biased among the models considered.
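The moment-based comparator the abstract questions is the familiar inverse-variance weighted random-effects procedure (DerSimonian-Laird), which can be sketched as:

```python
import numpy as np

def dersimonian_laird(y, v):
    """Moment-based random-effects pooling (DerSimonian-Laird).

    y : per-study effect estimates (e.g. log odds ratios)
    v : their within-study variances
    Returns the pooled effect, its variance, and the between-study
    variance tau^2. This is the moment-based comparator the abstract
    questions, not the likelihood-based models the paper advocates.
    """
    w = 1.0 / v                         # inverse-variance (fixed-effect) weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)  # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    w_re = 1.0 / (v + tau2)             # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    return mu, 1.0 / np.sum(w_re), tau2

# Three hypothetical studies (log odds ratios and within-study variances).
mu, var, tau2 = dersimonian_laird(np.array([0.2, 0.5, -0.1]),
                                  np.array([0.05, 0.04, 0.06]))
```

With rare binary events the within-study variances `v` are themselves poorly estimated, which is one reason the paper turns to likelihood-based mixed-effects models instead.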
Assessment of the Performance of a Dual-Frequency Surface Reference Technique
NASA Technical Reports Server (NTRS)
Meneghini, Robert; Liao, Liang; Tanelli, Simone; Durden, Stephen
2013-01-01
The high correlation of the rain-free surface cross sections at two frequencies implies that the estimate of differential path integrated attenuation (PIA) caused by precipitation along the radar beam can be obtained to a higher degree of accuracy than the path-attenuation at either frequency. We explore this finding first analytically and then by examining data from the JPL dual-frequency airborne radar using measurements from the TC4 experiment obtained during July-August 2007. Despite this improvement in the accuracy of the differential path attenuation, solving the constrained dual-wavelength radar equations for parameters of the particle size distribution requires not only this quantity but the single-wavelength path attenuation as well. We investigate a simple method of estimating the single-frequency path attenuation from the differential attenuation and compare this with the estimate derived directly from the surface return.
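The surface reference idea can be illustrated in a few lines; all σ0 values and the Ka/Ku attenuation ratio below are invented for illustration, not taken from the TC4 data.

```python
# Hedged sketch of the surface reference technique (SRT): path-integrated
# attenuation (PIA) at each frequency is the drop in surface cross section
# sigma0 relative to a rain-free reference, and the differential PIA is the
# difference between the two frequencies. All values are made up.
sigma0_ref = {"Ku": 10.0, "Ka": 8.0}   # rain-free sigma0 (dB), assumed
sigma0_rain = {"Ku": 7.5, "Ka": 2.0}   # measured in rain (dB), assumed

pia = {f: sigma0_ref[f] - sigma0_rain[f] for f in sigma0_ref}
dpia = pia["Ka"] - pia["Ku"]           # differential PIA (dB)

# A simple single-frequency estimate from the differential one, assuming an
# (illustrative) attenuation ratio r = k_Ka / k_Ku between the two bands:
r = 6.0                                # hypothetical band-to-band ratio
pia_ku_est = dpia / (r - 1.0)
```

The abstract's point is that `dpia` is measured more accurately than either `pia` value alone, so recovering a single-frequency PIA requires an extra relation like the ratio `r` assumed here.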
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, J.J. Jr.; Hyder, Z.
The Nguyen and Pinder method is one of four techniques commonly used for analysis of response data from slug tests. Limited field research has raised questions about the reliability of the parameter estimates obtained with this method. A theoretical evaluation of this technique reveals that errors were made in the derivation of the analytical solution upon which the technique is based. Simulation and field examples show that the errors result in parameter estimates that can differ from actual values by orders of magnitude. These findings indicate that the Nguyen and Pinder method should no longer be a tool in the repertoire of the field hydrogeologist. If data from a slug test performed in a partially penetrating well in a confined aquifer need to be analyzed, recent work has shown that the Hvorslev method is the best alternative among the commonly used techniques.
Consistent Partial Least Squares Path Modeling via Regularization.
Jung, Sunho; Park, JaeHong
2018-01-01
Partial least squares (PLS) path modeling is a component-based structural equation modeling that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it estimates path coefficients from consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that our regularized PLSc is recommended for use when serious multicollinearity is present.
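The ridge remedy can be sketched as follows, assuming (as a simplification) that path coefficients come from a normal-equation solve on latent-variable correlations; `ridge_path_coefficients` is a hypothetical helper, not part of any PLS package.

```python
import numpy as np

def ridge_path_coefficients(R_xx, r_xy, lam):
    """Ridge-regularized path coefficients: solve (R_xx + lam*I) b = r_xy.

    Under multicollinearity the correlation matrix R_xx among latent
    predictors is near-singular; adding lam * I stabilizes the solve,
    which is the essence of the regularization described in the abstract.
    """
    return np.linalg.solve(R_xx + lam * np.eye(R_xx.shape[0]), r_xy)

# Two nearly collinear latent predictors: the plain solve is unstable,
# and a small ridge penalty shrinks the coefficients toward stability.
R = np.array([[1.00, 0.98],
              [0.98, 1.00]])
r = np.array([0.50, 0.49])
b_ridge = ridge_path_coefficients(R, r, lam=0.1)
b_plain = ridge_path_coefficients(R, r, lam=0.0)
```

The penalty trades a small bias for a large reduction in variance, which is why the simulation study finds it helps exactly when multicollinearity is serious.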
Determination of structure tilting in magnetized plasmas—Time delay estimation in two dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guszejnov, Dávid; Bencze, Attila; Zoletnik, Sándor
2013-06-15
Time delay estimation (TDE) is a well-known technique to investigate poloidal flows in fusion plasmas. The present work is an extension of the earlier works of Bencze and Zoletnik [Phys. Plasmas 12, 052323 (2005)] and Tal et al. [Phys. Plasmas 18, 122304 (2011)]. From the perspective of comparing theory and experiment, it is important to establish the statistical properties of the TDE on solid mathematical grounds. This paper provides an analytic derivation of the variance of the TDE using a two-dimensional model for coherent turbulent structures in the plasma edge and also gives an explicit method for determining the tilt angle of structures. As a demonstration, this method is then applied to the results of a quasi-2D beam emission spectroscopy measurement performed at the TEXTOR tokamak.
NASA Technical Reports Server (NTRS)
1974-01-01
Technical information is presented covering the areas of: (1) analytical instrumentation useful in the analysis of physical phenomena; (2) analytical techniques used to determine the performance of materials; and (3) systems and component analyses for design and quality control.
42 CFR 493.845 - Standard; Toxicology.
Code of Federal Regulations, 2012 CFR
2012-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.851 - Standard; Hematology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.843 - Standard; Endocrinology.
Code of Federal Regulations, 2013 CFR
2013-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.845 - Standard; Toxicology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.845 - Standard; Toxicology.
Code of Federal Regulations, 2013 CFR
2013-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.851 - Standard; Hematology.
Code of Federal Regulations, 2013 CFR
2013-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.843 - Standard; Endocrinology.
Code of Federal Regulations, 2012 CFR
2012-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.843 - Standard; Endocrinology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
42 CFR 493.851 - Standard; Hematology.
Code of Federal Regulations, 2012 CFR
2012-10-01
... acceptable responses for each analyte in each testing event is unsatisfactory analyte performance for the... testing event. (e)(1) For any unsatisfactory analyte or test performance or testing event for reasons... any unacceptable analyte or testing event score, remedial action must be taken and documented, and the...
The Analytical Solution of the Transient Radial Diffusion Equation with a Nonuniform Loss Term.
NASA Astrophysics Data System (ADS)
Loridan, V.; Ripoll, J. F.; De Vuyst, F.
2017-12-01
Much work over the past 40 years has been devoted to the analytical solution of the radial diffusion equation that models the transport and loss of electrons in the magnetosphere, considering a diffusion coefficient proportional to a power law in shell and a constant loss term. Here, we propose an original analytical method to address this challenge with a nonuniform loss term. The strategy is to match any L-dependent electron losses with a piecewise constant function on M subintervals, i.e., dealing with a constant lifetime on each subinterval. Applying an eigenfunction expansion method, the eigenvalue problem becomes a Sturm-Liouville problem with M interfaces. Assuming the continuity of both the distribution function and its first spatial derivative, we are able to deal with a well-posed problem and to find the full analytical solution. We further show an excellent agreement between the analytical solutions and the solutions obtained directly from numerical simulations for different loss terms of various shapes and with a diffusion coefficient D_LL ∝ L^6. We also give two expressions for the number of eigenmodes N required for an accurate snapshot of the analytical solution, highlighting that N is proportional to 1/√t0, where t0 is a time of interest, and that N increases with the diffusion power. Finally, the equilibrium time, defined as the time to nearly reach the steady solution, is estimated by a closed-form expression and discussed. Applications to Earth and also Jupiter and Saturn are discussed.
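The kind of numerical simulation the analytical solution is checked against can be sketched with a crude explicit finite-difference scheme; the grid, D0, lifetimes, and boundary values below are illustrative assumptions, not the paper's.

```python
import numpy as np

# Finite-difference sketch (explicit Euler), not the paper's eigenfunction
# expansion, of the radial diffusion equation with a piecewise-constant loss:
#   df/dt = L^2 d/dL( D_LL / L^2 * df/dL ) - f / tau(L),  D_LL = D0 * L^6
# All numbers below are illustrative.
D0, nL = 1e-6, 51
L = np.linspace(1.0, 6.0, nL)
dL = L[1] - L[0]
tau = np.where(L < 4.0, 100.0, 10.0)   # two lifetime subintervals (M = 2)

Lh = 0.5 * (L[1:] + L[:-1])            # half-grid points for the flux
Dh = D0 * Lh**6 / Lh**2                # D_LL / L^2 at the half points

f = np.zeros(nL)
f[-1] = 1.0                            # fixed outer-boundary source
dt = 0.02                              # within the explicit stability limit
for _ in range(5000):
    flux = Dh * np.diff(f) / dL
    f[1:-1] += dt * (L[1:-1]**2 * np.diff(flux) / dL - f[1:-1] / tau[1:-1])
    f[0] = 0.0                         # absorbing inner boundary
```

Matching a solution like `f` against the truncated eigenmode sum is how the required number of modes N can be assessed in practice.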
How Much Can We Learn from a Single Chromatographic Experiment? A Bayesian Perspective.
Wiczling, Paweł; Kaliszan, Roman
2016-01-05
In this work, we proposed and investigated a Bayesian inference procedure to find the desired chromatographic conditions based on known analyte properties (lipophilicity, pKa, and polar surface area) using one preliminary experiment. A previously developed nonlinear mixed effect model was used to specify the prior information about a new analyte with known physicochemical properties. Further, the prior (no preliminary data) and posterior predictive distribution (prior + one experiment) were determined sequentially to search towards the desired separation. The following isocratic high-performance reversed-phase liquid chromatographic conditions were sought: (1) retention time of a single analyte within the range of 4-6 min and (2) baseline separation of two analytes with retention times within the range of 4-10 min. The empirical posterior Bayesian distribution of parameters was estimated using the "slice sampling" Markov Chain Monte Carlo (MCMC) algorithm implemented in Matlab. The simulations with artificial analytes and experimental data of ketoprofen and papaverine were used to test the proposed methodology. The simulation experiment showed that for a single analyte and for two randomly selected analytes, there is a 97% and a 74% probability, respectively, of obtaining a successful chromatogram using no or one preliminary experiment. The desired separation for ketoprofen and papaverine was established based on a single experiment. It was confirmed that the search for a desired separation rarely requires a large number of chromatographic analyses, at least for a simple optimization problem. The proposed Bayesian-based optimization scheme is a powerful method of finding a desired chromatographic separation based on a small number of preliminary experiments.
NASA Astrophysics Data System (ADS)
Wu, Fang-Xiang; Mu, Lei; Shi, Zhong-Ke
2010-01-01
The models of gene regulatory networks are often derived from the statistical thermodynamics principle or the Michaelis-Menten kinetics equation. As a result, the models contain rational reaction rates that are nonlinear in both parameters and states. Estimating parameters that enter a model nonlinearly is challenging, despite the many traditional nonlinear parameter estimation methods such as the Gauss-Newton iteration and its variants. In this article, we develop a two-step method to estimate the parameters in rational reaction rates of gene regulatory networks via weighted linear least squares. This method takes the special structure of rational reaction rates into consideration: in a rational reaction rate, the numerator and the denominator are linear in the parameters. By designing a special weight matrix for the linear least squares, parameters in the numerator and the denominator can be estimated by solving two linear least squares problems. The main advantage of the developed method is that it produces analytical solutions to the estimation of parameters in rational reaction rates, which is originally a nonlinear parameter estimation problem. The developed method is applied to a couple of gene regulatory networks. The simulation results show superior performance over the Gauss-Newton method.
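The key structural observation, that a rational rate is linear in its numerator and denominator parameters, can be sketched as a single least-squares pass; the paper's method adds a designed weight matrix and a second step, omitted here.

```python
import numpy as np

def fit_rational_rate(s, v):
    """One linear-least-squares pass for a rational rate law
        v = (a + b*s) / (1 + c*s).
    Multiplying through by the denominator gives
        v = a + b*s - c*(s*v),
    which is linear in (a, b, c), turning the nonlinear fit into a linear
    one. (The paper adds a designed weight matrix and a second step; this
    unweighted pass only illustrates the idea.)
    """
    X = np.column_stack([np.ones_like(s), s, -s * v])
    a, b, c = np.linalg.lstsq(X, v, rcond=None)[0]
    return a, b, c

# Noise-free data from a Michaelis-Menten law v = 2 s / (0.5 + s),
# equivalently (4 s) / (1 + 2 s): the pass recovers a = 0, b = 4, c = 2.
s = np.linspace(0.1, 2.0, 20)
v = 2.0 * s / (0.5 + s)
a, b, c = fit_rational_rate(s, v)
```

With noisy data the regressor `-s*v` contains measurement error, which is exactly why the paper's weighting scheme matters.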
NASA Astrophysics Data System (ADS)
Shinnaka, Shinji
This paper presents a new unified analysis of the estimation errors of model-matching extended-back-EMF estimation methods for sensorless drive of permanent-magnet synchronous motors. The analytical solutions for the estimation errors, whose validity is confirmed by numerical experiments, are broadly universal and applicable. As an example of this universality and applicability, a new trajectory-oriented vector control method is proposed, which can directly realize a quasi-optimal strategy minimizing total losses with no additional computational load, by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is analytically derived, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems using model-matching extended-back-EMF estimation methods.
Oftedal, O T; Eisert, R; Barrell, G K
2014-01-01
Mammalian milks may differ greatly in composition from cow milk, and these differences may affect the performance of analytical methods. High-fat, high-protein milks with a preponderance of oligosaccharides, such as those produced by many marine mammals, present a particular challenge. We compared the performance of several methods against reference procedures using Weddell seal (Leptonychotes weddellii) milk of highly varied composition (by reference methods: 27-63% water, 24-62% fat, 8-12% crude protein, 0.5-1.8% sugar). A microdrying step preparatory to carbon-hydrogen-nitrogen (CHN) gas analysis slightly underestimated water content and had a higher repeatability relative standard deviation (RSDr) than did reference oven drying at 100°C. Compared with a reference macro-Kjeldahl protein procedure, the CHN (or Dumas) combustion method had a somewhat higher RSDr (1.56 vs. 0.60%) but correlation between methods was high (0.992), means were not different (CHN: 17.2±0.46% dry matter basis; Kjeldahl 17.3±0.49% dry matter basis), there were no significant proportional or constant errors, and predictive performance was high. A carbon stoichiometric procedure based on CHN analysis failed to adequately predict fat (reference: Röse-Gottlieb method) or total sugar (reference: phenol-sulfuric acid method). Gross energy content, calculated from energetic factors and results from reference methods for fat, protein, and total sugar, accurately predicted gross energy as measured by bomb calorimetry. We conclude that the CHN (Dumas) combustion method and calculation of gross energy are acceptable analytical approaches for marine mammal milk, but fat and sugar require separate analysis by appropriate analytic methods and cannot be adequately estimated by carbon stoichiometry. 
Some other alternative methods (low-temperature drying for water determination; the Bradford, Lowry, and biuret methods for protein; the Folch and the Bligh and Dyer methods for fat; and enzymatic and reducing sugar methods for total sugar) appear likely to produce substantial error in marine mammal milks. It is important that alternative analytical methods be properly validated against a reference method before being used, especially for mammalian milks that differ greatly from cow milk in analyte characteristics and concentrations.
Dai, James Y.; Hughes, James P.
2012-01-01
The meta-analytic approach to evaluating surrogate end points assesses the predictiveness of treatment effect on the surrogate toward treatment effect on the clinical end point based on multiple clinical trials. Definition and estimation of the correlation of treatment effects were developed in linear mixed models and later extended to binary or failure time outcomes on a case-by-case basis. In a general regression setting that covers nonnormal outcomes, we discuss in this paper several metrics that are useful in the meta-analytic evaluation of surrogacy. We propose a unified 3-step procedure to assess these metrics in settings with binary end points, time-to-event outcomes, or repeated measures. First, the joint distribution of estimated treatment effects is ascertained by an estimating equation approach; second, the restricted maximum likelihood method is used to estimate the means and the variance components of the random treatment effects; finally, confidence intervals are constructed by a parametric bootstrap procedure. The proposed method is evaluated by simulations and applications to 2 clinical trials. PMID:22394448
The areal reduction factor: A new analytical expression for the Lazio Region in central Italy
NASA Astrophysics Data System (ADS)
Mineo, C.; Ridolfi, E.; Napolitano, F.; Russo, F.
2018-05-01
For the study and modeling of hydrological phenomena, in both urban and rural areas, a proper estimation of the areal reduction factor (ARF) is crucial. In this paper, we estimated the ARF from observed rainfall data as the ratio between the average rainfall occurring over a specific area and the point rainfall. Then, we compared the obtained ARF values with some of the most widespread empirical approaches in the literature, which are used when rainfall observations are not available. Results highlight that the literature formulations can lead to a substantial over- or underestimation of the ARF estimated from observed data. These findings can have severe consequences, especially in the design of hydraulic structures, where empirical formulations are extensively applied. The aim of this paper is to present a new analytical relationship, with an explicit dependence on rainfall duration and area, that can better represent the ARF-area trend over the study area. The analytical curve presented here can find an important application in estimating ARF values for design purposes. The test study area is the Lazio Region (central Italy).
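The empirical ARF definition used in the paper is simple enough to state in code; the gauge depths below are invented for illustration.

```python
import numpy as np

# Hedged sketch of the empirical ARF as defined in the abstract: the ratio
# of the areal-average rainfall to the point rainfall for a given duration.
# Gauge values are illustrative only.
gauge_depths = np.array([42.0, 35.5, 28.1, 31.7, 25.9])  # mm over one duration

areal_avg = gauge_depths.mean()
point_value = gauge_depths.max()   # point rainfall at the reference gauge
arf = areal_avg / point_value      # <= 1 by construction
```

Empirical formulations replace `arf` with a fitted function of area and duration, which is where the over- and underestimation discussed in the abstract can enter.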
Yu, Chanki; Lee, Sang Wook
2016-05-20
We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model but with several normal distribution functions such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.
Estimation of distributional parameters for censored trace-level water-quality data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilliom, R.J.; Helsel, D.R.
1984-01-01
A recurring difficulty encountered in investigations of many metals and organic contaminants in ambient waters is that a substantial portion of water-sample concentrations are below limits of detection established by analytical laboratories. Several methods were evaluated for estimating distributional parameters for such censored data sets using only uncensored observations. Their reliabilities were evaluated by a Monte Carlo experiment in which small samples were generated from a wide range of parent distributions and censored at varying levels. Eight methods were used to estimate the mean, standard deviation, median, and interquartile range. Criteria were developed, based on the distribution of uncensored observations, for determining the best-performing parameter estimation method for any particular data set. The most robust method for minimizing error in censored-sample estimates of the four distributional parameters over all simulation conditions was the log-probability regression method. With this method, censored observations are assumed to follow the zero-to-censoring level portion of a lognormal distribution obtained by a least-squares regression between logarithms of uncensored concentration observations and their z scores. When method performance was separately evaluated for each distributional parameter over all simulation conditions, the log-probability regression method still had the smallest errors for the mean and standard deviation, but the lognormal maximum likelihood method had the smallest errors for the median and interquartile range. When data sets were classified prior to parameter estimation into groups reflecting their probable parent distributions, the ranking of estimation methods was similar, but the accuracy of error estimates was markedly improved over those without classification. 6 figs., 6 tabs.
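The log-probability regression method the study recommends can be sketched as follows, assuming simple (i − 0.5)/n plotting positions, which may differ from the study's exact choices.

```python
import numpy as np
from statistics import NormalDist

def log_probability_regression(detects, n_censored):
    """Log-probability regression for left-censored data (sketch).

    Regress log concentrations of the uncensored observations on their
    normal z-scores, then fill in the censored fraction from the fitted
    lognormal. Plotting positions use the simple (i - 0.5)/n rule, which
    may differ from the study's exact choices.
    """
    n = len(detects) + n_censored
    x = np.sort(np.asarray(detects, dtype=float))
    z = np.array([NormalDist().inv_cdf((r - 0.5) / n)
                  for r in range(n_censored + 1, n + 1)])  # detects: top ranks
    slope, intercept = np.polyfit(z, np.log(x), 1)
    z_cens = np.array([NormalDist().inv_cdf((r - 0.5) / n)
                       for r in range(1, n_censored + 1)])
    imputed = np.exp(intercept + slope * z_cens)   # below-detection fill-in
    filled = np.concatenate([imputed, x])
    return filled.mean(), filled.std(ddof=1)
```

Because the censored tail is filled from the fitted lognormal rather than set to zero or the detection limit, the mean and standard deviation are far less biased, which is the study's central finding.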
Developing appropriate methods for cost-effectiveness analysis of cluster randomized trials.
Gomes, Manuel; Ng, Edmond S-W; Grieve, Richard; Nixon, Richard; Carpenter, James; Thompson, Simon G
2012-01-01
Cost-effectiveness analyses (CEAs) may use data from cluster randomized trials (CRTs), where the unit of randomization is the cluster, not the individual. However, most studies use analytical methods that ignore clustering. This article compares alternative statistical methods for accommodating clustering in CEAs of CRTs. Our simulation study compared the performance of statistical methods for CEAs of CRTs with 2 treatment arms. The study considered a method that ignored clustering--seemingly unrelated regression (SUR) without a robust standard error (SE)--and 4 methods that recognized clustering--SUR and generalized estimating equations (GEEs), both with robust SE, a "2-stage" nonparametric bootstrap (TSB) with shrinkage correction, and a multilevel model (MLM). The base case assumed CRTs with moderate numbers of balanced clusters (20 per arm) and normally distributed costs. Other scenarios included CRTs with few clusters, imbalanced cluster sizes, and skewed costs. Performance was reported as bias, root mean squared error (rMSE), and confidence interval (CI) coverage for estimating incremental net benefits (INBs). We also compared the methods in a case study. Each method reported low levels of bias. Without the robust SE, SUR gave poor CI coverage (base case: 0.89 v. nominal level: 0.95). The MLM and TSB performed well in each scenario (CI coverage, 0.92-0.95). With few clusters, the GEE and SUR (with robust SE) had coverage below 0.90. In the case study, the mean INBs were similar across all methods, but ignoring clustering underestimated statistical uncertainty and the value of further research. MLMs and the TSB are appropriate analytical methods for CEAs of CRTs with the characteristics described. SUR and GEE are not recommended for studies with few clusters.
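The two-stage nonparametric bootstrap the study evaluates can be sketched as below; the shrinkage correction the study applies is omitted, and all data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_stage_bootstrap(clusters, n_boot=500):
    """Two-stage nonparametric bootstrap for clustered outcomes (sketch).

    Stage 1 resamples clusters with replacement; stage 2 resamples
    individuals within each sampled cluster. The shrinkage correction the
    study applies is omitted for brevity. `clusters` is a list of 1-D
    arrays of individual-level outcomes (e.g. net benefits).
    """
    k = len(clusters)
    means = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, k, size=k)                  # stage 1: clusters
        resampled = [rng.choice(clusters[i], size=len(clusters[i]))
                     for i in idx]                        # stage 2: individuals
        means[b] = np.concatenate(resampled).mean()
    return means

# Four hypothetical clusters; percentiles of the bootstrap distribution
# give a confidence interval that respects the clustering.
boot = two_stage_bootstrap([np.full(8, float(i)) for i in range(1, 5)],
                           n_boot=400)
ci = np.percentile(boot, [2.5, 97.5])
```

Resampling clusters first is what captures the between-cluster variability that a naive individual-level bootstrap (like SUR without a robust SE) ignores.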
Danoix, F; Grancher, G; Bostel, A; Blavette, D
2007-09-01
Atom probe is a very powerful instrument for measuring concentrations on a sub-nanometric scale [M.K. Miller, G.D.W. Smith, Atom Probe Microanalysis, Principles and Applications to Materials Problems, Materials Research Society, Pittsburgh, 1989]. Atom probe is therefore a unique tool to study and characterise finely decomposed metallic materials. Composition profiles or 3D mapping can be realised by gathering elemental composition measurements. As the detector efficiency is generally not equal to 1, the measured compositions are only estimates of the actual values. The variance of the estimates depends on which information is to be estimated; it can be calculated when the detection process is known. These two papers are devoted to giving complete analytical derivations and expressions for the variance of composition measurements in several situations encountered when using the atom probe. In this first paper, we concentrate on the analytical derivation of the variance for composition estimates obtained from a conventional one-dimensional (1D) atom probe. In particular, the existing expressions, and the basic hypotheses on which they rely, are reconsidered, and complete analytical demonstrations are established. In the second companion paper, the case of the 3D atom probe is treated, highlighting how knowledge of the 3D positions of detected ions modifies the analytical derivation of the variance of local composition data.
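The starting point for such derivations is the textbook binomial variance of a composition estimated from a finite number of detected ions; the papers derive refinements of this kind of expression, so the sketch below is only the baseline case.

```python
def composition_variance(c_hat, n_detected):
    """Textbook binomial sketch of composition-estimate variance.

    With detector efficiency < 1, the composition estimate c_hat rests on
    the n_detected ions actually collected, and under the usual
    independence assumptions Var(c_hat) ~= c_hat * (1 - c_hat) / n_detected.
    The papers derive refinements of exactly this kind of expression.
    """
    return c_hat * (1.0 - c_hat) / n_detected

# Illustrative: a 5 at.% solute measured from 10^4 detected ions.
var = composition_variance(0.05, 10000)
```

The square root of `var` gives the statistical scatter expected between nominally identical composition measurements.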
NASA Astrophysics Data System (ADS)
Keshavarz-Motamed, Zahra
2015-11-01
Coarctation of the aorta (COA) is a congenital heart disease corresponding to a narrowing in the aorta. Cardiac catheterization is considered to be the reference standard for definitive evaluation of COA severity, based on the peak-to-peak trans-coarctation pressure gradient (PtoP TCPG) and the instantaneous systolic value of the trans-COA pressure gradient (TCPG). However, invasive cardiac catheterization may carry high risks, given that undergoing multiple follow-up cardiac catheterizations is common in patients with COA. The objective of this study is to present an analytical description of the COA that estimates PtoP TCPG and TCPG without a need for high-risk invasive data collection. Coupled Navier-Stokes and elastic deformation equations were solved analytically to estimate TCPG and PtoP TCPG. The results were validated against data measured in vitro (e.g., 90% COA: TCPG: root mean squared error (RMSE) = 3.93 mmHg; PtoP TCPG: RMSE = 7.9 mmHg). Moreover, the estimated PtoP TCPG resulting from the suggested analytical description was validated using clinical data in twenty patients with COA (maximum RMSE: 8.3 mmHg). Very good correlation and concordance were found between TCPG and PtoP TCPG obtained from the analytical formulation and in vitro and in vivo data. The suggested methodology can be considered as an alternative to cardiac catheterization and can help prevent its risks.
Yule, Daniel L.; Adams, Jean V.; Warner, David M.; Hrabik, Thomas R.; Kocovsky, Patrick M.; Weidel, Brian C.; Rudstam, Lars G.; Sullivan, Patrick J.
2013-01-01
Pelagic fish assessments often combine large amounts of acoustic-based fish density data and limited midwater trawl information to estimate species-specific biomass density. We compared the accuracy of five apportionment methods for estimating pelagic fish biomass density using simulated communities with known fish numbers that mimic Lakes Superior, Michigan, and Ontario, representing a range of fish community complexities. Across all apportionment methods, the error in the estimated biomass generally declined with increasing effort, but methods that accounted for community composition changes with water column depth performed best. Correlations between trawl catch and the true species composition were highest when more fish were caught, highlighting the benefits of targeted trawling in locations of high fish density. Pelagic fish surveys should incorporate geographic and water column depth stratification in the survey design, use apportionment methods that account for species-specific depth differences, target midwater trawling effort in areas of high fish density, and include at least 15 midwater trawls. With relatively basic biological information, simulations of fish communities and sampling programs can optimize effort allocation and reduce error in biomass estimates.
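A minimal sketch of depth-stratified apportionment, the kind of method the simulations favour: the total acoustic biomass density in each depth layer is split by the species proportions of trawl catches from that layer. Layer names, species and numbers are invented, and the paper's estimators are more elaborate:

```python
def apportion_biomass(acoustic_density, trawl_counts):
    """Apportion acoustic biomass density to species by trawl composition.

    acoustic_density: dict layer -> total biomass density in that layer.
    trawl_counts: dict layer -> dict species -> fish caught in that layer.
    Stratifying by layer reflects the finding that apportionment should
    account for species composition changing with water column depth.
    """
    result = {}
    for layer, total in acoustic_density.items():
        counts = trawl_counts[layer]
        n = sum(counts.values())
        result[layer] = {sp: total * c / n for sp, c in counts.items()}
    return result

# Invented example: composition differs sharply between layers.
density = {"epilimnion": 10.0, "hypolimnion": 4.0}
catch = {"epilimnion": {"alewife": 30, "smelt": 10},
         "hypolimnion": {"smelt": 5, "cisco": 15}}
by_species = apportion_biomass(density, catch)
```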
Analytical Fuselage and Wing Weight Estimation of Transport Aircraft
DOT National Transportation Integrated Search
1996-05-01
A method of estimating the load-bearing fuselage weight and wing weight of transport aircraft based on fundamental structural principles has been developed. This method of weight estimation represents a compromise between the rapid assessment of comp...
Cavitation in liquid cryogens. 4: Combined correlations for venturi, hydrofoil, ogives, and pumps
NASA Technical Reports Server (NTRS)
Hord, J.
1974-01-01
The results of a series of experimental and analytical cavitation studies are presented. Cross-correlation of the developed cavity data is performed for a venturi, a hydrofoil, and three scaled ogives. The new correlating parameter, MTWO, improves data correlation for these stationary bodies and for pumping equipment. Existing techniques for predicting the cavitating performance of pumping machinery were extended to include variations in flow coefficient, cavitation parameter, and equipment geometry. The new predictive formulations hold promise as a design tool and as a universal method for correlating pumping machinery performance. Application of these predictive formulas requires prescribed cavitation test data or an independent method of estimating the cavitation parameter for each pump. The latter would permit prediction of performance without testing; potential methods for evaluating the cavitation parameter prior to testing are suggested.
Modeling and evaluating the performance of Brillouin distributed optical fiber sensors.
Soto, Marcelo A; Thévenaz, Luc
2013-12-16
A thorough analysis of the key factors impacting the performance of Brillouin distributed optical fiber sensors is presented. An analytical expression is derived to estimate the error in the determination of the Brillouin peak gain frequency, based for the first time on real experimental conditions. This expression is experimentally validated, and it describes how the frequency uncertainty depends on measurement parameters such as the Brillouin gain linewidth, the frequency scanning step, and the signal-to-noise ratio. Based on the model leading to this expression, and considering the limitations imposed by nonlinear effects and pump depletion, a figure of merit is proposed to fairly compare the performance of Brillouin distributed sensing systems. This figure of merit offers the research community and potential users an objective metric for evaluating the real performance gain resulting from any proposed configuration.
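A common way to picture the peak-frequency determination that the error model addresses is parabolic interpolation on a scanned Lorentzian gain spectrum. This is a generic sketch, not the paper's estimator, and all frequencies are illustrative; the achievable uncertainty depends on the scan step, the Brillouin linewidth and the signal-to-noise ratio, as the derived expression quantifies:

```python
def lorentzian(f, f0, width, gain=1.0):
    """Brillouin gain spectrum model: Lorentzian centred at f0 (MHz)."""
    return gain / (1.0 + ((f - f0) / (width / 2.0)) ** 2)

def peak_frequency(freqs, gains):
    """Locate the gain peak by three-point parabolic interpolation.

    Fits a parabola through the scanned point of maximum gain and its
    two neighbours and returns the vertex frequency.
    """
    i = max(range(1, len(gains) - 1), key=lambda k: gains[k])
    step = freqs[1] - freqs[0]
    denom = gains[i - 1] - 2.0 * gains[i] + gains[i + 1]
    return freqs[i] + 0.5 * step * (gains[i - 1] - gains[i + 1]) / denom

# Noise-free example: a 30-MHz-wide Lorentzian scanned in 2-MHz steps.
scan = [10780.0 + 2.0 * k for k in range(40)]
spectrum = [lorentzian(f, 10843.0, 30.0) for f in scan]
f_peak = peak_frequency(scan, spectrum)
```

With noise added to the spectrum, repeating this estimate gives the frequency uncertainty that the paper's analytical expression predicts.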
Error recovery in shared memory multiprocessors using private caches
NASA Technical Reports Server (NTRS)
Wu, Kun-Lung; Fuchs, W. Kent; Patel, Janak H.
1990-01-01
The problem of recovering from processor transient faults in shared-memory multiprocessor systems is examined. A user-transparent checkpointing and recovery scheme using private caches is presented. Processes can recover from errors due to faulty processors by restarting from the checkpointed computation state. Implementation techniques using checkpoint identifiers and recovery stacks are examined as a means of reducing performance degradation in processor utilization during normal execution. This cache-based checkpointing technique prevents rollback propagation, provides rapid recovery, and can be integrated into standard cache coherence protocols. An analytical model is used to estimate the relative performance of the scheme during normal execution. Extensions that take error latency into account are presented.
Faggion, Clovis Mariano; Wu, Yun-Chun; Scheidgen, Moritz; Tu, Yu-Kang
2015-01-01
Background Risk of bias (ROB) may threaten the internal validity of a clinical trial by distorting the magnitude of treatment effect estimates, although some conflicting information on this assumption exists. Objective The objective of this study was to evaluate the effect of ROB on the magnitude of treatment effect estimates in randomized controlled trials (RCTs) in periodontology and implant dentistry. Methods A search for Cochrane systematic reviews (SRs), including meta-analyses of RCTs published in the periodontology and implant dentistry fields, was performed in the Cochrane Library in September 2014. Random-effect meta-analyses were performed by grouping RCTs with different levels of ROB in three domains (sequence generation, allocation concealment, and blinding of outcome assessment). To increase power and precision, only SRs with meta-analyses including at least 10 RCTs were included. Meta-regression was performed to investigate the association between ROB characteristics and the magnitudes of intervention effects in the meta-analyses. Results Of the 24 initially screened SRs, 21 SRs were excluded because they did not include at least 10 RCTs in the meta-analyses. Three SRs (two from the periodontology field) generated information for conducting 27 meta-analyses. Meta-regression did not reveal significant differences in the relationship of the ROB level with the size of treatment effect estimates, although a trend for inflated estimates was observed in domains with unclear ROBs. Conclusion In this sample of RCTs, high and (mainly) unclear risks of selection and detection biases did not seem to influence the size of treatment effect estimates, although several confounders might have influenced the strength of the association. PMID:26422698
Keller, Lisa A; Clauser, Brian E; Swanson, David B
2010-12-01
In recent years, demand for performance assessments has continued to grow. However, performance assessments are notorious for lower reliability, and in particular, low reliability resulting from task specificity. Since reliability analyses typically treat the performance tasks as randomly sampled from an infinite universe of tasks, these estimates of reliability may not be accurate. For tests built according to a table of specifications, tasks are randomly sampled from different strata (content domains, skill areas, etc.). If these strata remain fixed in the test construction process, ignoring this stratification in the reliability analysis results in an underestimate of "parallel forms" reliability, and an overestimate of the person-by-task component. This research explores the effect of appropriately representing, or misrepresenting, the stratification in the estimation of reliability and the standard error of measurement. Both multivariate and univariate generalizability studies are reported. Results indicate that proper specification of the analytic design is essential in yielding the proper information both about the generalizability of the assessment and the standard error of measurement. Further, illustrative D studies present the effect under a variety of situations and test designs. Additional benefits of multivariate generalizability theory in test design and evaluation are also discussed.
Fernández-San-Martín, Maria Isabel; Martín-López, Luis Miguel; Masa-Font, Roser; Olona-Tabueña, Noemí; Roman, Yuani; Martin-Royo, Jaume; Oller-Canet, Silvia; González-Tejón, Susana; San-Emeterio, Luisa; Barroso-Garcia, Albert; Viñas-Cabrera, Lidia; Flores-Mateo, Gemma
2014-01-01
Patients with severe mental illness have a higher prevalence of cardiovascular risk factors (CRF). The objective is to determine whether interventions to modify lifestyle in these patients reduce anthropometric and analytical parameters related to CRF in comparison with routine clinical practice. Systematic review of controlled clinical trials with lifestyle interventions in Medline, Cochrane Library, Embase, PsycINFO, and CINAHL. Outcomes: change in body mass index, waist circumference, cholesterol, triglycerides, and blood sugar. Meta-analyses were performed using random-effects models to estimate the weighted mean difference. Heterogeneity was assessed using the I2 statistic and subgroup analyses. 26 studies were selected. Lifestyle interventions decrease anthropometric and analytical parameters at 3 months of follow-up. At 6 and 12 months, the differences between the intervention and control groups were maintained, although with less precision. More studies with larger samples and long-term follow-up are needed.
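The random-effects pooling of mean differences described above can be sketched with the textbook DerSimonian-Laird estimator; the trial effects and variances below are invented, not taken from the review:

```python
def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooled mean difference.

    effects: per-study mean differences; variances: their within-study
    variances. Returns the pooled estimate and tau^2, the estimated
    between-study variance (the source of heterogeneity the I2
    statistic summarises).
    """
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

# Hypothetical BMI changes (kg/m^2) from three lifestyle-intervention trials.
pooled, tau2 = random_effects_meta([-0.8, -1.2, -0.3], [0.04, 0.09, 0.06])
```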
An Advanced Framework for Improving Situational Awareness in Electric Power Grid Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yousu; Huang, Zhenyu; Zhou, Ning
With the deployment of new smart grid technologies and the penetration of renewable energy in power systems, significant uncertainty and variability are being introduced into power grid operation. Traditionally, the Energy Management System (EMS) operates the power grid in a deterministic mode, and thus will not be sufficient for the future control center in a stochastic environment with faster dynamics. One of the main challenges is to improve situational awareness. This paper reviews the current status of power grid operation and presents a vision of improving wide-area situational awareness for a future control center. An advanced framework, consisting of parallel state estimation, state prediction, parallel contingency selection, parallel contingency analysis, and advanced visual analytics, is proposed to provide the capabilities needed for better decision support by utilizing high performance computing (HPC) techniques and advanced visual analytic techniques. Research results are presented to support the proposed vision and framework.
Galaxy–galaxy lensing estimators and their covariance properties
Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uros; ...
2017-07-21
Here, we study the covariance properties of real space correlation function estimators – primarily galaxy–shear correlations, or galaxy–galaxy lensing – using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that the empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
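The delete-one jackknife mentioned among the empirical covariance estimates works as follows; this is a generic sketch with toy per-region "profiles", not the paper's pipeline:

```python
def jackknife_covariance(measurements):
    """Delete-one jackknife covariance of a binned estimator.

    measurements: per-region estimates (lists of equal length, e.g. a
    lensing profile in radial bins). Each jackknife realisation drops
    one region and averages the rest; the usual (n-1)/n jackknife
    prefactor is applied to the scatter of the realisations.
    """
    n = len(measurements)
    nbin = len(measurements[0])
    reals = []
    for drop in range(n):
        kept = [m for i, m in enumerate(measurements) if i != drop]
        reals.append([sum(col) / (n - 1) for col in zip(*kept)])
    mean = [sum(col) / n for col in zip(*reals)]
    return [[(n - 1) / n * sum((r[a] - mean[a]) * (r[b] - mean[b]) for r in reals)
             for b in range(nbin)] for a in range(nbin)]

# Four toy sky regions, two radial bins.
cov = jackknife_covariance([[1.0, 2.0], [1.2, 1.8], [0.8, 2.2], [1.0, 2.0]])
```

For these toy numbers the diagonal recovers the variance of the mean across regions, and the off-diagonal term picks up the (here perfectly anticorrelated) bin-to-bin scatter.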
Diagnostic precision of mentally estimated home blood pressure means.
Ouattara, Franck Olivier; Laskine, Mikhael; Cheong, Nathalie Ng; Birnbaum, Leora; Wistaff, Robert; Bertrand, Michel; van Nguyen, Paul; Kolan, Christophe; Durand, Madeleine; Rinfret, Felix; Lamarre-Cliche, Maxime
2018-05-07
Paper home blood pressure (HBP) charts are commonly brought to physicians at office visits. The precision and accuracy of mental calculations of blood pressure (BP) means are not known. A total of 109 hypertensive patients were instructed to measure and record their HBP for 1 week and to bring their paper charts to their office visit. Study section 1: HBP means were calculated electronically and compared to corresponding in-office BP estimates made by physicians. Study section 2: 100 randomly ordered HBP charts were re-examined repetitively by 11 evaluators. Each evaluator estimated BP means four times in 5, 15, 30, and 60 s (random order) allocated for the task. BP means and diagnostic performance (determination of therapeutic systolic and diastolic BP goals attained or not) were compared between physician estimates and electronically calculated results. Overall, electronically and mentally calculated BP means were not different. Individual analysis showed that 83% of in-office physician estimates were within a 5-mmHg systolic BP range. There was diagnostic disagreement in 15% of cases. Performance improved consistently when the time allocated for BP estimation was increased from 5 to 15 s and from 15 to 30 s, but not when it exceeded 30 s. Mentally calculating HBP means from paper charts can cause a number of diagnostic errors. Chart evaluation exceeding 30 s does not significantly improve accuracy. BP-measuring devices with modern analytical capacities could be useful to physicians.
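The device-side calculation that physicians' mental estimates are compared against is just an arithmetic mean plus an at-goal classification. A minimal sketch, assuming a 135/85 mmHg home-BP goal (a common threshold, used here for illustration) and an invented week of readings:

```python
def bp_summary(readings, sys_goal=135.0, dia_goal=85.0):
    """Mean home blood pressure and at-goal classification.

    readings: (systolic, diastolic) pairs in mmHg from a paper chart.
    The goal thresholds are assumptions for this sketch, not values
    from the study.
    """
    sys_mean = sum(s for s, d in readings) / len(readings)
    dia_mean = sum(d for s, d in readings) / len(readings)
    at_goal = sys_mean < sys_goal and dia_mean < dia_goal
    return round(sys_mean, 1), round(dia_mean, 1), at_goal

# One invented week of home readings.
week = [(138, 84), (132, 80), (136, 86), (130, 78),
        (134, 82), (137, 85), (133, 81)]
sbp, dbp, ok = bp_summary(week)
```

Charts like this one, with several individual readings above goal but a mean just below it, are exactly where quick mental estimates risk a diagnostic disagreement.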
Galaxy-galaxy lensing estimators and their covariance properties
NASA Astrophysics Data System (ADS)
Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uroš; Slosar, Anže; Vazquez Gonzalez, Jose
2017-11-01
We study the covariance properties of real space correlation function estimators - primarily galaxy-shear correlations, or galaxy-galaxy lensing - using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that the empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
Evaluation of selected methods for determining streamflow during periods of ice effect
Melcher, Norwood B.; Walker, J.F.
1992-01-01
Seventeen methods for estimating ice-affected streamflow are evaluated for potential use with the U.S. Geological Survey streamflow-gaging station network. The methods evaluated were identified by written responses from U.S. Geological Survey field offices and by a comprehensive literature search. The methods selected and the techniques used for applying them are described in this report. The methods are evaluated by comparing estimated results with data collected at three streamflow-gaging stations in Iowa during the winter of 1987-88. Discharge measurements were obtained at 1- to 5-day intervals during the ice-affected periods at the three stations to define an accurate baseline record. Discharge records were compiled for each method based on the data available, assuming a 6-week field schedule. The methods are classified into two general categories, subjective and analytical, depending on whether individual judgment is necessary for method application. On the basis of the results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used at streamflow-gaging stations where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice-adjustment factor) may be appropriate for use at stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge-ratio and multiple-regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.
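The discharge-ratio method, one of the two best-performing subjective methods, can be sketched as follows: the ratio of measured discharge to the open-water-rating discharge is interpolated in time between site visits and applied to the rating record. All numbers are invented, and real application involves hydrographer judgment that this sketch omits:

```python
def discharge_ratio_estimate(rating_q, measured):
    """Discharge-ratio estimate of ice-affected streamflow.

    rating_q: dict day -> discharge from the open-water stage-discharge
    rating. measured: dict day -> discharge measured during site visits.
    Ratios measured/rating are interpolated linearly between visits.
    """
    days = sorted(rating_q)
    visit_days = sorted(measured)
    ratios = {d: measured[d] / rating_q[d] for d in visit_days}
    out = {}
    for d in days:
        if d <= visit_days[0]:
            r = ratios[visit_days[0]]
        elif d >= visit_days[-1]:
            r = ratios[visit_days[-1]]
        else:
            lo = max(v for v in visit_days if v <= d)
            hi = min(v for v in visit_days if v >= d)
            if hi == lo:
                r = ratios[lo]
            else:
                r = ratios[lo] + (ratios[hi] - ratios[lo]) * (d - lo) / (hi - lo)
        out[d] = rating_q[d] * r
    return out

# Invented 5-day ice-affected period with visits on days 1 and 5.
rating = {1: 100.0, 2: 110.0, 3: 120.0, 4: 115.0, 5: 105.0}
measured = {1: 60.0, 5: 84.0}
estimates = discharge_ratio_estimate(rating, measured)
```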
Human, Lauren J; Thorson, Katherine R; Woolley, Joshua D; Mendes, Wendy Berry
2017-04-01
Intranasal administration of the hypothalamic neuropeptide oxytocin (OT) has, in some studies, been associated with positive effects on social perception and cognition. Similarly, positive emotion inductions can improve a range of perceptual and performance-based behaviors. In this exploratory study, we examined how OT administration and positive emotion inductions interact in their associations with social and analytical performance. Participants (N=124) were randomly assigned to receive an intranasal spray of OT (40 IU) or placebo and then viewed one of three videos designed to engender one of the following emotion states: social warmth, pride, or an affectively neutral state. Following the emotion induction, participants completed social perception and analytical tasks. There were no significant main effects of OT condition on social perception tasks, failing to replicate prior research, or on analytical performance. Further, OT condition and positive emotion inductions did not interact with each other in their associations with social perception performance. However, OT condition and positive emotion manipulations did significantly interact in their associations with analytical performance. Specifically, combining positive emotion inductions with OT administration was associated with worse analytical performance, with the pride induction no longer benefiting performance and the warmth induction resulting in worse performance. In sum, we found little evidence for main or interactive effects of OT on social perception, but preliminary evidence that OT administration may impair analytical performance when paired with positive emotion inductions.
SWAT system performance predictions
NASA Astrophysics Data System (ADS)
Parenti, Ronald R.; Sasiela, Richard J.
1993-03-01
In the next phase of Lincoln Laboratory's SWAT (Short-Wavelength Adaptive Techniques) program, the performance of a 241-actuator adaptive-optics system will be measured using a variety of synthetic-beacon geometries. As an aid in this experimental investigation, a detailed set of theoretical predictions has also been assembled. The computational tools that have been applied in this study include a numerical approach in which Monte-Carlo ray-trace simulations of accumulated phase error are developed, and an analytical analysis of the expected system behavior. This report describes the basis of these two computational techniques and compares their estimates of overall system performance. Although their regions of applicability tend to be complementary rather than redundant, good agreement is usually obtained when both sets of results can be derived for the same engagement scenario.
Evaluation of analytical performance based on partial order methodology.
Carlsen, Lars; Bruggemann, Rainer; Kenessova, Olga; Erzhigitov, Erkin
2015-01-01
Classical measurements of performance are typically based on linear scales. However, in analytical chemistry a simple scale may not be sufficient to analyze analytical performance appropriately. Here, partial order methodology can be helpful. Within the context described here, partial order analysis can be seen as an ordinal analysis of data matrices, especially to simplify the relative comparison of objects on the basis of their data profiles (the ordered set of values an object has). Hence, partial order methodology offers a unique possibility to evaluate analytical performance. In the present work, data as provided by laboratories through interlaboratory comparisons or proficiency testing are used as an illustrative example. However, the presented scheme is likewise applicable to the comparison of analytical methods, or simply as a tool for the optimization of an analytical method. The methodology can be applied without presumptions or pretreatment of the analytical data in order to evaluate the analytical performance taking into account all indicators simultaneously, thus elucidating a "distance" from the true value. In the present illustrative example it is assumed that the laboratories analyze a given sample several times and subsequently report the mean value, the standard deviation, and the skewness, which simultaneously are used for the evaluation of the analytical performance. The analyses lead to information concerning (1) a partial ordering of the laboratories, subsequently (2) a "distance" to the Reference laboratory, and (3) a classification based on the concept of "peculiar points".
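The dominance relation at the heart of the partial-order evaluation can be sketched directly. Profiles here are (|mean error|, standard deviation, |skewness|) triples oriented so that smaller is better; the lab values are invented, and this is a minimal illustration, not the paper's analysis:

```python
def dominates(a, b):
    """Profile a dominates b if it is no worse on every indicator and
    strictly better on at least one (all indicators oriented so that
    smaller is better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def partial_order(profiles):
    """All dominance pairs among laboratory performance profiles.

    Labs appearing in no pair with each other are incomparable: this is
    the information a Hasse diagram of the partial order would display.
    """
    return {(p, q) for p in profiles for q in profiles
            if p != q and dominates(profiles[p], profiles[q])}

# Invented labs: (|mean error|, standard deviation, |skewness|).
labs = {"A": (0.1, 0.5, 0.2), "B": (0.3, 0.6, 0.4), "C": (0.2, 0.4, 0.5)}
pairs = partial_order(labs)
```

Here A dominates B on all three indicators, while A and C are incomparable (A has the smaller bias, C the smaller spread), which is exactly the situation a single linear score would hide.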
Aleksejevs, Aleksandrs; Barkanova, Svetlana; Ilyichev, Alexander; ...
2010-11-19
We perform updated and detailed calculations of the complete NLO set of electroweak radiative corrections to parity-violating e⁻e⁻ → e⁻e⁻(γ) scattering asymmetries at energies relevant for the ultra-precise Moller experiment coming soon at JLab. Our numerical results are presented for a range of experimental cuts, and the relative importance of various contributions is analyzed. In addition, we provide very compact analytical expressions free from non-physical parameters and show them to be valid for fast yet accurate estimates.
Turbofan engine demonstration of sensor failure detection
NASA Technical Reports Server (NTRS)
Merrill, Walter C.; Delaat, John C.; Abdelwahab, Mahmood
1991-01-01
In this paper, the results of a full-scale engine demonstration of a sensor failure detection algorithm are presented. The algorithm detects, isolates, and accommodates sensor failures using analytical redundancy. The experimental hardware, including the F100 engine, is described. Demonstration results were obtained over a large portion of a typical flight envelope for the F100 engine. They include both subsonic and supersonic conditions at both medium and full (non-afterburning) power. Estimated accuracy, minimum detectable levels of sensor failures, and failure accommodation performance for an F100 turbofan engine control system are discussed.
Ionic Graphitization of Ultrathin Films of Ionic Compounds.
Kvashnin, A G; Pashkin, E Y; Yakobson, B I; Sorokin, P B
2016-07-21
On the basis of ab initio density functional calculations, we performed a comprehensive investigation of the general graphitization tendency in rocksalt-type structures. In this paper, we determine the critical slab thickness for a range of ionic cubic crystal systems, below which a spontaneous conversion from a cubic to a layered graphitic-like structure occurs. This conversion is driven by surface energy reduction. Using only fundamental parameters of the compounds such as the Allen electronegativity and ionic radius of the metal atom, we also develop an analytical relation to estimate the critical number of layers.
Multi-hole pressure probes to wind tunnel experiments and air data systems
NASA Astrophysics Data System (ADS)
Shevchenko, A. M.; Shmakov, A. S.
2017-10-01
The problems of developing a multi-hole pressure system to measure flow angularity, Mach number, and dynamic head for wind tunnel experiments or air data systems are discussed. A simple analytical model with separation of variables is derived for a multi-hole spherical pressure probe. The proposed model is uniform for small subsonic and supersonic speeds. An error analysis was performed, and error functions were obtained that allow one to estimate the influence of the Mach number, the pitch angle, and the location of the pressure ports on the uncertainty of the determined flow parameters.
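One common convention for reducing multi-hole probe pressures to flow angles normalises the differential port pressures by the centre-minus-average-side pressure difference. This generic five-hole sketch, with invented pressures, illustrates the idea but is not the paper's spherical-probe model:

```python
def probe_coefficients(p_center, p_up, p_down, p_left, p_right):
    """Non-dimensional angle coefficients for a five-hole probe.

    Uses the centre-minus-average-side normalisation, one of several
    conventions in use; calibration then maps (c_pitch, c_yaw) to the
    pitch and yaw angles and to Mach number and dynamic head.
    """
    p_side = (p_up + p_down + p_left + p_right) / 4.0
    denom = p_center - p_side
    c_pitch = (p_up - p_down) / denom
    c_yaw = (p_right - p_left) / denom
    return c_pitch, c_yaw

# Invented port pressures (kPa) at a small positive pitch angle.
cp, cy = probe_coefficients(101.80, 101.35, 101.05, 101.20, 101.20)
```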
Data Centric Sensor Stream Reduction for Real-Time Applications in Wireless Sensor Networks
Aquino, Andre Luiz Lins; Nakamura, Eduardo Freire
2009-01-01
This work presents a data-centric strategy to meet deadlines in soft real-time applications in wireless sensor networks. This strategy considers three main aspects: (i) the design of the real-time application to obtain the minimum deadlines; (ii) an analytic model to estimate the ideal sample size used by data-reduction algorithms; and (iii) two data-centric stream-based sampling algorithms to perform data reduction whenever necessary. Simulation results show that our data-centric strategies meet deadlines without losing data representativeness. PMID:22303145
X-43A Rudder Spindle Fatigue Life Estimate and Testing
NASA Technical Reports Server (NTRS)
Glaessgen, Edward H.; Dawicke, David S.; Johnston, William M.; James, Mark A.; Simonsen, Micah; Mason, Brian H.
2005-01-01
Fatigue life analyses were performed using a standard strain-life approach and a linear cumulative damage parameter to assess the effect of a single accidental overload on the fatigue life of the Haynes 230 nickel-base superalloy X-43A rudder spindle. Because of a limited amount of information available about the Haynes 230 material, a series of tests were conducted to replicate the overload and in-service conditions for the spindle and corroborate the analysis. Both the analytical and experimental results suggest that the spindle will survive the anticipated flight loads.
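The standard strain-life approach with a linear cumulative damage parameter can be sketched as below. The Basquin and Coffin-Manson constants are illustrative placeholders, not Haynes 230 properties, and the strain amplitudes are invented:

```python
def cycles_to_failure(strain_amp, E, sf, b, ef, c):
    """Invert the strain-life curve eps_a = (sf/E)(2N)^b + ef(2N)^c.

    Solved by bisection in log space, since eps_a decreases
    monotonically with life 2N (reversals).
    """
    def eps(two_n):
        return (sf / E) * two_n ** b + ef * two_n ** c
    lo, hi = 1.0, 1e9
    for _ in range(200):
        mid = (lo * hi) ** 0.5
        if eps(mid) > strain_amp:
            lo = mid
        else:
            hi = mid
    return lo / 2.0  # reversals 2N -> cycles N

def miner_damage(blocks, E=200e3, sf=1200.0, b=-0.1, ef=0.5, c=-0.6):
    """Palmgren-Miner linear cumulative damage: D = sum(n_i / N_i).

    Default constants are placeholders (stress in MPa, strain
    dimensionless); failure is predicted when D reaches 1.
    """
    return sum(n / cycles_to_failure(ea, E, sf, b, ef, c) for n, ea in blocks)

# One accidental overload cycle plus anticipated service cycles,
# given as (cycle count, strain amplitude) blocks.
damage = miner_damage([(1, 0.02), (5000, 0.002)])
```

A damage sum well below 1, as here, is the form of evidence behind the conclusion that the spindle survives the anticipated flight loads despite the overload.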
Feasibility study of inlet shock stability system of YF-12
NASA Technical Reports Server (NTRS)
Blausey, G. C.; Coleman, D. M.; Harp, D. S.
1972-01-01
The feasibility of self actuating bleed valves as a shock stabilization system in the inlet of the YF-12 is considered for vortex valves, slide valves, and poppet valves. Analytical estimation of valve performance indicates that only the slide and poppet valves located in the inlet cowl can meet the desired steady state stabilizing flows, and of the two the poppet valve is substantially faster in response to dynamic disturbances. The poppet valve is, therefore, selected as the best shock stability system for the YF-12 inlet.
Schwartz, Rachel S; Mueller, Rachel L
2010-01-11
Estimates of divergence dates between species improve our understanding of processes ranging from nucleotide substitution to speciation. Such estimates are frequently based on molecular genetic differences between species; therefore, they rely on accurate estimates of the number of such differences (i.e. substitutions per site, measured as branch length on phylogenies). We used simulations to determine the effects of dataset size, branch length heterogeneity, branch depth, and analytical framework on branch length estimation across a range of branch lengths. We then reanalyzed an empirical dataset for plethodontid salamanders to determine how inaccurate branch length estimation can affect estimates of divergence dates. The accuracy of branch length estimation varied with branch length, dataset size (both number of taxa and sites), branch length heterogeneity, branch depth, dataset complexity, and analytical framework. For simple phylogenies analyzed in a Bayesian framework, branches were increasingly underestimated as branch length increased; in a maximum likelihood framework, longer branch lengths were somewhat overestimated. Longer datasets improved estimates in both frameworks; however, when the number of taxa was increased, estimation accuracy for deeper branches was less than for tip branches. Increasing the complexity of the dataset produced more misestimated branches in a Bayesian framework; however, in an ML framework, more branches were estimated more accurately. Using ML branch length estimates to re-estimate plethodontid salamander divergence dates generally resulted in an increase in the estimated age of older nodes and a decrease in the estimated age of younger nodes. Branch lengths are misestimated in both statistical frameworks for simulations of simple datasets. However, for complex datasets, length estimates are quite accurate in ML (even for short datasets), whereas few branches are estimated accurately in a Bayesian framework. 
Our reanalysis of empirical data demonstrates the magnitude of the effects of Bayesian branch length misestimation on divergence date estimates. Because branch lengths for empirical datasets can be estimated most reliably in an ML framework when branches are <1 substitution/site and datasets are at least 1 kb, we suggest that divergence date estimates based on datasets, branch lengths, and/or analytical techniques that fall outside these parameters should be interpreted with caution.
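A minimal example of model-based branch length estimation, measured as substitutions per site, is the Jukes-Cantor distance; it is a far simpler stand-in for the ML and Bayesian estimation studied in the paper, but shows why the corrected length exceeds the raw proportion of differing sites:

```python
import math

def jukes_cantor_distance(seq1, seq2):
    """Jukes-Cantor estimate of substitutions per site.

    d = -(3/4) ln(1 - 4p/3), where p is the proportion of differing
    sites between two aligned sequences; the correction accounts for
    multiple substitutions at the same site.
    """
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    p = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    if p >= 0.75:
        raise ValueError("sequences too diverged for the JC correction")
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

# 5 differences over 100 sites: the corrected distance slightly
# exceeds the raw proportion p = 0.05.
d = jukes_cantor_distance("A" * 95 + "C" * 5, "A" * 100)
```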
The design, analysis and experimental evaluation of an elastic model wing
NASA Technical Reports Server (NTRS)
Cavin, R. K., III; Thisayakorn, C.
1974-01-01
An elastic orbiter model was developed to evaluate the effectiveness of aeroelasticity computer programs. The elasticity properties were introduced by constructing beam-like straight wings for the wind tunnel model. A standard influence coefficient mathematical model was used to estimate aeroelastic effects analytically. In general, good agreement was obtained between the empirical and analytical estimates of the deformed shape. However, in the static aeroelasticity case, it was found that the physical wing exhibited less bending and more twist than was predicted by theory.
Study of a LH2-fueled topping cycle engine for aircraft propulsion
NASA Technical Reports Server (NTRS)
Turney, G. E.; Fishbach, L. H.
1983-01-01
An analytical investigation was made of a topping cycle aircraft engine system which uses a cryogenic fuel. This system consists of a main turboshaft engine which is mechanically coupled (by cross-shafting) to a topping loop which augments the shaft power output of the system. The thermodynamic performance of the topping cycle engine was analyzed and compared with that of a reference (conventional-type) turboshaft engine. For the cycle operating conditions selected, the performance of the topping cycle engine in terms of brake specific fuel consumption (bsfc) was determined to be about 12 percent better than that of the reference turboshaft engine. Engine weights were estimated for both the topping cycle engine and the reference turboshaft engine. These estimates were based on a common shaft power output for each engine. Results indicate that the weight of the topping cycle engine is comparable to that of the reference turboshaft engine. Previously announced in STAR as N83-34942
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramirez Aviles, Camila A.; Rao, Nageswara S.
We consider the problem of inferring the operational state of a reactor facility by using measurements from a radiation sensor network, which is deployed around the facility’s ventilation stack. The radiation emissions from the stack decay with distance, and the corresponding measurements are inherently random with parameters determined by radiation intensity levels at the sensor locations. We fuse measurements from network sensors to estimate the intensity at the stack, and use this estimate in a one-sided Sequential Probability Ratio Test (SPRT) to infer the on/off state of the reactor facility. We demonstrate the superior performance of this method over conventional majority vote fusers and individual sensors using (i) test measurements from a network of NaI sensors, and (ii) emulated measurements using radioactive effluents collected at a reactor facility stack. We analytically quantify the performance improvements of individual sensors and their networks with adaptive thresholds over those with fixed ones, by using the packing number of the radiation intensity space.
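The one-sided SPRT decision logic described above can be sketched as follows, assuming Poisson-distributed sensor counts and illustrative on/off intensities; in the paper's setting, the fused network estimate would play the role of the observed counts:

```python
import math

def sprt_poisson(counts, lam_off, lam_on, alpha=0.05, beta=0.05):
    """Sequential Probability Ratio Test on Poisson counts: accumulate the
    log-likelihood ratio and decide 'on' at the upper threshold,
    'off' at the lower one."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for k in counts:
        # Poisson log-likelihood ratio increment for one observation.
        llr += k * math.log(lam_on / lam_off) - (lam_on - lam_off)
        if llr >= upper:
            return "on"
        if llr <= lower:
            return "off"
    return "undecided"

state = sprt_poisson([6, 7, 5], lam_off=1.0, lam_on=5.0)
```

Unlike a fixed-sample test, the SPRT stops as soon as the evidence crosses a threshold, which is why it suits online facility monitoring.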
Mousa-Pasandi, Mohammad E; Plant, David V
2010-09-27
We report and investigate the feasibility of zero-overhead laser phase noise compensation (PNC) for long-haul coherent optical orthogonal frequency division multiplexing (CO-OFDM) transmission systems, using the decision-directed phase equalizer (DDPE). DDPE updates the equalization parameters on a symbol-by-symbol basis after an initial decision making stage and retrieves an estimation of the phase noise value by extracting and averaging the phase drift of all OFDM sub-channels. Subsequently, a second equalization is performed by using the estimated phase noise value which is followed by a final decision making stage. We numerically compare the performance of DDPE and the CO-OFDM conventional equalizer (CE) for different laser linewidth values after transmission over 2000 km of uncompensated single-mode fiber (SMF) at 40 Gb/s and investigate the effect of fiber nonlinearity and amplified spontaneous emission (ASE) noise on the received signal quality. Furthermore, we analytically analyze the complexity of DDPE versus CE in terms of the number of required complex multiplications per bit.
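The phase-retrieval step of DDPE, extracting and averaging the phase drift of all sub-channels against their decided symbols, can be sketched as below; the QPSK symbols and injected phase value are hypothetical, and the full DDPE pipeline (two decision stages, symbol-by-symbol updates) is not reproduced:

```python
import cmath

def estimate_common_phase(received, decided):
    """Estimate the common phase rotation of one OFDM symbol by summing
    r * conj(d) over all sub-channels and taking the angle; summing
    before taking the phase averages out per-subcarrier noise."""
    acc = sum(r * d.conjugate() for r, d in zip(received, decided))
    return cmath.phase(acc)

# Hypothetical QPSK sub-channel symbols rotated by a known phase noise value.
true_phase = 0.3
decided = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]
received = [s * cmath.exp(1j * true_phase) for s in decided]

phi = estimate_common_phase(received, decided)
corrected = [r * cmath.exp(-1j * phi) for r in received]  # second equalization
```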
Gear crack propagation investigations
NASA Technical Reports Server (NTRS)
Lewicki, David G.; Ballarini, Roberto
1996-01-01
Analytical and experimental studies were performed to investigate the effect of gear rim thickness on crack propagation life. The FRANC (FRacture ANalysis Code) computer program was used to simulate crack propagation. The FRANC program used principles of linear elastic fracture mechanics, finite element modeling, and a unique re-meshing scheme to determine crack tip stress distributions, estimate stress intensity factors, and model crack propagation. Various fatigue crack growth models were used to estimate crack propagation life based on the calculated stress intensity factors. Experimental tests were performed in a gear fatigue rig to validate predicted crack propagation results. Test gears were installed with special crack propagation gages in the tooth fillet region to measure bending fatigue crack growth. Good correlation between predicted and measured crack growth was achieved when the fatigue crack closure concept was introduced into the analysis. As the gear rim thickness decreased, the compressive cyclic stress in the gear tooth fillet region increased. This retarded crack growth and increased the number of crack propagation cycles to failure.
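The final step described above, estimating propagation life from computed stress intensity factors, is commonly done with a Paris-law fatigue crack growth model. A hedged sketch with hypothetical, nondimensional material constants, checked against the closed-form integral for a constant geometry factor (the paper's FRANC analysis and crack-closure correction are far more detailed):

```python
import math

def paris_life(a0, af, delta_sigma, c, m, y=1.0, steps=20000):
    """Crack propagation life by midpoint integration of
    dN = da / (C * dK^m), with dK = Y * delta_sigma * sqrt(pi * a)."""
    da = (af - a0) / steps
    cycles = 0.0
    for i in range(steps):
        a_mid = a0 + (i + 0.5) * da
        dk = y * delta_sigma * math.sqrt(math.pi * a_mid)
        cycles += da / (c * dk ** m)
    return cycles

def paris_life_closed_form(a0, af, delta_sigma, c, m, y=1.0):
    """Closed-form life for constant Y and m != 2."""
    k = c * (y * delta_sigma * math.sqrt(math.pi)) ** m
    e = 1.0 - m / 2.0
    return (af ** e - a0 ** e) / (k * e)

# Hypothetical crack growing from 1 mm to 10 mm under a constant stress range.
n_num = paris_life(1e-3, 1e-2, 100.0, 1e-11, 3.0)
n_ref = paris_life_closed_form(1e-3, 1e-2, 100.0, 1e-11, 3.0)
```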
A Fault Tolerant System for an Integrated Avionics Sensor Configuration
NASA Technical Reports Server (NTRS)
Caglayan, A. K.; Lancraft, R. E.
1984-01-01
An aircraft sensor fault tolerant system methodology for the Transport Systems Research Vehicle in a Microwave Landing System (MLS) environment is described. The fault tolerant system provides reliable estimates in the presence of possible failures both in ground-based navigation aids, and in on-board flight control and inertial sensors. Sensor failures are identified by utilizing the analytic relationships between the various sensors arising from the aircraft point mass equations of motion. The estimation and failure detection performance of the software implementation (called FINDS) of the developed system was analyzed on a nonlinear digital simulation of the research aircraft. Simulation results showing the detection performance of FINDS, using a dual redundant sensor complement, are presented for bias, hardover, null, ramp, increased noise and scale factor failures. In general, the results show that FINDS can distinguish between normal operating sensor errors and failures while providing an excellent detection speed for bias failures in the MLS, indicated airspeed, attitude and radar altimeter sensors.
Process-based Cost Estimation for Ramjet/Scramjet Engines
NASA Technical Reports Server (NTRS)
Singh, Brijendra; Torres, Felix; Nesman, Miles; Reynolds, John
2003-01-01
Process-based cost estimation plays a key role in effecting cultural change that integrates distributed science, technology and engineering teams to rapidly create innovative and affordable products. Working together, NASA Glenn Research Center and Boeing Canoga Park have developed a methodology of process-based cost estimation bridging the methodologies of high-level parametric models and detailed bottom-up estimation. The NASA GRC/Boeing CP process-based cost model provides a probabilistic structure of layered cost drivers. High-level inputs characterize mission requirements, system performance, and relevant economic factors. Design alternatives are extracted from a standard, product-specific work breakdown structure to pre-load lower-level cost driver inputs and generate the cost-risk analysis. As the product design progresses and matures, the lower-level, more detailed cost drivers can be reassessed and the projected variation of input values narrowed, thereby generating a progressively more accurate estimate of cost-risk. Incorporated into the process-based cost model are techniques for decision analysis, specifically, the analytic hierarchy process (AHP) and functional utility analysis. Design alternatives may then be evaluated not just on cost-risk, but also on user-defined performance and schedule criteria. This implementation of full trade-study support contributes significantly to the realization of the integrated development environment. The process-based cost estimation model generates development and manufacturing cost estimates. The development team plans to expand the manufacturing process base from approximately 80 manufacturing processes to over 250 processes. Operation and support cost modeling is also envisioned. Process-based estimation considers the materials, resources, and processes in establishing cost-risk and, rather than depending on weight as an input, actually estimates weight along with cost and schedule.
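The analytic hierarchy process (AHP) mentioned above derives priority weights for decision criteria from a pairwise comparison matrix, conventionally as the matrix's principal eigenvector. A minimal sketch using power iteration on a perfectly consistent, illustrative matrix (real AHP inputs use the 1-9 judgment scale and include a consistency check):

```python
def ahp_weights(matrix, iters=100):
    """Priority weights of a pairwise comparison matrix as its principal
    eigenvector, computed by power iteration in pure Python."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]   # renormalize so the weights sum to 1
    return w

# Perfectly consistent matrix built from known weights 0.5, 0.3, 0.2,
# so the eigenvector should recover them exactly.
true_w = [0.5, 0.3, 0.2]
M = [[wi / wj for wj in true_w] for wi in true_w]
w = ahp_weights(M)
```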
NASA Astrophysics Data System (ADS)
O'Shaughnessy, Richard; Blackman, Jonathan; Field, Scott E.
2017-07-01
The recent direct observation of gravitational waves has further emphasized the desire for fast, low-cost, and accurate methods to infer the parameters of gravitational wave sources. Due to expense in waveform generation and data handling, the cost of evaluating the likelihood function limits the computational performance of these calculations. Building on recently developed surrogate models and a novel parameter estimation pipeline, we show how to quickly generate the likelihood function as an analytic, closed-form expression. Using a straightforward variant of a production-scale parameter estimation code, we demonstrate our method using surrogate models of effective-one-body and numerical relativity waveforms. Our study is the first time these models have been used for parameter estimation and one of the first ever parameter estimation calculations with multi-modal numerical relativity waveforms, which include all ℓ ≤ 4 modes. Our grid-free method enables rapid parameter estimation for any waveform with a suitable reduced-order model. The methods described in this paper may also find use in other data analysis studies, such as vetting coincident events or the computation of the coalescing-compact-binary detection statistic.
Li, Zhigang; Wang, Qiaoyun; Lv, Jiangtao; Ma, Zhenhe; Yang, Linjuan
2015-06-01
Spectroscopy is often applied when a rapid quantitative analysis is required, but one challenge is the translation of raw spectra into a final analysis. Derivative spectra are often used as a preliminary preprocessing step to resolve overlapping signals, enhance signal properties, and suppress unwanted spectral features that arise due to non-ideal instrument and sample properties. In this study, to improve quantitative analysis of near-infrared spectra, derivatives of noisy raw spectral data need to be estimated with high accuracy. A new spectral estimator based on the singular perturbation technique, called the singular perturbation spectra estimator (SPSE), is presented, and the stability analysis of the estimator is given. Theoretical analysis and simulation experimental results confirm that the derivatives can be estimated with high accuracy using this estimator. Furthermore, the effectiveness of the estimator for processing noisy infrared spectra is evaluated using the analysis of beer and marzipan spectra. The derivative spectra of the beer and marzipan samples are used to build calibration models using partial least squares (PLS) modeling. The results show that the PLS based on the new estimator can achieve better performance compared with the Savitzky-Golay algorithm and can serve as an alternative choice for quantitative analytical applications.
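The Savitzky-Golay baseline against which the SPSE is compared can be sketched as follows: a 5-point, quadratic-fit first-derivative filter whose central-point convolution weights are (-2, -1, 0, 1, 2)/10. It reproduces the derivative of polynomials up to the fit order exactly; the window length and the test signal below are illustrative:

```python
def savgol_first_derivative(y, h=1.0):
    """First derivative of evenly spaced samples via a 5-point,
    quadratic-fit Savitzky-Golay filter (interior points only)."""
    c = (-2.0, -1.0, 0.0, 1.0, 2.0)
    out = []
    for i in range(2, len(y) - 2):
        out.append(sum(ci * y[i + k] for k, ci in zip(range(-2, 3), c))
                   / (10.0 * h))
    return out

xs = [0.1 * i for i in range(50)]
y = [x * x for x in xs]                  # exact derivative is 2x
dy = savgol_first_derivative(y, h=0.1)   # dy[j] estimates the slope at xs[j+2]
```

Longer windows and higher fit orders trade noise suppression against distortion of narrow spectral features, which is the regime the SPSE targets.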
Babiloni, F; Babiloni, C; Carducci, F; Fattorini, L; Onorati, P; Urbano, A
1996-04-01
This paper presents a realistic Laplacian (RL) estimator based on a tensorial formulation of the surface Laplacian (SL) that uses the 2-D thin plate spline function to obtain a mathematical description of a realistic scalp surface. Because of this tensorial formulation, the RL does not need an orthogonal reference frame placed on the realistic scalp surface. In simulation experiments the RL was estimated with an increasing number of "electrodes" (up to 256) on a mathematical scalp model, the analytic Laplacian being used as a reference. Second and third order spherical spline Laplacian estimates were examined for comparison. Noise of increasing magnitude and spatial frequency was added to the simulated potential distributions. Movement-related potentials and somatosensory evoked potentials sampled with 128 electrodes were used to estimate the RL on a realistically shaped, MR-constructed model of the subject's scalp surface. The RL was also estimated on a mathematical spherical scalp model computed from the real scalp surface. Simulation experiments showed that the performances of the RL estimator were similar to those of the second and third order spherical spline Laplacians. Furthermore, the information content of scalp-recorded potentials was clearly better when the RL estimator computed the SL of the potential on an MR-constructed scalp surface model.
Geospatial Analytics in Retail Site Selection and Sales Prediction.
Ting, Choo-Yee; Ho, Chiung Ching; Yee, Hui Jia; Matsah, Wan Razali
2018-03-01
Studies have shown that certain features from geography, demography, trade area, and environment can play a vital role in retail site selection, largely due to the impact they assert on retail performance. Although the relevant features could be elicited by domain experts, determining the optimal feature set can be an intractable and labor-intensive exercise. The challenges center around (1) how to determine the features that are important to a particular retail business and (2) how to estimate retail sales performance given a new location. The challenges become apparent when the features vary across time. In this light, this study proposed a nonintervening approach by employing feature selection algorithms and subsequent sales prediction through similarity-based methods. The results of prediction were validated by domain experts. In this study, data sets from different sources were transformed and aggregated before an analytics data set that is ready for analysis purposes could be obtained. The data sets included data about feature location, population count, property type, education status, and monthly sales from 96 branches of a telecommunication company in Malaysia. The findings suggested that (1) optimal retail performance can only be achieved through fulfillment of specific location features together with the surrounding trade area characteristics and (2) a similarity-based method can provide a solution to retail sales prediction.
NASA Astrophysics Data System (ADS)
Rakotomanga, Prisca; Soussen, Charles; Blondel, Walter C. P. M.
2017-03-01
Diffuse reflectance spectroscopy (DRS) has been acknowledged as a valuable optical biopsy tool for characterizing pathological modifications in vivo in epithelial tissues such as cancer. In spatially resolved DRS, accurate and robust estimation of the optical parameters (OP) of biological tissues is a major challenge due to the complexity of the physical models. Solving this inverse problem requires consideration of 3 components: the forward model, the cost function, and the optimization algorithm. This paper presents a comparative numerical study of the performance in estimating OP depending on the choice made for each of the latter components. Mono- and bi-layer tissue models are considered. Monowavelength (scalar) absorption and scattering coefficients are estimated. As a forward model, diffusion approximation analytical solutions with and without noise are implemented. Several cost functions are evaluated, possibly including normalized data terms. Two local optimization methods, Levenberg-Marquardt and Trust-Region-Reflective, are considered. Because they may be sensitive to the initial setting, a global optimization approach is proposed to improve the estimation accuracy. This algorithm is based on repeated calls to the above-mentioned local methods, with initial parameters randomly sampled. Two global optimization methods, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are also implemented. Estimation performances are evaluated in terms of relative errors between the ground truth and the estimated values for each set of unknown OP. The combinations of the number of variables to be estimated, the nature of the forward model, the cost function to be minimized, and the optimization method are discussed.
Connolly, Mark P; Tashjian, Cole; Kotsopoulos, Nikolaos; Bhatt, Aomesh; Postma, Maarten J
2017-07-01
Numerous approaches are used to estimate indirect productivity losses using various wage estimates applied to poor health in working-aged adults. Considering the different wage estimation approaches observed in the published literature, we sought to assess variation in productivity loss estimates when using average wages compared with age-specific wages. Published estimates for average and age-specific wages for combined male/female wages were obtained from the UK Office of National Statistics. A polynomial interpolation was used to convert 5-year age-banded wage data into annual age-specific wage estimates. To compare indirect cost estimates, average wages and age-specific wages were used to project productivity losses at various stages of life based on the human capital approach. Discount rates of 0, 3, and 6 % were applied to projected age-specific and average wage losses. Using average wages was found to overestimate lifetime wages in conditions afflicting those aged 1-27 and 57-67, while underestimating lifetime wages in those aged 27-57. The difference was most significant for children, where the average wage overestimated wages by 15 %, and for 40-year-olds, where it underestimated wages by 14 %. Large differences in projecting productivity losses exist when using the average wage applied over a lifetime. Specifically, use of average wages overestimates productivity losses by between 8 and 15 % for childhood illnesses. Furthermore, during prime working years, use of average wages will underestimate productivity losses by 14 %. We suggest that to achieve more precise estimates of productivity losses, age-specific wages should become the standard analytic approach.
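The human capital approach used above projects annual wages from illness onset to retirement and discounts them back to present value. A minimal sketch with a hypothetical flat wage profile and the study's 0 and 3 % discount rates (the paper instead interpolates age-banded, age-specific wages):

```python
def discounted_loss(wages_by_age, onset_age, retirement_age, rate):
    """Human-capital productivity loss: discounted sum of annual wages
    from illness onset up to (but not including) retirement age."""
    total = 0.0
    for t, age in enumerate(range(onset_age, retirement_age)):
        total += wages_by_age[age] / (1.0 + rate) ** t
    return total

# Hypothetical flat wage profile, useful for checking against a baseline:
# with no discounting the loss is simply wage * remaining working years.
flat = {age: 30000.0 for age in range(0, 68)}
loss_0 = discounted_loss(flat, 40, 67, 0.00)   # 27 years * 30000
loss_3 = discounted_loss(flat, 40, 67, 0.03)   # discounting shrinks the loss
```

Substituting an age-specific wage curve for `flat` is exactly where the average-wage and age-specific estimates diverge.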
Ihssane, B; Bouchafra, H; El Karbane, M; Azougagh, M; Saffaj, T
2016-05-01
We propose in this work an efficient way to evaluate the measurement uncertainty at the end of the development step of an analytical method, since this assessment provides an indication of the performance of the optimization process. The estimation of the uncertainty is done through a robustness test by applying a Plackett-Burman design, investigating six parameters influencing the simultaneous chromatographic assay of five water-soluble vitamins. The estimated effects of the variation of each parameter are translated into a standard uncertainty value at each concentration level. The relative uncertainty values obtained do not exceed the acceptance limit of 5%, showing that the procedure development was well done. In addition, a statistical comparison of standard uncertainties after the development stage with those of the validation step indicates that the estimated uncertainties are equivalent. The results obtained clearly show the performance and capacity of the chromatographic method to simultaneously assay the five vitamins and its suitability for use in routine application. Copyright © 2015 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.
Application of Soft Computing in Coherent Communications Phase Synchronization
NASA Technical Reports Server (NTRS)
Drake, Jeffrey T.; Prasad, Nadipuram R.
2000-01-01
The use of soft computing techniques in coherent communications phase synchronization provides an alternative to analytical or hard computing methods. This paper discusses a novel use of Adaptive Neuro-Fuzzy Inference Systems (ANFIS) for phase synchronization in coherent communications systems utilizing Multiple Phase Shift Keying (MPSK) modulation. A brief overview of the M-PSK digital communications bandpass modulation technique is presented and its need for phase synchronization is discussed. We briefly describe the hybrid platform developed by Jang that incorporates fuzzy/neural structures, namely the Adaptive Neuro-Fuzzy Inference System (ANFIS). We then discuss application of ANFIS to phase estimation for M-PSK. The modeling of both explicit and implicit phase estimation schemes for M-PSK symbols with unknown structure is discussed. Performance results from simulation of the above scheme are presented.
NASA Astrophysics Data System (ADS)
Das Bhowmik, R.; Arumugam, S.
2015-12-01
Multivariate downscaling techniques have exhibited superiority over univariate regression schemes in terms of preserving cross-correlations between multiple variables (precipitation and temperature) from GCMs. This study focuses on two aspects: (a) developing an analytical solution for estimating biases in cross-correlations from univariate downscaling approaches and (b) quantifying the uncertainty in land-surface states and fluxes due to biases in cross-correlations in downscaled climate forcings. Both these aspects are evaluated using climate forcings available from both historical climate simulations and CMIP5 hindcasts over the entire US. The analytical solution relates the univariate regression parameters, the coefficient of determination of the regression, and the covariance ratio between GCM and downscaled values. The analytical solutions are compared with the downscaled univariate forcings by choosing the desired p-value (Type-1 error) in preserving the observed cross-correlation. For quantifying the impacts of cross-correlation biases on estimating streamflow and groundwater, we corrupt the downscaled climate forcings with different cross-correlation structures.
NASA Astrophysics Data System (ADS)
Li, Qiang; Argatov, Ivan; Popov, Valentin L.
2018-04-01
A recent paper by Popov, Pohrt and Li (PPL) in Friction investigated adhesive contacts of flat indenters in unusual shapes using numerical, analytical and experimental methods. Based on that paper, we analyze some special cases for which analytical solutions are known. As in the PPL paper, we consider adhesive contact in the Johnson-Kendall-Roberts approximation. Depending on the energy balance, different upper and lower estimates are obtained in terms of certain integral characteristics of the contact area. The special cases of an elliptical punch as well as a system of two circular punches are considered. Theoretical estimations for the first critical force (force at which the detachment process begins) are confirmed by numerical simulations using the adhesive boundary element method. It is shown that simpler approximations for the pull-off force, based both on the Holm radius of contact and the contact area, substantially overestimate the maximum adhesive force.
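For the circular flat punch discussed above, the first critical (pull-off) force in the JKR limit has a classical closed form due to Kendall, F_c = sqrt(8 * pi * a^3 * E* * Δγ), which serves as a benchmark for the upper and lower estimates the paper develops. A sketch with purely illustrative parameter values:

```python
import math

def kendall_pulloff_force(radius, e_star, work_of_adhesion):
    """Critical detachment force of a flat-ended cylindrical punch in the
    JKR limit (Kendall, 1971): F_c = sqrt(8*pi*a^3 * E* * dgamma)."""
    return math.sqrt(8.0 * math.pi * radius ** 3 * e_star * work_of_adhesion)

# Hypothetical punch: a = 1 mm, E* = 1 MPa, work of adhesion 0.05 J/m^2.
f1 = kendall_pulloff_force(1e-3, 1e6, 0.05)
# Doubling the contact radius scales the pull-off force by 2**1.5,
# since F_c grows with a**(3/2).
f2 = kendall_pulloff_force(2e-3, 1e6, 0.05)
```

The a^(3/2) scaling is one reason simpler estimates based on an effective contact radius can substantially overestimate the maximum adhesive force for non-circular shapes.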
Bayesian estimation of the discrete coefficient of determination.
Chen, Ting; Braga-Neto, Ulisses M
2016-12-01
The discrete coefficient of determination (CoD) measures the nonlinear interaction between discrete predictor and target variables and has had far-reaching applications in Genomic Signal Processing. Previous work has addressed the inference of the discrete CoD using classical parametric and nonparametric approaches. In this paper, we introduce a Bayesian framework for the inference of the discrete CoD. We derive analytically the optimal minimum mean-square error (MMSE) CoD estimator, as well as a CoD estimator based on the Optimal Bayesian Predictor (OBP). For the latter estimator, exact expressions for its bias, variance, and root-mean-square (RMS) are given. The accuracy of both Bayesian CoD estimators with non-informative and informative priors, under fixed or random parameters, is studied via analytical and numerical approaches. We also demonstrate the application of the proposed Bayesian approach in the inference of gene regulatory networks, using gene-expression data from a previously published study on metastatic melanoma.
Jędrkiewicz, Renata; Orłowski, Aleksander; Namieśnik, Jacek; Tobiszewski, Marek
2016-01-15
In this study we perform ranking of analytical procedures for 3-monochloropropane-1,2-diol determination in soy sauces by the PROMETHEE method. Multicriteria decision analysis was performed for three different scenarios, metrological, economic and environmental, by application of different weights to the decision-making criteria. All three scenarios indicate a capillary electrophoresis-based procedure as the most preferable. Apart from that, the details of the ranking results differ for these three scenarios. A second run of rankings was done for scenarios that include only metrological, economic or environmental criteria, neglecting the others. These results show that the green analytical chemistry-based selection correlates with the economic ranking, while there is no correlation with the metrological one. This is an implication that green analytical chemistry can be brought into laboratories without analytical performance costs, and it is even supported by economic reasons. Copyright © 2015 Elsevier B.V. All rights reserved.
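The PROMETHEE II net-flow ranking used in the study can be sketched as below with the "usual" (step) preference function: each alternative's net flow is how strongly it outranks the others minus how strongly it is outranked. The procedures, criterion scores, and weights are hypothetical, not the paper's data:

```python
def promethee_net_flows(alternatives, weights, maximize):
    """PROMETHEE II net outranking flows with the 'usual' preference
    function (full preference if strictly better on a criterion)."""
    n = len(alternatives)

    def pref(a, b):
        s = 0.0
        for c, w in enumerate(weights):
            better = a[c] > b[c] if maximize[c] else a[c] < b[c]
            s += w if better else 0.0
        return s

    phi = []
    for i in range(n):
        plus = sum(pref(alternatives[i], alternatives[j])
                   for j in range(n) if j != i)
        minus = sum(pref(alternatives[j], alternatives[i])
                    for j in range(n) if j != i)
        phi.append((plus - minus) / (n - 1))
    return phi

# Hypothetical procedures scored on (recovery %, cost, waste volume).
procs = [(95, 10, 5), (90, 4, 2), (85, 3, 1)]
weights = [0.6, 0.2, 0.2]          # metrological scenario: recovery dominates
maximize = [True, False, False]    # maximize recovery, minimize cost and waste
phi = promethee_net_flows(procs, weights, maximize)
ranking = sorted(range(len(procs)), key=lambda i: -phi[i])
```

Re-running with different `weights` is exactly how the study's metrological, economic, and environmental scenarios are generated.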
Impact of transverse and longitudinal dispersion on first-order degradation rate constant estimation
NASA Astrophysics Data System (ADS)
Stenback, Greg A.; Ong, Say Kee; Rogers, Shane W.; Kjartanson, Bruce H.
2004-09-01
A two-dimensional analytical model is employed for estimating the first-order degradation rate constant of hydrophobic organic compounds (HOCs) in contaminated groundwater under steady-state conditions. The model may utilize all aqueous concentration data collected downgradient of a source area, but does not require that any data be collected along the plume centerline. Using a least squares fit of the model to aqueous concentrations measured in monitoring wells, degradation rate constants were estimated at a former manufactured gas plant (FMGP) site in the Midwest U.S. The estimated degradation rate constants are 0.0014, 0.0034, 0.0031, 0.0019, and 0.0053 day^-1 for acenaphthene, naphthalene, benzene, ethylbenzene, and toluene, respectively. These estimated rate constants were as low as one-half those estimated with the one-dimensional (centerline) approach of Buscheck and Alcantar [Buscheck, T.E., Alcantar, C.M., 1995. Regression techniques and analytical solutions to demonstrate intrinsic bioremediation. In: Hinchee, R.E., Wilson, J.T., Downey, D.C. (Eds.), Intrinsic Bioremediation, Battelle Press, Columbus, OH, pp. 109-116], which does not account for transverse dispersivity. Varying the transverse and longitudinal dispersivity values over one order of magnitude for toluene data obtained from the FMGP site resulted in nearly a threefold variation in the estimated degradation rate constant, highlighting the importance of reliable estimates of the dispersion coefficients for obtaining reasonable estimates of the degradation rate constants. These results have significant implications for decision making and site management, where overestimation of a degradation rate may result in remediation times and bioconversion factors that exceed expectations.
For a complex source area or non-steady-state plume, a superposition of analytical models that incorporate longitudinal and transverse dispersion and time may be used at sites where the centerline method would not be applicable.
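For reference, the one-dimensional centerline approach of Buscheck and Alcantar that the study compares against can be sketched as a linear regression of ln C on downgradient distance, followed by a back-calculation of the rate constant from the slope, seepage velocity, and longitudinal dispersivity. The plume parameters below are hypothetical, and the synthetic data are noise-free so the known rate is recovered exactly:

```python
import math

def fit_decay_slope(xs, ln_c):
    """Least-squares slope of ln(concentration) versus distance."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ln_c) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ln_c))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def degradation_rate(slope, velocity, alpha_x):
    """Centerline (Buscheck & Alcantar-type) first-order rate constant
    from the ln C slope; inverts
    slope = (1 - sqrt(1 + 4*k*alpha_x/v)) / (2*alpha_x)."""
    return velocity * ((1.0 - 2.0 * alpha_x * slope) ** 2 - 1.0) / (4.0 * alpha_x)

# Synthetic centerline plume with a known rate, then recover it.
v, ax, k_true = 0.1, 2.0, 0.003   # m/day, m, day^-1 (hypothetical)
m = (1.0 - math.sqrt(1.0 + 4.0 * k_true * ax / v)) / (2.0 * ax)
xs = [float(x) for x in range(0, 101, 10)]
ln_c = [m * x for x in xs]        # ln(C/C0) along the centerline
k_est = degradation_rate(fit_decay_slope(xs, ln_c), v, ax)
```

With field data the slope (and hence k) inherits the uncertainty in the dispersivities, which is exactly the sensitivity the abstract quantifies.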
Large Sample Confidence Intervals for Item Response Theory Reliability Coefficients
ERIC Educational Resources Information Center
Andersson, Björn; Xin, Tao
2018-01-01
In applications of item response theory (IRT), an estimate of the reliability of the ability estimates or sum scores is often reported. However, analytical expressions for the standard errors of the estimators of the reliability coefficients are not available in the literature and therefore the variability associated with the estimated reliability…
Makarov, Sergey N.; Yanamadala, Janakinadh; Piazza, Matthew W.; Helderman, Alex M.; Thang, Niang S.; Burnham, Edward H.; Pascual-Leone, Alvaro
2016-01-01
Goals: Transcranial magnetic stimulation (TMS) is increasingly used as a diagnostic and therapeutic tool for numerous neuropsychiatric disorders. The use of TMS might cause whole-body exposure to undesired induced currents in patients and TMS operators. The aim of the present study is to test and justify a simple analytical model known previously, which may be helpful as an upper estimate of eddy current density at a particular distant observation point for any body composition and any coil setup. Methods: We compare the analytical solution with comprehensive adaptive mesh refinement-based FEM simulations of a detailed full-body human model, two coil types, five coil positions, about 100,000 observation points, and two distinct pulse rise times, thus providing a representative number of different data sets for comparison, while also using other numerical data. Results: Our simulations reveal that, after a certain modification, the analytical model provides an upper estimate for the eddy current density at any location within the body. In particular, it overestimates the peak eddy currents at distant locations from a TMS coil by a factor of 10 on average. Conclusion: The simple analytical model tested in the present study may be valuable as a rapid method to safely estimate levels of TMS currents at different locations within a human body. Significance: At present, safe limits of general exposure to TMS electric and magnetic fields are an open subject, including fetal exposure for pregnant women. PMID:26685221
Optimizing the learning rate for adaptive estimation of neural encoding models
2018-01-01
Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. 
The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains. PMID:29813069
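The steady-state-error versus convergence-time trade-off formalized above can be illustrated on a scalar fixed-gain estimator of a constant parameter, x_hat += eta * (y - x_hat). This toy analogue is not the paper's Bayesian filter derivation, but for this simple update both sides of the trade-off have closed forms: the error decays as (1 - eta)^n without noise, and the steady-state error variance is eta * sigma^2 / (2 - eta) with observation noise variance sigma^2:

```python
import math

def steady_state_error_var(eta, noise_var):
    """Steady-state estimate variance of x_hat += eta * (y - x_hat)
    for a constant parameter observed with noise variance noise_var."""
    return eta * noise_var / (2.0 - eta)

def convergence_time(eta, tol=0.01):
    """Steps for the noiseless estimation error to shrink below a
    fraction tol of its initial value, since it decays as (1 - eta)^n."""
    return math.ceil(math.log(tol) / math.log(1.0 - eta))

fast, slow = 0.5, 0.05
t_fast, t_slow = convergence_time(fast), convergence_time(slow)
p_fast, p_slow = steady_state_error_var(fast, 1.0), steady_state_error_var(slow, 1.0)
# Larger learning rate: fewer steps to converge, larger steady-state error.
```

Choosing eta to hit a variance target while minimizing steps (or vice versa) is the scalar analogue of the calibration problem the paper solves for adaptive Bayesian filters.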
Optimizing the learning rate for adaptive estimation of neural encoding models.
Hsieh, Han-Lin; Shanechi, Maryam M
2018-05-01
Closed-loop neurotechnologies often need to adaptively learn an encoding model that relates the neural activity to the brain state, and is used for brain state decoding. The speed and accuracy of adaptive learning algorithms are critically affected by the learning rate, which dictates how fast model parameters are updated based on new observations. Despite the importance of the learning rate, currently an analytical approach for its selection is largely lacking and existing signal processing methods vastly tune it empirically or heuristically. Here, we develop a novel analytical calibration algorithm for optimal selection of the learning rate in adaptive Bayesian filters. We formulate the problem through a fundamental trade-off that learning rate introduces between the steady-state error and the convergence time of the estimated model parameters. We derive explicit functions that predict the effect of learning rate on error and convergence time. Using these functions, our calibration algorithm can keep the steady-state parameter error covariance smaller than a desired upper-bound while minimizing the convergence time, or keep the convergence time faster than a desired value while minimizing the error. We derive the algorithm both for discrete-valued spikes modeled as point processes nonlinearly dependent on the brain state, and for continuous-valued neural recordings modeled as Gaussian processes linearly dependent on the brain state. Using extensive closed-loop simulations, we show that the analytical solution of the calibration algorithm accurately predicts the effect of learning rate on parameter error and convergence time. Moreover, the calibration algorithm allows for fast and accurate learning of the encoding model and for fast convergence of decoding to accurate performance. Finally, larger learning rates result in inaccurate encoding models and decoders, and smaller learning rates delay their convergence. 
The calibration algorithm provides a novel analytical approach to predictably achieve a desired level of error and convergence time in adaptive learning, with application to closed-loop neurotechnologies and other signal processing domains.
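The steady-state-error versus convergence-time trade-off described above can be illustrated with a minimal sketch. This is a toy least-mean-squares update for a single model weight, not the paper's adaptive Bayesian filter; the true weight, noise level, and thresholds are all assumed values.

```python
import numpy as np

def lms_track(lr, w_true=2.0, n=5000, noise=0.5, seed=0):
    # Least-mean-squares tracking of one model weight; a toy stand-in for
    # an adaptive encoding-model update, with `lr` the learning rate.
    rng = np.random.default_rng(seed)
    w_hat, history = 0.0, []
    for _ in range(n):
        x = rng.normal()
        y = w_true * x + noise * rng.normal()
        w_hat += lr * x * (y - w_hat * x)   # gradient step on squared error
        history.append(w_hat)
    return np.array(history)

slow = lms_track(lr=0.01)
fast = lms_track(lr=0.2)

# Convergence time: first step within 0.2 of the true weight ...
t_fast = int(np.argmax(np.abs(fast - 2.0) < 0.2))
t_slow = int(np.argmax(np.abs(slow - 2.0) < 0.2))
# ... and steady-state error variance over the last 1000 steps.
ss_fast = float(fast[-1000:].var())
ss_slow = float(slow[-1000:].var())
```

The larger learning rate converges sooner but fluctuates more around the true weight at steady state, which is exactly the trade-off the calibration algorithm above is designed to balance.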
Nonlinear estimation for arrays of chemical sensors
NASA Astrophysics Data System (ADS)
Yosinski, Jason; Paffenroth, Randy
2010-04-01
Reliable detection of hazardous materials is a fundamental requirement of any national security program. Such materials can take a wide range of forms including metals, radioisotopes, volatile organic compounds, and biological contaminants. In particular, detection of hazardous materials in highly challenging conditions - such as in cluttered ambient environments, where complex collections of analytes are present, and with sensors lacking specificity for the analytes of interest - is an important part of a robust security infrastructure. Sophisticated single sensor systems provide good specificity for a limited set of analytes but often have cumbersome hardware and environmental requirements. On the other hand, simple, broadly responsive sensors are easily fabricated and efficiently deployed, but such sensors individually have neither the specificity nor the selectivity to address analyte differentiation in challenging environments. However, arrays of broadly responsive sensors can provide much of the sensitivity and selectivity of sophisticated sensors but without the substantial hardware overhead. Unfortunately, arrays of simple sensors are not without their challenges - the selectivity of such arrays can only be realized if the data is first distilled using highly advanced signal processing algorithms. In this paper we will demonstrate how the use of powerful estimation algorithms, based on those commonly used within the target tracking community, can be extended to the chemical detection arena. Herein our focus is on algorithms that not only provide accurate estimates of the mixture of analytes in a sample, but also provide robust measures of ambiguity, such as covariances.
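A minimal sketch of the idea of estimating a mixture of analytes from an array of broadly responsive sensors, together with a covariance as the "robust measure of ambiguity". The response matrix, concentrations, and noise level are assumed for illustration; the paper's tracking-style estimators are more sophisticated.

```python
import numpy as np

# Hypothetical calibration: 12 broadly responsive sensors x 3 analytes.
rng = np.random.default_rng(1)
A = rng.uniform(0.2, 1.0, size=(12, 3))     # sensor response matrix (assumed)
x_true = np.array([0.5, 1.5, 0.0])          # true analyte concentrations
sigma = 0.01                                # sensor noise level (assumed)
y = A @ x_true + sigma * rng.normal(size=12)

# Least-squares estimate of the mixture ...
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
# ... and its covariance, quantifying the ambiguity of the estimate.
cov = sigma**2 * np.linalg.inv(A.T @ A)
```

The diagonal of `cov` tells you how well each analyte is pinned down by this particular sensor array, which is the kind of ambiguity measure the abstract calls for.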
Modeling and control of non-square MIMO system using relay feedback.
Kalpana, D; Thyagarajan, T; Gokulraj, N
2015-11-01
This paper proposes a systematic approach for the modeling and control of non-square MIMO systems in the time domain using relay feedback. Conventionally, modeling, selection of the control configuration, and controller design of non-square MIMO systems are performed using input/output information of the direct loop, while the undesired responses that bear valuable information on interaction among the loops are not considered. In this paper, however, the undesired response obtained from the relay feedback test is also taken into consideration to extract information about the interaction between the loops. The studies are performed on an Air Path Scheme of Turbocharged Diesel Engine (APSTDE) model, a typical non-square MIMO system with 3 input variables and 2 output variables. From the relay test response, generalized analytical expressions are derived and used both to estimate unknown system parameters and to evaluate interaction measures. Interaction is analyzed using the Block Relative Gain (BRG) method. The identified model is then used to design an appropriate controller for closed-loop studies. Closed-loop simulation studies were performed for both servo and regulatory operations, and the Integral of Squared Error (ISE) performance criterion is employed to quantitatively evaluate the performance of the proposed scheme. The usefulness of the proposed method is demonstrated on a lab-scale Two-Tank Cylindrical Interacting System (TTCIS), which is configured as a non-square system. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
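The ISE criterion used above is simple to compute from a step response. A small sketch, with two assumed first-order closed-loop responses (time constants 0.5 and 2.0) standing in for the servo responses compared in the paper:

```python
import numpy as np

def ise(t, y, setpoint=1.0):
    # Integral of Squared Error for a step response, via the trapezoid rule.
    e2 = (setpoint - y) ** 2
    return float(np.sum((e2[:-1] + e2[1:]) / 2.0 * np.diff(t)))

t = np.linspace(0.0, 10.0, 1001)
fast_loop = 1.0 - np.exp(-t / 0.5)    # assumed closed-loop time constant 0.5
slow_loop = 1.0 - np.exp(-t / 2.0)    # assumed closed-loop time constant 2.0
```

For a first-order response 1 - exp(-t/tau), the ISE is tau/2 analytically, so the faster loop scores 0.25 against the slower loop's 1.0.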
Comparison of bipolar vs. tripolar concentric ring electrode Laplacian estimates.
Besio, W; Aakula, R; Dai, W
2004-01-01
Body-surface potentials from the heart are functions of both space and time. The 12-lead electrocardiogram (ECG) provides useful global temporal assessment, but it yields limited spatial information because of the smoothing effect of the volume conductor. This smoothing complicates identification of multiple simultaneous bioelectrical events. In an attempt to circumvent the smoothing problem, some researchers used a five-point method (FPM) to numerically estimate the analytical solution of the Laplacian with an array of monopolar electrodes; the FPM was generalized to develop a bipolar concentric ring electrode system. We have developed a new Laplacian ECG sensor, a tri-electrode sensor, based on a nine-point method (NPM) numerical approximation of the analytical Laplacian. For comparison, the NPM, FPM, and compact NPM were calculated over a 400 x 400 mesh with 1/400 spacing. Tri- and bi-electrode sensors were also simulated and their Laplacian estimates were compared against the analytical Laplacian. We found that tri-electrode sensors have much-improved accuracy, with significantly smaller relative and maximum errors in estimating the Laplacian operator. Apart from the higher accuracy, the new electrode configuration allows better localization of the electrical activity of the heart than bi-electrode configurations.
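The five-point and nine-point stencils compared above are standard finite-difference Laplacian estimates. A sketch on the abstract's 1/400-spaced mesh, using the test field u = x^2 + y^2 (an assumed field, chosen because its exact Laplacian is 4 everywhere and both stencils reproduce it up to rounding):

```python
import numpy as np

# Mesh with 1/400 spacing, as in the abstract's comparison.
x = np.linspace(0.0, 1.0, 401)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
u = X**2 + Y**2                        # exact Laplacian is 4 everywhere

# Neighbors of each interior point.
C = u[1:-1, 1:-1]
N, S = u[:-2, 1:-1], u[2:, 1:-1]
W, E = u[1:-1, :-2], u[1:-1, 2:]
NW, NE = u[:-2, :-2], u[:-2, 2:]
SW, SE = u[2:, :-2], u[2:, 2:]

# Five-point (FPM) and nine-point (NPM) Laplacian estimates.
lap5 = (N + S + E + W - 4 * C) / h**2
lap9 = (4 * (N + S + E + W) + (NW + NE + SW + SE) - 20 * C) / (6 * h**2)
```

For smooth non-polynomial fields the NPM's extra diagonal samples reduce the truncation error, which is the accuracy advantage the tri-electrode sensor exploits.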
Development of a new semi-analytical model for cross-borehole flow experiments in fractured media
Roubinet, Delphine; Irving, James; Day-Lewis, Frederick D.
2015-01-01
Analysis of borehole flow logs is a valuable technique for identifying the presence of fractures in the subsurface and estimating properties such as fracture connectivity, transmissivity and storativity. However, such estimation requires the development of analytical and/or numerical modeling tools that are well adapted to the complexity of the problem. In this paper, we present a new semi-analytical formulation for cross-borehole flow in fractured media that links transient vertical-flow velocities measured in one or a series of observation wells during hydraulic forcing to the transmissivity and storativity of the fractures intersected by these wells. In comparison with existing models, our approach presents major improvements in terms of computational expense and potential adaptation to a variety of fracture and experimental configurations. After derivation of the formulation, we demonstrate its application in the context of sensitivity analysis for a relatively simple two-fracture synthetic problem, as well as for field-data analysis to investigate fracture connectivity and estimate fracture hydraulic properties. These applications provide important insights regarding (i) the strong sensitivity of fracture property estimates to the overall connectivity of the system; and (ii) the non-uniqueness of the corresponding inverse problem for realistic fracture configurations.
NASA Technical Reports Server (NTRS)
Groves, Curtis E.; Ilie, marcel; Shallhorn, Paul A.
2014-01-01
Computational Fluid Dynamics (CFD) is the standard numerical tool used by fluid dynamicists to estimate solutions to many problems in academia, government, and industry. CFD is known to have errors and uncertainties, and there is no universally adopted method to estimate such quantities. This paper describes an approach to estimate CFD uncertainties strictly numerically using inputs and Student's t-distribution. The approach is compared to an exact analytical solution for fully developed, laminar flow between infinite, stationary plates. It is shown that treating all CFD input parameters as oscillatory uncertainty terms coupled with Student's t-distribution can encompass the exact solution.
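A sketch of the two ingredients above: the exact plane Poiseuille centreline velocity as the benchmark, and a Student's t confidence interval wrapped around a small sample of perturbed-input runs. The pressure gradient, viscosity, gap, and the five "CFD run" values are all assumed for illustration.

```python
import numpy as np
from scipy import stats

# Exact fully developed laminar flow between plates at y = +/- h:
# u(y) = (G / (2 mu)) * (h^2 - y^2), so the centreline velocity is G h^2 / (2 mu).
G, mu, h = 1.0, 1e-3, 0.01            # pressure gradient, viscosity, half-gap (assumed)
u_max = G * h**2 / (2 * mu)           # exact centreline velocity = 0.05

# Hypothetical centreline velocities from repeated runs with perturbed inputs.
runs = np.array([0.0497, 0.0504, 0.0501, 0.0499, 0.0502])
n = len(runs)
half_width = stats.t.ppf(0.975, df=n - 1) * runs.std(ddof=1) / np.sqrt(n)
lo, hi = runs.mean() - half_width, runs.mean() + half_width
```

The test of the approach is then whether the t-based interval [lo, hi] encompasses the exact solution `u_max`, as the abstract reports.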
Shimura, Masashi; Maruo, Kazushi; Gosho, Masahiko
2018-04-23
Two-stage designs are widely used to determine whether a clinical trial should be terminated early. In such trials, a maximum likelihood estimate is often adopted to describe the difference in efficacy between the experimental and reference treatments; however, this method is known to display conditional bias. To reduce such bias, a conditional mean-adjusted estimator (CMAE) has been proposed, although the remaining bias may be nonnegligible when a trial is stopped for efficacy at the interim analysis. We propose a new estimator for adjusting the conditional bias of the treatment effect by extending the idea of the CMAE. This estimator is calculated by weighting the maximum likelihood estimate obtained at the interim analysis and the effect size prespecified when calculating the sample size. We evaluate the performance of the proposed estimator through analytical and simulation studies in various settings in which a trial is stopped for efficacy or futility at the interim analysis. We find that the conditional bias of the proposed estimator is smaller than that of the CMAE when the information time at the interim analysis is small. In addition, the mean-squared error of the proposed estimator is also smaller than that of the CMAE. In conclusion, we recommend the use of the proposed estimator for trials that are terminated early for efficacy or futility. Copyright © 2018 John Wiley & Sons, Ltd.
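The core idea above, combining the interim maximum likelihood estimate with the prespecified design effect size, can be sketched as a weighted average. The weight used here (the information time itself) is an illustrative assumption only, not the paper's derived weighting.

```python
def weighted_estimate(mle_interim, planned_effect, info_time):
    # Sketch of the weighting idea only: blend the interim MLE with the
    # effect size prespecified when the sample size was calculated.
    # Weight = information time is an assumed, illustrative choice.
    w = info_time
    return w * mle_interim + (1.0 - w) * planned_effect
```

With full information the estimate reduces to the MLE; at earlier interim looks it is pulled toward the planned effect, which is the mechanism by which conditional bias is reduced.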
Hubert, Cécile; Roosen, Martin; Levi, Yves; Karolak, Sara
2017-06-02
The analysis of biomarkers in wastewater has become a common approach to assess community behavior. This method is an interesting way to estimate illicit drug consumption in a given population: using a back-calculation method, it is possible to quantify the amount of a specific drug used in a community and to assess consumption variation at different times and locations. Such a method needs reliable analytical data, since the determination of a concentration in the ng L-1 range in a complex matrix is difficult and not easily reproducible. The best analytical method is liquid chromatography-mass spectrometry after solid-phase extraction or on-line pre-concentration. Quality criteria are not specifically defined for this kind of determination. In this context, it was decided to develop a UHPLC-MS/MS method to analyze 10 illicit drugs and pharmaceuticals in wastewater treatment plant influent or effluent using an on-line pre-concentration system. A validation process was then carried out using the accuracy profile concept as an innovative tool to estimate the probability of obtaining prospective results within specified acceptance limits. Influent and effluent samples were spiked with known amounts of the 10 compounds and analyzed three times a day for three days in order to estimate intra-day and inter-day variations. The matrix effect was estimated for each compound. The developed method can provide at least 80% of results within ±25% limits, except for compounds that are degraded in influent. Copyright © 2017 Elsevier B.V. All rights reserved.
Murphy, Malia S Q; Hawken, Steven; Atkinson, Katherine M; Milburn, Jennifer; Pervin, Jesmin; Gravett, Courtney; Stringer, Jeffrey S A; Rahman, Anisur; Lackritz, Eve; Chakraborty, Pranesh; Wilson, Kumanan
2017-01-01
Background: Knowledge of gestational age (GA) is critical for guiding neonatal care and quantifying regional burdens of preterm birth. In settings where access to ultrasound dating is limited, postnatal estimates are frequently used despite the accuracy issues associated with postnatal approaches. Newborn metabolic profiles are known to vary by severity of preterm birth. Recent work by our group and others has highlighted the accuracy of postnatal GA estimation algorithms derived from routinely collected newborn screening profiles. This protocol outlines the validation, in international newborn cohorts, of a GA model originally developed in a North American cohort. Methods: Our primary objective is to use blood spot samples collected from infants born in Zambia and Bangladesh to evaluate our algorithm’s capacity to correctly classify GA within 1, 2, 3 and 4 weeks. Secondary objectives are to 1) determine the algorithm's accuracy in small-for-gestational-age and large-for-gestational-age infants, 2) determine its ability to correctly discriminate GA of newborns across dichotomous thresholds of preterm birth (≤34 weeks, <37 weeks GA) and 3) compare the relative performance of algorithms derived from newborn screening panels including all available analytes and those restricted to analyte subsets. The study population will consist of infants born to mothers already enrolled in one of two preterm birth cohorts in Lusaka, Zambia, and Matlab, Bangladesh. Dried blood spot samples will be collected and sent for analysis in Ontario, Canada, for model validation. Discussion: This study will determine the validity of a GA estimation algorithm across ethnically diverse infant populations and assess population-specific variations in newborn metabolic profiles. PMID:29104765
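The primary validation metric above, the fraction of infants classified within 1, 2, 3, or 4 weeks of the reference GA, is straightforward to compute. A sketch with hypothetical GA values (the reference and estimated ages below are made up for illustration):

```python
import numpy as np

def within_weeks(true_ga, est_ga, k):
    # Fraction of infants whose estimated gestational age falls within
    # +/- k weeks of the reference GA.
    return float(np.mean(np.abs(np.asarray(est_ga) - np.asarray(true_ga)) <= k))

# Hypothetical reference (e.g., ultrasound-dated) and model-estimated GAs, in weeks.
true_ga = np.array([38.0, 35.5, 40.0, 33.0])
est_ga = np.array([37.2, 36.8, 40.5, 36.5])
```

Reporting this fraction at several widths k gives the accuracy profile the protocol describes.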
NASA Astrophysics Data System (ADS)
Shinnaka, Shinji; Sano, Kousuke
This paper presents a new unified analysis of estimation errors in model-matching phase-estimation methods such as rotor-flux state observers, back-EMF state observers, and back-EMF disturbance observers, for sensorless drives of permanent-magnet synchronous motors. The analytical solutions for the estimation errors, whose validity is confirmed by numerical experiments, are broadly applicable. As an example of this applicability, a new trajectory-oriented vector control method is proposed that can directly realize a quasi-optimal strategy minimizing total losses, with no additional computational load, by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is derived analytically, is surprisingly simple. Consequently, the trajectory-oriented vector control method can be applied to a number of conventional vector control systems using any of the model-matching phase-estimation methods.
Landy, Rebecca; Cheung, Li C; Schiffman, Mark; Gage, Julia C; Hyun, Noorie; Wentzensen, Nicolas; Kinney, Walter K; Castle, Philip E; Fetterman, Barbara; Poitras, Nancy E; Lorey, Thomas; Sasieni, Peter D; Katki, Hormuzd A
2018-06-01
Electronic health records (EHR) are increasingly used by epidemiologists studying disease following surveillance testing to provide evidence for screening intervals and referral guidelines. Although cost-effective, undiagnosed prevalent disease and interval censoring (in which asymptomatic disease is only observed at the time of testing) raise substantial analytic issues when estimating risk that cannot be addressed using Kaplan-Meier methods. Based on our experience analysing EHR from cervical cancer screening, we previously proposed the logistic-Weibull model to address these issues. Here we demonstrate how the choice of statistical method can impact risk estimates. We use observed data on 41,067 women in the cervical cancer screening program at Kaiser Permanente Northern California, 2003-2013, as well as simulations to evaluate the ability of different methods (Kaplan-Meier, Turnbull, Weibull and logistic-Weibull) to accurately estimate risk within a screening program. Cumulative risk estimates from the statistical methods varied considerably, with the largest differences occurring for prevalent disease risk when baseline disease ascertainment was random but incomplete. Kaplan-Meier underestimated risk at earlier times and overestimated risk at later times in the presence of interval censoring or undiagnosed prevalent disease. Turnbull performed well, though it was inefficient and not smooth. The logistic-Weibull model performed well, except when event times did not follow a Weibull distribution. We have demonstrated that methods for right-censored data, such as Kaplan-Meier, result in biased estimates of disease risks when applied to interval-censored data, such as screening programs using EHR data. The logistic-Weibull model is attractive, but the model fit must be checked against Turnbull non-parametric risk estimates. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
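For reference, the Kaplan-Meier estimator discussed above is a simple product over event times, and its derivation assumes right censoring (events only; interval-censored screening data violate this). A minimal sketch with made-up times and event indicators, processing one observation at a time and assuming the convention that events precede censorings at tied times:

```python
import numpy as np

def kaplan_meier(times, events):
    # Kaplan-Meier survival estimate at each observed time, for
    # right-censored data (events: 1 = event observed, 0 = censored).
    order = np.argsort(times, kind="stable")
    t = np.asarray(times, dtype=float)[order]
    d = np.asarray(events)[order]
    s, surv = 1.0, []
    at_risk = len(t)
    for i in range(len(t)):
        if d[i]:
            s *= 1.0 - 1.0 / at_risk   # multiply in the conditional survival
        at_risk -= 1
        surv.append(s)
    return t, np.array(surv)

# Toy data: events at t = 1, 2, 4; censored at t = 3, 5.
t_km, s_km = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
```

The curve only drops at observed event times, which is precisely why it misplaces risk when disease onset is only ever observed at interval screening visits.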
Prospective regularization design in prior-image-based reconstruction
NASA Astrophysics Data System (ADS)
Dang, Hao; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2015-12-01
Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. 
Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in phantoms where the optimal parameters vary spatially by an order of magnitude or more. In a series of studies designed to explore potential unknowns associated with accurate PIBR, optimal prior image strength was found to vary with attenuation differences associated with anatomical change but exhibited only small variations as a function of the shape and size of the change. The results suggest that, given a target change attenuation, prospective patient-, change-, and data-specific customization of the prior image strength can be performed to ensure reliable reconstruction of specific anatomical changes.
Methods of multi-conjugate adaptive optics for astronomy
NASA Astrophysics Data System (ADS)
Flicker, Ralf
2003-07-01
This work analyses several aspects of multi-conjugate adaptive optics (MCAO) for astronomy. The research ranges from fundamental and technical studies for present-day MCAO projects, to feasibility studies of high-order MCAO instruments for the extremely large telescopes (ELTs) of the future. The first part is an introductory exposition on atmospheric turbulence, adaptive optics (AO) and MCAO, establishing the framework within which the research was carried out. The second part (papers I-VI) commences with a fundamental design parameter study of MCAO systems, based upon a first-order performance estimation Monte Carlo simulation. It is investigated how the number and geometry of deformable mirrors and reference beacons, and the choice of wavefront reconstruction algorithm, affect system performance. Multi-conjugation introduces the possibility of optically canceling scintillation in part, at the expense of additional optics, by applying the phase correction in a certain sequence. The effects of scintillation when this sequence is not observed are investigated. As a link in characterizing anisoplanatism in conventional AO systems, images made with the AO instrument Hokupa'a on the Gemini-North Telescope were analysed with respect to the anisoplanatism signal. By model-fitting of simulated data, conclusions could be drawn about the vertical distribution of turbulence above the observatory site (Mauna Kea), and the significance to future AO and MCAO instruments with conjugated deformable mirrors is addressed. The problem of tilt anisoplanatism in MCAO systems relying on artificial reference beacons (laser guide stars, LGSs) is analysed, and analytical models for predicting the effects of tilt anisoplanatism are devised. A method is presented for real-time retrieval of the tilt anisoplanatism point spread function (PSF), using control loop data. An independent PSF estimate of high accuracy is thus obtained, which enables accurate PSF photometry and deconvolution.
Lastly, a first-order performance estimation method is presented by which MCAO systems for ELTs may be studied efficiently, using sparse matrix techniques for wavefront reconstruction and a hybrid numerical/analytical simulation model. MCAO simulation results are presented for a wide range of telescope diameters up to 100 meters, and the effects of LGSs and a finite turbulence outer scale are investigated.
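The sparse-matrix wavefront reconstruction mentioned above can be sketched in miniature: recover a phase map from its finite-difference slope measurements with a sparse least-squares solve. The grid size, slope geometry, and test phase below are illustrative assumptions, not the thesis's model.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

# Toy phase screen on a small grid (assumed shape for illustration).
n = 16
phase = np.add.outer(np.sin(np.linspace(0, np.pi, n)),
                     np.cos(np.linspace(0, np.pi, n)))

# Sparse first-difference operators along each axis act as the "slope" model.
D = sparse.diags([-1.0, 1.0], [0, 1], shape=(n - 1, n))
Dx = sparse.kron(sparse.eye(n), D)      # slopes along rows
Dy = sparse.kron(D, sparse.eye(n))      # slopes along columns
A = sparse.vstack([Dx, Dy]).tocsr()

slopes = A @ phase.ravel()              # simulated slope measurements
recon = lsqr(A, slopes, atol=1e-12, btol=1e-12)[0].reshape(n, n)
recon -= recon.mean() - phase.mean()    # piston mode is unobservable; restore it
```

The operator stays sparse (a few nonzeros per row) however large the grid, which is what makes this formulation tractable for ELT-scale systems.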
SWIM: A Semi-Analytical Ocean Color Inversion Algorithm for Optically Shallow Waters
NASA Technical Reports Server (NTRS)
McKinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C. S.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Bailey, Sean W.; Shea, Donald M.; Feldman, Gene C.
2014-01-01
In clear shallow waters, light that is transmitted downward through the water column can reflect off the sea floor and thereby influence the water-leaving radiance signal. This effect can confound contemporary ocean color algorithms designed for deep waters where the seafloor has little or no effect on the water-leaving radiance. Thus, inappropriate use of deep water ocean color algorithms in optically shallow regions can lead to inaccurate retrievals of inherent optical properties (IOPs) and therefore have a detrimental impact on IOP-based estimates of marine parameters, including chlorophyll-a and the diffuse attenuation coefficient. In order to improve IOP retrievals in optically shallow regions, a semi-analytical inversion algorithm, the Shallow Water Inversion Model (SWIM), has been developed. Unlike established ocean color algorithms, SWIM considers both the water column depth and the benthic albedo. A radiative transfer study was conducted that demonstrated how SWIM and two contemporary ocean color algorithms, the Generalized Inherent Optical Properties algorithm (GIOP) and Quasi-Analytical Algorithm (QAA), performed in optically deep and shallow scenarios. The results showed that SWIM performed well, whilst both GIOP and QAA showed distinct positive bias in IOP retrievals in optically shallow waters. The SWIM algorithm was also applied to a test region: the Great Barrier Reef, Australia. Using a single test scene and time series data collected by NASA's MODIS-Aqua sensor (2002-2013), a comparison of IOPs retrieved by SWIM, GIOP and QAA was conducted.
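The key structural idea above, that shallow-water reflectance has a water-column term plus a depth-attenuated bottom term, can be sketched as follows. The coefficients follow the common two-term deep-water approximation, but the attenuation factors are deliberately simplified relative to the published SWIM formulation; all input values are assumed.

```python
import numpy as np

def rrs_shallow(a, bb, depth, albedo):
    # Simplified optically-shallow reflectance: a water-column term plus a
    # bottom-reflectance term, both governed by depth attenuation.
    k = a + bb                              # total attenuation (simplified)
    u = bb / (a + bb)
    r_deep = 0.089 * u + 0.125 * u**2       # deep-water reflectance approximation
    column = r_deep * (1.0 - np.exp(-2.0 * k * depth))
    bottom = (albedo / np.pi) * np.exp(-2.0 * k * depth)
    return column + bottom

# Assumed IOPs and a bright sandy bottom (albedo 0.3).
deep = rrs_shallow(a=0.1, bb=0.01, depth=1000.0, albedo=0.3)
shallow = rrs_shallow(a=0.1, bb=0.01, depth=1.0, albedo=0.3)
```

At depth the bottom term vanishes and the deep-water form is recovered; at 1 m the bright bottom dominates the signal, which is exactly the contamination that biases GIOP and QAA when they ignore the seafloor.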
Weykamp, Cas; Secchiero, Sandra; Plebani, Mario; Thelen, Marc; Cobbaert, Christa; Thomas, Annette; Jassam, Nuthar; Barth, Julian H; Perich, Carmen; Ricós, Carmen; Faria, Ana Paula
2017-02-01
Optimum patient care in relation to laboratory medicine is achieved when results of laboratory tests are equivalent, irrespective of the analytical platform used or the country where the laboratory is located. Standardization and harmonization minimize differences, and the success of efforts to achieve this can be monitored with international category 1 external quality assessment (EQA) programs. An EQA project with commutable samples, targeted with reference measurement procedures (RMPs), was organized by EQA institutes in Italy, the Netherlands, Portugal, the UK, and Spain. Results of 17 general chemistry analytes were evaluated across countries and across manufacturers according to performance specifications derived from biological variation (BV). For K, uric acid, glucose, cholesterol and high-density lipoprotein (HDL) cholesterol, the minimum performance specification was met in all countries and by all manufacturers. For Na, Cl, and Ca, the minimum performance specifications were met by none of the countries and manufacturers. For enzymes, the situation was complicated, as standardization of results of enzymes toward RMPs was still not achieved in 20% of the laboratories and questionable in the remaining 80%. The overall performance of the measurement of 17 general chemistry analytes in European medical laboratories met the minimum performance specifications. In this general picture, there were no significant differences per country and no significant differences per manufacturer. There were major differences between the analytes. There were six analytes for which the minimum quality specifications were not met, and manufacturers should improve their performance for these analytes. Standardization of results of enzymes requires ongoing efforts.
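Performance specifications derived from biological variation, as used above, follow widely cited formulas (Fraser's "desirable" tier). A sketch with illustrative within- and between-person CVs for glucose; the input values are assumptions, not the study's figures.

```python
import math

def desirable_specs(cv_within, cv_between):
    # "Desirable" analytical performance specifications from biological
    # variation: allowable imprecision, bias, and total allowable error.
    imprecision = 0.5 * cv_within
    bias = 0.25 * math.sqrt(cv_within**2 + cv_between**2)
    total_error = 1.65 * imprecision + bias
    return imprecision, bias, total_error

# Illustrative biological-variation values (percent CVs) for glucose.
imp, bias, tea = desirable_specs(5.6, 7.5)
```

The "minimum" tier referenced in the abstract simply relaxes these multipliers, so analytes with small biological variation (Na, Cl, Ca) end up with very tight specifications that are hard to meet.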
NASA Astrophysics Data System (ADS)
Swearingen, Michelle E.
2004-04-01
An analytic model, developed in cylindrical coordinates, is described for the scattering of a spherical wave off a semi-infinite right cylinder placed normal to a ground surface. The motivation for the research is to have a model with which one can simulate scattering from a single tree, and which can be used as a fundamental element in a model for estimating the attenuation in a forest comprising multiple tree trunks. Comparisons are made to the plane-wave case, the transparent-cylinder case, and the rigid and soft ground cases as a method of theoretically verifying the model over the contemplated range of model parameters. Agreement is regarded as excellent for these benchmark cases. Model sensitivity to five parameters is also explored. An experiment was performed to study the scattering from a cylinder normal to a ground surface. The data from the experiment are analyzed with a transfer function method to yield frequency and impulse responses, and calculations based on the analytic model are compared to the experimental data. Thesis advisor: David C. Swanson.
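The transfer function method used in that analysis estimates the frequency response as the ratio of output to input spectra, then inverse-transforms it for the impulse response. A minimal sketch with a synthetic direct-path-plus-echo response; a periodic (circularly convolved) excitation is assumed to keep the sketch exact, and all signal parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1024
x = rng.normal(size=n)                   # broadband excitation
h_true = np.zeros(64)
h_true[0], h_true[10] = 1.0, 0.5         # direct arrival plus one scattered echo
# Response under the assumed periodic excitation (circular convolution).
y = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h_true, n), n)

H = np.fft.rfft(y) / np.fft.rfft(x)      # frequency response estimate
h_est = np.fft.irfft(H, n)[:64]          # impulse response estimate
```

In practice one would average cross- and auto-spectra over many records to suppress noise, but the ratio structure is the same.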
Sajnóg, Adam; Hanć, Anetta; Barałkiewicz, Danuta
2018-05-15
Analysis of clinical specimens by imaging techniques makes it possible to determine the content and distribution of trace elements on the surface of the examined sample. In order to obtain reliable results, the developed procedure should be based not only on properly prepared samples and properly performed calibration. It is also necessary to carry out all phases of the procedure in accordance with the principles of chemical metrology, whose main pillars are the use of validated analytical methods, establishing the traceability of the measurement results, and the estimation of the uncertainty. This review paper discusses aspects related to sampling, preparation, and analysis of clinical samples by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS), with emphasis on metrological aspects, i.e. selected validation parameters of the analytical method, the traceability of the measurement result, and the uncertainty of the result. This work promotes the introduction of metrology principles into chemical measurement, with emphasis on LA-ICP-MS, a comparative method that requires a rigorous approach to the development of the analytical procedure in order to acquire reliable quantitative results. Copyright © 2018 Elsevier B.V. All rights reserved.
Compressive Detection of Highly Overlapped Spectra Using Walsh-Hadamard-Based Filter Functions.
Corcoran, Timothy C
2018-03-01
In the chemometric context in which the spectral loadings of the analytes are already known, spectral filter functions may be constructed that allow the scores of mixtures of analytes to be determined directly, on the fly, by applying a compressive detection strategy. Rather than collecting the entire spectrum over the relevant region for the mixture, a filter function may be applied within the spectrometer itself so that only the scores are recorded. Consequently, compressive detection shrinks data sets tremendously. The Walsh functions, the binary basis used in Walsh-Hadamard transform spectroscopy, form a complete orthonormal set well suited to compressive detection. A method for constructing filter functions using binary fourfold linear combinations of Walsh functions is detailed, using mathematics borrowed from genetic algorithm work as a means of optimizing the functions for a specific set of analytes. These filter functions can be constructed to automatically strip the baseline from the analysis. Monte Carlo simulations were performed with a mixture of four highly overlapped Raman loadings and with ten excitation-emission matrix loadings; both sets showed a very high degree of spectral overlap. Reasonable estimates of the true scores were obtained in both simulations using noisy data sets, demonstrating the linearity of the method.
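A sketch of compressive detection with Walsh-type filters: project a mixture spectrum onto a handful of binary filter functions and recover the scores from those few projections alone. The filter choice here (eight non-DC Hadamard rows) is an illustrative assumption, not the paper's genetic-algorithm-optimized combinations, and the loadings are random stand-ins for real overlapped spectra.

```python
import numpy as np
from scipy.linalg import hadamard

n = 64
H = hadamard(n)                                 # rows are +/-1 Walsh-type functions
rng = np.random.default_rng(7)
loadings = np.abs(rng.normal(size=(3, n)))      # assumed, heavily overlapped spectra
scores_true = np.array([1.0, 0.4, 2.0])
spectrum = scores_true @ loadings + 5.0         # mixture plus a constant baseline

filters = H[1:9]                                # zero-mean rows: baseline drops out
measurements = filters @ spectrum               # all the spectrometer records
# Recover the scores from the compressed measurements by least squares.
scores_hat, *_ = np.linalg.lstsq(filters @ loadings.T, measurements, rcond=None)
```

Because every non-DC Hadamard row sums to zero, the constant baseline contributes nothing to the measurements, illustrating the automatic baseline stripping the abstract mentions; only 8 numbers are recorded instead of 64 channels.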
ACCELERATING MR PARAMETER MAPPING USING SPARSITY-PROMOTING REGULARIZATION IN PARAMETRIC DIMENSION
Velikina, Julia V.; Alexander, Andrew L.; Samsonov, Alexey
2013-01-01
MR parameter mapping requires sampling along an additional (parametric) dimension, which often limits its clinical appeal due to a several-fold increase in scan times compared to conventional anatomic imaging. Data undersampling combined with parallel imaging is an attractive way to reduce scan time in such applications. However, the inherent SNR penalties of parallel MRI due to noise amplification often limit its utility even at moderate acceleration factors, requiring regularization by prior knowledge. In this work, we propose a novel regularization strategy that utilizes smoothness of signal evolution in the parametric dimension within a compressed sensing framework (p-CS) to provide accurate and precise estimation of parametric maps from undersampled data. The performance of the method was demonstrated with variable flip angle T1 mapping and compared favorably to two representative reconstruction approaches: image-space total variation regularization and an analytical model-based reconstruction. The proposed p-CS regularization was found to provide efficient suppression of noise amplification and preservation of parameter mapping accuracy without explicit use of analytical signal models. The developed method may facilitate acceleration of quantitative MRI techniques that are not amenable to model-based reconstruction because of complex signal models or when signal deviations from the expected analytical model exist. PMID:23213053
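The core prior above, smoothness of the signal evolution along the parametric dimension without an explicit signal model, can be illustrated in one dimension with a quadratic second-difference penalty (the actual p-CS method uses a sparsity-promoting formulation on undersampled k-space data; this is only the smoothness idea, with all values assumed).

```python
import numpy as np

n = 50
t = np.linspace(0, 1, n)
clean = np.exp(-3 * t)                 # smooth signal evolution (e.g., relaxation)
rng = np.random.default_rng(2)
noisy = clean + 0.05 * rng.normal(size=n)

# Penalize second differences along the parametric dimension and solve the
# regularized least-squares problem (I + lam * D^T D) x = y in closed form.
D = np.diff(np.eye(n), n=2, axis=0)    # second-difference operator, (n-2) x n
lam = 10.0                             # assumed regularization weight
smooth = np.linalg.solve(np.eye(n) + lam * D.T @ D, noisy)
```

No analytical relaxation model enters the penalty; only the assumption that the evolution is smooth, which is what lets the approach handle signals that deviate from the expected model.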
Wetherbee, Gregory A.; Martin, RoseAnn
2017-02-06
The U.S. Geological Survey Branch of Quality Systems operates the Precipitation Chemistry Quality Assurance Project (PCQA) for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) and National Atmospheric Deposition Program/Mercury Deposition Network (NADP/MDN). Since 1978, the PCQA has implemented various programs to estimate the data variability and bias contributed by changing protocols, equipment, and sample submission schemes within NADP networks. These programs independently measure the field and laboratory components that contribute to the overall variability of NADP wet-deposition chemistry and precipitation depth measurements. The PCQA evaluates the quality of analyte-specific chemical analyses from the two currently (2016) contracted NADP laboratories, the Central Analytical Laboratory and the Mercury Analytical Laboratory, by comparing laboratory performance among participating national and international laboratories. Sample contamination and stability are evaluated for NTN and MDN by using externally field-processed blank samples provided by the Branch of Quality Systems. A colocated-sampler program evaluates the overall variability of NTN measurements and the bias between dissimilar precipitation gages and sample collectors. This report documents historical PCQA operations and general procedures for each of the external quality-assurance programs from 2007 to 2016.
NASA Astrophysics Data System (ADS)
Pietropolli Charmet, Andrea; Cornaton, Yann
2018-05-01
This work presents an investigation of the theoretical predictions yielded by anharmonic force fields whose cubic and quartic force constants are computed analytically by means of density functional theory (DFT), using the recursive scheme developed by M. Ringholm et al. (J. Comput. Chem. 35 (2014) 622). Different functionals (namely B3LYP, PBE, PBE0 and PW86x) and basis sets were used to calculate the anharmonic vibrational spectra of two halomethanes. The benchmark analysis carried out demonstrates the reliability and overall good performance of hybrid approaches in which harmonic data obtained at the coupled-cluster singles and doubles level of theory, augmented by a perturbative estimate of the effects of connected triple excitations, CCSD(T), are combined with the fully analytic higher-order force constants yielded by DFT functionals. These methods lead to reliable and computationally affordable calculations of anharmonic vibrational spectra with an accuracy comparable to that of hybrid force fields whose anharmonic force constants are computed at the second-order Møller-Plesset perturbation theory (MP2) level using numerical differentiation, but without the corresponding issues of computational cost and numerical error.
Malegori, Cristina; Nascimento Marques, Emanuel José; de Freitas, Sergio Tonetto; Pimentel, Maria Fernanda; Pasquini, Celio; Casiraghi, Ernestina
2017-04-01
The main goal of this study was to investigate the analytical performance of a state-of-the-art device, one of the smallest dispersive NIR spectrometers on the market (MicroNIR 1700), making a critical comparison of prediction accuracy with a benchtop FT-NIR spectrometer. In particular, the aim was to estimate, non-destructively, titratable acidity and ascorbic acid content in acerola fruit during ripening, with a view to the direct in-field applicability of this new miniaturised handheld device. Acerola (Malpighia emarginata DC.) is a super-fruit characterised by a considerable amount of ascorbic acid, ranging from 1.0% to 4.5%. However, during ripening, acerola colour changes and the fruit may lose as much as half of its ascorbic acid content. Because the variability of the chemical parameters followed a not strictly linear profile, two different regression algorithms were compared: PLS and SVM. Regression models obtained with MicroNIR spectra gave better results with the SVM algorithm, for both ascorbic acid and titratable acidity estimation. FT-NIR data gave comparable results with both SVM and PLS algorithms, with lower errors for SVM regression. The prediction ability of the two instruments was statistically compared using the Passing-Bablok regression algorithm; the outcomes are critically discussed together with the regression models, showing the suitability of the portable MicroNIR for in-field monitoring of chemical parameters of interest in acerola fruits. Copyright © 2016 Elsevier B.V. All rights reserved.
Performance of local optimization in single-plane fluoroscopic analysis for total knee arthroplasty.
Prins, A H; Kaptein, B L; Stoel, B C; Lahaye, D J P; Valstar, E R
2015-11-05
Fluoroscopy-derived joint kinematics plays an important role in the evaluation of knee prostheses. Fluoroscopic analysis requires estimation of the 3D prosthesis pose from its 2D silhouette in the fluoroscopic image by optimizing a dissimilarity measure. Currently, extensive user interaction is needed, which makes analysis labor-intensive and operator-dependent. The aim of this study was to review five optimization methods for 3D pose estimation and to assess their performance in finding the correct solution. Two derivative-free optimizers (DHSAnn and IIPM) and three gradient-based optimizers (LevMar, DoNLP2 and IpOpt) were evaluated. For the latter three, two implementations were compared: one with a numerically approximated gradient and one with an analytically derived gradient for computational efficiency. On phantom data, all methods found the 3D pose within 1 mm and 1° in more than 85% of cases; IpOpt had the highest success rate: 97%. On clinical data, the success rates were higher than 85% for the in-plane positions, but not for the rotations. IpOpt was the most computationally expensive method, and the use of analytically derived gradients accelerated the gradient-based methods by a factor of 3-4 without any difference in success rate. In conclusion, 85% of the frames in clinical data can be analyzed automatically and only 15% require manual supervision. The 97% success rate achieved with IpOpt on phantom data indicates that even less supervision may become feasible. Copyright © 2015 Elsevier Ltd. All rights reserved.
Analytical Model for Estimating the Zenith Angle Dependence of Terrestrial Cosmic Ray Fluxes
Sato, Tatsuhiko
2016-01-01
A new model called “PHITS-based Analytical Radiation Model in the Atmosphere (PARMA) version 4.0” was developed to facilitate instantaneous estimation of not only omnidirectional but also angular differential energy spectra of cosmic ray fluxes anywhere in Earth’s atmosphere at nearly any given time. It consists of its previous version, PARMA3.0, for calculating the omnidirectional fluxes and several mathematical functions proposed in this study for expressing their zenith-angle dependences. The numerical values of the parameters used in these functions were fitted to reproduce the results of the extensive air shower simulation performed by Particle and Heavy Ion Transport code System (PHITS). The angular distributions of ground-level muons at large zenith angles were specially determined by introducing an optional function developed on the basis of experimental data. The accuracy of PARMA4.0 was closely verified using multiple sets of experimental data obtained under various global conditions. This extension enlarges the model’s applicability to more areas of research, including design of cosmic-ray detectors, muon radiography, soil moisture monitoring, and cosmic-ray shielding calculation. PARMA4.0 is available freely and is easy to use, as implemented in the open-access EXcel-based Program for Calculating Atmospheric Cosmic-ray Spectrum (EXPACS). PMID:27490175
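As a much-simplified stand-in for PARMA's zenith-angle functions, the textbook cos^n(θ) approximation for ground-level muon intensity (with n close to 2 at sea level) can be fitted to angular data with scipy. The intensity values below are synthetic; PARMA itself uses more elaborate fitted functions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Common approximation: I(theta) ~ I0 * cos^n(theta) for ground-level muons
def intensity(theta, i0, n):
    return i0 * np.cos(theta) ** n

theta = np.linspace(0.0, 1.2, 25)            # zenith angles [rad]
rng = np.random.default_rng(3)
# Synthetic measurements generated with n = 2 plus 2% multiplicative noise
data = 70.0 * np.cos(theta) ** 2 * (1 + 0.02 * rng.normal(size=theta.size))

popt, _ = curve_fit(intensity, theta, data, p0=[60.0, 1.5])
i0_fit, n_fit = popt
```

Fitting parametric zenith-angle functions to simulated or measured angular distributions is, in spirit, how the PARMA4.0 parameters were determined against PHITS air-shower results.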
Bayesian Monte Carlo and Maximum Likelihood Approach for ...
Model uncertainty estimation and risk assessment are essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology that combines Bayesian Monte Carlo simulation and Maximum Likelihood estimation (BMCML) to calibrate a lake oxygen recovery model. We first derive an analytical solution of the differential equation governing lake-averaged oxygen dynamics as a function of time-variable wind speed. Statistical inferences on model parameters and predictive uncertainty are then drawn by Bayesian conditioning of the analytical solution on observed daily wind speed and oxygen concentration data obtained from an earlier study during two recovery periods on a eutrophic lake in upstate New York. The model is calibrated using oxygen recovery data for one year, and the statistical inferences are validated using recovery data for another year. Compared with an essentially two-step regression-and-optimization approach, the BMCML results are more comprehensive and perform better in predicting the observed temporal dissolved oxygen (DO) levels in the lake. BMCML also produced calibration and validation results comparable with those obtained using the popular Markov chain Monte Carlo (MCMC) technique, and it is computationally simpler and easier to implement than MCMC. Next, using the calibrated model, we derive an optimal relationship between liquid film-transfer coefficien
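The Bayesian Monte Carlo step can be sketched as prior sampling plus likelihood weighting. This is a toy version with an invented exponential-recovery model and invented rate constant, not the authors' wind-driven oxygen model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical analytical solution: exponential DO recovery toward saturation
def do_model(t, k, do_sat=10.0, do0=2.0):
    return do_sat - (do_sat - do0) * np.exp(-k * t)

# Synthetic observations generated with a "true" rate k = 0.35
t_obs = np.linspace(0, 10, 20)
y_obs = do_model(t_obs, k=0.35) + 0.1 * rng.normal(size=t_obs.size)

# Bayesian Monte Carlo: draw the rate k from a uniform prior, weight each
# draw by its Gaussian likelihood given the observations
k_prior = rng.uniform(0.05, 1.0, size=20000)
sigma = 0.1
resid = y_obs[None, :] - do_model(t_obs[None, :], k_prior[:, None])
loglik = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
w = np.exp(loglik - loglik.max())
w /= w.sum()

k_post_mean = np.sum(w * k_prior)        # Bayesian posterior mean
k_ml = k_prior[np.argmax(loglik)]        # maximum-likelihood sample
```

The posterior weights give parameter uncertainty directly, while the best-weighted sample plays the role of the maximum-likelihood estimate, the two ingredients BMCML combines.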
Pastor, Dena A; Lazowski, Rory A
2018-01-01
The term "multilevel meta-analysis" is encountered not only in applied research studies, but also in multilevel resources comparing traditional meta-analysis to multilevel meta-analysis. In this tutorial, we argue that the term "multilevel meta-analysis" is redundant, since any meta-analysis can be formulated as a special kind of multilevel model. To clarify the multilevel nature of meta-analysis, the four standard meta-analytic models are presented using multilevel equations and fit to an example data set using four software programs: two specific to meta-analysis (metafor in R and SPSS macros) and two specific to multilevel modeling (PROC MIXED in SAS and HLM). The same parameter estimates are obtained across programs, underscoring that all meta-analyses are multilevel in nature. Despite the equivalent results, not all software programs are alike, and differences are noted in the output provided and the estimators available. This tutorial also recasts distinctions made in the literature between traditional and multilevel meta-analysis as differences between meta-analytic choices, not between meta-analytic models, and provides guidance to inform choices in estimators, significance tests, moderator analyses, and modeling sequence. The extent to which the software programs allow flexibility with respect to these decisions is noted, with metafor emerging as the most favorable program reviewed.
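For readers without R, the random-effects model that underlies the standard meta-analytic models can be computed by hand in Python using the DerSimonian-Laird estimator of between-study variance. The effect sizes and sampling variances below are invented for illustration.

```python
import numpy as np

# Effect sizes and sampling variances for k hypothetical studies
yi = np.array([0.30, 0.10, 0.45, 0.25, 0.60, 0.15])
vi = np.array([0.010, 0.020, 0.015, 0.030, 0.025, 0.012])

# Fixed-effect (common-effect) estimate: inverse-variance weighting
w_fe = 1.0 / vi
mu_fe = np.sum(w_fe * yi) / np.sum(w_fe)

# DerSimonian-Laird estimate of the between-study variance tau^2
k = yi.size
Q = np.sum(w_fe * (yi - mu_fe) ** 2)
c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects estimate: the "multilevel" formulation with a
# study-level random effect added to each sampling variance
w_re = 1.0 / (vi + tau2)
mu_re = np.sum(w_re * yi) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
```

Written this way, the fixed-effect model is just the random-effects (two-level) model with the study-level variance component constrained to zero, which is the tutorial's central point.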
Pilot testing of SHRP 2 reliability data and analytical products: Florida. [supporting datasets
DOT National Transportation Integrated Search
2014-01-01
SHRP 2 initiated the L38 project to pilot test products from five of the program's completed projects. The products support reliability estimation and use based on data analyses, analytical techniques, and a decision-making framework. The L38 project...
ESTIMATING UNCERTAINTIES IN FACTOR ANALYTIC MODELS
When interpreting results from factor analytic models as used in receptor modeling, it is important to quantify the uncertainties in those results. For example, if the presence of a species on one of the factors is necessary to interpret the factor as originating from a certain ...
Analytic semigroups: Applications to inverse problems for flexible structures
NASA Technical Reports Server (NTRS)
Banks, H. T.; Rebnord, D. A.
1990-01-01
Convergence and stability results for least squares inverse problems involving systems described by analytic semigroups are presented. The practical importance of these results is demonstrated by application to several examples from problems of estimation of material parameters in flexible structures using accelerometer data.
Risk analysis of analytical validations by probabilistic modification of FMEA.
Barends, D M; Oldenhof, M T; Vredenbregt, M J; Nauta, M J
2012-05-01
Risk analysis is a valuable addition to the validation of an analytical chemistry process, enabling detection not only of technical risks but also of risks related to human failure. Failure Mode and Effect Analysis (FMEA) can be applied, using a categorical risk scoring of the occurrence, detection and severity of failure modes, and calculating the Risk Priority Number (RPN) to select failure modes for correction. We propose a probabilistic modification of FMEA, replacing the categorical scoring of occurrence and detection by their estimated relative frequencies while maintaining the categorical scoring of severity. As an example, the results of a traditional FMEA of a Near Infrared (NIR) analytical procedure used for the screening of suspected counterfeit tablets are re-interpreted with this probabilistic modification. Using this approach, the frequency of occurrence of undetected failure modes can be estimated quantitatively for each individual failure mode, for a set of failure modes, and for the full analytical procedure. Copyright © 2012 Elsevier B.V. All rights reserved.
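The arithmetic of the probabilistic modification can be sketched directly: the frequency of a failure occurring and going undetected is the occurrence frequency times one minus the detection probability. All failure modes and numbers below are illustrative, not taken from the paper.

```python
# Traditional FMEA ranks failure modes by RPN = O * D * S on categorical scales.
# The probabilistic variant replaces O and D by estimated relative frequencies.
failure_modes = {
    "wrong reference spectrum": {"p_occ": 0.010, "p_det": 0.95, "severity": 5},
    "sample mix-up":            {"p_occ": 0.002, "p_det": 0.50, "severity": 5},
    "instrument drift":         {"p_occ": 0.050, "p_det": 0.99, "severity": 3},
}

# Per-mode frequency of an occurring-and-undetected failure
for name, fm in failure_modes.items():
    fm["p_undetected"] = fm["p_occ"] * (1 - fm["p_det"])

# Frequency of any undetected failure across the full analytical procedure,
# assuming independent failure modes
p_any = 1.0
for fm in failure_modes.values():
    p_any *= 1 - fm["p_undetected"]
p_any = 1 - p_any
```

Unlike categorical RPN scores, these quantities are frequencies, so they combine across failure modes and can be compared against an acceptable-risk threshold for the whole procedure.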
Development of devices for self-injection: using tribological analysis to optimize injection force
Lange, Jakob; Urbanek, Leos; Burren, Stefan
2016-01-01
This article describes the use of analytical models and physical measurements to characterize and optimize the tribological behavior of pen injectors for self-administration of biopharmaceuticals. One of the main performance attributes of this kind of device is its efficiency in transmitting the external force applied by the user on to the cartridge inside the pen in order to effectuate an injection. This injection force characteristic is heavily influenced by the frictional properties of the polymeric materials employed in the mechanism. Standard friction tests are available for characterizing candidate materials, but they use geometries and conditions far removed from the actual situation inside a pen injector and thus do not always generate relevant data. A new test procedure, allowing the direct measurement of the coefficient of friction between two key parts of a pen injector mechanism using real parts under simulated use conditions, is presented. In addition to the absolute level of friction, the test method provides information on expected evolution of friction over lifetime as well as on expected consistency between individual devices. Paired with an analytical model of the pen mechanism, the frictional data allow the expected overall injection system force efficiency to be estimated. The test method and analytical model are applied to a range of polymer combinations with different kinds of lubrication. It is found that material combinations used without lubrication generally have unsatisfactory performance, that the use of silicone-based internal lubricating additives improves performance, and that the best results can be achieved with external silicone-based lubricants. Polytetrafluoroethylene-based internal lubrication and external lubrication are also evaluated but found to provide only limited benefits unless used in combination with silicone. PMID:27274319
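The effect of the friction coefficient on force-transmission efficiency can be illustrated with the classical power-screw efficiency formula, a generic machine-design relation used here as a stand-in for the authors' pen-mechanism model; the lead angle and friction coefficients below are illustrative values for dry versus externally lubricated polymer contacts.

```python
import math

# Power-screw style efficiency: eta = tan(lambda) / tan(lambda + atan(mu)),
# where lambda is the thread lead angle and mu the coefficient of friction.
def drive_efficiency(lead_angle_deg, mu):
    lam = math.radians(lead_angle_deg)
    return math.tan(lam) / math.tan(lam + math.atan(mu))

# Illustrative friction coefficients: dry polymer-on-polymer vs external lubricant
eta_dry = drive_efficiency(15.0, 0.40)
eta_lub = drive_efficiency(15.0, 0.08)
```

Even this simplified relation shows why lowering the coefficient of friction translates so directly into a lower user injection force: efficiency roughly doubles when moving from a dry high-friction pairing to a well-lubricated one.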
Sichilongo, Kwenga; Chinyama, Mompati; Massele, Amos; Vento, Sandro
2014-01-15
A comparison of the analytical performance characteristics of gas chromatography-mass spectrometry (GC-MS), liquid chromatography-mass spectrometry (LC-MS) and liquid chromatography-ultraviolet (LC-UV) detection for the determination of the antiretroviral drug (ARV) nevirapine (NVP) in fortified human plasma after QuEChERS extraction has been made. Analytical performance characteristics, i.e. linearities, instrument detection limits (IDLs), limits of quantitation (LOQs), method detection limits (MDLs), mean percent recoveries and the corresponding relative standard deviations (%RSDs), were estimated for each technique. Using GC-MS, the correlation coefficients (r(2)) were ≥0.990, which was deemed acceptable linearity. The MDLs ranged between 11.1-29.8μg/L and 13.7-36.0μg/L using helium and hydrogen carrier gases, respectively. The LOQs ranged between 16.5-66.7μg/L and 28.4-98.7μg/L using helium and hydrogen carrier gases, respectively, with a mean recovery of 83% and %RSD of 4.6%. Using LC-MS and LC-UV, the correlation coefficients (r(2)) were ≥0.990. The MDLs ranged between 3.14 and 47.1μg/L, and the LOQs between 2.85 and 90.0μg/L. The MDLs using GC-MS, LC-MS and LC-UV were all below the therapeutic range for NVP in human plasma, which is considered to be between 2300μg/L (Cmin) and 8000μg/L (Cmax). This study also demonstrated that helium can be substituted with hydrogen, which is cheaper and easily obtainable, even from a generator. Copyright © 2013 Elsevier B.V. All rights reserved.
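MDL figures like those above are commonly computed from replicate low-level spikes as t(n-1, 0.99) times the standard deviation of the replicates (the single-concentration MDL design); a widely used LOQ convention is 10 standard deviations. The replicate values below are invented, not from this study.

```python
import numpy as np
from scipy import stats

# Seven replicate measurements of a low-level spiked plasma sample (ug/L)
replicates = np.array([24.1, 26.3, 22.8, 25.0, 23.5, 24.9, 25.7])

n = replicates.size
s = replicates.std(ddof=1)             # sample standard deviation

# MDL: one-sided Student's t at 99% confidence, n-1 degrees of freedom
t99 = stats.t.ppf(0.99, df=n - 1)
mdl = t99 * s

# A common LOQ convention: 10 standard deviations of the replicates
loq = 10 * s
```

Dividing these concentration-domain limits by the method's preconcentration factor, or comparing them against a therapeutic range as the study does, indicates whether the technique is fit for purpose.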
G Archana; Dhodapkar, Rita; Kumar, Anupama
2016-09-01
The present study reports a precise and simple offline solid-phase extraction (SPE) method coupled with reversed-phase high-performance liquid chromatography (RP-HPLC) for the simultaneous determination of five representative and commonly present pharmaceuticals and personal care products (PPCPs), a new class of emerging pollutants in the aquatic environment. The target analytes, ciprofloxacin, acetaminophen, caffeine, benzophenone and irgasan, were separated by a simple HPLC method. The column used was a reversed-phase C18 column, and the mobile phase was 1% acetic acid and methanol (20:80 v/v) under isocratic conditions at a flow rate of 1 mL min(-1). The analytes were separated and detected within 15 min using a photodiode array detector (PDA). Linear calibration curves were obtained with correlation coefficients of 0.98-0.99. The limit of detection (LOD), limit of quantification (LOQ), precision, accuracy and ruggedness demonstrated the reproducibility, specificity and sensitivity of the developed method. Prior to the analysis, SPE was performed using a C18 cartridge to preconcentrate the target analytes from the environmental water samples. The developed method was applied to evaluate and fingerprint PPCPs in sewage collected from a residential engineering college campus, in polluted water bodies such as the Nag and Pili rivers, and in the influent and effluent of a sewage treatment plant (STP) at Nagpur city, in the peak summer season. This method is useful for estimating pollutants present at microgram levels in surface water bodies and treated sewage, complementing the nanogram-level detection achievable with mass spectrometry (MS) detectors.