Adjusting slash pine growth and yield for silvicultural treatments
Stephen R. Logan; Barry D. Shiver
2006-01-01
With intensive silvicultural treatments such as fertilization and competition control now commonplace in today's slash pine (Pinus elliottii Engelm.) plantations, a method to adjust current growth and yield models is required to accurately account for yield increases due to these practices. Some commonly used ad-hoc methods, such as raising site...
Tenon, Mathieu; Feuillère, Nicolas; Roller, Marc; Birtić, Simona
2017-04-15
Yucca GRAS-labelled saponins have been and are increasingly used in the food/feed, pharmaceutical, and cosmetic industries. Existing techniques for Yucca steroidal saponin quantification are either inaccurate and misleading, or accurate but time-consuming and cost-prohibitive. The method reported here addresses all of the above challenges. The HPLC/ELSD technique is an accurate and reliable method that yields results of appropriate repeatability and reproducibility, and it neither over- nor under-estimates levels of steroidal saponins. The HPLC/ELSD method does not require a pure standard for every individual saponin in order to quantify the group of steroidal saponins. The method is a time- and cost-effective technique suitable for routine industrial analyses. It yields saponin fingerprints specific to the plant species; because it can distinguish saponin profiles from taxonomically distant species, it can unravel plant adulteration issues.
Application of a rising plate meter to estimate forage yield on dairy farms in Pennsylvania
USDA-ARS's Scientific Manuscript database
Accurately assessing pasture forage yield is necessary for producers who want to budget feed expenses and make informed pasture management decisions. Clipping and weighing forage from a known area is a direct method to measure pasture forage yield, however it is time consuming. The rising plate mete...
ROI on yield data analysis systems through a business process management strategy
NASA Astrophysics Data System (ADS)
Rehani, Manu; Strader, Nathan; Hanson, Jeff
2005-05-01
The overriding motivation for yield engineering is profitability, which is achieved through the application of yield management. The first application is to continually reduce waste in the form of yield loss. New products, new technologies, and the dynamic state of the process and equipment keep introducing new ways to cause yield loss; in response, yield management efforts have to continually come up with new solutions to minimize it. The second application of yield engineering is to aid in accurate product pricing, achieved by predicting the future results of the yield engineering effort. The more accurate the yield prediction, the more accurate the wafer start volume, and the more accurate the wafer pricing. Another aspect of yield prediction pertains to gauging the impact of a yield problem and predicting how long it will last; the ability to predict such impacts again feeds into wafer start calculations and wafer pricing. The question, then, is: if the stakes on yield management are so high, why are most yield management efforts run like science and engineering projects and less like manufacturing? In the eighties, manufacturing put the theory of constraints into practice and put a premium on stability and predictability in manufacturing activities; why can't the same be done for yield management activities? This line of introspection led us to define and implement a business process to manage the yield engineering activities. We analyzed the best known methods (BKM) and deployed a workflow tool to make them the standard operating procedure (SOP) for yield management. We present a case study in deploying a Business Process Management solution for semiconductor yield engineering in a high-mix ASIC environment, including a description of the situation prior to deployment, a window into the development process, and a valuation of the benefits.
Accuracy of least-squares methods for the Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Bochev, Pavel B.; Gunzburger, Max D.
1993-01-01
Recently there has been substantial interest in least-squares finite element methods for velocity-vorticity-pressure formulations of the incompressible Navier-Stokes equations. The main cause for this interest is that algorithms for the resulting discrete equations can be devised which require the solution of only symmetric, positive definite systems of algebraic equations. On the other hand, it is well documented that methods using the vorticity as a primary variable often yield very poor approximations. Thus, here we study the accuracy of these methods through a series of computational experiments, and also comment on theoretical error estimates. Although standard techniques for deriving error estimates fail for these formulations, the computational evidence suggests that the methods are at least nearly optimally accurate. Thus, in addition to the desirable matrix properties yielded by least-squares methods, one also obtains accurate approximations.
Minimum number of measurements for evaluating soursop (Annona muricata L.) yield.
Sánchez, C F B; Teodoro, P E; Londoño, S; Silva, L A; Peixoto, L A; Bhering, L L
2017-05-31
Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of soursop (Annona muricata L.) genotypes based on fruit yield. Sixteen measurements of fruit yield from 71 soursop genotypes were carried out between 2000 and 2016. In order to estimate r with the best accuracy, four procedures were used: analysis of variance, principal component analysis based on the correlation matrix, principal component analysis based on the phenotypic variance and covariance matrix, and structural analysis based on the correlation matrix. The minimum number of measurements needed to predict the actual value of individuals was estimated. Principal component analysis using the phenotypic variance and covariance matrix provided the most accurate estimates of both r and the number of measurements required for accurate evaluation of fruit yield in soursop. Our results indicate that selection of soursop genotypes with high fruit yield can be performed based on the third and fourth measurements in the early years and/or based on the eighth and ninth measurements at more advanced stages.
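The link between the repeatability coefficient and the minimum number of measurements in studies like this one typically follows the standard Spearman-Brown-type relation. A minimal sketch (not the authors' code; the example r value is hypothetical):

```python
# Standard repeatability relation: with repeatability coefficient r, the
# determination obtained from m measurements is R2_m = m*r / (1 + r*(m - 1)).
# Solving for m at a target R2 gives m = R2*(1 - r) / (r*(1 - R2)).
import math

def min_measurements(r, target_r2=0.90):
    """Minimum number of measurements for a desired coefficient of determination."""
    return math.ceil(target_r2 * (1 - r) / (r * (1 - target_r2)))

print(min_measurements(r=0.55))  # hypothetical r = 0.55 -> 8 measurements
```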
NASA Technical Reports Server (NTRS)
Guruswamy, G. P.; Goorjian, P. M.
1984-01-01
An efficient coordinate transformation technique is presented for constructing grids for unsteady, transonic aerodynamic computations for delta-type wings. The original shearing transformation yielded computations that were numerically unstable and this paper discusses the sources of those instabilities. The new shearing transformation yields computations that are stable, fast, and accurate. Comparisons of those two methods are shown for the flow over the F5 wing that demonstrate the new stability. Also, comparisons are made with experimental data that demonstrate the accuracy of the new method. The computations were made by using a time-accurate, finite-difference, alternating-direction-implicit (ADI) algorithm for the transonic small-disturbance potential equation.
Toward more accurate loss tangent measurements in reentrant cavities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moyer, R. D.
1980-05-01
Karpova has described an absolute method for measurement of the dielectric properties of a solid in a coaxial reentrant cavity. The cavity resonance equation yields very accurate results for dielectric constants; however, only approximate expressions were presented for the loss tangent. This report presents more exact expressions for that quantity and summarizes some experimental results.
Xenon Defects in Uranium Dioxide From First Principles and Interatomic Potentials
NASA Astrophysics Data System (ADS)
Thompson, Alexander
In this thesis, we examine the defect energetics and migration energies of xenon atoms in uranium dioxide (UO2) from first principles and interatomic potentials. We also parameterize new, accurate interatomic potentials for xenon and uranium dioxide. To achieve accurate energetics and provide a foundation for subsequent calculations, we address difficulties in finding consistent energetics within Hubbard U corrected density functional theory (DFT+U). We propose a method of slowly ramping the U parameter in order to guide the calculation into low-energy orbital occupations, and we find that this method is successful for a variety of materials. We then examine the defect energetics of several noble gas atoms in UO2 for several different defect sites. We show that the energy to incorporate large noble gas atoms into interstitial sites is so large that it is energetically favorable for a Schottky defect cluster to be created to relieve the strain. We find that, thermodynamically, xenon will rarely ever be in the interstitial site of UO2. To study larger defects associated with the migration of xenon in UO2, we turn to interatomic potentials. We benchmark several previously published potentials against DFT+U defect energetics and migration barriers. Using a combination of molecular dynamics and nudged elastic band calculations, we find a new, low-energy migration pathway for xenon in UO2. We create a new potential for xenon that yields accurate defect energetics, fit with a method we call Iterative Potential Refinement, which parameterizes potentials to first-principles data via a genetic algorithm. The potential finds accurate energetics for defects with relatively low amounts of strain (xenon in defect clusters); finding accurate energetics for these low-strain defects is important because they essentially represent small xenon bubbles. Finally, we parameterize a new UO2 potential that simultaneously yields accurate vibrational properties and defect energetics, both important properties for UO2 because of the high-temperature, defect-rich reactor environment. Previously published potentials could only yield accurate defect energetics or accurate phonons, but never both.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyar, M. Darby; McCanta, Molly; Breves, Elly
2016-03-01
Pre-edge features in the K absorption edge of X-ray absorption spectra are commonly used to predict Fe3+ valence state in silicate glasses. However, this study shows that using the entire spectral region from the pre-edge into the extended X-ray absorption fine-structure region provides more accurate results when combined with multivariate analysis techniques. The least absolute shrinkage and selection operator (lasso) regression technique yields %Fe3+ values that are accurate to ±3.6% absolute when the full spectral region is employed. This method can be used across a broad range of glass compositions, is easily automated, and is demonstrated to yield accurate results from different synchrotrons. It will enable future studies involving X-ray mapping of redox gradients on standard thin sections at 1 × 1 μm pixel sizes.
Simple Test Functions in Meshless Local Petrov-Galerkin Methods
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.
2016-01-01
Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions but that use a simple linear test function were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. These two methods were tested on various patch test problems. Both methods passed the patch tests successfully. Then the methods were applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing efforts as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function method produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive as the method is simple, accurate, and robust.
A Streaming Language Implementation of the Discontinuous Galerkin Method
NASA Technical Reports Server (NTRS)
Barth, Timothy; Knight, Timothy
2005-01-01
We present a Brook streaming language implementation of the 3-D discontinuous Galerkin method for compressible fluid flow on tetrahedral meshes. Efficient implementation of the discontinuous Galerkin method using the streaming model of computation introduces several algorithmic design challenges. Using a cycle-accurate simulator, performance characteristics have been obtained for the Stanford Merrimac stream processor. The current Merrimac design achieves 128 Gflops per chip and the desktop board is populated with 16 chips yielding a peak performance of 2 Teraflops. Total parts cost for the desktop board is less than $20K. Current cycle-accurate simulations for discretizations of the 3-D compressible flow equations yield approximately 40-50% of the peak performance of the Merrimac streaming processor chip. Ongoing work includes the assessment of the performance of the same algorithm on the 2 Teraflop desktop board with a target goal of achieving 1 Teraflop performance.
Constitutive Modeling of Piezoelectric Polymer Composites
NASA Technical Reports Server (NTRS)
Odegard, Gregory M.; Gates, Tom (Technical Monitor)
2003-01-01
A new modeling approach is proposed for predicting the bulk electromechanical properties of piezoelectric composites. The proposed model offers the same level of convenience as the well-known Mori-Tanaka method. In addition, it is shown to yield predicted properties that are, in most cases, more accurate than, or equally as accurate as, those of the Mori-Tanaka scheme. In particular, the proposed method is used to determine the electromechanical properties of four piezoelectric polymer composite materials as a function of inclusion volume fraction. The predicted properties are compared to those calculated using the Mori-Tanaka and finite element methods.
Estimating tar and nicotine exposure: human smoking versus machine generated smoke yields.
St Charles, F K; Kabbani, A A; Borgerding, M F
2010-02-01
Objective: determine human smoked (HS) cigarette yields of tar and nicotine for smokers using their own brand in their everyday environment. A robust filter analysis method was used to estimate the tar and nicotine yields for 784 subjects. Seventeen brands were chosen to represent a wide range of styles: 85 and 100 mm lengths; menthol and non-menthol; 17, 23, and 25 mm circumference; with tar yields [Federal Trade Commission (FTC) method] ranging from 1 to 18 mg. Tar bands chosen corresponded to yields of 1-3 mg, 4-6 mg, 7-12 mg, and 13+ mg. A significant difference (p<0.0001) in HS yields of tar and nicotine between tar bands was found. Machine-smoked yields were reasonable predictors of the HS yields for groups of subjects, but the relationship was neither exact nor linear. Neither the FTC, the Massachusetts (MA), nor the Canadian Intensive (CI) machine-smoking methods accurately reflect the HS yields across all brands. The FTC method was closest for the 7-12 mg and 13+ mg products and the MA method was closest for the 1-3 mg products. The HS yields for the 4-6 mg products were approximately midway between the FTC and the MA yields. HS nicotine yields corresponded well with published urinary and plasma nicotine biomarker studies.
Improving Seasonal Crop Monitoring and Forecasting for Soybean and Corn in Iowa
NASA Astrophysics Data System (ADS)
Togliatti, K.; Archontoulis, S.; Dietzel, R.; VanLoocke, A.
2016-12-01
Accurately forecasting crop yield in advance of harvest could greatly benefit farmers; however, few evaluations have been conducted to determine the effectiveness of forecasting methods. We tested one such method that used short-term weather forecasting from the Weather Research and Forecasting Model (WRF) to predict in-season weather variables, such as maximum and minimum temperature, precipitation, and radiation, at 4 different forecast lengths (2 weeks, 1 week, 3 days, and 0 days). This forecasted weather data, along with current and historic (previous 35 years) data from the Iowa Environmental Mesonet, was combined to drive Agricultural Production Systems sIMulator (APSIM) simulations to forecast soybean and corn yields in 2015 and 2016. The goal of this study is to find the forecast length that reduces the variability of simulated yield predictions while also increasing the accuracy of those predictions. APSIM simulations of crop variables were evaluated against bi-weekly field measurements of phenology, biomass, and leaf area index from early- and late-planted soybean plots located at the Agricultural Engineering and Agronomy Research Farm in central Iowa as well as the Northwest Research Farm in northwestern Iowa. WRF model predictions were evaluated against observed weather data collected at the experimental fields. Maximum temperature was the most accurately predicted variable, followed by minimum temperature and radiation; precipitation was least accurate according to RMSE values and the number of days forecasted within a 20% error of the observed weather. Our analysis indicated that for the majority of months in the growing season the 3 day forecast performed best, the 1 week forecast came in second, and the 2 week forecast was the least accurate. Preliminary results for yield indicate that the 2 week forecast is the least variable of the forecast lengths; however, it is also the least accurate. The 3 day and 1 week forecasts have better accuracy, with an increase in variability.
High-speed engine/component performance assessment using exergy and thrust-based methods
NASA Technical Reports Server (NTRS)
Riggins, D. W.
1996-01-01
This investigation summarizes a comparative study of two high-speed engine performance assessment techniques based on energy (available work) and thrust-potential (thrust availability). Simple flow-fields utilizing Rayleigh heat addition and one-dimensional flow with friction are used to demonstrate the fundamental inability of conventional energy techniques to predict engine component performance, aid in component design, or accurately assess flow losses. The use of the thrust-based method on these same examples demonstrates its ability to yield useful information in all these categories. Energy and thrust are related and discussed from the standpoint of their fundamental thermodynamic and fluid dynamic definitions in order to explain the differences in information obtained using the two methods. The conventional definition of energy is shown to include work which is inherently unavailable to an aerospace Brayton engine. An engine-based energy is then developed which accurately accounts for this inherently unavailable work; performance parameters based on this quantity are then shown to yield design and loss information equivalent to the thrust-based method.
Spectral estimates of intercepted solar radiation by corn and soybean canopies
NASA Technical Reports Server (NTRS)
Gallo, K. P.; Brooks, C. C.; Daughtry, C. S. T.; Bauer, M. E.; Vanderbilt, V. C.
1982-01-01
Attention is given to the development of methods for combining spectral and meteorological data in crop yield models which are capable of providing accurate estimates of crop condition and yields throughout the growing season. The present investigation is concerned with initial tests of these concepts using spectral and agronomic data acquired in controlled experiments. The data were acquired at the Purdue University Agronomy Farm, 10 km northwest of West Lafayette, Indiana. Data were obtained throughout several growing seasons for corn and soybeans. Five methods or models for predicting yields were examined. On the basis of the obtained results, it is concluded that estimating intercepted solar radiation using spectral data is a viable approach for merging spectral and meteorological data in crop yield models.
Valero, Enrique; Adán, Antonio; Cerrada, Carlos
2012-01-01
In this paper we present a method that automatically yields Boundary Representation Models (B-rep) for indoors after processing dense point clouds collected by laser scanners from key locations through an existing facility. Our objective is particularly focused on providing single models which contain the shape, location and relationship of primitive structural elements of inhabited scenarios such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoors in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data coming from laser scanners yielding promising results. We have deeply evaluated the results by analyzing how reliably these elements can be detected and how accurately they are modeled.
Ren, Jianqiang; Chen, Zhongxin; Tang, Huajun
2006-12-01
Taking Jining City of Shandong Province, one of the most important winter wheat production regions in Huanghuaihai Plain as an example, the winter wheat yield was estimated by using the 250 m MODIS-NDVI data smoothed by Savitzky-Golay filter. The NDVI values between 0.20 and 0.80 were selected, and the sum of NDVI value for each county was calculated to build its relation with winter wheat yield. By using stepwise regression method, the linear regression model between NDVI and winter wheat yield was established, with the precision validated by the ground survey data. The results showed that the relative error of predicted yield was between -3.6% and 3.9%, suggesting that the method was relatively accurate and feasible.
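The workflow described (mask NDVI to [0.20, 0.80], sum per county, regress yield on the sum) is simple enough to sketch. A minimal illustration with synthetic arrays, not the paper's MODIS data:

```python
# Sketch of the NDVI-sum yield regression; all values below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
ndvi = rng.uniform(0.1, 0.9, size=(12, 20))   # 12 counties x 20 NDVI composites

# Keep only NDVI in [0.20, 0.80], then sum over the season per county.
masked = np.where((ndvi >= 0.20) & (ndvi <= 0.80), ndvi, 0.0)
ndvi_sum = masked.sum(axis=1)

# Synthetic "observed" yield loosely tied to the NDVI sum (t/ha).
obs_yield = 0.4 * ndvi_sum + 0.2 * rng.standard_normal(12)

b1, b0 = np.polyfit(ndvi_sum, obs_yield, deg=1)   # yield ~ b0 + b1 * NDVI sum
pred = b0 + b1 * ndvi_sum
print((pred - obs_yield) / obs_yield)             # relative errors per county
```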
Wang, Yan-Bin; Hu, Yu-Zhong; Li, Wen-Le; Zhang, Wei-Song; Zhou, Feng; Luo, Zhi
2014-10-01
In the present paper, based on a fast near-infrared evaluation technique, a method to predict the yield of the atmospheric and vacuum line was developed, combined with H/CAMS software. Firstly, a near-infrared (NIR) spectroscopy method for rapidly determining the true boiling point of crude oil was developed. With a commercially available crude oil spectroscopy database and test experiments from Guangxi Petrochemical Company, a calibration model was established, using a topological method for the calibration. The model can be employed to predict the true boiling point of crude oil. Secondly, the true boiling point based on the NIR rapid assay was converted to the side-cut product yield of the atmospheric/vacuum distillation unit by the H/CAMS software. The predicted and actual yields of the distillation products (naphtha, diesel, wax, and residual oil) were compared over a 7-month period. The results showed that the NIR rapid crude assay can predict the side-cut product yield accurately. The NIR analytic method for predicting yield has the advantages of fast analysis, reliable results, and easy online operation, and it can provide elementary data for refinery planning optimization and crude oil blending.
Stokes, Ashley M.; Semmineh, Natenael; Quarles, C. Chad
2015-01-01
Purpose A combined biophysical- and pharmacokinetic-based method is proposed to separate, quantify, and correct for both T1 and T2* leakage effects using dual-echo DSC acquisitions to provide more accurate hemodynamic measures, as validated by a reference intravascular contrast agent (CA). Methods Dual-echo DSC-MRI data were acquired in two rodent glioma models. The T1 leakage effects were removed and also quantified in order to subsequently correct for the remaining T2* leakage effects. Pharmacokinetic, biophysical, and combined biophysical and pharmacokinetic models were used to obtain corrected cerebral blood volume (CBV) and cerebral blood flow (CBF), and these were compared with CBV and CBF from an intravascular CA. Results T1-corrected CBV was significantly overestimated compared to MION CBV, while T1+T2*-correction yielded CBV values closer to the reference values. The pharmacokinetic and simplified biophysical methods showed similar results and underestimated CBV in tumors exhibiting strong T2* leakage effects. The combined method was effective for correcting T1 and T2* leakage effects across tumor types. Conclusions Correcting for both T1 and T2* leakage effects yielded more accurate measures of CBV. The combined correction method yields more reliable CBV measures than either correction method alone, but for certain brain tumor types (e.g., gliomas) the simplified biophysical method may provide a robust and computationally efficient alternative.
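Dual-echo acquisitions remove T1 leakage effects because the R2* estimate depends only on the ratio of the two echo signals. A minimal sketch of that standard dual-echo computation (not the authors' full correction pipeline; echo times and signals below are hypothetical):

```python
# With S(TE) = S0 * exp(-TE * R2*), the ratio S(TE1)/S(TE2) cancels S0 (and
# hence T1-driven changes in S0): R2*(t) = ln(S1/S2) / (TE2 - TE1).
import numpy as np

TE1, TE2 = 0.007, 0.030            # echo times in seconds (assumed values)
S1 = np.array([100., 95., 60., 70.])   # signal at TE1 over time (synthetic)
S2 = np.array([80., 70., 25., 35.])    # signal at TE2 over time (synthetic)

r2star = np.log(S1 / S2) / (TE2 - TE1)   # T1-insensitive R2*(t), 1/s
delta_r2star = r2star - r2star[0]        # change from pre-bolus baseline
print(delta_r2star)
```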
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moridis, G.
1992-03-01
The Laplace Transform Boundary Element (LTBE) method is a recently introduced numerical method that has been used for the solution of diffusion-type PDEs. It completely eliminates the time dependency of the problem and the need for time discretization, yielding solutions numerical in space and semi-analytical in time. In LTBE, solutions are obtained in the Laplace space and are then inverted numerically to yield the solution in time. The Stehfest and the DeHoog formulations of LTBE, based on two different inversion algorithms, are investigated. Both formulations produce comparable, extremely accurate solutions.
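The Stehfest formulation mentioned above rests on the Gaver-Stehfest numerical inversion of a Laplace-space solution F(s). A minimal self-contained sketch of that inversion (not the LTBE code itself), verified here on F(s) = 1/(s+1), whose exact inverse is exp(-t):

```python
import math

def stehfest_weights(N):
    """Gaver-Stehfest weights V_k for even N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s)."""
    a = math.log(2.0) / t
    V = stehfest_weights(N)
    return a * sum(V[k - 1] * F(k * a) for k in range(1, N + 1))

print(invert(lambda s: 1.0 / (s + 1.0), t=1.0))  # ~0.3679 = exp(-1)
```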
ERIC Educational Resources Information Center
Kubiak, Sheryl Pimlott; Nnawulezi, Nkiru; Karim, Nidal; Sullivan, Cris M.; Beeble, Marisa L.
2012-01-01
Definitions vary on what constitutes sexual and/or physical abuse, and scholars have debated on which methods might yield the most accurate response rates for capturing this sensitive information. Although some studies suggest respondents prefer methods that provide anonymity, previous studies have not utilized high-risk or stigmatized…
Real-time yield estimation based on deep learning
NASA Astrophysics Data System (ADS)
Rahnemoonfar, Maryam; Sheppard, Clay
2017-05-01
Crop yield estimation is an important task in product management and marketing. Accurate yield prediction helps farmers make better decisions on cultivation practices, plant disease prevention, and the size of the harvest labor force. The current practice of yield estimation based on manual counting of fruits is a very time-consuming and expensive process, and it is not practical for big fields. Robotic systems, including Unmanned Aerial Vehicles (UAV) and Unmanned Ground Vehicles (UGV), provide an efficient, cost-effective, flexible, and scalable solution for product management and yield prediction. Recently, huge amounts of data have been gathered from agricultural fields, but efficient analysis of those data is still a challenging task. Computer vision approaches currently face several challenges in automatic counting of fruits or flowers, including occlusion caused by leaves, branches, or other fruits, variance in natural illumination, and scale. In this paper, a novel deep convolutional network algorithm was developed to facilitate accurate yield prediction and automatic counting of fruits and vegetables in images. Our method is robust to occlusion, shadow, uneven illumination, and scale. Experimental results in comparison to the state-of-the-art show the effectiveness of our algorithm.
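A count can be regressed directly from an image with a small convolutional network. A minimal sketch of that idea (not the authors' architecture, which the abstract does not specify):

```python
# Toy counting-by-regression CNN: convolutional features pooled globally,
# then a linear head producing one number (the predicted fruit count).
import torch
import torch.nn as nn

class CountNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(64, 1)   # regression target: count per image

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = CountNet()
img = torch.rand(1, 3, 128, 128)   # synthetic RGB image batch
print(model(img).shape)            # torch.Size([1, 1])
```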
Corrêa, A M; Pereira, M I S; de Abreu, H K A; Sharon, T; de Melo, C L P; Ito, M A; Teodoro, P E; Bhering, L L
2016-10-17
The common bean, Phaseolus vulgaris, is predominantly grown on small farms and lacks accurate genotype recommendations for specific micro-regions in Brazil, which contributes to a low national average yield. The aim of this study was to use the harmonic mean of the relative performance of genetic values (HMRPGV) and the centroid method for selecting common bean genotypes with high yield, adaptability, and stability for the Cerrado/Pantanal ecotone region in Brazil. We evaluated 11 common bean genotypes in three trials carried out in the dry season in Aquidauana in 2013, 2014, and 2015. A likelihood ratio test detected a significant genotype × year interaction, contributing 54% of the total phenotypic variation in grain yield. The three genotypes selected by the joint analysis of genotypic values in all years (Carioca Precoce, BRS Notável, and CNFC 15875) were the same as those recommended by the HMRPGV method. Using the centroid method, genotypes BRS Notável and CNFC 15875 were considered ideal genotypes based on their high stability in unfavorable environments and high responsiveness to environmental improvement. We identified a high association between the adaptability and stability methods used in this study; however, the centroid method provided a more accurate and precise recommendation of the behavior of the evaluated genotypes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malik, Afshan N., E-mail: afshan.malik@kcl.ac.uk; Shahni, Rojeen; Rodriguez-de-Ledesma, Ana
2011-08-19
Highlights: mitochondrial dysfunction is central to many diseases of oxidative stress; 95% of the mitochondrial genome is duplicated in the nuclear genome; dilution of untreated genomic DNA leads to dilution bias; unique primers and template pretreatment are needed to accurately measure mitochondrial DNA content. Abstract: Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved, and we have identified that current methods have at least one of the following three problems: (1) as much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA, which are repetitive and/or highly variable, for qPCR of the nuclear genome leads to errors; and (3) the size difference of mitochondrial and nuclear genomes causes a 'dilution bias' when template DNA is diluted. We describe a PCR-based method using unique regions in the human mitochondrial genome not duplicated in the nuclear genome, a unique single-copy region in the nuclear genome, and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.
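Once clean Ct values are available for a mitochondrial and a single-copy nuclear amplicon, the relative Mt/N calculation itself is a one-liner. A minimal sketch, assuming roughly 100% amplification efficiency for both amplicons and a diploid nuclear reference (the Ct values are hypothetical):

```python
# Relative copy number from qPCR Ct values: mtDNA copies per nuclear-gene
# copy is 2^(Ct_nuclear - Ct_mito); multiplying by 2 converts to copies per
# diploid cell. This assumes equal, ideal amplification efficiencies.
def mtdna_copies_per_cell(ct_mito, ct_nuclear):
    per_nuclear_copy = 2.0 ** (ct_nuclear - ct_mito)
    return 2.0 * per_nuclear_copy

print(mtdna_copies_per_cell(ct_mito=18.2, ct_nuclear=27.5))  # ~1260 copies/cell
```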
Payne, Courtney E; Wolfrum, Edward J
2015-01-01
Obtaining accurate chemical composition and reactivity (measures of carbohydrate release and yield) information for biomass feedstocks in a timely manner is necessary for the commercialization of biofuels. Our objective was to use near-infrared (NIR) spectroscopy and partial least squares (PLS) multivariate analysis to develop calibration models to predict the feedstock composition and the release and yield of soluble carbohydrates generated by a bench-scale dilute acid pretreatment and enzymatic hydrolysis assay. Major feedstocks included in the calibration models are corn stover, sorghum, switchgrass, perennial cool season grasses, rice straw, and miscanthus. We present individual model statistics to demonstrate model performance and validation samples to more accurately measure predictive quality of the models. The PLS-2 model for composition predicts glucan, xylan, lignin, and ash (wt%) with uncertainties similar to primary measurement methods. A PLS-2 model was developed to predict glucose and xylose release following pretreatment and enzymatic hydrolysis. An additional PLS-2 model was developed to predict glucan and xylan yield. PLS-1 models were developed to predict the sum of glucose/glucan and xylose/xylan for release and yield (grams per gram). The release and yield models have higher uncertainties than the primary methods used to develop the models. It is possible to build effective multispecies feedstock models for composition, as well as carbohydrate release and yield. The model for composition is useful for predicting glucan, xylan, lignin, and ash with good uncertainties. The release and yield models have higher uncertainties; however, these models are useful for rapidly screening sample populations to identify unusual samples.
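A PLS-2 calibration of the kind described maps full NIR spectra to several constituents at once. A minimal sketch with synthetic data (not the NREL models or data):

```python
# PLS-2 calibration sketch: predict four constituents (wt%) from spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.random((120, 700))       # 120 samples x 700 NIR wavelengths (synthetic)
Y = rng.random((120, 4)) * 40    # glucan, xylan, lignin, ash in wt% (synthetic)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
pls = PLSRegression(n_components=10).fit(X_tr, Y_tr)   # one model, 4 targets
rmse = np.sqrt(((pls.predict(X_te) - Y_te) ** 2).mean(axis=0))
print(rmse)   # per-constituent prediction uncertainty on held-out samples
```

A PLS-1 model, as used for the summed release and yield targets, is the same call with a single-column Y.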
Progress Toward Accurate Measurements of Power Consumptions of DBD Plasma Actuators
NASA Technical Reports Server (NTRS)
Ashpis, David E.; Laun, Matthew C.; Griebeler, Elmer L.
2012-01-01
The accurate measurement of power consumption by Dielectric Barrier Discharge (DBD) plasma actuators is a challenge due to the characteristics of the actuator current signal. Micro-discharges generate high-amplitude, high-frequency current spike transients superimposed on a low-amplitude, low-frequency current. We have used a high-speed digital oscilloscope to measure the actuator power consumption using the Shunt Resistor method and the Monitor Capacitor method. The measurements were performed simultaneously and compared to each other in a time-accurate manner. It was found that low signal-to-noise ratios of the oscilloscopes used, in combination with the high dynamic range of the current spikes, make the Shunt Resistor method inaccurate. An innovative, nonlinear signal compression circuit was applied to the actuator current signal and yielded excellent agreement between the two methods. The paper describes the issues and challenges associated with performing accurate power measurements. It provides insights into the two methods including new insight into the Lissajous curve of the Monitor Capacitor method. Extension to a broad range of parameters and further development of the compression hardware will be performed in future work.
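In the Monitor Capacitor method, the energy dissipated per cycle equals the area enclosed by the charge-voltage Lissajous figure, and power is that area times the excitation frequency. A minimal sketch of the calculation with synthetic waveforms (the amplitudes, frequency, and phase below are placeholders, not measured values):

```python
import numpy as np

f = 2000.0                                    # excitation frequency, Hz
t = np.linspace(0, 1 / f, 1000, endpoint=False)
v = 10e3 * np.sin(2 * np.pi * f * t)          # applied voltage over one cycle
q = 2e-8 * np.sin(2 * np.pi * f * t - 0.3)    # monitor-capacitor charge

# Shoelace formula for the area enclosed by the closed (v, q) loop;
# the loop area is the energy dissipated per cycle (joules).
area = 0.5 * abs(np.dot(v, np.roll(q, -1)) - np.dot(q, np.roll(v, -1)))
print("power (W):", f * area)
```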
Analytical Wave Functions for Ultracold Collisions.
NASA Astrophysics Data System (ADS)
Cavagnero, M. J.
1998-05-01
Secular perturbation theory of long-range interactions [M. J. Cavagnero, Phys. Rev. A 50, 2841 (1994)] has been generalized to yield accurate wave functions for near-threshold processes, including low-energy scattering processes of interest at ultracold temperatures. In particular, solutions of Schrödinger's equation have been obtained for motion in the combined r^-6, r^-8, and r^-10 potentials appropriate for describing an ultracold collision of two neutral ground-state atoms. Scattering lengths and effective ranges appropriate to such potentials are readily calculated at distances comparable to the LeRoy radius, where exchange forces can be neglected, thereby eliminating the need to integrate Schrödinger's equation to large internuclear distances. Our method yields accurate base pair solutions well beyond the energy range of effective range theories, making possible the application of multichannel quantum defect theory [MQDT] and R-matrix methods to the study of ultracold collisions.
Boric Acid in Kjeldahl Analysis
ERIC Educational Resources Information Center
Cruz, Gregorio
2013-01-01
The use of boric acid in the Kjeldahl determination of nitrogen is a variant of the original method widely applied in many laboratories all over the world. Its use is recommended by control organizations such as ISO, IDF, and EPA because it yields reliable and accurate results. However, the chemical principles the method is based on are not…
Trujillo-Esquivel, Elías; Franco, Bernardo; Flores-Martínez, Alberto; Ponce-Noyola, Patricia; Mora-Montes, Héctor M
2016-08-02
Analysis of gene expression is a common research tool to study networks controlling gene expression, the role of genes with unknown function, and environmentally induced responses of organisms. Most of the analytical tools used to analyze gene expression rely on accurate cDNA synthesis and quantification to obtain reproducible and quantifiable results. Thus far, most commercial kits for isolation and purification of cDNA target double-stranded molecules, which do not accurately represent the abundance of transcripts. In the present report, we provide a simple and fast method to purify single-stranded cDNA, exhibiting high purity and yield. This method is based on the treatment with RNase H and RNase A after cDNA synthesis, followed by separation in silica spin-columns and ethanol precipitation. In addition, our method avoids the use of DNase I to eliminate genomic DNA from RNA preparations, which improves cDNA yield. As a case report, our method proved to be useful in the purification of single-stranded cDNA from the pathogenic fungus Sporothrix schenckii.
Uechi, Ken; Asakura, Keiko; Ri, Yui; Masayasu, Shizuko; Sasaki, Satoshi
2016-02-01
Several estimation methods for 24-h sodium excretion using spot urine samples have been reported, but accurate estimation at the individual level remains difficult. We aimed to clarify the most accurate method of estimating 24-h sodium excretion with different numbers of available spot urine samples. A total of 370 participants from throughout Japan collected multiple 24-h urine and spot urine samples independently. Participants were allocated randomly into a development and a validation dataset. Two estimation methods were established in the development dataset using the two 24-h sodium excretion samples as reference: the 'simple mean method' estimates excretion by multiplying the sodium-creatinine ratio by predicted 24-h creatinine excretion, whereas the 'regression method' employs linear regression analysis. The accuracy of the two methods was examined by comparing the estimated means and concordance correlation coefficients (CCC) in the validation dataset. Mean sodium excretion by the simple mean method with three spot urine samples was closest to that by 24-h collection (difference: -1.62 mmol/day). CCC with the simple mean method increased with the number of spot urine samples: 0.20, 0.31, and 0.42 using one, two, and three samples, respectively. This method with three spot urine samples yielded a higher CCC than the regression method (0.40). When only one spot urine sample was available for each study participant, CCC was higher with the regression method (0.36). The simple mean method with three spot urine samples yielded the most accurate estimates of sodium excretion. When only one spot urine sample was available, the regression method was preferable.
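The simple mean method as described reduces to a short calculation. A minimal sketch (the spot values and the predicted creatinine excretion below are hypothetical; the study derives the latter from a prediction equation):

```python
# 'Simple mean method': average the spot Na/Cr ratios across samples, then
# scale by predicted 24-h creatinine excretion to get mmol sodium per day.
def estimate_24h_sodium(spot_na_mmol_l, spot_cr_mmol_l, predicted_cr_mmol_day):
    ratios = [na / cr for na, cr in zip(spot_na_mmol_l, spot_cr_mmol_l)]
    mean_ratio = sum(ratios) / len(ratios)
    return mean_ratio * predicted_cr_mmol_day

# Three spot samples (hypothetical concentrations):
print(estimate_24h_sodium([120, 95, 140], [9.0, 7.5, 11.0], 10.5))  # ~136 mmol/day
```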
Analytical study to define a helicopter stability derivative extraction method, volume 1
NASA Technical Reports Server (NTRS)
Molusis, J. A.
1973-01-01
A method is developed for extracting six-degree-of-freedom stability and control derivatives from helicopter flight data. Different combinations of filtering and derivative estimation are investigated and used with a Bayesian approach for derivative identification. The combination of filtering and estimation found to yield the most accurate time response match to flight test data is determined and applied to CH-53A and CH-54B flight data. The method found to be most accurate consists of (1) filtering flight test data with a digital filter, followed by an extended Kalman filter, (2) identifying a derivative estimate with a least squares estimator, and (3) obtaining derivatives with the Bayesian derivative extraction method.
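The least-squares step (2) can be illustrated in isolation: given filtered state histories x(t) and control inputs u(t), the derivatives are the entries of [A | B] in x_dot = A x + B u. A minimal sketch with synthetic data (the full method additionally applies the filtering and Bayesian stages):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, T = 6, 2, 500                      # 6 states, 2 controls, 500 samples
X = rng.standard_normal((T, n))          # filtered states (synthetic)
U = rng.standard_normal((T, m))          # control inputs (synthetic)
A_true = -0.1 * np.eye(n)
B_true = rng.standard_normal((n, m))
Xdot = X @ A_true.T + U @ B_true.T + 0.01 * rng.standard_normal((T, n))

# Solve Xdot = [X U] @ [A^T; B^T] in the least-squares sense.
theta, *_ = np.linalg.lstsq(np.hstack([X, U]), Xdot, rcond=None)
A_hat, B_hat = theta[:n].T, theta[n:].T
print(np.max(np.abs(A_hat - A_true)))    # small recovery error
```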
NASA Astrophysics Data System (ADS)
Peng, Yahui; Ma, Xiao; Gao, Xinyu; Zhou, Fangxu
2015-12-01
Computer vision is an important tool for sports video processing. However, its application in badminton match analysis is very limited. In this study, we proposed straightforward but robust histogram-based background estimation and player detection methods for badminton video clips, and compared the results with the naive averaging method and the mixture-of-Gaussians method, respectively. The proposed method yielded better background estimation results than the naive averaging method and more accurate player detection results than the mixture-of-Gaussians player detection method. The preliminary results indicated that the proposed histogram-based method could estimate the background and extract the players accurately. We conclude that the proposed method can be used for badminton player tracking, and further studies are warranted for automated match analysis.
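A common way to realize histogram-based background estimation, and plausibly what is meant here, is to take the per-pixel histogram mode across frames; unlike a naive mean, the mode is unaffected by players passing briefly through a pixel. A minimal sketch on synthetic grayscale frames (the paper's exact formulation may differ):

```python
import numpy as np

rng = np.random.default_rng(3)
frames = rng.integers(0, 256, size=(60, 48, 64), dtype=np.uint8)  # T,H,W (synthetic)

T, H, W = frames.shape
flat = frames.reshape(T, -1)
background = np.empty(H * W, dtype=np.uint8)
for p in range(H * W):          # most frequent intensity at each pixel
    background[p] = np.bincount(flat[:, p], minlength=256).argmax()
background = background.reshape(H, W)

# Foreground (player) pixels: large deviation from the estimated background.
mask = np.abs(frames[0].astype(int) - background.astype(int)) > 30
```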
Acoustic Full Waveform Inversion to Characterize Near-surface Chemical Explosions
NASA Astrophysics Data System (ADS)
Kim, K.; Rodgers, A. J.
2015-12-01
Recent high-quality, atmospheric overpressure data from chemical high-explosive experiments provide a unique opportunity to characterize near-surface explosions, specifically estimating yield and source time function. Typically, yield is estimated from measured signal features, such as peak pressure, impulse, duration and/or arrival time of acoustic signals. However, the application of full waveform inversion to acoustic signals for yield estimation has not been fully explored. In this study, we apply a full waveform inversion method to local overpressure data to extract accurate pressure-time histories of acoustics sources during chemical explosions. A robust and accurate inversion technique for acoustic source is investigated using numerical Green's functions that take into account atmospheric and topographic propagation effects. The inverted pressure-time history represents the pressure fluctuation at the source region associated with the explosion, and thus, provides a valuable information about acoustic source mechanisms and characteristics in greater detail. We compare acoustic source properties (i.e., peak overpressure, duration, and non-isotropic shape) of a series of explosions having different emplacement conditions and investigate the relationship of the acoustic sources to the yields of explosions. The time histories of acoustic sources may refine our knowledge of sound-generation mechanisms of shallow explosions, and thereby allow for accurate yield estimation based on acoustic measurements. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Efficient SRAM yield optimization with mixture surrogate modeling
NASA Astrophysics Data System (ADS)
Zhongjian, Jiang; Zuochang, Ye; Yan, Wang
2016-12-01
Largely repeated cells such as SRAM cells usually require extremely low failure rates to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations, because the yield calculation requires a large number of SPICE circuit simulations, which account for the largest proportion of the computation time. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model based on the design variables and process variables. The model is constructed from a set of SPICE-simulated sample points, which are used to train the mixture surrogate model with the lasso algorithm. Experimental results show that the proposed model is able to calculate yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, we developed a further accelerated algorithm to enhance the speed of the yield calculation. The approach is suitable for high-dimensional process variables and multi-performance applications.
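The surrogate idea can be sketched in a few lines: fit a lasso model mapping design/process variables to a circuit metric using a modest number of expensive samples, then estimate the failure rate by cheap Monte Carlo on the surrogate. This is a simplified single-model sketch, not the paper's mixture model, and the `metric` function below merely stands in for a SPICE simulation:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(4)
metric = lambda x: 0.8 * x[:, 0] - 0.5 * x[:, 1] + 0.1 * x[:, 2] ** 2  # fake "SPICE"

X_train = rng.standard_normal((200, 12))   # 200 simulated samples, 12 variables
y_train = metric(X_train)
surrogate = Lasso(alpha=0.01).fit(X_train, y_train)

X_mc = rng.standard_normal((200_000, 12))  # cheap surrogate evaluations
fail_rate = np.mean(surrogate.predict(X_mc) < -3.0)   # spec threshold (assumed)
print("estimated yield:", 1.0 - fail_rate)
```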
Accurate Time/Frequency Transfer Method Using Bi-Directional WDM Transmission
NASA Technical Reports Server (NTRS)
Imaoka, Atsushi; Kihara, Masami
1996-01-01
An accurate time transfer method is proposed using bi-directional wavelength division multiplexing (WDM) signal transmission along a single optical fiber. This method will be used in digital telecommunication networks and yields a time synchronization accuracy of better than 1 ns for long transmission lines over several tens of kilometers. The method can accurately measure the difference in delay between the two wavelength signals caused by the chromatic dispersion of the fiber, a limitation of conventional simple bi-directional dual-wavelength frequency transfer methods. We describe the characteristics of this difference in delay and then show that a delay measurement accuracy below 0.1 ns can be obtained by transmitting 156 Mb/s time reference signals at 1.31 μm and 1.55 μm along a 50 km fiber using the proposed method. Sub-nanosecond delay measurement using simple bi-directional dual-wavelength transmission along a 100 km fiber with a wavelength spacing of 1 nm in the 1.55 μm range is also shown.
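A back-of-the-envelope check shows why this inter-wavelength delay matters: over tens of kilometers the group-index difference between 1.31 μm and 1.55 μm accumulates to tens of nanoseconds, far larger than the sub-nanosecond target. The group indices below are assumed illustrative values for standard single-mode fiber, not figures from the paper:

```python
# delta_tau = (n_g(1310 nm) - n_g(1550 nm)) * L / c
c = 299_792_458.0                   # speed of light, m/s
L = 50e3                            # 50 km fiber
ng_1310, ng_1550 = 1.4677, 1.4682   # assumed group indices (illustrative)

delta_tau = abs(ng_1310 - ng_1550) * L / c
print(delta_tau * 1e9, "ns")        # ~80 ns: must be measured, not ignored
```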
Variability of Currents in Great South Channel and Over Georges Bank: Observation and Modeling
1992-06-01
Rizzoli motivated me to study the driving mechanism of stratified tidal rectification using diagnostic analysis methods. Conversations with Glen... Drifter trajectories in the 1988 and 1989 surveys give further encouragement that the analysis method yields an accurate picture of the nontidal flow... harmonic truncation method. Scaling analysis argues that this method is not appropriate for a step topography because it is valid only when the...
Remote-sensing-based rapid assessment of flood crop loss to support USDA flooding decision-making
NASA Astrophysics Data System (ADS)
Di, L.; Yu, G.; Yang, Z.; Hipple, J.; Shrestha, R.
2016-12-01
Floods often cause significant crop loss in the United States. Timely and objective assessment of flood-related crop loss is very important for crop monitoring and risk management in agricultural and disaster-related decision-making in USDA. Among all flood-related information, crop yield loss is particularly important: decisions on proper mitigation, relief, and monetary compensation rely on it. Currently USDA mostly relies on field surveys to obtain crop loss information and compensate farmers' loss claims. Such methods are expensive, labor-intensive, and time-consuming, especially for a large flood that affects a large geographic area. Recent studies have demonstrated that Earth observation (EO) data are useful in post-flood crop loss assessment for a large geographic area objectively, timely, accurately, and cost-effectively. There are three stages of flood damage assessment: rapid assessment, early recovery assessment, and in-depth assessment. EO-based flood assessment methods currently rely on the time series of a vegetation index to assess the yield loss. Such methods are suitable for in-depth assessment but less suitable for rapid assessment, since the after-flood vegetation index time series is not yet available. This presentation presents a new EO-based method for the rapid assessment of crop yield loss immediately after a flood event to support USDA flood decision-making. The method is based on historic records of flood severity, flood duration, flood date, crop type, EO-based before- and immediately-after-flood crop conditions, and corresponding crop yield loss. It hypothesizes that a flood of the same severity occurring at the same phenological stage of a crop will cause similar damage to the crop yield regardless of the flood year. With this hypothesis, a regression-based rapid assessment algorithm can be developed by learning from historic records of flood events and corresponding crop yield losses. In this study, historic records of MODIS-based flood and vegetation products and USDA/NASS crop type and crop yield data are used to train the regression-based rapid assessment algorithm. Validation of the rapid assessment algorithm indicates it can predict the yield loss at 90% accuracy, which is accurate enough to support USDA in flood-related quick response and mitigation.
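A minimal sketch of the regression-based rapid assessment described, using synthetic features rather than the actual MODIS/NASS training data (the feature set and linear model are simplifying assumptions):

```python
# Learn fractional yield loss from historic flood events, then apply the
# fitted model immediately after a new event.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
# Per-event features: flood severity, duration (days), day-of-season,
# crop type code, pre-flood and immediate post-flood condition indices.
X_hist = rng.random((300, 6))                          # synthetic history
y_loss = np.clip(0.6 * X_hist[:, 0] + 0.3 * X_hist[:, 1]
                 + 0.1 * rng.random(300), 0, 1)        # fractional yield loss

model = LinearRegression().fit(X_hist, y_loss)
new_event = rng.random((1, 6))                         # a new flood's features
print("predicted yield loss:", model.predict(new_event)[0])
```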
A fast numerical scheme for causal relativistic hydrodynamics with dissipation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takamoto, Makoto, E-mail: takamoto@tap.scphys.kyoto-u.ac.jp; Inutsuka, Shu-ichiro
2011-08-01
Highlights: we have developed a new multi-dimensional numerical scheme for causal relativistic hydrodynamics with dissipation; our new scheme can calculate the evolution of dissipative relativistic hydrodynamics faster and more effectively than existing schemes; since we use a Riemann solver for the advection steps, our method can capture shocks very accurately. Abstract: In this paper, we develop a stable and fast numerical scheme for relativistic dissipative hydrodynamics based on Israel-Stewart theory. Israel-Stewart theory is a stable and causal description of dissipation in relativistic hydrodynamics, although it includes a relaxation process with the timescale for collision of constituent particles, which introduces stiff equations and makes practical numerical calculation difficult. In our new scheme, we use Strang's splitting method and use piecewise exact solutions for solving the extremely short timescale problem. In addition, since we split the calculations into an inviscid step and a dissipative step, a Riemann solver can be used for obtaining the numerical flux for the inviscid step. The use of a Riemann solver enables us to capture shocks very accurately. Simple numerical examples are shown. The present scheme can be applied to various high energy phenomena of astrophysics and nuclear physics.
Forecasting volcanic air pollution in Hawaii: Tests of time series models
NASA Astrophysics Data System (ADS)
Reikard, Gordon
2012-12-01
Volcanic air pollution, known as vog (volcanic smog) has recently become a major issue in the Hawaiian islands. Vog is caused when volcanic gases react with oxygen and water vapor. It consists of a mixture of gases and aerosols, which include sulfur dioxide and other sulfates. The source of the volcanic gases is the continuing eruption of Mount Kilauea. This paper studies predicting vog using statistical methods. The data sets include time series for SO2 and SO4, over locations spanning the west, south and southeast coasts of Hawaii, and the city of Hilo. The forecasting models include regressions and neural networks, and a frequency domain algorithm. The most typical pattern for the SO2 data is for the frequency domain method to yield the most accurate forecasts over the first few hours, and at the 24 h horizon. The neural net places second. For the SO4 data, the results are less consistent. At two sites, the neural net generally yields the most accurate forecasts, except at the 1 and 24 h horizons, where the frequency domain technique wins narrowly. At one site, the neural net and the frequency domain algorithm yield comparable errors over the first 5 h, after which the neural net dominates. At the remaining site, the frequency domain method is more accurate over the first 4 h, after which the neural net achieves smaller errors. For all the series, the average errors are well within one standard deviation of the actual data at all the horizons. However, the errors also show irregular outliers. In essence, the models capture the central tendency of the data, but are less effective in predicting the extreme events.
NASA Astrophysics Data System (ADS)
Seraphin, Pierre; Gonçalvès, Julio; Vallet-Coulomb, Christine; Champollion, Cédric
2018-06-01
Spatially distributed values of the specific yield, a fundamental parameter for transient groundwater mass balance calculations, were obtained by means of three independent methods for the Crau plain, France. In contrast to its traditional use to assess recharge based on a given specific yield, the water-table fluctuation (WTF) method, applied using major recharging events, gave a first set of reference values. Then, large infiltration processes recorded by monitored boreholes and caused by major precipitation events were interpreted in terms of specific yield by means of a one-dimensional vertical numerical model solving Richards' equations within the unsaturated zone. Finally, two gravity field campaigns, at low and high piezometric levels, were carried out to assess the groundwater mass variation and thus alternative specific yield values. The range obtained by the WTF method for this aquifer made of alluvial detrital material was 2.9-26%, in line with the scarce data available so far. The average spatial value of specific yield by the WTF method (9.1%) is consistent with the aquifer scale value from the hydro-gravimetric approach. In this investigation, an estimate of the hitherto unknown spatial distribution of the specific yield over the Crau plain was obtained using the most reliable method (the WTF method). A groundwater mass balance calculation over the domain using this distribution yielded similar results to an independent quantification based on a stable isotope-mixing model. This agreement reinforces the relevance of such estimates, which can be used to build a more accurate transient hydrogeological model.
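The WTF method rests on a simple balance: for an episodic recharge event, the specific yield is the event recharge divided by the observed water-table rise. A minimal sketch with hypothetical values (the real calculation also requires isolating the event rise from the recession trend):

```python
# Water-table fluctuation method: Sy = R / dh for a recharging event.
def specific_yield(event_recharge_mm, water_table_rise_mm):
    return event_recharge_mm / water_table_rise_mm

# e.g. 45 mm of event recharge producing a 0.50 m rise:
print(specific_yield(45.0, 500.0))   # Sy = 0.09, i.e. 9%
```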
Improved sample management in the cylindrical-tube microelectrophoresis method
NASA Technical Reports Server (NTRS)
Smolka, A. J. K.
1980-01-01
A modification to an analytical microelectrophoresis system is described that improves the manipulation of the sample particles and fluid. The apparatus modification and improved operational procedure should yield more accurate measurements of particle mobilities and permit less skilled operators to use the apparatus.
Calculations of separated 3-D flows with a pressure-staggered Navier-Stokes equations solver
NASA Technical Reports Server (NTRS)
Kim, S.-W.
1991-01-01
A Navier-Stokes equations solver based on a pressure correction method with a pressure-staggered mesh and calculations of separated three-dimensional flows are presented. It is shown that the velocity-pressure decoupling, which occurs when various pressure correction algorithms are used for pressure-staggered meshes, is caused by the ill-conditioned discrete pressure correction equation. The use of a partial differential equation for the incremental pressure eliminates the velocity-pressure decoupling mechanism by itself and yields accurate numerical results. Example flows considered are a three-dimensional lid-driven cavity flow and a laminar flow through a square duct with a 90 degree bend. For the lid-driven cavity flow, the present numerical results compare more favorably with the measured data than those obtained using a formally third-order accurate quadratic upwind interpolation scheme. For the curved duct flow, the present numerical method yields a grid-independent solution with a very small number of grid points. The calculated velocity profiles are in good agreement with the measured data.
Mass spectrometry-based protein identification with accurate statistical significance assignment.
Alves, Gelio; Yu, Yi-Kuo
2015-03-01
Assigning statistical significance accurately has become increasingly important as metadata of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of metadata at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry-based proteomics, even though accurate statistics for peptide identification can now be achieved, accurate protein-level statistics remain challenging. We have constructed a protein ID method that combines peptide evidence for a candidate protein based on a rigorous formula derived earlier; in this formula the database P-value of every peptide is weighted, prior to the final combination, according to the number of proteins it maps to. We have also shown that this method provides accurate protein-level E-values, eliminating the need for empirical post-processing methods for type-I error control. Using a known protein mixture, we find that this method, when combined with the Sorić formula, yields accurate values for the proportion of false discoveries. In terms of retrieval efficacy, the results from our method are comparable with other methods tested. The source code, implemented in C++ on a linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Published by Oxford University Press 2014. This work is written by US Government employees and is in the public domain in the US.
An efficient scan diagnosis methodology according to scan failure mode for yield enhancement
NASA Astrophysics Data System (ADS)
Kim, Jung-Tae; Seo, Nam-Sik; Oh, Ghil-Geun; Kim, Dae-Gue; Lee, Kyu-Taek; Choi, Chi-Young; Kim, InSoo; Min, Hyoung Bok
2008-12-01
Yield has always been a driving consideration in modern semiconductor fabrication. Statistically, the largest portion of wafer yield loss comes from defective scan failures. This paper presents efficient failure analysis methods, based on scan diagnosis, for initial yield ramp-up and ongoing products. Our analysis shows that more than 60% of scan failure dies fall into the category of shift mode in very deep submicron (VDSM) devices. However, localization of scan shift-mode failures is much more difficult than for capture-mode failures because they are caused by malfunction of the scan chain itself. Addressing this challenge, we propose the most suitable analysis method for each scan failure mode (capture/shift) for yield enhancement. For capture failure mode, this paper describes a method that integrates the scan diagnosis flow with backside probing technology to obtain more accurate candidates. We also describe several unique techniques, such as a bulk back-grinding solution, efficient backside probing, and a signal analysis method. Lastly, we introduce a blocked-chain analysis algorithm for efficient analysis of shift failure mode. Combining the two methods contributes to yield enhancement. We confirm the failure candidates with physical failure analysis (PFA). The direct feedback of defect visualization is useful for bringing devices to mass production in a shorter time. Experimental data on mass products show that our methods produce an average 13.7% reduction in defective SCAN & SRAM-BIST failure rates and an 18.2% improvement in wafer yield.
Loheide, Steven P.; Butler, James J.; Gorelick, Steven M.
2005-01-01
Groundwater consumption by phreatophytes is a difficult‐to‐measure but important component of the water budget in many arid and semiarid environments. Over the past 70 years the consumptive use of groundwater by phreatophytes has been estimated using a method that analyzes diurnal trends in hydrographs from wells that are screened across the water table (White, 1932). The reliability of estimates obtained with this approach has never been rigorously evaluated using saturated‐unsaturated flow simulation. We present such an evaluation for common flow geometries and a range of hydraulic properties. Results indicate that the major source of error in the White method is the uncertainty in the estimate of specific yield. Evapotranspirative consumption of groundwater will often be significantly overpredicted with the White method if the effects of drainage time and the depth to the water table on specific yield are ignored. We utilize the concept of readily available specific yield as the basis for estimation of the specific yield value appropriate for use with the White method. Guidelines are defined for estimating readily available specific yield based on sediment texture. Use of these guidelines with the White method should enable the evapotranspirative consumption of groundwater to be more accurately quantified.
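A hedged sketch of White's (1932) formula underlying this discussion: groundwater evapotranspiration is ETg = Sy(24r + s), where r is the pre-dawn rate of water-table recovery and s is the net daily decline of the water table. The function and input values below are illustrative; the sign convention (s positive for a declining water table) is an assumption.

```python
# White (1932): daily groundwater ET from diurnal water-table fluctuations.
def white_method_et(sy, recovery_rate_mm_per_hr, net_daily_decline_mm):
    """ETg (mm/day) = Sy * (24*r + s), with r in mm/h and s in mm/day."""
    return sy * (24.0 * recovery_rate_mm_per_hr + net_daily_decline_mm)

# Sy = 0.08 (readily available specific yield), r = 0.5 mm/h, s = 6 mm/day.
print(f"ETg = {white_method_et(0.08, 0.5, 6.0):.2f} mm/day")  # 1.44 mm/day
```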
Measurements of Electrical and Electron Emission Properties of Highly Insulating Materials
NASA Technical Reports Server (NTRS)
Dennison, J. R.; Brunson, Jerilyn; Hoffman, Ryan; Abbott, Jonathon; Thomson, Clint; Sim, Alec
2005-01-01
Highly insulating materials often acquire significant charge when subjected to fluxes of electrons, ions, or photons. This charge can significantly modify the materials' properties and have profound effects on their functionality in a variety of applications. These include charging of spacecraft materials due to interactions with the severe space environment, enhanced contamination due to charging in Lunar or Martian environments, high-power arcing of cables and sources, modification of tethers and ion thrusters for propulsion, and scanning electron microscopy, to name but a few examples. This paper describes new techniques and measurements of the electron emission properties and resistivity of highly insulating materials. Electron yields are a measure of the number of electrons emitted from a material per incident particle (electron, ion or photon). Electron yields depend on the incident species, energy and angle, and on the material. They determine the net charge acquired by a material subjected to a given incident flux. New pulsed-beam techniques are described that allow accurate measurement of the yields for uncharged insulators and of how the yields are modified as charge builds up in the insulator. A key parameter in modeling charge dissipation is the resistivity of insulating materials. This determines how charge will accumulate and redistribute across an insulator, as well as the time scale for charge transport and dissipation. New long-term constant-voltage and charge-storage methods for measuring the resistivity of highly insulating materials are compared with more commonly used, but less accurate, methods.
Random Forests for Global and Regional Crop Yield Predictions.
Jeong, Jig Han; Resop, Jonathan P; Mueller, Nathaniel D; Fleisher, David H; Yun, Kyungdahm; Butler, Ethan E; Timlin, Dennis J; Shim, Kyo-Moon; Gerber, James S; Reddy, Vangimalla R; Kim, Soo-Hyung
2016-01-01
Accurate predictions of crop yield are critical for developing effective agricultural and food policies at the regional and global scales. We evaluated a machine-learning method, Random Forests (RF), for its ability to predict crop yield responses to climate and biophysical variables at global and regional scales in wheat, maize, and potato in comparison with multiple linear regressions (MLR) serving as a benchmark. We used crop yield data from various sources and regions for model training and testing: 1) gridded global wheat grain yield, 2) maize grain yield from US counties over thirty years, and 3) potato tuber and maize silage yield from the northeastern seaboard region. RF was found highly capable of predicting crop yields and outperformed MLR benchmarks in all performance statistics that were compared. For example, the root mean square errors (RMSE) ranged between 6 and 14% of the average observed yield with RF models in all test cases whereas these values ranged from 14% to 49% for MLR models. Our results show that RF is an effective and versatile machine-learning method for crop yield predictions at regional and global scales for its high accuracy and precision, ease of use, and utility in data analysis. RF may result in a loss of accuracy when predicting the extreme ends or responses beyond the boundaries of the training data.
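A minimal sketch of the kind of comparison the study performs, using scikit-learn on synthetic data: a Random Forest and a multiple linear regression are trained on the same nonlinear yield response and scored by RMSE. The features, response surface, and hyperparameters are invented, not the study's.

```python
# Random Forest vs multiple linear regression on a synthetic yield response.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
n = 2000
X = rng.normal(size=(n, 4))            # e.g. temperature, precip, N rate, soil
yield_t = (5 + X[:, 0] - 0.5 * X[:, 1] ** 2 + X[:, 2] * X[:, 3]
           + rng.normal(0, 0.3, n))    # nonlinear response, tonnes/ha

X_tr, X_te, y_tr, y_te = train_test_split(X, yield_t, random_state=0)
for name, model in [("RF", RandomForestRegressor(n_estimators=200, random_state=0)),
                    ("MLR", LinearRegression())]:
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: RMSE = {rmse:.2f} t/ha")   # RF should score much lower
```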
Pholwat, Suporn; Liu, Jie; Stroup, Suzanne; Gratz, Jean; Banu, Sayera; Rahman, S M Mazidur; Ferdous, Sara Sabrina; Foongladda, Suporn; Boonlert, Duangjai; Ogarkov, Oleg; Zhdanova, Svetlana; Kibiki, Gibson; Heysell, Scott; Houpt, Eric
2015-02-24
Genotypic methods for drug susceptibility testing of Mycobacterium tuberculosis are desirable to speed the diagnosis and proper therapy of tuberculosis (TB). However, the numbers of genes and polymorphisms implicated in resistance have proliferated, challenging diagnostic design. We developed a microfluidic TaqMan array card (TAC) that utilizes both sequence-specific probes and high-resolution melt analysis (HRM), providing two layers of detection of mutations. Twenty-seven primer pairs and 40 probes were designed to interrogate 3,200 base pairs of critical regions of the inhA, katG, rpoB, embB, rpsL, rrs, eis, gyrA, gyrB, and pncA genes. The method was evaluated on 230 clinical M. tuberculosis isolates from around the world, and it yielded 96.1% accuracy (2,431/2,530) in comparison to that of Sanger sequencing and 87% accuracy in comparison to that of the slow culture-based susceptibility testing. This TAC-HRM method integrates assays for 10 genes to yield fast, comprehensive, and accurate drug susceptibility results for the 9 major antibiotics used to treat TB and could be deployed to improve treatment outcomes. Multidrug-resistant tuberculosis threatens global tuberculosis control efforts. Optimal therapy utilizes susceptibility test results to guide individualized treatment regimens; however, the susceptibility testing methods in use are technically difficult and slow. We developed an integrated TaqMan array card method with high-resolution melt analysis that interrogates 10 genes to yield a fast, comprehensive, and accurate drug susceptibility result for the 9 major antituberculosis antibiotics. Copyright © 2015 Pholwat et al.
Can the electronegativity equalization method predict spectroscopic properties?
Verstraelen, T; Bultinck, P
2015-02-05
The electronegativity equalization method (EEM) is classically used for the fast generation of atomic charges, given a set of calibrated parameters and knowledge of the molecular structure. Recently, it has also been used for the calculation of other reactivity descriptors and for the development of polarizable and reactive force fields. For such applications, it is of interest to know whether the method, through the inclusion of the molecular geometry in the Taylor expansion of the energy, would also allow sufficiently accurate predictions of spectroscopic data. In this work, relevant quantities for IR spectroscopy are considered, namely the dipole derivatives and the Cartesian Hessian. Despite careful calibration of parameters for this specific task, it is shown that the current models yield insufficiently accurate results. Copyright © 2013 Elsevier B.V. All rights reserved.
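For readers unfamiliar with EEM's classical use, the sketch below solves the standard EEM linear system for atomic charges: effective electronegativities χ_i + 2η_i q_i + Σ_j q_j/r_ij are equalized subject to charge conservation. The parameters and geometry are invented placeholders, not calibrated values.

```python
# EEM charges from equalized electronegativities: a small linear system.
import numpy as np

def eem_charges(chi, eta, coords, total_charge=0.0):
    """Solve the EEM equations for atomic charges q (last unknown is the
    equalized electronegativity acting as a Lagrange multiplier)."""
    n = len(chi)
    r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    inv_r = np.divide(1.0, r, out=np.zeros_like(r), where=r > 0)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = np.where(np.eye(n, dtype=bool), 2.0 * np.asarray(eta), inv_r)
    A[:n, n] = -1.0              # couples each atom to the common potential
    A[n, :n] = 1.0               # total-charge constraint: sum(q) = Q
    b = np.append(-np.asarray(chi, dtype=float), total_charge)
    return np.linalg.solve(A, b)[:n]

# A toy diatomic in arbitrary distance units; chi/eta are made-up numbers.
coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.1]])
print(eem_charges(chi=[2.0, 3.0], eta=[5.0, 6.0], coords=coords))
```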
NASA Astrophysics Data System (ADS)
Wu, T.; Li, T.; Li, J.; Wang, G.
2017-12-01
Improved drainage network extraction can be achieved by flow enforcement, whereby information from known river maps is imposed on the flow-path modeling process. However, the common elevation-based stream burning method can sometimes cause unintended topological errors and misinterpret the overall drainage pattern. We present an enhanced flow enforcement method to facilitate accurate and efficient drainage network extraction. Both the topology of the mapped hydrography and the initial landscape of the DEM are preserved and fully utilized in the proposed method. An improved stream rasterization is achieved, yielding a continuous, unambiguous, and collision-free raster equivalent of the stream vectors for flow enforcement. By imposing priority-based enforcement with a complementary flow direction enhancement procedure, the drainage patterns of the mapped hydrography are fully represented in the derived results. The proposed method was tested over the Rogue River Basin, using DEMs of various resolutions. As indicated by visual and statistical analyses, the proposed method has three major advantages: (1) it significantly reduces the occurrence of topological errors, yielding very accurate watershed partitions and channel delineations, (2) it ensures scale-consistent performance across DEMs of various resolutions, and (3) the entire extraction process is designed for high computational efficiency.
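For contrast with the authors' enhanced approach, here is a minimal sketch of the conventional elevation-based stream burning baseline they criticize: DEM cells beneath the rasterized stream network are simply lowered by a fixed depth before flow routing. The grid and burn depth are invented.

```python
# Conventional stream burning: lower DEM cells along mapped streams so
# that subsequent flow routing follows the known drainage network.
import numpy as np

def burn_streams(dem, stream_mask, burn_depth=10.0):
    """Return a copy of the DEM with stream cells lowered by burn_depth."""
    burned = dem.astype(float).copy()
    burned[stream_mask] -= burn_depth
    return burned

dem = np.arange(25, dtype=float).reshape(5, 5)   # toy elevation grid
streams = np.zeros((5, 5), dtype=bool)
streams[2, :] = True                             # one mapped channel
print(burn_streams(dem, streams))
```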
MRI volumetry of prefrontal cortex
NASA Astrophysics Data System (ADS)
Sheline, Yvette I.; Black, Kevin J.; Lin, Daniel Y.; Pimmel, Joseph; Wang, Po; Haller, John W.; Csernansky, John G.; Gado, Mokhtar; Walkup, Ronald K.; Brunsden, Barry S.; Vannier, Michael W.
1995-05-01
Prefrontal cortex volumetry by brain magnetic resonance (MR) imaging is required to estimate changes postulated to occur in certain psychiatric and neurologic disorders. A semiautomated method with quantitative characterization of its performance is sought to reliably distinguish small prefrontal cortex volume changes within individuals and between groups. Stereological methods were tested by a blinded comparison of measurements applied to 3D MR scans obtained using an MPRAGE protocol. Fixed-grid stereologic methods were used to estimate prefrontal cortex volumes on a graphics workstation, after the images were scaled from 16 to 8 bits using a histogram method. In addition, images were resliced into coronal sections perpendicular to the bicommissural plane. Prefrontal cortex volumes were defined as all sections of the frontal lobe anterior to the anterior commissure. Ventricular volumes were excluded. Stereological measurement yielded high repeatability and precision, and was time efficient for the raters. The coefficient of error was...
Estimating rice yield from MODIS-Landsat fusion data in Taiwan
NASA Astrophysics Data System (ADS)
Chen, C. R.; Chen, C. F.; Nguyen, S. T.
2017-12-01
Rice production monitoring with remote sensing is an important activity in Taiwan due to official initiatives. Yield estimation is a challenge in Taiwan because rice fields are small and fragmented. High-spatiotemporal-resolution satellite data that capture the phenology of rice crops are thus required for this monitoring purpose. This research aims to develop data fusion approaches that integrate daily Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat data for rice yield estimation in Taiwan. In this study, the low-resolution MODIS LST and emissivity data are used as reference data sources to obtain high-resolution LST from Landsat data using a mixed-pixel analysis technique, and the time-series EVI data were derived from the fusion of MODIS and Landsat spectral band data using the STARFM method. The simulated LST and EVI results showed close agreement with the reference data. The rice-yield model was established using EVI and LST data, based on rice crop phenology collected from 371 ground survey sites across the country in 2014. The results achieved from the fusion datasets, compared with the reference data, indicated a close relationship between the two datasets, with a coefficient of determination (R2) of 0.75 and a root mean square error (RMSE) of 338.7 kg, more accurate than those using the coarse-resolution MODIS LST data (R2 = 0.71 and RMSE = 623.82 kg). For the comparison of total production, 64 towns located in the western part of Taiwan were used. The results also confirmed that the model using fusion datasets produced more accurate results (R2 = 0.95 and RMSE = 1,243 tons) than the model using the coarse-resolution MODIS data (R2 = 0.91 and RMSE = 1,749 tons). This study demonstrates the application of MODIS-Landsat fusion data for rice yield estimation at the township level in Taiwan. The results could be useful to policymakers, and the methods should be transferable to other regions of the world for rice yield estimation.
NASA Technical Reports Server (NTRS)
Krishnamurthy, Thiagarajan
2005-01-01
Response surface construction methods using Moving Least Squares (MLS), Kriging and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adopted from the meshless method is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results than the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.
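As a small illustration of why RBF-based response surfaces lend themselves to derivative generation, the sketch below interpolates 1-D samples with cubic RBFs φ(r) = r³ and differentiates the surrogate analytically. It is a simplified stand-in (the polynomial tail usually added to cubic RBFs is omitted), not the paper's implementation.

```python
# Cubic RBF response surface in 1-D with an analytic surrogate derivative.
import numpy as np

def rbf_fit(x, y):
    """Interpolation weights for phi(r) = r**3 centered at the samples."""
    A = np.abs(x[:, None] - x[None, :]) ** 3
    return np.linalg.solve(A, y)

def rbf_eval(x, w, xq, deriv=False):
    """Evaluate the surrogate (or its derivative: d/dr |r|^3 = 3 r |r|)."""
    r = xq[:, None] - x[None, :]
    basis = 3 * r * np.abs(r) if deriv else np.abs(r) ** 3
    return basis @ w

x = np.linspace(0, 1, 9)
y = np.sin(2 * np.pi * x)
w = rbf_fit(x, y)
xq = np.array([0.25, 0.5])
print(rbf_eval(x, w, xq))               # surrogate values
print(rbf_eval(x, w, xq, deriv=True))   # surrogate derivatives
```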
Simple method for quick estimation of aquifer hydrogeological parameters
NASA Astrophysics Data System (ADS)
Ma, C.; Li, Y. Y.
2017-08-01
The development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. To estimate aquifer parameters from limited unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a simple linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method was illustrated using two published data sets. Error statistics and analysis of the pumping drawdown showed that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters, and can reliably identify them from both long-distance observed drawdowns and early drawdowns. The proposed method should be helpful for practicing hydrogeologists and hydrologists.
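The relation underlying the method is the Theis solution, s = Q W(u)/(4πT) with u = r²S/(4Tt). The sketch below recovers transmissivity T and storativity S from synthetic drawdowns by nonlinear least squares; it illustrates the physics, not the authors' particular fitting function or regression, and all values are invented.

```python
# Theis drawdown and parameter recovery by least squares.
import numpy as np
from scipy.special import exp1          # well function W(u) = E1(u)
from scipy.optimize import curve_fit

Q, r = 0.02, 50.0                       # pumping rate (m^3/s), radius (m)

def theis(t, T, S):
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

t = np.logspace(2, 5, 20)               # seconds since pumping began
noise = 1 + 0.02 * np.random.default_rng(1).normal(size=t.size)
s_obs = theis(t, 1e-3, 2e-4) * noise    # synthetic "observed" drawdowns

(T_fit, S_fit), _ = curve_fit(theis, t, s_obs, p0=(5e-4, 1e-4),
                              bounds=([1e-6, 1e-7], [1.0, 1.0]))
print(f"T = {T_fit:.2e} m^2/s, S = {S_fit:.2e}")
```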
Radiation from Large Gas Volumes and Heat Exchange in Steam Boiler Furnaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, A. N., E-mail: tgtu-kafedra-ese@mail.ru
2015-09-15
Radiation from large cylindrical gas volumes is studied as a means of simulating the flare in steam boiler furnaces. Calculations of heat exchange in a furnace by the zonal method and by simulation of the flare with cylindrical gas volumes are described. The latter method is more accurate and yields more reliable information on heat transfer processes taking place in furnaces.
A conceptual guide to detection probability for point counts and other count-based survey methods
D. Archibald McCallum
2005-01-01
Accurate and precise estimates of numbers of animals are vitally needed both to assess population status and to evaluate management decisions. Various methods exist for counting birds, but most of those used with territorial landbirds yield only indices, not true estimates of population size. The need for valid density estimates has spawned a number of models for...
NASA Technical Reports Server (NTRS)
Chambers, J. R.; Grafton, S. B.; Lutze, F. H.
1981-01-01
Dynamic stability derivatives are evaluated on the basis of rolling-flow, curved-flow and snaking tests. Attention is given to the hardware associated with curved-flow, rolling-flow and oscillatory pure-yawing wind-tunnel tests. It is found that the snaking technique, when combined with linear- and forced-oscillation methods, provides an important means of evaluating beta derivatives for current configurations at high angles of attack. Since the rolling-flow model is fixed during testing, forced oscillations may be imparted to the model, permitting the measurement of damping and cross-derivatives. These results, when coupled with basic rolling-flow or rotary-balance data, yield a highly accurate mathematical model for studies of incipient spin and spin entry.
Parallel tempering for the traveling salesman problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Percus, Allon; Wang, Richard; Hyman, Jeffrey
We explore the potential of parallel tempering as a combinatorial optimization method, applying it to the traveling salesman problem. We compare simulation results of parallel tempering with a benchmark implementation of simulated annealing, and study how different choices of parameters affect the relative performance of the two methods. We find that a straightforward implementation of parallel tempering can outperform simulated annealing in several crucial respects. When parameters are chosen appropriately, both methods yield close approximations to the actual minimum distance for an instance with 200 nodes. However, parallel tempering yields more consistently accurate results when a series of independent simulations are performed. Our results suggest that parallel tempering might offer a simple but powerful alternative to simulated annealing for combinatorial optimization problems.
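A minimal sketch of the parallel-tempering ingredient described above: several Metropolis chains explore TSP tours at different temperatures and periodically attempt replica-exchange swaps between adjacent temperatures. The 2-opt move, temperature ladder, and instance size are assumptions, not the authors' settings.

```python
# Parallel tempering for a toy TSP instance: independent Metropolis chains
# at different temperatures, with replica-exchange swaps between neighbors.
import math, random

def tour_length(tour, dist):
    return sum(dist[tour[i - 1]][tour[i]] for i in range(len(tour)))

def metropolis_step(tour, dist, T):
    i, j = sorted(random.sample(range(len(tour)), 2))
    cand = tour[:i] + tour[i:j][::-1] + tour[j:]        # 2-opt style reversal
    dE = tour_length(cand, dist) - tour_length(tour, dist)
    return cand if dE < 0 or random.random() < math.exp(-dE / T) else tour

def swap_attempt(tours, temps, dist):
    k = random.randrange(len(temps) - 1)                # adjacent pair
    dE = tour_length(tours[k], dist) - tour_length(tours[k + 1], dist)
    dB = 1.0 / temps[k] - 1.0 / temps[k + 1]
    if random.random() < min(1.0, math.exp(dB * dE)):   # exchange rule
        tours[k], tours[k + 1] = tours[k + 1], tours[k]

random.seed(0)
n = 20
pts = [(random.random(), random.random()) for _ in range(n)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
temps = [0.01, 0.05, 0.2, 1.0]
tours = [random.sample(range(n), n) for _ in temps]
for step in range(2000):
    tours = [metropolis_step(t, dist, T) for t, T in zip(tours, temps)]
    if step % 10 == 0:
        swap_attempt(tours, temps, dist)
print(f"best length: {min(tour_length(t, dist) for t in tours):.3f}")
```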
Digital simulation of an arbitrary stationary stochastic process by spectral representation.
Yura, Harold T; Hanson, Steen G
2011-04-01
In this paper we present a straightforward, efficient, and computationally fast method for creating a large number of discrete samples with an arbitrary given probability density function and a specified spectral content. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In contrast to previous work, where the analyses were limited to autoregressive and/or iterative techniques to obtain satisfactory results, we find that a single application of the inverse transform method yields satisfactory results for a wide class of arbitrary probability distributions. Although a single application of the inverse transform technique does not conserve the power spectra exactly, it yields highly accurate numerical results for a wide range of probability distributions and target power spectra that are sufficient for system simulation purposes, and can thus be regarded as an accurate engineering approximation for a wide range of practical applications. A sufficiency condition is presented regarding the range of parameter values where a single application of the inverse transform method yields satisfactory agreement between the simulated and target power spectra, and a series of examples relevant for the optics community are presented and discussed. Outside this parameter range the agreement gracefully degrades but does not distort in shape. Although we demonstrate the method here for stationary random processes, we see no reason why it could not be extended to simulate non-stationary random processes. © 2011 Optical Society of America
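The two-stage recipe the abstract describes can be sketched in a few lines: (1) color white Gaussian noise by shaping its spectrum in the Fourier domain, then (2) map the colored Gaussian samples through the Gaussian CDF and the inverse CDF of the target distribution. The low-pass spectrum and exponential target below are arbitrary choices for illustration.

```python
# Colored non-Gaussian sample generation via spectral shaping + inverse CDF.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2**14
white = rng.standard_normal(n)

# Step 1: impose a low-pass power spectrum |H(f)|^2 ~ 1/(1 + (f/f0)^2).
f = np.fft.rfftfreq(n)
H = 1.0 / np.sqrt(1.0 + (f / 0.01) ** 2)
colored = np.fft.irfft(np.fft.rfft(white) * H, n)
colored /= colored.std()                 # back to unit variance

# Step 2: Gaussian CDF -> uniform -> inverse CDF of the target distribution.
u = stats.norm.cdf(colored)
samples = stats.expon.ppf(u)             # exponentially distributed samples

print(samples.mean(), samples.std())     # both ~1 for Exp(1)
```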
Solving the shrinkage-induced PDMS alignment registration issue in multilayer soft lithography
NASA Astrophysics Data System (ADS)
Moraes, Christopher; Sun, Yu; Simmons, Craig A.
2009-06-01
Shrinkage of polydimethylsiloxane (PDMS) complicates alignment registration between layers during multilayer soft lithography fabrication. This often hinders the development of large-scale microfabricated arrayed devices. Here we report a rapid method to construct large-area, multilayered devices with stringent alignment requirements. This technique, which exploits a previously unrecognized aspect of sandwich mold fabrication, improves device yield, enables highly accurate alignment over large areas of multilayered devices and does not require strict regulation of fabrication conditions or extensive calibration processes. To demonstrate this technique, a microfabricated Braille display was developed and characterized. High device yield and accurate alignment within 15 µm were achieved over three layers for an array of 108 Braille units spread over a 6.5 cm2 area, demonstrating the fabrication of well-aligned devices with greater ease and efficiency than previously possible.
Using groundwater levels to estimate recharge
Healy, R.W.; Cook, P.G.
2002-01-01
Accurate estimation of groundwater recharge is extremely important for proper management of groundwater systems. Many different approaches exist for estimating recharge. This paper presents a review of methods that are based on groundwater-level data. The water-table fluctuation method may be the most widely used technique for estimating recharge; it requires knowledge of specific yield and changes in water levels over time. Advantages of this approach include its simplicity and an insensitivity to the mechanism by which water moves through the unsaturated zone. Uncertainty in estimates generated by this method relate to the limited accuracy with which specific yield can be determined and to the extent to which assumptions inherent in the method are valid. Other methods that use water levels (mostly based on the Darcy equation) are also described. The theory underlying the methods is explained. Examples from the literature are used to illustrate applications of the different methods.
Solav, Dana; Camomilla, Valentina; Cereatti, Andrea; Barré, Arnaud; Aminian, Kamiar; Wolf, Alon
2017-09-06
The aim of this study was to analyze the accuracy of bone pose estimation based on sub-clusters of three skin markers characterized by triangular Cosserat point elements (TCPEs), and to evaluate the capability of four instantaneous physical parameters, which can be measured non-invasively in vivo, to identify the most accurate TCPEs. Moreover, TCPE pose estimates were compared with the estimates of two least squares minimization methods applied to the cluster of all markers, using rigid body (RBLS) and homogeneous deformation (HDLS) assumptions. Analysis was performed on previously collected in vivo treadmill gait data composed of simultaneous measurements of the gold-standard bone pose by bi-plane fluoroscopy tracking the subjects' knee prostheses and a stereophotogrammetric system tracking skin markers affected by soft tissue artifact. Femur orientation and position errors estimated from skin-marker clusters were computed for 18 subjects using clusters of up to 35 markers. Results based on the gold-standard data revealed that instantaneous subsets of TCPEs exist which estimate the femur pose with reasonable accuracy (median root mean square error during stance/swing: 1.4/2.8 deg for orientation, 1.5/4.2 mm for position). A non-invasive and instantaneous criterion to select accurate TCPEs for pose estimation (4.8/7.3 deg, 5.8/12.3 mm) was compared with RBLS (4.3/6.6 deg, 6.9/16.6 mm) and HDLS (4.6/7.6 deg, 6.7/12.5 mm). Accounting for homogeneous deformation, using HDLS or selected TCPEs, yielded more accurate position estimates than the RBLS method, which, conversely, yielded more accurate orientation estimates. Further investigation is required to devise effective criteria for cluster selection that could represent a significant improvement in bone pose estimation accuracy. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Shanjun; Duan, Haibin; Deng, Yimin; Li, Cong; Zhao, Guozhi; Xu, Yan
2017-12-01
Autonomous aerial refueling is a key technology that can significantly extend the endurance of unmanned aerial vehicles. A reliable method that can accurately estimate the position and attitude of the probe relative to the drogue is the key to such a capability. A drogue pose estimation method based on an infrared vision sensor is introduced with the general goal of yielding an accurate and reliable drogue state estimate. First, by employing direct least squares ellipse fitting and the convex hull in OpenCV, a feature point matching and interference point elimination method is proposed. In addition, for conditions in which some infrared LEDs are damaged or occluded, a missing point estimation method based on perspective and affine transformations is designed. Finally, an accurate and robust pose estimation algorithm, improved by the runner-root algorithm, is proposed. The feasibility of the designed visual measurement system is demonstrated by flight test, and the results indicate that our proposed method enables precise and reliable pose estimation of the probe relative to the drogue, even in poor conditions.
SU-C-BRA-06: Automatic Brain Tumor Segmentation for Stereotactic Radiosurgery Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Y; Stojadinovic, S; Jiang, S
Purpose: Stereotactic radiosurgery (SRS), which delivers a potent dose of highly conformal radiation to the target in a single fraction, requires accurate tumor delineation for treatment planning. We present an automatic segmentation strategy that synergizes intensity histogram thresholding, super-voxel clustering, and level-set based contour evolution to efficiently and accurately delineate SRS brain tumors on contrast-enhanced T1-weighted (T1c) Magnetic Resonance Images (MRI). Methods: The developed auto-segmentation strategy consists of three major steps. First, tumor sites are localized through 2D slice intensity histogram scanning. Then, super-voxels are obtained by clustering the corresponding voxels in 3D with reference to similarity metrics composited from spatial distance and intensity difference. The combination of the above two steps generates the initial contour surface. Finally, a localized region active contour model is utilized to evolve the surface to achieve accurate delineation of the tumors. The developed method was evaluated on numerical phantom data, synthetic BRATS (Multimodal Brain Tumor Image Segmentation challenge) data, and clinical patients' data. The auto-segmentation results were quantitatively evaluated by comparing to ground truths with both volume and surface similarity metrics. Results: The DICE coefficient (DC) was used as a quantitative metric to evaluate the auto-segmentation in the numerical phantom with 8 tumors. DCs were 0.999±0.001 without noise, 0.969±0.065 with Rician noise and 0.976±0.038 with Gaussian noise. DC, NMI (Normalized Mutual Information), SSIM (Structural Similarity) and Hausdorff distance (HD) were calculated as the metrics for the BRATS and patients' data. Assessment of BRATS data across 25 tumor segmentations yielded DC 0.886±0.078, NMI 0.817±0.108, SSIM 0.997±0.002, and HD 6.483±4.079 mm. Evaluation of 8 patients with a total of 14 tumor sites yielded DC 0.872±0.070, NMI 0.824±0.078, SSIM 0.999±0.001, and HD 5.926±6.141 mm. Conclusion: The developed automatic segmentation strategy, which yields accurate brain tumor delineation in the evaluation cases, is promising for application in SRS treatment planning.
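For reference, the DICE coefficient (DC) reported above measures volume overlap between a segmentation and ground truth, DC = 2|A∩B|/(|A|+|B|); a minimal implementation on invented masks:

```python
# Dice coefficient between two boolean segmentation masks.
import numpy as np

def dice(seg, truth):
    """DC = 2|A ∩ B| / (|A| + |B|); returns 1.0 for two empty masks."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    denom = seg.sum() + truth.sum()
    return 2.0 * np.logical_and(seg, truth).sum() / denom if denom else 1.0

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(f"DC = {dice(a, b):.3f}")
```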
Improved modified energy ratio method using a multi-window approach for accurate arrival picking
NASA Astrophysics Data System (ADS)
Lee, Minho; Byun, Joongmoo; Kim, Dowan; Choi, Jihun; Kim, Myungsun
2017-04-01
To accurately identify the locations of microseismic events generated during hydraulic fracture stimulation, it is necessary to detect the first break of the P- and S-wave arrival times recorded at multiple receivers. These microseismic data often contain high-amplitude noise, which makes it difficult to identify the P- and S-wave arrival times. The short-term-average to long-term-average (STA/LTA) and modified energy ratio (MER) methods are based on the differences in the energy densities of the noise and signal, and are widely used to identify P-wave arrival times. The MER method yields more consistent results than the STA/LTA method for data with a low signal-to-noise (S/N) ratio. However, although the MER method performs well regardless of the delay of the signal wavelet for signals with a high S/N ratio, it may yield poor results if the signal is contaminated by high-amplitude noise and does not have minimum delay. Here we describe an improved MER (IMER) method, in which we apply a multiple-windowing approach to overcome the limitations of the MER method. The IMER method calculates an additional MER value using a third window (in addition to the original MER window) and applies a moving average filter to each MER data point to eliminate high-frequency fluctuations in the original MER distributions. The resulting distribution makes it easier to apply thresholding. The proposed IMER method was applied to synthetic and real datasets with various S/N ratios and mixed-delay wavelets. The results show that the IMER method yields a high accuracy rate of around 80% within a five-sample error for the synthetic datasets. Likewise, for the real datasets, 94.56% of the P-wave picks obtained by the IMER method deviated by less than 0.5 ms (corresponding to 2 samples) from the manual picks.
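As context for the IMER variant, the sketch below implements a basic MER picker of the kind the authors improve upon: an energy ratio of post- to pre-windows is sharpened by the instantaneous amplitude and cubed, and the pick is taken at the maximum. The window length and synthetic trace are assumptions.

```python
# Basic modified energy ratio (MER) first-break picker.
import numpy as np

def mer_pick(x, window=50):
    """Pick the sample maximizing MER = (post/pre energy ratio * |x|)**3."""
    x = np.asarray(x, dtype=float)
    c = np.concatenate(([0.0], np.cumsum(x ** 2)))   # cumulative energy
    mer = np.zeros(x.size)
    for i in range(window, x.size - window):
        pre = c[i] - c[i - window]
        post = c[i + window] - c[i]
        mer[i] = (post / (pre + 1e-12) * abs(x[i])) ** 3
    return int(np.argmax(mer))

rng = np.random.default_rng(3)
trace = rng.normal(0, 0.1, 1000)                     # background noise
trace[400:] += np.sin(0.3 * np.arange(600)) * np.exp(-np.arange(600) / 200)
print("picked sample:", mer_pick(trace))             # should be near 400
```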
Tang, Yuting; Zhang, Yue; Rosenberg, Julian N.; ...
2016-11-08
Microalgae are a valuable source of lipid feedstocks for biodiesel and valuable omega-3 fatty acids. Nannochloropsis gaditana has emerged as a promising producer of eicosapentaenoic acid (EPA) due to its fast growth rate and high EPA content. In the present study, the fatty acid profile of Nannochloropsis gaditana was found to be naturally high in EPA and devoid of docosahexaenoic acid (DHA), thereby providing an opportunity to maximize the efficacy of EPA production. Using an optimized one-step in situ transesterification method (methanol:biomass = 90 mL/g; HCl 5% by vol.; 70 °C; 1.5 h), the maximum fatty acid methyl ester (FAME) yield of Nannochloropsis gaditana cultivated under rich conditions was quantified as 10.04% ± 0.08% by weight, with EPA yields as high as 4.02% ± 0.17% based on dry biomass. The total FAME and EPA yields were 1.58- and 1.23-fold higher, respectively, than those obtained using the conventional two-step method (solvent system: methanol and chloroform). Furthermore, this one-step in situ method provides a fast and simple way to measure FAME yields and could serve as a promising method to generate eicosapentaenoic acid methyl ester from microalgae.
Covariance Matrix Evaluations for Independent Mass Fission Yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terranova, N., E-mail: nicholas.terranova@unibo.it; Serot, O.; Archier, P.
2015-01-15
Recent needs for more accurate fission product yields include covariance information to allow improved uncertainty estimation of the parameters used by design codes. The aim of this work is to investigate the possibility of generating more reliable and complete uncertainty information on independent mass fission yields. Mass yield covariances are estimated through a convolution between the multi-Gaussian empirical model based on Brosa's fission modes, which describes the pre-neutron mass yields, and the average prompt neutron multiplicity curve. The covariance generation task has been approached using the Bayesian generalized least squares method through the CONRAD code. Preliminary results on the mass yield variance-covariance matrix will be presented and discussed on physical grounds for the ²³⁵U(nₜₕ, f) and ²³⁹Pu(nₜₕ, f) reactions.
Calculation of K-shell fluorescence yields for low-Z elements
NASA Astrophysics Data System (ADS)
Nekkab, M.; Kahoul, A.; Deghfel, B.; Aylikci, N. Küp; Aylikçi, V.
2015-03-01
Analytical methods based on X-ray fluorescence are advantageous for practical applications in a variety of fields including atomic physics, X-ray fluorescence surface chemical analysis and medical research, so accurate fluorescence yields (ωK) are required for these applications. In this contribution we report new parameters for the calculation of K-shell fluorescence yields (ωK) of elements in the range 11 ≤ Z ≤ 30. The experimental data are interpolated using the well-known analytical function (ωK/(1−ωK))^(1/q) (where q = 3, 3.5 and 4) versus Z to deduce the empirical K-shell fluorescence yields. A comparison is made between the results of the procedures followed here and other theoretical and semi-empirical fluorescence yield values. Reasonable agreement was typically obtained between our results and other works.
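The interpolation scheme described above can be sketched as follows: transform the measured yields via (ωK/(1−ωK))^(1/q), fit a low-order polynomial in Z, and invert. The sample (Z, ωK) pairs below are invented placeholders, not the measured data.

```python
# Semi-empirical fluorescence yield interpolation via a transformed fit.
import numpy as np

Z = np.array([11, 14, 18, 22, 26, 30], dtype=float)
omega_K = np.array([0.02, 0.05, 0.12, 0.21, 0.32, 0.47])  # illustrative only

q = 3.5
y = (omega_K / (1.0 - omega_K)) ** (1.0 / q)   # linearizing transform
coeffs = np.polyfit(Z, y, deg=2)               # low-order polynomial in Z

def omega_fit(z):
    """Invert the transform to recover the interpolated omega_K at Z = z."""
    s = np.polyval(coeffs, z) ** q
    return s / (1.0 + s)

print(omega_fit(20.0))   # interpolated K-shell fluorescence yield at Z = 20
```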
A Cubic Radial Basis Function in the MLPG Method for Beam Problems
NASA Technical Reports Server (NTRS)
Raju, I. S.; Phillips, D. R.
2002-01-01
A non-compactly supported cubic radial basis function implementation of the MLPG method for beam problems is presented. The evaluation of the derivatives of the shape functions obtained from the radial basis function interpolation is much simpler than the evaluation of the moving least squares shape function derivatives. The radial basis MLPG yields results as accurate as, or better than, those obtained by the conventional MLPG method for problems with discontinuous and other complex loading conditions.
Forecasting space weather: Can new econometric methods improve accuracy?
NASA Astrophysics Data System (ADS)
Reikard, Gordon
2011-06-01
Space weather forecasts are currently used in areas ranging from navigation and communication to electric power system operations. The relevant forecast horizons can range from as little as 24 h to several days. This paper analyzes the predictability of two major space weather measures using new time series methods, many of them derived from econometrics. The data sets are the Ap geomagnetic index and the solar radio flux at 10.7 cm. The methods tested include nonlinear regressions, neural networks, frequency domain algorithms, GARCH models (which utilize the residual variance), state transition models, and models that combine elements of several techniques. While combined models are complex, they can be programmed using modern statistical software. The data frequency is daily, and forecasting experiments are run over horizons ranging from 1 to 7 days. Two major conclusions stand out. First, the frequency domain method forecasts the Ap index more accurately than any time domain model, including both regressions and neural networks. This finding is very robust, and holds for all forecast horizons. Combining the frequency domain method with other techniques yields a further small improvement in accuracy. Second, the neural network forecasts the solar flux more accurately than any other method, although at short horizons (2 days or less) the regression and net yield similar results. The neural net does best when it includes measures of the long-term component in the data.
Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods
NASA Astrophysics Data System (ADS)
Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.
2014-12-01
Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In the general procedure, MCMC simulations are first conducted for each individual model, and the MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that this geometric-mean estimate suffers from a numerically low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield an accurate estimate of the marginal likelihood. To resolve this problem, a thermodynamic method is used, running multiple MCMC chains with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the geometric-mean method. This is also demonstrated for a groundwater modeling case considering four alternative models postulated from different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
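The thermodynamic identity behind the method is log p(y) = ∫₀¹ E_β[log L] dβ, where the expectation is taken under the power posterior at heating coefficient β. The sketch below evaluates this for a conjugate Gaussian toy model where the expectations are available in closed form, so only the quadrature over β is illustrated (no MCMC).

```python
# Thermodynamic integration for the marginal likelihood of a toy model:
# y ~ N(theta, 1), theta ~ N(0, 1), single observation y = 1.2.
import numpy as np

y = 1.2
betas = np.linspace(0.0, 1.0, 21)
expect_loglik = []
for b in betas:
    # Power posterior: theta | beta ~ N(b*y/(b+1), 1/(b+1)) by conjugacy.
    mean, var = b * y / (b + 1.0), 1.0 / (b + 1.0)
    # E[log L] = -0.5*log(2*pi) - 0.5*E[(y - theta)^2]
    expect_loglik.append(-0.5 * np.log(2 * np.pi)
                         - 0.5 * ((y - mean) ** 2 + var))

ell = np.array(expect_loglik)
log_ml = np.sum((ell[1:] + ell[:-1]) / 2 * np.diff(betas))  # trapezoid rule
exact = -0.5 * np.log(2 * np.pi * 2.0) - y**2 / 4.0         # y ~ N(0, 2)
print(log_ml, exact)   # the two values should closely agree
```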
Reverse radiance: a fast accurate method for determining luminance
NASA Astrophysics Data System (ADS)
Moore, Kenneth E.; Rykowski, Ronald F.; Gangadhara, Sanjay
2012-10-01
Reverse ray tracing from a region of interest backward to the source has long been proposed as an efficient method of determining luminous flux. The idea is to trace rays only from where the final flux needs to be known back to the source, rather than tracing in the forward direction from the source outward to see where the light goes. Once the reverse ray reaches the source, the radiance the equivalent forward ray would have represented is determined and the resulting flux computed. Although reverse ray tracing is conceptually simple, the method critically depends upon an accurate source model in both the near and far field. An overly simplified source model, such as an ideal Lambertian surface, substantially detracts from the accuracy, and thus the benefit, of the method. This paper introduces an improved method of reverse ray tracing that we call Reverse Radiance, which avoids assumptions about the source properties. The new method uses measured data from a Source Imaging Goniometer (SIG) that simultaneously measures near- and far-field luminous data. Incorporating this data into a fast reverse ray tracing integration method yields fast, accurate data for a wide variety of illumination problems.
Simulation of solute transport across low-permeability barrier walls
Harte, P.T.; Konikow, Leonard F.; Hornberger, G.Z.
2006-01-01
Low-permeability, non-reactive barrier walls are often used to contain contaminants in an aquifer. Rates of solute transport through such barriers are typically many orders of magnitude slower than rates through the aquifer. Nevertheless, the success of remedial actions may be sensitive to these low rates of transport. Two numerical simulation methods for representing low-permeability barriers in a finite-difference groundwater-flow and transport model were tested. In the first method, the hydraulic properties of the barrier were represented directly on grid cells, and in the second method, the intercell hydraulic-conductance values were adjusted to approximate the reduction in horizontal flow, allowing use of a coarser and computationally efficient grid. The alternative methods were tested and evaluated on the basis of hypothetical test problems and a field case involving tetrachloroethylene (PCE) contamination at a Superfund site in New Hampshire. For all cases, advective transport across the barrier was negligible, but preexisting numerical approaches to calculate dispersion yielded dispersive fluxes that were greater than expected. A transport model (MODFLOW-GWT) was modified to (1) allow different dispersive and diffusive properties to be assigned to the barrier than the adjacent aquifer and (2) more accurately calculate dispersion from concentration gradients and solute fluxes near barriers. The new approach yields reasonable and accurate concentrations for the test cases. © 2006.
Accurate FRET Measurements within Single Diffusing Biomolecules Using Alternating-Laser Excitation
Lee, Nam Ki; Kapanidis, Achillefs N.; Wang, You; Michalet, Xavier; Mukhopadhyay, Jayanta; Ebright, Richard H.; Weiss, Shimon
2005-01-01
Fluorescence resonance energy transfer (FRET) between a donor (D) and an acceptor (A) at the single-molecule level currently provides qualitative information about distance, and quantitative information about kinetics of distance changes. Here, we used the sorting ability of confocal microscopy equipped with alternating-laser excitation (ALEX) to measure accurate FRET efficiencies and distances from single molecules, using corrections that account for cross-talk terms that contaminate the FRET-induced signal, and for differences in the detection efficiency and quantum yield of the probes. ALEX yields accurate FRET independent of instrumental factors, such as excitation intensity or detector alignment. Using DNA fragments, we showed that ALEX-based distances agree well with predictions from a cylindrical model of DNA; ALEX-based distances fit better to theory than distances obtained at the ensemble level. Distance measurements within transcription complexes agreed well with ensemble-FRET measurements, and with structural models based on ensemble-FRET and x-ray crystallography. ALEX can benefit structural analysis of biomolecules, especially when such molecules are inaccessible to conventional structural methods due to heterogeneity or transient nature. PMID:15653725
Determining the best method of Nellcor pulse oximeter sensor application in neonates.
Saraswat, A; Simionato, L; Dawson, J A; Thio, M; Kamlin, C O F; Owen, L; Schmölzer, G; Davis, P G
2012-05-01
To identify the optimal sensor application method that gave the quickest display of accurate heart rate (HR) data using the Nellcor OxiMax N-600x pulse oximeter (PO). Stable infants who were monitored with an electrocardiograph were included. Three sensor application techniques were studied: (i) sensor connected to cable, then applied to infant; (ii) sensor connected to cable, applied to investigator's finger, and then to infant; (iii) sensor applied to infant, then connected to cable. The order of techniques tested was randomized for each infant. Time taken to apply the PO sensor, to display data and to display accurate data (HR(PO) = HR(ECG) ± 3 bpm) were recorded using a stopwatch. Forty infants were studied [mean (SD) birthweight, 1455 (872) g; gestational age, 31 (4) weeks; post-menstrual age, 34 (4) weeks]. Method 3 acquired any data significantly faster than methods 1 (p = 0.013; CI, -9.6 to -3.0 sec) and 2 (p = 0.004; CI, -5.9 to -1.2 sec). Method 3 acquired accurate data significantly faster than method 1 (p = 0.016; CI, -9.4 to -1.0 sec), but not method 2 (p = 0.28). Applying the sensor to the infant before connecting it to the cable yields the fastest acquisition of accurate HR data from the Nellcor PO. © 2011 The Author(s)/Acta Paediatrica © 2011 Foundation Acta Paediatrica.
Getting Good Results from Survey Research: Part III
ERIC Educational Resources Information Center
McNamara, James F.
2004-01-01
This article is the third contribution to a research methods series dedicated to getting good results from survey research. In this series, "good results" is a stenographic term used to define surveys that yield accurate and meaningful information that decision makers can use with confidence when conducting program evaluation and policy assessment…
NASA Astrophysics Data System (ADS)
Tomes, John J.; Finlayson, Chris E.
2016-09-01
We report upon the exploitation of the latest 3D printing technologies to provide low-cost instrumentation solutions, for use in an undergraduate level final-year project. The project addresses prescient research issues in optoelectronics, which would otherwise be inaccessible to such undergraduate student projects. The experimental use of an integrating sphere in conjunction with a desktop spectrometer presents opportunities to use easily handled, low cost materials as a means to illustrate many areas of physics such as spectroscopy, lasers, optics, simple circuits, black body radiation and data gathering. Presented here is a 3rd year undergraduate physics project which developed a low cost (£25) method to manufacture an experimentally accurate integrating sphere by 3D printing. Details are given of both a homemade internal reflectance coating formulated from readily available materials, and a robust instrument calibration method using a tungsten bulb. The instrument is demonstrated to give accurate and reproducible experimental measurements of luminescence quantum yield of various semiconducting fluorophores, in excellent agreement with literature values.
Heßelmann, Andreas
2015-04-14
Molecular excitation energies have been calculated with time-dependent density-functional theory (TDDFT) using random-phase approximation Hessians augmented with exact exchange contributions in various orders. It has been observed that this approach yields fairly accurate local valence excitations if combined with accurate asymptotically corrected exchange-correlation potentials used in the ground-state Kohn-Sham calculations. The inclusion of long-range particle-particle with hole-hole interactions in the kernel leads to errors of 0.14 eV only for the lowest excitations of a selection of three alkene, three carbonyl, and five azabenzene molecules, thus surpassing the accuracy of a number of common TDDFT and even some wave function correlation methods. In the case of long-range charge-transfer excitations, the method typically underestimates accurate reference excitation energies by 8% on average, which is better than with standard hybrid-GGA functionals but worse compared to range-separated functional approximations.
Upadhya, Vinayak; Pai, Sandeep R.; Sharma, Ajay K.; Hegde, Harsha V.; Kholkute, Sanjiva D.; Joshi, Rajesh K.
2014-01-01
The effects of varying temperature at constant solvent pressure on the extraction efficiency of two chemically different alkaloids were studied. Camptothecin (CPT) from the stem of Nothapodytes nimmoniana (Grah.) Mabb. and piperine from the fruits of Piper nigrum L. were extracted using an Accelerated Solvent Extractor (ASE). Three cycles of extraction for a particular sample cell at a given temperature assured complete extraction. CPT and piperine were determined and quantified using a simple and efficient UFLC-PDA (245 and 343 nm) method. Increasing temperature improved extraction efficiency and yielded higher amounts of CPT, whereas temperature had little effect on the yield of piperine. The maximum yield of CPT was achieved at 80°C and that of piperine at 40°C. The study thus establishes compound-specific extraction of CPT from N. nimmoniana and piperine from P. nigrum using the ASE method, and indicates its use for simple, fast, and accurate extraction of the compound of interest. PMID:24527258
Floating shock fitting via Lagrangian adaptive meshes
NASA Technical Reports Server (NTRS)
Vanrosendale, John
1994-01-01
In recent work we have formulated a new approach to compressible flow simulation, combining the advantages of shock fitting and shock capturing. Using a cell-centered Roe scheme discretization on unstructured meshes, we warp the mesh while marching to steady state, so that mesh edges align with shocks and other discontinuities. This new algorithm, the Shock-fitting Lagrangian Adaptive Method (SLAM), is, in effect, a reliable shock-capturing algorithm which yields shock-fitted accuracy at convergence. Shock-capturing algorithms like this, which warp the mesh to yield shock-fitted accuracy, are new and relatively untried. However, their potential is clear. In the context of sonic booms, accurate calculation of near-field sonic boom signatures is critical to the design of the High Speed Civil Transport (HSCT). SLAM should allow computation of accurate N-wave pressure signatures on comparatively coarse meshes, significantly enhancing our ability to design low-boom configurations for high-speed aircraft.
Shi, Yan
2014-02-01
Degradation of fermentable monosaccharides is one of the primary concerns in acid prehydrolysis of lignocellulosic biomass. Recently, in our research on the degradation of pure monosaccharides in aqueous SO₂ solution using gas chromatography (GC) analysis, we found that the detected yield was not the actual yield of each monosaccharide, owing to the formation of sugar-bisulfite adducts, and we developed a new method that enables accurate determination of the recovery yield of each monosaccharide in aqueous SO₂ solution by GC analysis. Using this method, the degradation of each monosaccharide in aqueous SO₂ was investigated, and the results showed that sugar-bisulfite adducts have different inhibiting effects on the degradation of each monosaccharide in aqueous SO₂ because of their differing stabilities. In addition, NMR testing demonstrated the possible existence of a reaction between the conjugate base HSO₃⁻ and the aldehyde group of the sugars in the acid system.
Sun, Ye; Tao, Jing; Zhang, Geoff G Z; Yu, Lian
2010-09-01
A previous method for measuring solubilities of crystalline drugs in polymers has been improved to enable longer equilibration, and was used to survey the solubilities of indomethacin (IMC) and nifedipine (NIF) in two homo-polymers [polyvinyl pyrrolidone (PVP) and polyvinyl acetate (PVAc)] and their co-polymer (PVP/VA). These data are important for understanding the stability of amorphous drug-polymer dispersions, a strategy actively explored for delivering poorly soluble drugs. Measuring solubilities in polymers is difficult because their high viscosities impede the attainment of solubility equilibrium. In this method, a drug-polymer mixture prepared by cryo-milling is annealed at different temperatures and analyzed by differential scanning calorimetry to determine whether undissolved crystals remain, and thus the upper and lower bounds of the equilibrium solution temperature. The new annealing method yielded results consistent with those obtained with the previous scanning method at relatively high temperatures, but slightly revised the previous results at lower temperatures. It also lowered the temperature of measurement closer to the glass transition temperature. For D-mannitol and IMC dissolving in PVP, the polymer's molecular weight has little effect on the weight-based solubility. For IMC and NIF, the dissolving powers of the polymers follow the order PVP > PVP/VA > PVAc. In each polymer studied, NIF is less soluble than IMC. The activities of IMC and NIF dissolved in various polymers are reasonably well fitted to the Flory-Huggins model, yielding the relevant drug-polymer interaction parameters. The new annealing method yields more accurate data than the previous scanning method when solubility equilibrium is slow to achieve. In practice, these two methods can be combined for efficiency. The measured solubilities are not readily anticipated, which underscores the importance of accurate experimental data for developing predictive models.
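The Flory-Huggins fit mentioned above can be illustrated with a short sketch: given drug activities measured at several drug volume fractions, the interaction parameter χ is obtained by least squares. This is a minimal sketch with hypothetical data and the common large-polymer approximation, not the study's actual fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def ln_activity(phi_drug, chi):
    """Flory-Huggins ln(activity) of a drug in a high-molecular-weight
    polymer, approximating the (1 - 1/m) lattice factor by 1."""
    phi_p = 1.0 - phi_drug
    return np.log(phi_drug) + phi_p + chi * phi_p**2

# Hypothetical (volume fraction, activity) solubility data:
phi = np.array([0.05, 0.10, 0.20, 0.30])
ln_a = np.log(np.array([0.35, 0.55, 0.78, 0.90]))

(chi,), _ = curve_fit(ln_activity, phi, ln_a, p0=[1.0])
print(f"drug-polymer interaction parameter chi = {chi:.2f}")
```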
Gradient Augmented Level Set Method for Two Phase Flow Simulations with Phase Change
NASA Astrophysics Data System (ADS)
Anumolu, C. R. Lakshman; Trujillo, Mario F.
2016-11-01
A sharp interface capturing approach is presented for two-phase flow simulations with phase change. The Gradient Augmented Levelset method is coupled with the two-phase momentum and energy equations to advect the liquid-gas interface and predict heat transfer with phase change. The Ghost Fluid Method (GFM) is adopted for the velocity to discretize the advection and diffusion terms in the interfacial region. Furthermore, the GFM is employed to treat the discontinuity in the stress tensor, velocity, and temperature gradient, yielding an accurate treatment of the jump conditions. Thermal convection and diffusion terms are approximated by explicitly identifying the interface location, resulting in a sharp treatment for the energy solution. This sharp treatment is extended to estimate the interfacial mass transfer rate. Within each computational cell, a cubic Hermite interpolating polynomial is employed to describe the interface location, which is locally fourth-order accurate. This extent of subgrid-level description provides an accurate methodology for treating various interfacial processes with a high degree of sharpness. The ability to predict the interface and temperature evolutions accurately is illustrated by comparing numerical results with existing 1D to 3D analytical solutions.
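For context on the cell-local Hermite representation, the sketch below shows a 1D cubic Hermite interpolant built from endpoint values and derivatives, the same ingredients a gradient-augmented level-set scheme stores per cell; the specific function and cell are illustrative only.

```python
import numpy as np

def hermite_cubic(x0, x1, f0, f1, g0, g1, x):
    """Cubic Hermite interpolant on [x0, x1] built from endpoint values
    (f0, f1) and endpoint derivatives (g0, g1)."""
    h = x1 - x0
    t = (x - x0) / h
    h00 = 2 * t**3 - 3 * t**2 + 1          # standard Hermite basis functions
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * f0 + h * h10 * g0 + h01 * f1 + h * h11 * g1

# Reconstruct a level-set profile phi(x) = x - 0.37 inside one cell [0, 1];
# the sign change brackets the interface at x = 0.37 to machine precision.
xs = np.linspace(0.0, 1.0, 11)
print(hermite_cubic(0.0, 1.0, -0.37, 0.63, 1.0, 1.0, xs))
```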
Satellite-based assessment of grassland yields
NASA Astrophysics Data System (ADS)
Grant, K.; Siegmund, R.; Wagner, M.; Hartmann, S.
2015-04-01
Cutting date and frequency are important parameters determining grassland yields, in addition to the effects of weather, soil conditions, plant composition and fertilisation. Because accurate and area-wide data on grassland yields are currently not available, cutting frequency can be used to estimate yields. In this project, a method to detect cutting dates via surface changes in radar images is developed. The combination of this method with a grassland yield model will result in more reliable and region-wide numbers for grassland yields. For the test phase of the monitoring project, a study area situated southeast of Munich, Germany, was chosen due to its high density of managed grassland. For detecting grassland cutting, robust amplitude change detection techniques are used, evaluating radar amplitude or backscatter statistics before and after the cutting event. CosmoSkyMed and Sentinel-1A data were analysed. Although the SAR systems had various acquisition geometries, the number of detected grassland cuts was quite similar. Of 154 tested grassland plots, covering 436 ha in total, 116 and 111 cuts were detected using CosmoSkyMed and Sentinel-1A radar data, respectively. Further improvement of the radar data processing, as well as additional analyses with a higher sample number and wider land-surface coverage, will follow to optimise the method and to validate and generalise the results of this feasibility study. The automation of this method will then allow for an area-wide and cost-efficient cutting-date detection service, improving grassland yield models.
Stability of compressible Taylor-Couette flow
NASA Technical Reports Server (NTRS)
Kao, Kai-Hsiung; Chow, Chuen-Yen
1991-01-01
Compressible stability equations are solved using the spectral collocation method in an attempt to study the effects of temperature difference and compressibility on the stability of Taylor-Couette flow. It is found that the Chebyshev collocation spectral method yields highly accurate results using fewer grid points for solving stability problems. Comparisons are made between the result obtained by assuming small Mach number with a uniform temperature distribution and that based on fully incompressible analysis.
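As an illustration of why Chebyshev collocation yields high accuracy with few grid points, the sketch below builds the standard Chebyshev differentiation matrix (after Trefethen) and solves a simple model eigenproblem, not the compressible Taylor-Couette stability equations.

```python
import numpy as np

def cheb(N):
    """Chebyshev collocation points and differentiation matrix (Trefethen)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Model eigenproblem u'' = lambda*u, u(+-1) = 0, exact lambda_k = -(k*pi/2)**2
N = 24
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]                    # enforce Dirichlet BCs by deletion
lam = np.sort(np.linalg.eigvals(D2).real)[::-1]
print(lam[:4])                              # leading eigenvalues from only 24 points
print(-(np.arange(1, 5) * np.pi / 2) ** 2)  # exact values for comparison
```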
Pomes, M.L.; Thurman, E.M.; Aga, D.S.; Goolsby, D.A.
1998-01-01
Triazine and chloroacetanilide concentrations in rainfall samples collected from a 23-state region of the United States were analyzed with microtiter-plate enzyme-linked immunosorbent assay (ELISA). Thirty-six percent of rainfall samples (2072 out of 5691) were confirmed using gas chromatography/mass spectrometry (GC/MS) to evaluate the operating performance of ELISA as a screening test. Comparison of ELISA to GC/MS results showed that the two ELISA methods accurately reported GC/MS results (m = 1), but with more variability evident with the triazine than with the chloroacetanilide ELISA. Bayes's rule, a standardized method to report the results of screening tests, indicated that the two ELISA methods yielded comparable predictive values (80%), but the triazine ELISA yielded a false-positive rate of 11.8% and the chloroacetanilide ELISA yielded a false-negative rate of 23.1%. The false-positive rate for the triazine ELISA may arise from cross-reactivity with an unknown triazine or metabolite. The false-negative rate of the chloroacetanilide ELISA probably resulted from a combination of low sensitivity at the reporting limit of 0.15 µg/L and a distribution characterized by 75% of the samples at or below the reporting limit of 0.15 µg/L.
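The predictive-value calculation behind such Bayes's-rule screening statistics can be sketched in a few lines; the numbers passed in below are illustrative, not the study's exact inputs.

```python
def ppv(sensitivity, false_positive_rate, prevalence):
    """Positive predictive value, P(true positive | test positive), by Bayes' rule."""
    tp = sensitivity * prevalence
    fp = false_positive_rate * (1.0 - prevalence)
    return tp / (tp + fp)

# Illustrative inputs only (not the study's exact sensitivity/prevalence):
print(f"PPV = {ppv(sensitivity=0.95, false_positive_rate=0.118, prevalence=0.36):.2f}")
```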
NASA Astrophysics Data System (ADS)
Noor, M. J. Md; Ibrahim, A.; Rahman, A. S. A.
2018-04-01
Small-strain triaxial test measurement is considered significantly more accurate than external strain measurement using the conventional method, owing to the systematic errors normally associated with the test. Three submersible miniature linear variable differential transducers (LVDTs) were mounted on yokes clamped directly onto the soil sample, spaced equally at 120° from one another. The device setup, using a 0.4 N resolution load cell and a 16-bit AD converter, was capable of consistently resolving displacements of less than 1 µm and measuring axial strains ranging from less than 0.001% to 2.5%. Further analysis of the small-strain local measurement data was performed using the new Normalized Rotational Multiple Yield Surface Framework (NRMYSF) method and compared with the existing Rotational Multiple Yield Surface Framework (RMYSF) prediction method. The prediction of shear strength based on the combined intrinsic curvilinear shear strength envelope using small-strain triaxial test data confirmed the significant improvement and reliability of the measurement and analysis methods. Moreover, the NRMYSF method shows excellent data prediction and a significant improvement toward more reliable prediction of soil strength, which can reduce the cost and time of experimental laboratory testing.
Determination of optical band gap of powder-form nanomaterials with improved accuracy
NASA Astrophysics Data System (ADS)
Ahsan, Ragib; Khan, Md. Ziaur Rahman; Basith, Mohammed Abdul
2017-10-01
Accurate determination of a material's optical band gap lies in the precise measurement of its absorption coefficients, either from its absorbance via the Beer-Lambert law or diffuse reflectance spectrum via the Kubelka-Munk function. Absorption coefficients of powder-form nanomaterials calculated from absorbance spectrum do not match those calculated from diffuse reflectance spectrum, implying the inaccuracy of the traditional optical band gap measurement method for such samples. We have modified the Beer-Lambert law and the Kubelka-Munk function with proper approximations for powder-form nanomaterials. Applying the modified method for powder-form nanomaterial samples, both absorbance and diffuse reflectance spectra yield exactly the same absorption coefficients and therefore accurately determine the optical band gap.
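A minimal sketch of the conventional workflow the authors improve upon, converting a diffuse reflectance spectrum to absorption via the Kubelka-Munk function and extrapolating a Tauc plot, is shown below with synthetic data; it does not include the paper's modified approximations.

```python
import numpy as np

# Kubelka-Munk F(R) = (1 - R)**2 / (2R) followed by a Tauc plot for a direct
# gap; the reflectance spectrum is synthetic (gap placed at 3.2 eV).
E = np.linspace(2.0, 4.0, 400)                           # photon energy, eV
alpha_s = np.clip(E - 3.2, 0.0, None) ** 0.5 / E         # alpha/s of a direct gap
R = 1.0 + alpha_s - np.sqrt(alpha_s**2 + 2.0 * alpha_s)  # inverse Kubelka-Munk

F = (1.0 - R) ** 2 / (2.0 * R)                           # Kubelka-Munk function
y = (F * E) ** 2                                         # Tauc ordinate (direct gap)

mask = (y > 0.2 * y.max()) & (y < 0.8 * y.max())         # linear region above gap
slope, intercept = np.polyfit(E[mask], y[mask], 1)
print(f"estimated band gap: {-intercept / slope:.2f} eV")   # ~3.2
```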
White, Alec F.; Epifanovsky, Evgeny; McCurdy, C. William; ...
2017-06-21
The method of complex basis functions is applied to molecular resonances at correlated levels of theory. Møller-Plesset perturbation theory at second order and equation-of-motion electron-attachment coupled-cluster singles and doubles (EOM-EA-CCSD) methods based on a non-Hermitian self-consistent-field reference are used to compute accurate Siegert energies for shape resonances in small molecules including N₂⁻, CO⁻, CO₂⁻, and CH₂O⁻. Analytic continuation of complex θ-trajectories is used to compute Siegert energies, and the θ-trajectories of energy differences are found to yield more consistent results than those of total energies. Furthermore, the ability of such methods to accurately compute complex potential energy surfaces is investigated, and the possibility of using EOM-EA-CCSD for Feshbach resonances is explored in the context of e-helium scattering.
Power measurements of spark discharge experiments.
Navarro-Gonzalez, R; Romero, A; Honda, Y
1998-04-01
An accurate and precise knowledge of the amount of energy introduced into prebiotic discharge experiments is important for understanding the relative roles of different energy sources in the synthesis of organic compounds in the primitive Earth's atmosphere and other planetary atmospheres. Two methods widely used to determine the power of spark discharges were evaluated, namely calorimetric and oscilloscopic, using a chemically inert gas. The power dissipated by the spark in argon at 500 Torr was determined to be 2.4 (+12%/-17%) J s⁻¹ by calorimetry and 5.3 (±15%) J s⁻¹ by the oscilloscope. The difference between the two methods was attributed to (1) an incomplete conversion of the electric energy into heat, and (2) heat loss from the spark channel to the connecting cables through the electrodes. The latter contribution leads to an unwanted effect in the spark channel by lowering the spark product yields as the spark channel cools by mixing with surrounding air and by losing heat to the electrodes. Once the concentrations of the spark products have frozen at the freeze-out temperature, any additional loss of heat from the spark channel to the electrodes has no consequence for product yields. Therefore, neither method accurately determines the net energy transferred to the system. Lacking a quantitative knowledge of the amount of heat lost from the spark channel during the interval from ignition of the spark to when the freeze-out temperature is reached, it is recommended to derive the energy yields of the spark products from the mean value of the two methods, with the uncertainty being their standard deviation. For the case of argon at 500 Torr, this would be 3.8 (±50%) J s⁻¹.
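The recommended combination of the two power measurements is simple arithmetic, reproduced here for clarity:

```python
import statistics

# Mean of the calorimetric and oscilloscopic powers, with their standard
# deviation as the uncertainty (argon, 500 Torr).
powers = [2.4, 5.3]                      # J/s
mean = statistics.mean(powers)           # 3.85, reported as 3.8 J/s
sd = statistics.stdev(powers)            # ~2.05 J/s
print(f"{mean:.1f} J/s +/- {100 * sd / mean:.0f}%")   # close to the reported +/-50%
```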
Automatic yield-line analysis of slabs using discontinuity layout optimization
Gilbert, Matthew; He, Linwei; Smith, Colin C.; Le, Canh V.
2014-01-01
The yield-line method of analysis is a long established and extremely effective means of estimating the maximum load sustainable by a slab or plate. However, although numerous attempts to automate the process of directly identifying the critical pattern of yield-lines have been made over the past few decades, to date none has proved capable of reliably analysing slabs of arbitrary geometry. Here, it is demonstrated that the discontinuity layout optimization (DLO) procedure can successfully be applied to such problems. The procedure involves discretization of the problem using nodes inter-connected by potential yield-line discontinuities, with the critical layout of these then identified using linear programming. The procedure is applied to various benchmark problems, demonstrating that highly accurate solutions can be obtained, and showing that DLO provides a truly systematic means of directly and reliably automatically identifying yield-line patterns. Finally, since the critical yield-line patterns for many problems are found to be quite complex in form, a means of automatically simplifying these is presented. PMID:25104905
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nekkab, M., E-mail: mohammed-nekkab@yahoo.com; LESIMS laboratory, Physics Department, Faculty of Sciences, University of Setif 1, 19000 Setif; Kahoul, A.
Analytical methods based on X-ray fluorescence are advantageous for practical applications in a variety of fields including atomic physics, X-ray fluorescence surface chemical analysis, and medical research, so accurate fluorescence yields (ω_K) are required for these applications. In this contribution we report new parameters for the calculation of K-shell fluorescence yields (ω_K) of elements in the range 11 ≤ Z ≤ 30. The experimental data are interpolated using the well-known analytical function (ω_K/(1−ω_K))^(1/q) (where q = 3, 3.5 and 4) versus Z to deduce the empirical K-shell fluorescence yields. A comparison is made between the results of the procedures followed here and theoretical and other semi-empirical fluorescence yield values. Reasonable agreement was typically obtained between our results and other works.
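A sketch of this semi-empirical fitting scheme follows: linearize the yields through s = (ω_K/(1−ω_K))^(1/q), fit s as a polynomial in Z, and invert. The six (Z, ω) pairs are round illustrative values, not the study's fitted data set.

```python
import numpy as np

q = 3.0
Z = np.array([13, 16, 20, 24, 26, 29])
omega = np.array([0.037, 0.078, 0.163, 0.282, 0.340, 0.445])

s = (omega / (1.0 - omega)) ** (1.0 / q)
coeffs = np.polyfit(Z, s, 2)              # quadratic fit in Z

def omega_k(z):
    """Invert the linearization: omega = s**q / (1 + s**q)."""
    s_z = np.polyval(coeffs, z)
    return s_z**q / (1.0 + s_z**q)

print(f"semi-empirical omega_K at Z = 22: {omega_k(22):.3f}")
```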
A new method for ultrasound detection of interfacial position in gas-liquid two-phase flow.
Coutinho, Fábio Rizental; Ofuchi, César Yutaka; de Arruda, Lúcia Valéria Ramos; Neves, Flávio; Morales, Rigoberto E M
2014-05-22
Ultrasonic measurement techniques for velocity estimation are currently widely used in fluid flow studies and applications. An accurate determination of interfacial position in gas-liquid two-phase flows is still an open problem. The quality of this information directly reflects on the accuracy of void fraction measurement, and it provides a means of discriminating velocity information of both phases. The algorithm known as Velocity Matched Spectrum (VM Spectrum) is a velocity estimator that stands out from other methods by returning a spectrum of velocities for each interrogated volume sample. Interface detection of free-rising bubbles in quiescent liquid presents some difficulties due to abrupt changes in interface inclination. In this work a method based on the velocity spectrum curve shape is used to generate a spatial-temporal mapping, which, after spatial filtering, yields an accurate contour of the air-water interface. It is shown that the proposed technique yields an RMS error between 1.71 and 3.39 and probabilities of detection failure and false detection between 0.89% and 11.9% in determining the spatial-temporal gas-liquid interface position in the flow of free-rising bubbles in stagnant liquid. This result is valid both for a free path and for a transducer emitting through a metallic plate or a Plexiglas pipe.
Computational fragment-based screening using RosettaLigand: the SAMPL3 challenge
NASA Astrophysics Data System (ADS)
Kumar, Ashutosh; Zhang, Kam Y. J.
2012-05-01
The SAMPL3 fragment-based virtual screening challenge provides a valuable opportunity for researchers to test their programs, methods and screening protocols in a blind testing environment. We participated in the SAMPL3 challenge and evaluated our virtual fragment screening protocol, which involves RosettaLigand as the core component, by screening a 500-fragment Maybridge library against bovine pancreatic trypsin. Our study reaffirmed that the real test for any virtual screening approach is in a blind testing environment. The analyses presented in this paper also showed that virtual screening performance can be improved if a set of known active compounds is available and parameters and methods that yield better enrichment are selected. Our study also highlighted that, to achieve accurate orientation and conformation of ligands within a binding site, selecting an appropriate method to calculate partial charges is important. Another finding is that using multiple receptor ensembles in docking does not always yield better enrichment than individual receptors. On the basis of our results and retrospective analyses from the SAMPL3 fragment screening challenge, we anticipate that the chances of success in a fragment screening process could be increased significantly with careful selection of receptor structures, protein flexibility, sufficient conformational sampling within the binding pocket, and accurate assignment of ligand and protein partial charges.
Smoothing of climate time series revisited
NASA Astrophysics Data System (ADS)
Mann, Michael E.
2008-08-01
We present an easily implemented method for smoothing climate time series, generalizing upon an approach previously described by Mann (2004). The method adaptively weights the three lowest-order time series boundary constraints to optimize the fit with the raw time series. We apply the method to the instrumental global mean temperature series from 1850 to 2007 and to various surrogate global mean temperature series from 1850 to 2100 derived from the CMIP3 multimodel intercomparison project. These applications demonstrate that the adaptive method systematically outperforms certain widely used default smoothing methods, and is more likely to yield accurate assessments of long-term warming trends.
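A rough sketch in the spirit of this adaptive approach is given below: smooth the series under each of the three lowest-order boundary constraints (minimum norm, minimum slope, minimum roughness, implemented here as simple padding rules) and pick the convex weighting that minimizes misfit to the raw series. The filter choice and padding details are assumptions, not Mann's exact implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smooth(x, cutoff=0.1):
    """Zero-phase low-pass filter standing in for the smoothing kernel."""
    b, a = butter(2, cutoff)
    return filtfilt(b, a, x)

def constrained_smooths(x):
    """Smooths under three boundary constraints via padding rules:
    minimum norm (pad with the mean), minimum slope (reflect),
    minimum roughness (reflect and flip about the endpoint)."""
    n = len(x)
    pads = [
        np.r_[np.full(n, x.mean()), x, np.full(n, x.mean())],
        np.r_[x[::-1], x, x[::-1]],
        np.r_[2 * x[0] - x[::-1], x, 2 * x[-1] - x[::-1]],
    ]
    return [smooth(p)[n:2 * n] for p in pads]

def adaptive_smooth(x, steps=21):
    """Convex combination of the three smooths that best fits the raw data."""
    s0, s1, s2 = constrained_smooths(x)
    best, best_mse = None, np.inf
    for w0 in np.linspace(0.0, 1.0, steps):
        for w1 in np.linspace(0.0, 1.0 - w0, steps):
            cand = w0 * s0 + w1 * s1 + (1.0 - w0 - w1) * s2
            mse = np.mean((cand - x) ** 2)
            if mse < best_mse:
                best, best_mse = cand, mse
    return best

t = np.arange(158)   # toy stand-in for an 1850-2007 annual series
x = 0.005 * t + 0.1 * np.sin(2 * np.pi * t / 60) + 0.05 * np.random.randn(158)
print(adaptive_smooth(x)[-5:])   # smoothed values near the recent boundary
```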
Murrell, Ebony G.; Juliano, Steven A.
2012-01-01
Resource competition theory predicts that R*, the equilibrium resource amount yielding zero growth of a consumer population, should predict species' competitive abilities for that resource. This concept has been supported for unicellular organisms, but has not been well-tested for metazoans, probably due to the difficulty of raising experimental populations to equilibrium and measuring population growth rates for species with long or complex life cycles. We developed an index (Rindex) of R* based on demography of one insect cohort, growing from egg to adult in a non-equilibrium setting, and tested whether Rindex yielded accurate predictions of competitive abilities using mosquitoes as a model system. We estimated finite rate of increase (λ′) from demographic data for cohorts of three mosquito species raised with different detritus amounts, and estimated each species' Rindex using nonlinear regressions of λ′ vs. initial detritus amount. All three species' Rindex differed significantly, and accurately predicted competitive hierarchy of the species determined in simultaneous pairwise competition experiments. Our Rindex could provide estimates and rigorous statistical comparisons of competitive ability for organisms for which typical chemostat methods and equilibrium population conditions are impractical. PMID:22970128
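The Rindex construction can be sketched as follows: fit λ′ versus initial detritus amount with a saturating model and solve for the resource level where λ′ = 1. The Monod-type model and the data are hypothetical stand-ins for the study's nonlinear regressions.

```python
import numpy as np
from scipy.optimize import brentq, curve_fit

detritus = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # g per container
lam = np.array([0.55, 0.80, 1.10, 1.35, 1.50])   # cohort finite rate of increase

def monod(R, lam_max, k):
    """Saturating growth response to resource amount R."""
    return lam_max * R / (k + R)

(lam_max, k), _ = curve_fit(monod, detritus, lam, p0=[1.6, 1.0])
r_index = brentq(lambda R: monod(R, lam_max, k) - 1.0, 0.01, 20.0)
print(f"R_index = {r_index:.2f} g (lower value = stronger competitor)")
```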
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamez-Mendoza, Liliana; Terban, Maxwell W.; Billinge, Simon J. L.
The particle size of supported catalysts is a key characteristic for determining structure–property relationships. It is a challenge to obtain this information accurately and in situ using crystallographic methods owing to the small size of such particles (<5 nm) and the fact that they are supported. In this work, the pair distribution function (PDF) technique was used to obtain the particle size distribution of supported Pt catalysts as they grow under typical synthesis conditions. The PDF of Pt nanoparticles grown on zeolite X was isolated and refined using two models: a monodisperse spherical model (single particle size) and a lognormal size distribution. The results were compared and validated using scanning transmission electron microscopy (STEM) results. Both models describe the same trends in average particle size with temperature, but the results of the number-weighted lognormal size distributions can also accurately describe the mean size and the width of the size distributions obtained from STEM. Since the PDF yields crystallite sizes, these results suggest that the grown Pt nanoparticles are monocrystalline. This work shows that refinement of the PDF of small supported monocrystalline nanoparticles can yield accurate mean particle sizes and distributions.
Garbarino, John R.; Hoffman, Gerald L.
1999-01-01
A hydrochloric acid in-bottle digestion procedure is used to partially digest whole-water samples prior to determining recoverable elements by various analytical methods. The use of hydrochloric acid is problematic for some methods of analysis because of spectral interference. The in-bottle digestion procedure has been modified to eliminate such interference by using nitric acid instead of hydrochloric acid in the digestion. Implications of this modification are evaluated by comparing results for a series of synthetic whole-water samples. Results are also compared with those obtained using the U.S. Environmental Protection Agency (USEPA) (1994) Method 200.2 total-recoverable digestion procedure. Percentage yields obtained using the nitric acid in-bottle digestion procedure are within 10 percent of the hydrochloric acid in-bottle yields for 25 of the 26 elements determined in two of the three synthetic whole-water samples tested. Differences in percentage yields for the third synthetic whole-water sample were greater than 10 percent for 16 of the 26 elements determined. The USEPA method was the most rigorous for solubilizing elements from particulate matter in all three synthetic whole-water samples. Nevertheless, the variability in the percentage yield using the USEPA digestion procedure was generally greater than that of the in-bottle digestion procedure, presumably because of the difficulty in controlling the digestion conditions accurately.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dyar, M. Darby; McCanta, Molly; Breves, Elly
2016-03-01
Pre-edge features in the K absorption edge of X-ray absorption spectra are commonly used to predict Fe³⁺ valence state in silicate glasses. However, this study shows that using the entire spectral region from the pre-edge into the extended X-ray absorption fine-structure region provides more accurate results when combined with multivariate analysis techniques. The least absolute shrinkage and selection operator (lasso) regression technique yields %Fe³⁺ values that are accurate to ±3.6% absolute when the full spectral region is employed. This method can be used across a broad range of glass compositions, is easily automated, and is demonstrated to yield accurate results from different synchrotrons. It will enable future studies involving X-ray mapping of redox gradients on standard thin sections at 1 × 1 μm pixel sizes.
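The multivariate step can be sketched with scikit-learn's lasso: each full spectrum is a feature row and the reference %Fe³⁺ is the target. The data below are random placeholders, not measured spectra.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Each row of X is a full spectrum (one absorbance value per energy channel);
# y is the reference %Fe3+ from an independent method.
rng = np.random.default_rng(0)
n_samples, n_channels = 60, 500
X = rng.normal(size=(n_samples, n_channels))
true_w = np.zeros(n_channels)
true_w[[40, 140, 300]] = [8.0, -5.0, 3.0]          # a few informative channels
y = X @ true_w + rng.normal(scale=0.5, size=n_samples)

model = LassoCV(cv=5).fit(X, y)                    # penalty chosen by cross-validation
print(f"channels retained: {np.count_nonzero(model.coef_)}")
print(f"training R^2: {model.score(X, y):.3f}")
```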
Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.
2003-01-01
An efficient incremental iterative approach for differentiating advanced flow codes is successfully demonstrated on a two-dimensional inviscid model problem. The method employs the reverse-mode capability of the automatic differentiation software tool ADIFOR 3.0 and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient noniterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.
Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.
2001-01-01
An efficient incremental-iterative approach for differentiating advanced flow codes is successfully demonstrated on a 2D inviscid model problem. The method employs the reverse-mode capability of the automatic-differentiation software tool ADIFOR 3.0, and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient non-iterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave-drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.
Accurate radiative transfer calculations for layered media.
Selden, Adrian C
2016-07-01
Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics.
Nano-Scale Characterization of Al-Mg Nanocrystalline Alloys
NASA Astrophysics Data System (ADS)
Harvey, Evan; Ladani, Leila
Materials with nano-scale microstructure have become increasingly popular due to their substantially increased strength. The increase in strength with decreasing grain size is described by the Hall-Petch equation. With increased interest in the miniaturization of components, methods of mechanical characterization of small volumes of material are necessary because traditional means such as tensile testing become increasingly difficult with such small test specimens. This study seeks to characterize the elastic-plastic properties of nanocrystalline Al-5083 through nanoindentation and related data analysis techniques. By using nanoindentation, accurate predictions of the elastic modulus and hardness of the alloy were attained. Also, the employed data analysis model provided reasonable estimates of the plastic properties (strain-hardening exponent and yield stress), lending credibility to this procedure as an accurate, full mechanical characterization method.
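For reference, the Hall-Petch relation mentioned above is σ_y = σ_0 + k·d^(-1/2); the sketch below evaluates it with illustrative constants, not measured Al-5083 values.

```python
import numpy as np

# Hall-Petch: yield strength rises as grain size d shrinks.
sigma_0 = 50.0    # MPa, friction stress (illustrative)
k = 0.15          # MPa * m**0.5, Hall-Petch coefficient (illustrative)
for d in [10e-6, 1e-6, 100e-9]:                   # grain size in metres
    print(f"d = {d * 1e9:7.0f} nm -> sigma_y ~ {sigma_0 + k / np.sqrt(d):4.0f} MPa")
```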
Monitoring stream sediment loads in response to agriculture in Prince Edward Island, Canada.
Alberto, Ashley; St-Hilaire, Andre; Courtenay, Simon C; van den Heuvel, Michael R
2016-07-01
Increased agricultural land use leads to accelerated erosion and deposition of fine sediment in surface water. Monitoring of suspended sediment yields has proven challenging due to the spatial and temporal variability of sediment loading. Reliable sediment yield calculations depend on accurate monitoring of these highly episodic sediment loading events. This study aims to quantify precipitation-induced loading of suspended sediments on Prince Edward Island, Canada. Turbidity is considered to be a reasonably accurate proxy for suspended sediment data. In this study, turbidity was used to monitor suspended sediment concentration (SSC) and was measured for 2 years (December 2012-2014) in three subwatersheds with varying degrees of agricultural land use ranging from 10 to 69 %. Comparison of three turbidity meter calibration methods, two using suspended streambed sediment and one using automated sampling during rainfall events, revealed that the use of SSC samples constructed from streambed sediment was not an accurate replacement for water column sampling during rainfall events for calibration. Different particle size distributions in the three rivers produced significant impacts on the calibration methods demonstrating the need for river-specific calibration. Rainfall-induced sediment loading was significantly greater in the most agriculturally impacted site only when the load per rainfall event was corrected for runoff volume (total flow minus baseflow), flow increase intensity (the slope between the start of a runoff event and the peak of the hydrograph), and season. Monitoring turbidity, in combination with sediment modeling, may offer the best option for management purposes.
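The river-specific calibration the authors argue for amounts to fitting a turbidity-SSC rating from event-based water-column samples; a minimal sketch with hypothetical numbers:

```python
import numpy as np

# Fit a river-specific rating SSC = a * turbidity + b from paired samples
# collected during rainfall events, then apply it to the continuous record.
turb = np.array([12, 35, 60, 110, 180, 260])   # NTU, event samples
ssc = np.array([18, 52, 95, 170, 280, 410])    # mg/L, lab-measured

a, b = np.polyfit(turb, ssc, 1)
record = np.array([15, 90, 240])               # continuous turbidity record, NTU
print(a * record + b)                          # estimated SSC, mg/L
```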
Geijsen, Debby E.; Zum Vörde Sive Vörding, Paul J.; Schooneveldt, Gerben; Sijbrands, Jan; Hulshof, Maarten C.; de la Rosette, Jean; de Reijke, Theo M.; Crezee, Hans
2013-01-01
Background and Purpose: The effectiveness of locoregional hyperthermia combined with intravesical instillation of mitomycin C in reducing the risk of recurrence and progression of intermediate- and high-risk nonmuscle-invasive bladder cancer is currently being investigated in clinical trials. Clinically effective locoregional hyperthermia delivery necessitates adequate thermal dosimetry; thus, optimal thermometry methods are needed to accurately monitor the temperature distribution throughout the bladder wall. The aim of the study was to evaluate the technical feasibility of a novel intravesical device (multisensor probe) developed to monitor local bladder wall temperatures during locoregional chemohyperthermia (C-HT). Materials and Methods: A multisensor thermocouple probe was designed for deployment in the human bladder, using special sensors to cover the bladder wall in different directions. The deployment of the thermocouples against the bladder wall was evaluated with visual, endoscopic, and CT imaging in bladder phantoms, porcine models, and human bladders obtained at autopsy, for various bladder volumes and different deployment sizes of the probe. Finally, porcine bladders were embedded in a phantom and subjected to locoregional heating to compare probe temperatures with additional thermometry inside and outside the bladder wall. Results: The 7.5 cm thermocouple probe yielded optimal bladder wall contact, adapting to different bladder volumes. Temperature monitoring was shown to be accurate and representative of the actual bladder wall temperature. Conclusions: Use of this novel multisensor probe could yield more accurate monitoring of the bladder wall temperature during locoregional chemohyperthermia. PMID:24112045
Clausner, Tommy; Dalal, Sarang S.; Crespo-García, Maité
2017-01-01
The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D. Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position. PMID:28559791
NASA Astrophysics Data System (ADS)
Lee, H.; Fridlind, A. M.; Ackerman, A. S.; Kollias, P.
2017-12-01
Cloud radar Doppler spectra provide rich information for evaluating the fidelity of particle size distributions from cloud models. The intrinsic simplifications of bulk microphysics schemes generally preclude the generation of plausible Doppler spectra, unlike bin microphysics schemes, which develop particle size distributions more organically at substantial computational expense. However, bin microphysics schemes face the difficulty of numerical diffusion leading to overly rapid large drop formation, particularly while solving the stochastic collection equation (SCE). Because such numerical diffusion can cause an even greater overestimation of radar reflectivity, an accurate method for solving the SCE is essential for bin microphysics schemes to accurately simulate Doppler spectra. While several methods have been proposed to solve the SCE, here we examine those of Berry and Reinhardt (1974, BR74), Jacobson et al. (1994, J94), and Bott (2000, B00). Using a simple box model to simulate drop size distribution evolution during precipitation formation with a realistic kernel, it is shown that each method yields a converged solution as the resolution of the drop size grid increases. However, the BR74 and B00 methods yield nearly identical size distributions in time, whereas the J94 method produces consistently larger drops throughout the simulation. In contrast to an earlier study, the performance of the B00 method is found to be satisfactory; it converges at relatively low resolution and long time steps, and its computational efficiency is the best among the three methods considered here. Finally, a series of idealized stratocumulus large-eddy simulations are performed using the J94 and B00 methods. The reflectivity size distributions and Doppler spectra obtained from the different SCE solution methods are presented and compared with observations.
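To make the SCE concrete, the sketch below integrates a discrete Smoluchowski coagulation (collection) equation in a box model with a constant kernel; it illustrates only the gain/loss time-stepping structure, not the BR74, J94, or B00 discretizations.

```python
import numpy as np

# Discrete Smoluchowski collection box model: bin k holds drops of mass
# k+1 units; a constant kernel replaces the hydrodynamic kernel for clarity.
nbins, dt, nsteps, K = 40, 1.0, 200, 1e-3
n = np.zeros(nbins)
n[0] = 100.0                               # all drops start in the smallest bin

for _ in range(nsteps):
    gain = np.zeros(nbins)
    loss = np.zeros(nbins)
    for i in range(nbins):
        for j in range(nbins):
            k = i + j + 1                  # bin holding the coalesced pair's mass
            loss[i] += K * n[i] * n[j]
            if k < nbins:
                gain[k] += 0.5 * K * n[i] * n[j]
    n += dt * (gain - loss)

mass = np.arange(1, nbins + 1)
print(f"mass on grid: {np.dot(mass, n):.1f}")  # conserved until drops outgrow the grid
```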
Epipolar Rectification for CARTOSAT-1 Stereo Images Using SIFT and RANSAC
NASA Astrophysics Data System (ADS)
Akilan, A.; Sudheer Reddy, D.; Nagasubramanian, V.; Radhadevi, P. V.; Varadan, G.
2014-11-01
Cartosat-1 provides stereo images with a spatial resolution of 2.5 m and high geometric fidelity. The stereo cameras on the spacecraft have look angles of +26 degrees and -5 degrees, respectively, yielding effective along-track stereo. Any DSM generation algorithm can use the stereo images for accurate 3D reconstruction and measurement of the ground. Dense match points and pixel-wise matching are prerequisites in DSM generation to capture discontinuities and occlusions for accurate 3D modelling applications. Epipolar image matching reduces the computational effort from two-dimensional area searches to one-dimensional. Thus, epipolar rectification is preferred as a pre-processing step for accurate DSM generation. In this paper we explore a method based on SIFT and RANSAC for epipolar rectification of Cartosat-1 stereo images.
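A generic sketch of SIFT-plus-RANSAC epipolar rectification with OpenCV is shown below for the uncalibrated two-image case; file names are placeholders, and the paper's Cartosat-1-specific sensor model is not included.

```python
import cv2
import numpy as np

img1 = cv2.imread("band_fore.tif", cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread("band_aft.tif", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)

# Ratio-test matching, then robust fundamental-matrix estimation with RANSAC
matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
pts1 = np.float32([k1[m.queryIdx].pt for m in good])
pts2 = np.float32([k2[m.trainIdx].pt for m in good])
F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)

# Rectifying homographies that map epipolar lines to horizontal scanlines
h, w = img1.shape
ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1[inliers.ravel() == 1],
                                           pts2[inliers.ravel() == 1], F, (w, h))
rect1 = cv2.warpPerspective(img1, H1, (w, h))
rect2 = cv2.warpPerspective(img2, H2, (w, h))
```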
2010-01-01
Catalytic graphitization for 14C-accelerator mass spectrometry (14C-AMS) produced various forms of elemental carbon. Our high-throughput Zn reduction method (C/Fe = 1:5, 500 °C, 3 h) produced the AMS target of graphite-coated iron powder (GCIP), a mix of nongraphitic carbon and Fe3C. Crystallinity of the AMS targets of GCIP (nongraphitic carbon) was increased to turbostratic carbon by raising the C/Fe ratio from 1:5 to 1:1 and the graphitization temperature from 500 to 585 °C. The AMS target of GCIP containing turbostratic carbon had a large isotopic fractionation and a low AMS ion current. The AMS target of GCIP containing turbostratic carbon also yielded less accurate/precise 14C-AMS measurements because of the lower graphitization yield and lower thermal conductivity that were caused by the higher C/Fe ratio of 1:1. On the other hand, the AMS target of GCIP containing nongraphitic carbon had a higher graphitization yield and better thermal conductivity than the AMS target of GCIP containing turbostratic carbon, due to the optimal surface area provided by the iron powder. Finally, graphitization yield and thermal conductivity were stronger determinants (over graphite crystallinity) for accurate/precise/high-throughput biological, biomedical, and environmental 14C-AMS applications such as absorption, distribution, metabolism, elimination (ADME), and physiologically based pharmacokinetics (PBPK) of nutrients, drugs, phytochemicals, and environmental chemicals. PMID:20163100
An evaluation of the lamb vision system as a predictor of lamb carcass red meat yield percentage.
Brady, A S; Belk, K E; LeValley, S B; Dalsted, N L; Scanga, J A; Tatum, J D; Smith, G C
2003-06-01
An objective method for predicting red meat yield in lamb carcasses is needed to accurately assess true carcass value. This study was performed to evaluate the ability of the lamb vision system (LVS; Research Management Systems USA, Fort Collins, CO) to predict fabrication yields of lamb carcasses. Lamb carcasses (n = 246) were evaluated using LVS and hot carcass weight (HCW), as well as by USDA expert and on-line graders, before fabrication of carcass sides to either bone-in or boneless cuts. On-line whole number, expert whole-number, and expert nearest-tenth USDA yield grades and LVS + HCW estimates accounted for 53, 52, 58, and 60%, respectively, of the observed variability in boneless, saleable meat yields, and accounted for 56, 57, 62, and 62%, respectively, of the variation in bone-in, saleable meat yields. The LVS + HCW system predicted 77, 65, 70, and 87% of the variation in weights of boneless shoulders, racks, loins, and legs, respectively, and 85, 72, 75, and 86% of the variation in weights of bone-in shoulders, racks, loins, and legs, respectively. Addition of longissimus muscle area (REA), adjusted fat thickness (AFT), or both REA and AFT to LVS + HCW models resulted in improved prediction of boneless saleable meat yields by 5, 3, and 5 percentage points, respectively. Bone-in, saleable meat yield estimations were improved in predictive accuracy by 7.7, 6.6, and 10.1 percentage points, and in precision, when REA alone, AFT alone, or both REA and AFT, respectively, were added to the LVS + HCW output models. Use of LVS + HCW to predict boneless red meat yields of lamb carcasses was more accurate than use of current on-line whole-number, expert whole-number, or expert nearest-tenth USDA yield grades. Thus, LVS + HCW output, when used alone or in combination with AFT and/or REA, improved on-line estimation of boneless cut yields from lamb carcasses. The ability of LVS + HCW to predict yields of wholesale cuts suggests that LVS could be used as an objective means for pricing carcasses in a value-based marketing system.
Prediction of beef carcass salable yield and trimmable fat using bioelectrical impedance analysis.
Zollinger, B L; Farrow, R L; Lawrence, T E; Latman, N S
2010-03-01
Bioelectrical impedance analysis (BIA) is capable of providing an objective method of beef carcass yield estimation with the rapidity of yield grading. Electrical resistance (Rs), reactance (Xc), impedance (I), hot carcass weight (HCW), fat thickness between the 12th and 13th ribs (FT), estimated percentage kidney, pelvic, and heart fat (KPH%), longissimus muscle area (LMA), and length between electrodes (LGE), as well as three derived carcass values comprising electrical volume (EVOL), reactive density (XcD), and resistive density (RsD), were determined for the carcasses of 41 commercially fed cattle. Carcasses were subsequently fabricated into salable beef products reflective of industry standards. Equations were developed to predict percentage salable carcass yield (SY%) and percentage trimmable fat (FT%). The resulting equations accounted for 81% and 84% of the variation in SY% and FT%, respectively. These results indicate that BIA technology is an accurate predictor of beef carcass composition. Copyright 2009 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Loh, Ching Y.; Jorgenson, Philip C. E.
2007-01-01
A time-accurate, upwind, finite volume method for computing compressible flows on unstructured grids is presented. The method is second order accurate in space and time and yields high resolution in the presence of discontinuities. For efficiency, the Roe approximate Riemann solver with an entropy correction is employed. In the basic Euler/Navier-Stokes scheme, many concepts of high order upwind schemes are adopted: the surface flux integrals are carefully treated, a Cauchy-Kowalewski time-stepping scheme is used in the time-marching stage, and a multidimensional limiter is applied in the reconstruction stage. However even with these up-to-date improvements, the basic upwind scheme is still plagued by the so-called "pathological behaviors," e.g., the carbuncle phenomenon, the expansion shock, etc. A solution to these limitations is presented which uses a very simple dissipation model while still preserving second order accuracy. This scheme is referred to as the enhanced time-accurate upwind (ETAU) scheme in this paper. The unstructured grid capability renders flexibility for use in complex geometry; and the present ETAU Euler/Navier-Stokes scheme is capable of handling a broad spectrum of flow regimes from high supersonic to subsonic at very low Mach number, appropriate for both CFD (computational fluid dynamics) and CAA (computational aeroacoustics). Numerous examples are included to demonstrate the robustness of the methods.
NASA Technical Reports Server (NTRS)
Paknys, J. R.
1982-01-01
The reflector antenna may be thought of as an aperture antenna. The classical solution for the radiation pattern of such an antenna is found by the aperture integration (AI) method. Success with this method depends on how accurately the aperture currents are known beforehand. In the past, geometrical optics (GO) has been employed to find the aperture currents. This approximation is suitable for calculating the main beam and possibly the first few sidelobes. A better approximation is to use aperture currents calculated from the geometrical theory of diffraction (GTD). Integration of the GTD currents over an extended aperture yields more accurate results for the radiation pattern. This approach is useful when conventional AI and GTD solutions have no common region of validity. This problem arises in reflector antennas. Two-dimensional models of parabolic reflectors are studied; however, the techniques discussed can be applied to any aperture antenna.
NASA Astrophysics Data System (ADS)
Kluber, Alexander; Hayre, Robert; Cox, Daniel
2012-02-01
Motivated by the need to find beta-structure aggregation nuclei for the polyQ diseases such as Huntington's, we have undertaken a search for length-dependent structure in model polyglutamine proteins. We use the Onufriev-Bashford-Case (OBC) generalized Born implicit solvent with GPU-based AMBER11 molecular dynamics and the parm96 force field, coupled with a replica exchange method, to characterize monomeric strands of polyglutamine as a function of chain length and temperature. This force field and solvation method has been shown, among other methods, to accurately reproduce folded metastability in certain small peptides, and to yield accurate de novo folded structures for a millisecond time-scale protein. Using GPU molecular dynamics we can sample out into the microsecond range. Additionally, explicit solvent runs will be used to verify results from the implicit solvent runs. We will assess order using measures of secondary structure and hydrogen bond content.
Accurate Modeling Method for Cu Interconnect
NASA Astrophysics Data System (ADS)
Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko
This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for model parameter extraction, and an efficient extraction flow. We have extracted the model parameters for 0.15 μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameters Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90 nm, 65 nm and 55 nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what have conventionally been treated as random variations, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.
Efficient hybrid-symbolic methods for quantum mechanical calculations
NASA Astrophysics Data System (ADS)
Scott, T. C.; Zhang, Wenxing
2015-06-01
We present hybrid symbolic-numerical tools to generate optimized numerical code for rapid prototyping and fast numerical computation, starting from a computer algebra system (CAS) and tailored to any given quantum mechanical problem. Although a major focus concerns the quantum chemistry methods of H. Nakatsuji, which have yielded successful and very accurate eigensolutions for small atoms and molecules, the tools are general and may be applied to any basis set calculation with a variational principle applied to its linear and non-linear parameters.
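The hybrid symbolic-numerical workflow can be illustrated with sympy: derive an expression symbolically, then generate a fast numerical routine from it. The toy local-energy example below is an assumption for illustration, not Nakatsuji's method.

```python
import numpy as np
import sympy as sp

# Symbolically derive the local energy of a hydrogen-like trial function
# psi = exp(-a*r) under H = -(1/2)*Laplacian - 1/r, then compile it.
r, a = sp.symbols("r a", positive=True)
psi = sp.exp(-a * r)
local_E = (-sp.Rational(1, 2) * (sp.diff(psi, r, 2) + 2 / r * sp.diff(psi, r)) / psi
           - 1 / r)
local_E = sp.simplify(local_E)              # -> -a**2/2 + a/r - 1/r

f = sp.lambdify((r, a), local_E, "numpy")   # generated numerical routine
print(f(np.linspace(0.5, 2.0, 4), 1.0))     # exactly -0.5 for a = 1 (exact solution)
```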
Lage, Sandra; Gentili, Francesco G
2018-06-01
A systematic qualitative and quantitative analysis of fatty acid methyl esters (FAMEs) is crucial for microalgae species selection for biodiesel production. The aim of this study is to identify the best method to assess microalgae FAMEs composition and content. A single-step method was tested with and without purification steps, that is, separation of lipid classes by thin-layer chromatography (TLC) or solid-phase extraction (SPE). The efficiency of a direct transesterification method was also evaluated. Additionally, the FAMEs yields and profiles of microalgae samples subjected to different pretreatments (boiling in isopropanol, freezing, oven-drying and freeze-drying) were compared. The application of a purification step after lipid extraction proved to be essential for accurate FAMEs characterisation. The purification methods, which included TLC and SPE, provided superior results compared to not purifying the samples. Freeze-dried microalgae produced the lowest FAMEs yield. However, FAMEs profiles were generally equivalent among the pretreatments. Copyright © 2018 Elsevier Ltd. All rights reserved.
Young's moduli of carbon materials investigated by various classical molecular dynamics schemes
NASA Astrophysics Data System (ADS)
Gayk, Florian; Ehrens, Julian; Heitmann, Tjark; Vorndamme, Patrick; Mrugalla, Andreas; Schnack, Jürgen
2018-05-01
For many applications, classical carbon potentials together with classical molecular dynamics are employed to calculate structures and physical properties of carbon-based materials where quantum mechanical methods fail, either due to excessive system size, irregular structure, or long-time dynamics. Although such potentials, as for instance implemented in LAMMPS, yield reasonably accurate bond lengths and angles for several carbon materials such as graphene, it is not clear how accurate they are in terms of mechanical properties such as Young's moduli. We performed large-scale classical molecular dynamics investigations of three carbon-based materials using the various potentials implemented in LAMMPS as well as the EDIP potential of Marks. We show how the Young's moduli vary with the classical potentials and compare to experimental results. Since classical descriptions of carbon are bound to be approximations, it is not astonishing that different realizations yield differing results. One should therefore carefully check for which observables a certain potential is suited. Our aim is to contribute to such a clarification.
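Whatever the potential, the modulus extraction itself reduces to fitting the linear region of a simulated stress-strain curve; a minimal post-processing sketch with synthetic data:

```python
import numpy as np

# Fit the small-strain linear region of a simulated uniaxial stress-strain
# curve to extract E; the curve below is a synthetic placeholder.
strain = np.linspace(0.0, 0.03, 16)
stress = 800.0 * strain - 4000.0 * strain**2   # GPa, with mild softening

elastic = strain < 0.01                        # restrict to the linear regime
E, _ = np.polyfit(strain[elastic], stress[elastic], 1)
print(f"Young's modulus ~ {E:.0f} GPa")
```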
A model-updating procedure to simulate piezoelectric transducers accurately.
Piranda, B; Ballandras, S; Steichen, W; Hecart, B
2001-09-01
The use of numerical calculations based on finite element methods (FEM) has yielded significant improvements in the simulation and design of the piezoelectric transducers utilized in acoustic imaging. However, the ultimate precision of such models is directly controlled by the accuracy of material characterization. The present work is dedicated to the development of a model-updating technique adapted to the problem of piezoelectric transducers. The updating process is applied using the experimental admittance of a given structure for which a finite element analysis is performed. The mathematical developments are reported and then applied to update the entries of a FEM of a two-layer structure (a PbZrTi (PZT) ridge glued on a backing) for which measurements were available. The efficiency of the proposed approach is demonstrated, yielding the definition of a new set of constants well adapted to predicting the structure's response accurately. Improvement of the proposed approach, consisting of updating the material coefficients using not only the admittance but also the impedance data, is finally discussed.
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.; Liu, Youhua
2000-01-01
At the preliminary design stage of a wing structure, an efficient simulation, one needing little computation but yielding adequately accurate results for various response quantities, is essential in the search for an optimal design in a vast design space. In the present paper, methods using sensitivities up to second order, and the direct application of neural networks, are explored. The example problem is to determine the natural frequencies of a wing given the shape variables of the structure. It is shown that when sensitivities cannot be obtained analytically, the finite difference approach is usually more reliable than a semi-analytical approach, provided an appropriate step size is used. Using second-order sensitivities is shown to yield much better results than using first-order sensitivities alone. When neural networks are trained to relate the wing natural frequencies to the shape variables, negligible computational effort is needed to accurately determine the natural frequencies of a new design.
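As a concrete illustration of the second-order sensitivity approach described in this abstract, the following is a minimal sketch in Python; the shape variables and sensitivity values are invented for illustration and are not from the paper:

```python
import numpy as np

# Second-order Taylor approximation of a natural frequency f(x) around a
# baseline design x0, using first- and second-order sensitivities.
# All numbers below are made up for illustration.
x0 = np.array([1.0, 2.0])            # baseline shape variables
f0 = 15.0                            # frequency at x0 (Hz)
grad = np.array([0.8, -0.3])         # df/dx at x0 (first-order sensitivities)
hess = np.array([[0.05, 0.01],       # d2f/dx2 at x0 (second-order sensitivities)
                 [0.01, 0.02]])

def freq_estimate(x):
    dx = x - x0
    return f0 + grad @ dx + 0.5 * dx @ hess @ dx

print(freq_estimate(np.array([1.1, 1.9])))  # estimate at a perturbed design
```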
Qadri, S M; Johnson, S; Smith, J C; Zubairi, S; Gillum, R L
1981-01-01
The ability of several anaerobic bacteria to hydrolyze esculin to esculetin is used by clinical microbiologists and taxonomists in the differentiation and identification of both gram-positive and gram-negative microorganisms. Conventional methods for determining esculin hydrolysis by anaerobic bacteria require 24 to 48 h for completion. In this paper we evaluate two procedures which yield rapid results. A total of 738 anaerobic bacteria were used in this study. Of the esculin-hydrolyzing anaerobic bacteria, 99% gave positive results with the spot test in 1 h, whereas the other method, the PathoTec strip test (General Diagnostics, Morris Plains, N.J.), required 4 h for 96% of the strains tested to yield positive reactions. Both tests showed 100% specificity when compared with the standard broth test, and both are easy to perform, accurate, and economical. The spot test is superior to the PathoTec strip test in yielding results more rapidly. PMID:7016896
Curry, Allison E.; Pfeiffer, Melissa R.; Myers, Rachel K.; Durbin, Dennis R.; Elliott, Michael R.
2014-01-01
Traditional methods for determining crash responsibility, most commonly moving violation citations, may not accurately characterize at-fault status among crash-involved drivers, given that: (1) issuance may vary by factors that are independent of fault (e.g., driver age, gender), and (2) these methods do not capture driver behaviors that are not illegal but are still indicative of fault. We examined the statistical implications of using moving violations to determine crash responsibility in young driver crashes by comparing this approach with a method based on crash-contributing driver actions. We selected all drivers in police-reported passenger-vehicle crashes (2010–2011) that involved a New Jersey driver under 21 years old (79,485 drivers <21 years old; 61,355 drivers ≥21 years old). For each driver, crash responsibility was determined from the crash report using two alternative methods: (1) issuance of a moving violation citation; and (2) presence of a driver action (e.g., failure to yield, inattention). Overall, 18% of crash-involved drivers were issued a moving violation while 50% had a driver action. Only 32.2% of drivers with a driver action were cited for a moving violation. Further, the likelihood of being cited given the presence of a driver action was higher among certain driver subgroups: younger drivers, male drivers, and drivers in single-vehicle and more severe crashes. Specifically among young drivers, those driving at night, carrying peer passengers, and having a suspended or no license were more often cited. Conversely, fatally injured drivers were almost never cited. We also demonstrated that using citation data may lead to statistical bias in the characterization of at-fault drivers and of quasi-induced exposure measures. Studies seeking to accurately determine crash responsibility should thoughtfully consider the potential sources of bias that may result from using legal culpability methods. For many studies, determining driver responsibility via the identification of driver actions may yield more accurate characterizations of at-fault drivers. PMID:24398139
Climate driven crop planting date in the ACME Land Model (ALM): Impacts on productivity and yield
NASA Astrophysics Data System (ADS)
Drewniak, B.
2017-12-01
Climate is one of the key drivers of crop suitability and productivity in a region. The influence of climate and weather on the growing season determines the amount of time crops spend in each growth phase, which in turn impacts productivity and, more importantly, yields. Planting date can have a strong influence on yields, with earlier planting generally resulting in higher yields, a sensitivity that is also present in some crop models. Furthermore, planting dates are already changing and may continue to change, especially if longer growing seasons caused by future climate change drive earlier (or later) planting decisions. Crop models need an accurate method to predict planting date to allow these models to: 1) capture changes in crop management that adapt to climate change, 2) accurately model the timing of crop phenology, and 3) improve simulated crop influences on carbon, nutrient, energy, and water cycles. Previous studies have used climate as a predictor of planting date. Climate as a planting date predictor has advantages over fixed planting dates. For example, crop expansion and other changes in land use (e.g., due to changing temperature conditions) can be accommodated without additional model inputs. As such, a new methodology to implement a predictive planting date based on climate inputs is added to the Accelerated Climate Model for Energy (ACME) Land Model (ALM). The model considers two main sources of climate data important for planting: precipitation and temperature. This method expands the current temperature-threshold planting trigger and improves the estimated planting date in ALM. Furthermore, the precipitation metric for planting, which synchronizes the crop growing season with the wettest months, allows tropical crops to be introduced into the model. This presentation will demonstrate how the improved model enhances the ability of ALM to capture planting date compared with observations. More importantly, the impact of changing the planting date and introducing tropical crops will be explored, including discussion of productivity, yield, and influences on carbon and energy fluxes.
Udompaisarn, Somsiri; Arthan, Dumrongkiet; Somana, Jamorn
2017-04-19
An enzymatic method for the specific determination of stevioside content was established. Recombinant β-glucosidase BT_3567 (rBT_3567) from Bacteroides thetaiotaomicron HB-13 exhibited selective hydrolysis of stevioside at the β-1,2-glycosidic bond to yield rubusoside and glucose. Coupling this enzyme with glucose oxidase and peroxidase allowed quantitation of stevioside content in Stevia samples using a colorimetric approach. The series of reactions for stevioside determination can be completed within 1 h at 37 °C. Stevioside determination using the enzymatic assay strongly correlated with results obtained from HPLC quantitation (r² = 0.9629, n = 16). The coefficients of variation (CV) for within-day (n = 12) and between-day (n = 12) assays were below 5%, and accuracy ranged from 95 to 105%. This analysis demonstrates that the enzymatic method developed in this study is specific, easy to perform, accurate, and yields reproducible results.
Molecular method for determining sex of walruses
Fischbach, Anthony S.; Jay, C.V.; Jackson, J.V.; Andersen, L.W.; Sage, G.K.; Talbot, S.L.
2008-01-01
We evaluated the ability of a set of published trans-species molecular sexing primers, and a set of walrus-specific primers that we developed, to accurately identify the sex of 235 Pacific walruses (Odobenus rosmarus divergens). The trans-species primers were developed for mammals and targeted the X- and Y-gametologs of the zinc finger protein genes (ZFX, ZFY). We extended this method by using these primers to obtain sequence from Pacific and Atlantic walrus (O. r. rosmarus) ZFX and ZFY genes to develop new walrus-specific primers, which yield polymerase chain reaction products of distinct lengths (327 and 288 base pairs from the X- and Y-chromosome, respectively), allowing them to be used for sex determination. Both methods yielded a determination of sex in all but 1-2% of samples, with an accuracy of 99.6-100%. Our walrus-specific primers offer the advantages of small fragment size and facile application to automated electrophoresis and visualization.
Kleijn, Roelco J.; van Winden, Wouter A.; Ras, Cor; van Gulik, Walter M.; Schipper, Dick; Heijnen, Joseph J.
2006-01-01
In this study we developed a new method for accurately determining the pentose phosphate pathway (PPP) split ratio, an important metabolic parameter in the primary metabolism of a cell. This method is based on simultaneous feeding of unlabeled glucose and trace amounts of [U-13C]gluconate, followed by measurement of the mass isotopomers of the intracellular metabolites surrounding the 6-phosphogluconate node. The gluconate tracer method was used with a penicillin G-producing chemostat culture of the filamentous fungus Penicillium chrysogenum. For comparison, a 13C-labeling-based metabolic flux analysis (MFA) was performed for glycolysis and the PPP of P. chrysogenum. For the first time, mass isotopomer measurements of 13C-labeled primary metabolites are reported for P. chrysogenum and used for a 13C-based MFA. Estimation of the PPP split ratio of P. chrysogenum at a growth rate of 0.02 h−1 yielded comparable values for the gluconate tracer method and the 13C-based MFA method, 51.8% and 51.1%, respectively. A sensitivity analysis of the estimated PPP split ratios showed that the 95% confidence interval was almost threefold smaller for the gluconate tracer method than for the 13C-based MFA method (46.0 to 56.5% and 40.0 to 63.5%, respectively). From these results we concluded that the gluconate tracer method permits accurate determination of the PPP split ratio but provides no information about the remaining cellular metabolism, while the 13C-based MFA method permits estimation of multiple fluxes but provides a less accurate estimate of the PPP split ratio. PMID:16820467
Ren, Jingzheng
2018-01-01
The anaerobic digestion process has been recognized as a promising way to treat waste and recover energy sustainably. Modelling of anaerobic digestion systems is important for effectively and accurately controlling, adjusting, and predicting the system for higher methane yield. The GM(1,N) grey-model approach, which requires neither a mechanistic description nor a large number of samples, was employed to model the anaerobic digestion system and predict methane yield. To illustrate the proposed model, a case study of anaerobic digestion of municipal solid waste for methane yield was carried out, and the results demonstrate that the GM(1,N) model can effectively simulate an anaerobic digestion system under poor-information conditions with little computational expense. Copyright © 2017 Elsevier Ltd. All rights reserved.
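For readers unfamiliar with grey models, the following is a minimal sketch of GM(1,N) parameter estimation by least squares; the series are synthetic and the variable names are ours, not the paper's:

```python
import numpy as np

# GM(1,N) grey model: y is the output series (e.g., methane yield),
# X holds the N-1 driving series; rows are time steps, columns variables.
y = np.array([2.1, 2.4, 2.9, 3.1, 3.6])           # synthetic output series
X = np.array([[1.0, 0.5], [1.2, 0.6], [1.3, 0.8],
              [1.5, 0.9], [1.6, 1.1]])            # synthetic drivers

y1 = np.cumsum(y)                                  # 1-AGO of the output
X1 = np.cumsum(X, axis=0)                          # 1-AGO of the drivers
z1 = 0.5 * (y1[1:] + y1[:-1])                      # background values

# Grey differential equation: y(k) + a*z1(k) = sum_i b_i * X1_i(k)
B = np.column_stack([-z1, X1[1:]])
params, *_ = np.linalg.lstsq(B, y[1:], rcond=None)
a, b = params[0], params[1:]
print("a =", a, "b =", b)
```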
Evaluation of the pulse-contour method of determining stroke volume in man.
NASA Technical Reports Server (NTRS)
Alderman, E. L.; Branzi, A.; Sanders, W.; Brown, B. W.; Harrison, D. C.
1972-01-01
The pulse-contour method for determining stroke volume has been employed as a continuous, rapid method of monitoring the cardiovascular status of patients. Twenty-one patients with ischemic heart disease and 21 patients with mitral valve disease were subjected to a variety of hemodynamic interventions. The pulse-contour estimations, using three different formulas derived by Warner, Kouchoukos, and Herd, were compared with indicator-dilution outputs. A comparison of the results of the two methods for determining stroke volume yielded correlation coefficients ranging from 0.59 to 0.84. The best-performing Warner formula yielded a coefficient of variation of about 20%. The type of hemodynamic intervention employed did not significantly affect the results obtained with the pulse-contour method. Although the correlation of the pulse-contour and indicator-dilution stroke volumes is high, the coefficient of variation is such that small changes in stroke volume cannot be accurately assessed by the pulse-contour method. However, the simplicity and rapidity of this method compared with determination of cardiac output by Fick or indicator-dilution methods makes it a potentially useful adjunct for monitoring critically ill patients.
Accurate paleointensities - the multi-method approach
NASA Astrophysics Data System (ADS)
de Groot, Lennart
2016-04-01
The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic time) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al., 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old; the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Huan; Yang, Xiu; Zheng, Bin
Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational “active space” random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
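A minimal sketch of the core idea, building a sparse polynomial chaos surrogate by l1-regularized (compressive-sensing-style) regression on Hermite polynomials; the target function is synthetic, not the paper's protein system:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
xi = rng.standard_normal((200, 1))          # conformational random variable
y = np.sin(xi[:, 0]) + 0.1 * xi[:, 0]**2    # synthetic target property

order = 8
# Design matrix of probabilists' Hermite polynomials He_0..He_order
Psi = np.column_stack([hermeval(xi[:, 0], np.eye(order + 1)[k])
                       for k in range(order + 1)])

# l1 penalty promotes a sparse coefficient vector, as in compressive sensing
model = Lasso(alpha=1e-3, fit_intercept=False).fit(Psi, y)
print("sparse gPC coefficients:", np.round(model.coef_, 3))
```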
Incorporation of MRI-AIF Information For Improved Kinetic Modelling of Dynamic PET Data
NASA Astrophysics Data System (ADS)
Sari, Hasan; Erlandsson, Kjell; Thielemans, Kris; Atkinson, David; Ourselin, Sebastien; Arridge, Simon; Hutton, Brian F.
2015-06-01
In the analysis of dynamic PET data, compartmental kinetic analysis methods require accurate knowledge of the arterial input function (AIF). Although arterial blood sampling is the gold standard for measuring the AIF, it is usually not preferred as it is invasive. An alternative is the simultaneous estimation method (SIME), in which physiological parameters and the AIF are estimated together using information from different anatomical regions. Due to the large number of parameters to estimate in its optimisation, SIME is computationally complex and may sometimes fail to give accurate estimates. In this work, we try to improve SIME by utilising an input function derived from a simultaneously obtained DSC-MRI scan. Under the assumption that the true value of one of the six parameters of the PET-AIF model can be derived from the MRI-AIF, the method is tested using simulated data. The results indicate that SIME can yield more robust results when the MRI information is included, with a significant reduction in the absolute bias of Ki estimates.
Benchmarks and Reliable DFT Results for Spin Gaps of Small Ligand Fe(II) Complexes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Suhwan; Kim, Min-Cheol; Sim, Eunji
2017-05-01
All-electron fixed-node diffusion Monte Carlo provides benchmark spin gaps for four Fe(II) octahedral complexes. Standard quantum chemical methods (semilocal DFT and CCSD(T)) fail badly for the energy difference between their high- and low-spin states. Density-corrected DFT is both significantly more accurate and more reliable, and yields a consistent prediction for the Fe-Porphyrin complex.
NASA Astrophysics Data System (ADS)
Kabiri, K.
2017-09-01
The capabilities of Sentinel-2A imagery to determine bathymetric information in shallow coastal waters were examined. Two Sentinel-2A images (acquired in February and March 2016 in calm weather and relatively low turbidity) were selected from Nayband Bay, located in the northern Persian Gulf. In addition, a precise and accurate bathymetric map of the study area was obtained and used both for calibrating the models and for validating the results. Traditional linear and ratio transform techniques, as well as a novel integrated method, were employed to determine depth values. All possible combinations of the three bands (Band 2: blue (458-523 nm), Band 3: green (543-578 nm), and Band 4: red (650-680 nm); spatial resolution: 10 m) were considered (11 options) using the traditional linear and ratio transform techniques, together with 10 model options for the integrated method. The accuracy of each model was assessed by comparing the determined bathymetric information with field-measured values. The correlation coefficients (R²) and root mean square errors (RMSE) for validation points were calculated for all models and both satellite images. Compared with the linear transform method, the ratio transformation with a combination of all three bands yielded more accurate results (R²Mar = 0.795, R²Feb = 0.777, RMSEMar = 1.889 m, and RMSEFeb = 2.039 m). Although most of the integrated transform methods (specifically the method including all bands and band ratios) yielded the highest accuracy, these increments were not significant; hence, the ratio transformation was selected as the optimum method.
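A minimal sketch of the ratio transform (in the style of Stumpf et al.) with least-squares calibration against field depths; reflectance values and soundings are invented for illustration:

```python
import numpy as np

# Stumpf-style ratio transform: depth is assumed linear in the ratio of
# log-transformed water reflectances of two bands (here blue over green).
# Synthetic example values; n is a fixed scaling constant keeping logs positive.
n = 1000.0
blue = np.array([0.012, 0.010, 0.008, 0.006])     # band-2 reflectance
green = np.array([0.015, 0.014, 0.012, 0.011])    # band-3 reflectance
field_depth = np.array([2.0, 4.1, 7.9, 12.2])     # calibration soundings (m)

ratio = np.log(n * blue) / np.log(n * green)
m1, m0 = np.polyfit(ratio, field_depth, 1)        # calibrate m1, m0
depth_est = m1 * ratio + m0
rmse = np.sqrt(np.mean((depth_est - field_depth) ** 2))
print(depth_est, rmse)
```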
2013-01-01
Background: A major hindrance to the development of high yielding biofuel feedstocks is the ability to rapidly assess large populations for fermentable sugar yields. Whilst recent advances have outlined methods for the rapid assessment of biomass saccharification efficiency, none take into account the total biomass, or the soluble sugar fraction of the plant. Here we present a holistic high-throughput methodology for assessing sweet Sorghum bicolor feedstocks at 10 days post-anthesis for total fermentable sugar yields including stalk biomass, soluble sugar concentrations, and cell wall saccharification efficiency. Results: A mathematical method for assessing whole S. bicolor stalks using the fourth internode from the base of the plant proved to be an effective high-throughput strategy for assessing stalk biomass, soluble sugar concentrations, and cell wall composition and allowed calculation of total stalk fermentable sugars. A high-throughput method for measuring soluble sucrose, glucose, and fructose using partial least squares (PLS) modelling of juice Fourier transform infrared (FTIR) spectra was developed. The PLS prediction was shown to be highly accurate, with each sugar attaining a coefficient of determination (R²) of 0.99 with a root mean squared error of prediction (RMSEP) of 11.93, 5.52, and 3.23 mM for sucrose, glucose, and fructose, respectively, which constitutes an error of <4% in each case. The sugar PLS model correlated well with gas chromatography-mass spectrometry (GC-MS) and brix measures. Similarly, a high-throughput method for predicting enzymatic cell wall digestibility using PLS modelling of FTIR spectra obtained from S. bicolor bagasse was developed. The PLS prediction was shown to be accurate with an R² of 0.94 and RMSEP of 0.64 μg·mg DW⁻¹·h⁻¹. Conclusions: This methodology has been demonstrated as an efficient and effective way to screen large biofuel feedstock populations for biomass, soluble sugar concentrations, and cell wall digestibility simultaneously, allowing a total fermentable yield calculation. It unifies and simplifies previous screening methodologies to produce a holistic assessment of biofuel feedstock potential. PMID:24365407
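A minimal sketch of PLS prediction from spectra with RMSEP evaluation, using scikit-learn on synthetic FTIR-like data (component count and noise level are assumptions, not the paper's settings):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.random((80, 300))                 # synthetic FTIR spectra (80 samples)
true_w = np.zeros(300)
true_w[40:60] = 0.5                       # a few informative wavenumbers
y = X @ true_w + rng.normal(0, 0.1, 80)   # synthetic sucrose concentration (mM)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()
rmsep = np.sqrt(np.mean((y_hat - y_te) ** 2))   # root mean squared error of prediction
print("RMSEP:", rmsep)
```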
NASA Astrophysics Data System (ADS)
Pellereau, E.; Taïeb, J.; Chatillon, A.; Alvarez-Pol, H.; Audouin, L.; Ayyad, Y.; Bélier, G.; Benlliure, J.; Boutoux, G.; Caamaño, M.; Casarejos, E.; Cortina-Gil, D.; Ebran, A.; Farget, F.; Fernández-Domínguez, B.; Gorbinet, T.; Grente, L.; Heinz, A.; Johansson, H.; Jurado, B.; Kelić-Heil, A.; Kurz, N.; Laurent, B.; Martin, J.-F.; Nociforo, C.; Paradela, C.; Pietri, S.; Rodríguez-Sánchez, J. L.; Schmidt, K.-H.; Simon, H.; Tassan-Got, L.; Vargas, J.; Voss, B.; Weick, H.
2017-05-01
SOFIA (Studies On Fission with Aladin) is a novel experimental program, dedicated to accurate measurements of fission-fragment isotopic yields. The setup allows us to fully identify, in nuclear charge and mass, both fission fragments in coincidence for the whole fission-fragment range. It was installed at the GSI facility (Darmstadt), to benefit from the relativistic heavy-ion beams available there, and thus to use inverse kinematics. This paper reports on fission yields obtained in electromagnetically induced fission of 238U.
NASA Technical Reports Server (NTRS)
Haugen, H. K.; Weitz, E.; Leone, S. R.
1985-01-01
Various techniques have been used to study the photodissociation dynamics of the halogens and interhalogens. The quantum yields obtained by these techniques differ widely. The present investigation is concerned with a qualitatively new approach for obtaining highly accurate quantum yields for electronically excited states. This approach makes it possible to obtain an accuracy of 1 to 3 percent. It is shown that measurement of the initial transient gain/absorption versus the final absorption in a single time-resolved signal is a very accurate technique for the study of absolute branching fractions in photodissociation. The new technique is found to be insensitive to pulse and probe laser characteristics, molecular absorption cross sections, and absolute precursor density.
A trans-phase granular continuum relation and its use in simulation
NASA Astrophysics Data System (ADS)
Kamrin, Ken; Dunatunga, Sachith; Askari, Hesam
The ability to model a large granular system as a continuum would offer tremendous benefits in computation time compared to discrete particle methods. However, two infamous problems arise in the pursuit of this vision: (i) the constitutive relation for granular materials is still unclear and hotly debated, and (ii) a model and corresponding numerical method must wear "many hats" as, in general circumstances, it must be able to capture and accurately represent the material as it crosses through its collisional, dense-flowing, and solid-like states. Here we present a minimal trans-phase model, merging an elastic response beneath a frictional yield criterion, a mu(I) rheology for liquid-like flow above the static yield criterion, and a disconnection rule to model separation of the grains into a low-temperature gas. We simulate our model with a meshless method (in high strain/mixing cases) and the finite element method. It is able to match experimental data in many geometries, including collapsing columns, impact on granular beds, draining silos, and granular drag problems.
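A minimal sketch of the mu(I) rheology used for the liquid-like regime; the material constants are typical literature values for glass beads (Jop et al., 2006), not parameters from this work:

```python
import numpy as np

# mu(I) rheology: effective friction as a function of inertial number I,
# mu(I) = mu_s + (mu_2 - mu_s) / (1 + I0 / I).
mu_s, mu_2, I0 = np.tan(np.radians(20.9)), np.tan(np.radians(32.76)), 0.279

def mu(gamma_dot, d, P, rho_s):
    """Effective friction for shear rate gamma_dot, grain diameter d,
    confining pressure P, and solid density rho_s."""
    I = gamma_dot * d / np.sqrt(P / rho_s)   # inertial number
    return mu_s + (mu_2 - mu_s) / (1.0 + I0 / I)

print(mu(gamma_dot=10.0, d=1e-3, P=1e3, rho_s=2500.0))
```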
Sieracki, M E; Reichenbach, S E; Webb, K L
1989-01-01
The accurate measurement of bacterial and protistan cell biomass is necessary for understanding their population and trophic dynamics in nature. Direct measurement of fluorescently stained cells is often the method of choice. The tedium of making such measurements visually on the large numbers of cells required has prompted the use of automatic image analysis for this purpose. Accurate measurements by image analysis require an accurate, reliable method of segmenting the image, that is, distinguishing the brightly fluorescing cells from a dark background. This is commonly done by visually choosing a threshold intensity value which most closely coincides with the outline of the cells as perceived by the operator. Ideally, an automated method based on the cell image characteristics should be used. Since the optical nature of edges in images of light-emitting, microscopic fluorescent objects is different from that of images generated by transmitted or reflected light, it seemed that automatic segmentation of such images may require special considerations. We tested nine automated threshold selection methods using standard fluorescent microspheres ranging in size and fluorescence intensity and fluorochrome-stained samples of cells from cultures of cyanobacteria, flagellates, and ciliates. The methods included several variations based on the maximum intensity gradient of the sphere profile (first derivative), the minimum in the second derivative of the sphere profile, the minimum of the image histogram, and the midpoint intensity. Our results indicated that thresholds determined visually and by first-derivative methods tended to overestimate the threshold, causing an underestimation of microsphere size. The method based on the minimum of the second derivative of the profile yielded the most accurate area estimates for spheres of different sizes and brightnesses and for four of the five cell types tested. A simple model of the optical properties of fluorescing objects and the video acquisition system is described which explains how the second derivative best approximates the position of the edge. PMID:2516431
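A minimal sketch of the second-derivative threshold selection on a synthetic edge profile (the profile shape and grid are invented for illustration):

```python
import numpy as np

# Edge location from the minimum of the second derivative of a radial
# intensity profile of a fluorescing object (synthetic blurred edge).
r = np.linspace(0, 10, 201)                    # distance from object centre
profile = 100.0 / (1.0 + np.exp(r - 5.0))      # synthetic bright-to-dark edge

d2 = np.gradient(np.gradient(profile, r), r)   # second derivative of profile
edge_idx = np.argmin(d2)                       # minimum of the 2nd derivative
threshold = profile[edge_idx]                  # intensity taken as threshold
print("edge radius:", r[edge_idx], "threshold:", threshold)
```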
An improved, robust, axial line singularity method for bodies of revolution
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.
1989-01-01
The failures encountered in attempts to increase the range of applicability of the axial line singularity method for representing incompressible, inviscid flow about an inclined, slender body of revolution are noted to be common to all efforts to solve Fredholm equations of the first kind. It is shown that a previously developed smoothing technique yields a robust method for the numerical solution of the governing equations; this technique is easily retrofitted to existing codes and allows the number of singularities to be increased until the most accurate line singularity solution is obtained.
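A smoothing technique of this kind is in the spirit of Tikhonov regularization for discretized first-kind Fredholm equations; the following minimal sketch illustrates the idea on a synthetic kernel and is not the paper's exact scheme:

```python
import numpy as np

# Discretized Fredholm equation of the first kind, A x = b, solved with
# Tikhonov-style smoothing: minimize ||A x - b||^2 + lam * ||L x||^2,
# where L is a second-difference operator penalizing rough solutions.
n = 50
s = np.linspace(0, 1, n)
A = np.exp(-10 * (s[:, None] - s[None, :]) ** 2)   # smooth (ill-posed) kernel
x_true = np.sin(np.pi * s)
b = A @ x_true + 1e-4 * np.random.default_rng(2).standard_normal(n)

L = np.diff(np.eye(n), 2, axis=0)                  # second-difference matrix
lam = 1e-6
x_reg = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)
print("max error:", np.abs(x_reg - x_true).max())
```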
Application of artificial neural networks in nonlinear analysis of trusses
NASA Technical Reports Server (NTRS)
Alam, J.; Berke, L.
1991-01-01
A method is developed to incorporate a neural network model of material response, based upon the backpropagation algorithm, into nonlinear elastic truss analysis using the initial stiffness method. Different network configurations are developed to assess the accuracy of neural network modeling of nonlinear material response. In addition, a scheme based upon linear interpolation of material data is implemented for comparison purposes. It is found that the neural network approach can yield very accurate results if used with care. For the type of problems under consideration, it offers a viable alternative to other material modeling methods.
Gamez-Mendoza, Liliana; Terban, Maxwell W.; Billinge, Simon J. L.; ...
2017-04-13
The particle size of supported catalysts is a key characteristic for determining structure–property relationships. It is a challenge to obtain this information accurately and in situ using crystallographic methods owing to the small size of such particles (<5 nm) and the fact that they are supported. In this work, the pair distribution function (PDF) technique was used to obtain the particle size distribution of supported Pt catalysts as they grow under typical synthesis conditions. The PDF of Pt nanoparticles grown on zeolite X was isolated and refined using two models: a monodisperse spherical model (single particle size) and a lognormal size distribution. The results were compared and validated using scanning transmission electron microscopy (STEM) results. Both models describe the same trends in average particle size with temperature, but the results of the number-weighted lognormal size distributions can also accurately describe the mean size and the width of the size distributions obtained from STEM. Since the PDF yields crystallite sizes, these results suggest that the grown Pt nanoparticles are monocrystalline. As a result, this work shows that refinement of the PDF of small supported monocrystalline nanoparticles can yield accurate mean particle sizes and distributions.
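A minimal sketch of the size-damping idea behind such refinements: the PDF of a spherical particle is attenuated by the sphere characteristic function, which can be number-weighted over a lognormal diameter distribution (parameters assumed; this is not the authors' software implementation):

```python
import numpy as np

def sphere_envelope(r, D):
    """Characteristic function of a sphere: attenuation of PDF peaks at
    pair distance r for a particle of diameter D (zero for r >= D)."""
    x = np.clip(r / D, 0, 1)
    return 1 - 1.5 * x + 0.5 * x ** 3

# Number-weighted lognormal average of the envelope over particle diameters.
r = np.linspace(0.1, 60, 500)                 # pair distance (angstrom)
D = np.linspace(5, 100, 400)                  # candidate diameters (angstrom)
mu, sigma = np.log(25.0), 0.35                # assumed lognormal parameters
weights = np.exp(-(np.log(D) - mu) ** 2 / (2 * sigma ** 2)) / D
weights /= np.trapz(weights, D)               # normalize the distribution
envelope = np.trapz(weights[None, :] * sphere_envelope(r[:, None], D[None, :]),
                    D, axis=1)
print(envelope[:5])
```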
Canseco Grellet, M A; Castagnaro, A; Dantur, K I; De Boeck, G; Ahmed, P M; Cárdenas, G J; Welin, B; Ruiz, R M
2016-10-01
To calculate fermentation efficiency in a continuous ethanol production process, we aimed to develop a robust mathematical method based on the analysis of metabolic by-product formation. This method is in contrast to the traditional way of calculating ethanol fermentation efficiency, where the ratio between the ethanol produced and the sugar consumed is expressed as a percentage of the theoretical conversion yield. Comparison between the two methods, at industrial scale and in sensitivity studies, showed that the indirect method was more robust and gave slightly higher fermentation efficiency values, although the fermentation efficiency of the industrial process was found to be low (~75%). The traditional calculation method is simpler than the indirect method, as it only requires a few chemical determinations in the collected samples. However, a minor error in any measured parameter will have an important impact on the calculated efficiency. In contrast, the indirect method requires a greater number of determinations but is much more robust, since an error in any parameter will have only a minor effect on the fermentation efficiency value. Application of the indirect calculation methodology is recommended in order to evaluate the real state of the process and to reach an optimum fermentation yield in industrial-scale ethanol production. Once a high fermentation yield has been reached, the traditional method should be used to maintain control of the process. Upon detection of lower yields in an optimized process, the indirect method should be employed, as it permits a more accurate diagnosis of the causes of yield losses so the problem can be corrected rapidly. The low fermentation efficiency obtained in this study shows an urgent need for industrial process optimization, where the indirect calculation methodology will be an important tool for determining process losses. © 2016 The Society for Applied Microbiology.
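For reference, the traditional (direct) efficiency calculation reduces to a one-line formula against the stoichiometric maximum of 0.511 g ethanol per g of hexose; the numbers below are illustrative, not plant data:

```python
# Direct (traditional) fermentation efficiency: ethanol produced relative to
# the stoichiometric maximum of 0.511 g ethanol per g of hexose consumed
# (Gay-Lussac yield).
GAY_LUSSAC = 0.511  # g ethanol / g glucose

def direct_efficiency(ethanol_g: float, sugar_consumed_g: float) -> float:
    """Efficiency (%) = ethanol / (0.511 * sugar consumed) * 100."""
    return 100.0 * ethanol_g / (GAY_LUSSAC * sugar_consumed_g)

print(direct_efficiency(ethanol_g=38.5, sugar_consumed_g=100.0))  # ~75%
```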
NASA Astrophysics Data System (ADS)
Duru, Kenneth; Dunham, Eric M.
2016-01-01
Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.
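A minimal sketch of a classical second-order SBP first-derivative operator and a check of the summation-by-parts property (grid size and spacing are arbitrary):

```python
import numpy as np

# A classical second-order summation-by-parts (SBP) first-derivative
# operator D = H^{-1} Q on n grid points, and a check of the SBP property
# Q + Q^T = diag(-1, 0, ..., 0, 1), which mimics integration by parts.
n, h = 11, 0.1
H = h * np.eye(n)
H[0, 0] = H[-1, -1] = h / 2                              # diagonal norm
Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))             # central interior
Q[0, 0], Q[-1, -1] = -0.5, 0.5                           # boundary closure
D = np.linalg.inv(H) @ Q

B = np.zeros((n, n))
B[0, 0], B[-1, -1] = -1.0, 1.0
print(np.allclose(Q + Q.T, B))                           # True: SBP property
x = np.linspace(0, 1, n)
print(np.max(np.abs(D @ x - 1.0)))                       # exact for linear data
```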
NASA Astrophysics Data System (ADS)
Zhang, Zhongya; Pan, Bing; Grédiac, Michel; Song, Weidong
2018-04-01
The virtual fields method (VFM) is generally used with two-dimensional digital image correlation (2D-DIC) or the grid method (GM) for identifying constitutive parameters. However, when the test specimen undergoes small out-of-plane translation or rotation, 2D-DIC and GM are prone to yield inaccurate measurements, which further lessens the accuracy of parameter identification using VFM. In this work, an easy-to-implement but effective "special" stereo-DIC (SS-DIC) method is proposed for accuracy-enhanced VFM identification. SS-DIC not only delivers accurate deformation measurement unaffected by unavoidable out-of-plane movement or rotation of a test specimen, but also ensures evenly distributed calculation data in space, which leads to simple data processing. Based on the accurate kinematic fields with evenly distributed measurement points determined by the SS-DIC method, constitutive parameters can be identified by VFM with enhanced accuracy. Uniaxial tensile tests of a perforated aluminum plate and pure shear tests of a prismatic aluminum specimen verified the effectiveness and accuracy of the proposed method. Experimental results show that the constitutive parameters identified by VFM using SS-DIC are more accurate and stable than those identified by VFM using 2D-DIC. It is suggested that the proposed SS-DIC can be used as a standard measuring tool for mechanical identification using VFM.
NASA Astrophysics Data System (ADS)
Rahayu, A. P.; Hartatik, T.; Purnomoadi, A.; Kurnianto, E.
2018-02-01
The aims of this study were to estimate the 305-day first-lactation milk yield of Indonesian Holstein cattle from cumulative monthly and bimonthly test day records and to analyze the accuracy of the estimates. First-lactation records of 258 dairy cows from 2006 to 2014, consisting of 2571 monthly test day yield (MTDY) and 1281 bimonthly test day yield (BTDY) records, were used. Milk yields were estimated by a regression method. Correlation coefficients between actual and estimated milk yield from cumulative MTDY were 0.70, 0.78, 0.83, 0.86, 0.89, 0.92, 0.94 and 0.96 for 2-9 months, respectively, while those from cumulative BTDY were 0.69, 0.81, 0.87 and 0.92 for 2, 4, 6 and 8 months, respectively. The accuracy of the fitted regression models (R²) increased with the number of cumulative test day records used. The use of 5 cumulative MTDY records was considered sufficient for estimating 305-day first-lactation milk yield, with 80.6% accuracy and a 7% error percentage of estimation. Estimates from MTDY were more accurate than those from BTDY over the same period, with error percentages 1.1 to 2% lower.
A temperature match based optimization method for daily load prediction considering DLC effect
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Z.
This paper presents a unique optimization method for short-term load forecasting. The new method is based on the optimal template temperature match between future and past temperatures. The optimal error reduction technique is a new concept introduced in this paper. Two case studies show that for hourly load forecasting, this method can yield results as good as the rather complicated Box-Jenkins Transfer Function method, and better than the Box-Jenkins method; for peak load prediction, this method is comparable in accuracy to the neural network method with back propagation, and can produce more accurate results than the multi-linear regression method. The DLC effect on system load is also considered in this method.
A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery.
Huang, Huasheng; Deng, Jizhong; Lan, Yubin; Yang, Aqing; Deng, Xiaoling; Zhang, Lei
2018-01-01
Appropriate Site Specific Weed Management (SSWM) is crucial to ensure crop yields. For SSWM over large areas, remote sensing is a key technology for providing accurate weed distribution information. Compared with satellite and piloted-aircraft remote sensing, an unmanned aerial vehicle (UAV) is capable of capturing high-spatial-resolution imagery, which provides more detailed information for weed mapping. The objective of this paper is to generate an accurate weed cover map based on UAV imagery. The UAV RGB imagery was collected in October 2017 over a rice field in South China. The Fully Convolutional Network (FCN) method was proposed for weed mapping of the collected imagery. Transfer learning was used to improve generalization capability, and skip architecture was applied to increase prediction accuracy. The performance of the FCN architecture was then compared with a patch-based CNN algorithm and a pixel-based CNN method. Experimental results showed that our FCN method outperformed the others in both accuracy and efficiency. The overall accuracy of the FCN approach was up to 0.935, and the accuracy for weed recognition was 0.883, which means this algorithm is capable of generating accurate weed cover maps for the evaluated UAV imagery.
NASA Astrophysics Data System (ADS)
Bordui, P. F.; Loiacono, G. M.
1984-07-01
A method is presented for in-line bulk supersaturation measurement in crystal growth from aqueous solution. The method is based on a computer-controlled concentration measurement exploiting an experimentally predetermined cross-correlation between the concentration, electrical conductivity, and temperature of the growth solution. The method was applied to Holden crystallization of potassium dihydrogen phosphate (KDP). An extensive conductivity-temperature-concentration data base was generated for this system over a temperature range of 31 to 41°C. The method yielded continuous, automated bulk supersaturation output accurate to within ±0.05 g KDP/100 g water (±0.15% relative supersaturation).
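A minimal sketch of the underlying calculation: a pre-fitted concentration surface C(conductivity, T) combined with a solubility curve gives relative supersaturation; both fits below are placeholders, not the published KDP calibration:

```python
# Supersaturation from conductivity and temperature via a pre-fitted
# concentration surface C(kappa, T). The coefficients and the solubility
# curve below are assumed placeholders, not the published calibration.
def concentration(kappa, T):
    return 2.0 + 0.5 * kappa + 0.1 * T          # assumed fitted surface

def solubility(T):
    return 15.0 + 0.35 * T                      # assumed C_sat(T), g/100 g water

def supersaturation(kappa, T):
    c, c_sat = concentration(kappa, T), solubility(T)
    return 100.0 * (c - c_sat) / c_sat          # relative supersaturation (%)

print(supersaturation(kappa=45.0, T=35.0))
```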
NASA Astrophysics Data System (ADS)
Sakimoto, S. E. H.
2016-12-01
Planetary volcanism has redefined what is considered volcanism. "Magma" now may be considered to be anything from the molten rock familiar at terrestrial volcanoes to cryovolcanic ammonia-water mixes erupted on an outer solar system moon. However, even with unfamiliar compositions and source mechanisms, we find familiar landforms such as volcanic channels, lakes, flows, and domes, and thus a multitude of possibilities for modeling. As on Earth, these landforms lend themselves to analysis for estimating storage, eruption and/or flow rates. This has potential pitfalls, as extension of the simplified analytic models we often use for terrestrial features into unfamiliar parameter space might yield misleading results. Our most commonly used tools for estimating flow and cooling have tended to lag significantly behind the state of the art; the easiest methods to use are neither realistic nor accurate, while the more realistic and accurate computational methods are not simple to use. Since the latter tend to be expensive and require a significant learning curve, there is a need for a user-friendly approach that still takes advantage of their accuracy. One method is to use the computational package to generate a server-based tool that allows less computationally inclined users to get accurate results over their range of input parameters for a given problem geometry. A second method is to use the computational package to generate a polynomial empirical solution for each class of flow geometry that can be fairly easily solved by anyone with a spreadsheet. In this study, we demonstrate both approaches for several channel flow and lava lake geometries with terrestrial and extraterrestrial examples and compare their results. Specifically, we model cooling rectangular channel flow of a yield strength material, with applications to Mauna Loa, Kilauea, Venus, and Mars. This approach also shows promise for model applications to lava lakes, magma flow through cracks, and volcanic dome formation.
How does spatial and temporal resolution of vegetation index impact crop yield estimation?
USDA-ARS?s Scientific Manuscript database
Timely and accurate estimation of crop yield before harvest is critical for food market and administrative planning. Remote sensing data have been used in crop yield estimation for decades. The process-based approach uses a light use efficiency model to estimate crop yield. Vegetation index (VI) ...
Improvements to robotics-inspired conformational sampling in Rosetta.
Stein, Amelie; Kortemme, Tanja
2013-01-01
To accurately predict protein conformations in atomic detail, a computational method must be capable of sampling models sufficiently close to the native structure. All-atom sampling is difficult because of the vast number of possible conformations and extremely rugged energy landscapes. Here, we test three sampling strategies to address these difficulties: conformational diversification, intensification of torsion and omega-angle sampling and parameter annealing. We evaluate these strategies in the context of the robotics-based kinematic closure (KIC) method for local conformational sampling in Rosetta on an established benchmark set of 45 12-residue protein segments without regular secondary structure. We quantify performance as the fraction of sub-Angstrom models generated. While improvements with individual strategies are only modest, the combination of intensification and annealing strategies into a new "next-generation KIC" method yields a four-fold increase over standard KIC in the median percentage of sub-Angstrom models across the dataset. Such improvements enable progress on more difficult problems, as demonstrated on longer segments, several of which could not be accurately remodeled with previous methods. Given its improved sampling capability, next-generation KIC should allow advances in other applications such as local conformational remodeling of multiple segments simultaneously, flexible backbone sequence design, and development of more accurate energy functions.
NASA Astrophysics Data System (ADS)
Reichert, Andreas; Rettinger, Markus; Sussmann, Ralf
2016-09-01
Quantitative knowledge of water vapor absorption is crucial for accurate climate simulations. An open science question in this context concerns the strength of the water vapor continuum in the near infrared (NIR) at atmospheric temperatures, which is still to be quantified by measurements. This issue can be addressed with radiative closure experiments using solar absorption spectra. However, the spectra used for water vapor continuum quantification have to be radiometrically calibrated. We present for the first time a method that yields sufficient calibration accuracy for NIR water vapor continuum quantification in an atmospheric closure experiment. Our method combines the Langley method with spectral radiance measurements of a high-temperature blackbody calibration source (<2000 K). The calibration scheme is demonstrated in the spectral range 2500 to 7800 cm⁻¹, but minor modifications to the method enable calibration also throughout the remainder of the NIR spectral range. The resulting uncertainty (2σ) excluding the contribution due to inaccuracies in the extra-atmospheric solar spectrum (ESS) is below 1% in window regions and up to 1.7% within absorption bands. The overall radiometric accuracy of the calibration depends on the ESS uncertainty, on which at present no firm consensus has been reached in the NIR. However, as is shown in the companion publication Reichert and Sussmann (2016), ESS uncertainty is only of minor importance for the specific aim of this study, i.e., the quantification of the water vapor continuum in a closure experiment. The calibration uncertainty estimate is substantiated by the investigation of calibration self-consistency, which yields compatible results within the estimated errors for 91.1% of the 2500 to 7800 cm⁻¹ range. Additionally, a comparison of a set of calibrated spectra to radiative transfer model calculations yields consistent results within the estimated errors for 97.7% of the spectral range.
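A minimal sketch of the Langley extrapolation underlying the method: the log of the measured signal is regressed against airmass, and the zero-airmass intercept gives the extra-atmospheric signal (the data are synthetic):

```python
import numpy as np

# Langley extrapolation: ln(signal) measured at several airmasses m follows
# ln V(m) = ln V0 - tau * m; the zero-airmass intercept V0 calibrates the
# instrument. Synthetic clear-sky morning data at a single wavenumber.
m = np.array([2.0, 2.5, 3.0, 4.0, 5.0])            # airmass values
tau_true, V0_true = 0.12, 8.0e4
noise = 1 + 0.002 * np.random.default_rng(3).standard_normal(5)
V = V0_true * np.exp(-tau_true * m) * noise        # measured signals

slope, intercept = np.polyfit(m, np.log(V), 1)
print("tau =", -slope, "V0 =", np.exp(intercept))  # extra-atmospheric signal
```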
NASA Technical Reports Server (NTRS)
Green, S.; Cochrane, D. L.; Truhlar, D. G.
1986-01-01
The utility of the energy-corrected sudden (ECS) scaling method is evaluated on the basis of how accurately it predicts the entire matrix of state-to-state rate constants when the fundamental rate constants are independently known. It is shown for the case of Ar-CO collisions at 500 K that, when the critical impact parameter is about 1.75-2.0 Å, the ECS method yields excellent excited-state rates on average and has an rms error of less than 20 percent.
Legendre-tau approximations for functional differential equations
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1986-01-01
The numerical approximation of solutions to linear retarded functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison between the latter and cubic spline approximation is made.
A High-Performance Parallel Implementation of the Certified Reduced Basis Method
2010-12-15
This problem is challenging from the point of view of model reduction due to the "curse of dimensionality". We consider transient thermal conduction in a three-dimensional "Swiss cheese" problem (see Figure 7a). Our primal-dual RB method yields a very fast and accurate output approximation for this problem.
Legendre-Tau approximations for functional differential equations
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1983-01-01
The numerical approximation of solutions to linear functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time-differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and a comparison between the latter and cubic spline approximations is made.
Strobel, M.L.; Delin, G.N.
1996-01-01
The Neuman (1974) method for unconfined aquifers was used to analyze data collected from the two observation wells during the drawdown and recovery periods, resulting in a range of estimated aquifer hydraulic properties. Aquifer transmissivity ranged from 4,710 to 7,660 ft²/d and aquifer storativity ranged from 8.24 × 10⁻⁵ to 1.60 × 10⁻⁴. These values are generally in close agreement for all four sets of data, given the limitations of the test, indicating that the test results are accurate and representative of the aquifer hydrogeologic properties. The lack of late-time data made it impossible to accurately assess aquifer specific yield.
MISSE 2 PEACE Polymers Experiment Atomic Oxygen Erosion Yield Error Analysis
NASA Technical Reports Server (NTRS)
McCarthy, Catherine E.; Banks, Bruce A.; de Groh, Kim K.
2010-01-01
Atomic oxygen erosion of polymers in low Earth orbit (LEO) poses a serious threat to spacecraft performance and durability. To address this, 40 different polymer samples and a sample of pyrolytic graphite, collectively called the PEACE (Polymer Erosion and Contamination Experiment) Polymers, were exposed to the LEO space environment on the exterior of the International Space Station (ISS) for nearly 4 years as part of the Materials International Space Station Experiment 1 & 2 (MISSE 1 & 2). The purpose of the PEACE Polymers experiment was to obtain accurate mass loss measurements in space to combine with ground measurements in order to accurately calculate the atomic oxygen erosion yields of a wide variety of polymeric materials exposed to the LEO space environment for a long period of time. Error calculations were performed in order to determine the accuracy of the mass measurements and therefore of the erosion yield values. The standard deviation, or error, of each factor was incorporated into the fractional uncertainty of the erosion yield for each of three different situations, depending on the post-flight weighing procedure. The resulting error calculations showed the erosion yield values to be very accurate, with an average error of 3.30 percent.
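The error analysis described here combines independent fractional uncertainties in quadrature. A minimal sketch assuming the usual LEO erosion-yield definition, Ey = mass loss / (area × density × fluence); all numbers below are hypothetical stand-ins, not MISSE data:

```python
import numpy as np

def erosion_yield(dm, area, rho, fluence):
    """Ey = mass loss / (area * density * fluence), in cm^3/atom."""
    return dm / (area * rho * fluence)

def combined_fractional_error(*frac_errors):
    """Independent fractional errors combined in quadrature."""
    return np.sqrt(np.sum(np.square(frac_errors)))

# Hypothetical inputs for illustration (grams, cm^2, g/cm^3, atoms/cm^2):
ey = erosion_yield(dm=0.0350, area=6.0, rho=1.43, fluence=8.4e21)
frac = combined_fractional_error(0.003, 0.002, 0.005, 0.020)  # dm, A, rho, F
print(f"Ey = {ey:.2e} cm^3/atom +/- {100 * frac:.2f} %")
```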
Reflection full-waveform inversion using a modified phase misfit function
NASA Astrophysics Data System (ADS)
Cui, Chao; Huang, Jian-Ping; Li, Zhen-Chun; Liao, Wen-Yuan; Guan, Zhe
2017-09-01
Reflection full-waveform inversion (RFWI) updates the low- and high-wavenumber components and yields more accurate initial models than conventional full-waveform inversion (FWI). However, there is strong nonlinearity in conventional RFWI because of the lack of low-frequency data and the complexity of the amplitude. Separating phase and amplitude information makes RFWI more linear. Traditional phase-calculation methods face severe phase wrapping. To solve this problem, we propose a modified phase-calculation method that uses the phase-envelope data to obtain the pseudo phase information. Then, we establish a pseudophase-information-based objective function for RFWI, with the corresponding source and gradient terms. Numerical tests verify that the proposed calculation method using the phase-envelope data guarantees the stability and accuracy of the phase information and the convergence of the objective function. The application on a portion of the Sigsbee2A model and comparison with inversion results of the improved RFWI and conventional FWI methods verify that the pseudophase-based RFWI produces a highly accurate and efficient velocity model. Moreover, the proposed method is robust to noise and high frequencies.
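A common route to separating phase and amplitude information in a trace is the analytic signal. The sketch below uses a Hilbert transform to extract the envelope and instantaneous phase of a toy reflection; the authors' pseudo-phase construction from phase-envelope data goes beyond this generic step:

```python
import numpy as np
from scipy.signal import hilbert

# Analytic-signal decomposition of a seismic trace into envelope and phase.
dt = 0.002
t = np.arange(0.0, 1.0, dt)
trace = np.sin(2 * np.pi * 15 * t) * np.exp(-((t - 0.5) / 0.1) ** 2)  # toy reflection

z = hilbert(trace)                 # analytic signal
envelope = np.abs(z)               # amplitude information
phase = np.unwrap(np.angle(z))     # instantaneous phase, unwrapped to avoid wrapping jumps
```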
Sluiter, Amie; Sluiter, Justin; Wolfrum, Ed; ...
2016-05-20
Accurate and precise chemical characterization of biomass feedstocks and process intermediates is a requirement for successful technical and economic evaluation of biofuel conversion technologies. The uncertainty in primary measurements of the fraction insoluble solid (FIS) content of dilute acid pretreated corn stover slurry is the major contributor to uncertainty in yield calculations for enzymatic hydrolysis of cellulose to glucose. This uncertainty is propagated through process models and impacts modeled fuel costs. The challenge in measuring FIS is obtaining an accurate measurement of insoluble matter in the pretreated materials, while appropriately accounting for all biomass derived components. Three methods were tested to improve this measurement. One used physical separation of liquid and solid phases, and two utilized direct determination of dry matter content in two fractions. We offer a comparison of drying methods. Lastly, our results show that utilizing a microwave dryer to directly determine dry matter content is the optimal method for determining FIS, based on the low time requirements and the method optimization done using model slurries.
Dynamic non-equilibrium wall-modeling for large eddy simulation at high Reynolds numbers
NASA Astrophysics Data System (ADS)
Kawai, Soshi; Larsson, Johan
2013-01-01
A dynamic non-equilibrium wall-model for large-eddy simulation at arbitrarily high Reynolds numbers is proposed and validated on equilibrium boundary layers and a non-equilibrium shock/boundary-layer interaction problem. The proposed method builds on the prior non-equilibrium wall-models of Balaras et al. [AIAA J. 34, 1111-1119 (1996)], 10.2514/3.13200 and Wang and Moin [Phys. Fluids 14, 2043-2051 (2002)], 10.1063/1.1476668: the failure of these wall-models to accurately predict the skin friction in equilibrium boundary layers is shown and analyzed, and an improved wall-model that solves this issue is proposed. The improvement stems directly from reasoning about how the turbulence length scale changes with wall distance in the inertial sublayer, the grid resolution, and the resolution-characteristics of numerical methods. The proposed model yields accurate resolved turbulence, both in terms of structure and statistics for both the equilibrium and non-equilibrium flows without the use of ad hoc corrections. Crucially, the model accurately predicts the skin friction, something that existing non-equilibrium wall-models fail to do robustly.
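In the equilibrium limit, wall models of this family reduce to integrating a constant-stress balance with a mixing-length eddy viscosity. A minimal sketch of that reduced problem (the damping constant and the incompressible, equilibrium setting are typical textbook choices, not necessarily those of the paper):

```python
import numpy as np

# Equilibrium wall-model sketch: integrate (1 + nu_t+) du+/dy+ = 1 with a
# Van Driest-damped mixing-length eddy viscosity nu_t+ = kappa*y+*(1 - exp(-y+/A+))^2.
kappa, aplus = 0.41, 17.0
yp = np.linspace(0.0, 1000.0, 20001)
nut = kappa * yp * (1.0 - np.exp(-yp / aplus)) ** 2
dup = 1.0 / (1.0 + nut)
# Trapezoidal integration of du+/dy+ gives the velocity profile u+(y+).
up = np.concatenate([[0.0], np.cumsum(0.5 * (dup[1:] + dup[:-1]) * np.diff(yp))])
# up approaches (1/kappa) * ln(y+) + B in the log layer
print(up[-1])
```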
On plant detection of intact tomato fruits using image analysis and machine learning methods.
Yamamoto, Kyosuke; Guo, Wei; Yoshioka, Yosuke; Ninomiya, Seishi
2014-07-09
Fully automated yield estimation of intact fruits prior to harvesting provides various benefits to farmers. Until now, several studies have been conducted to estimate fruit yield using image-processing technologies. However, most of these techniques require thresholds for features such as color, shape and size. In addition, their performance strongly depends on the thresholds used, although optimal thresholds tend to vary with images. Furthermore, most of these techniques have attempted to detect only mature and immature fruits, although the number of young fruits is more important for the prediction of long-term fluctuations in yield. In this study, we aimed to develop a method to accurately detect individual intact tomato fruits including mature, immature and young fruits on a plant using a conventional RGB digital camera in conjunction with machine learning approaches. The developed method did not require an adjustment of threshold values for fruit detection from each image because image segmentation was conducted based on classification models generated in accordance with the color, shape, texture and size of the images. The results of fruit detection in the test images showed that the developed method achieved a recall of 0.80, while the precision was 0.88. The recall values of mature, immature and young fruits were 1.00, 0.80 and 0.78, respectively.
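Pixel-level classification of this kind can be prototyped quickly. A minimal sketch with scikit-learn using color features only; the classifier choice and the random stand-in data are illustrative (the study combined color, shape, texture, and size features):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Per-pixel classification sketch: RGB features -> fruit / background mask.
rng = np.random.default_rng(0)
X_train = rng.random((2000, 3))              # stand-in for sampled pixel RGB values
y_train = (X_train[:, 0] > 0.6).astype(int)  # stand-in labels (fruit vs. rest)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

image = rng.random((120, 160, 3))            # stand-in RGB image in [0, 1]
mask = clf.predict(image.reshape(-1, 3)).reshape(image.shape[:2])
# Connected regions of `mask` would then be analyzed per detected fruit.
```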
Theoretical study of the hyperfine parameters of OH
NASA Technical Reports Server (NTRS)
Chong, Delano P.; Langhoff, Stephen R.; Bauschlicher, Charles W., Jr.
1991-01-01
In the present study of the hyperfine parameters of O-17H as a function of the one- and n-particle spaces, all of the parameters except oxygen's spin density, b_F(O), are sufficiently tractable to allow concentration on the computational requirements for an accurate determination of b_F(O). Full configuration-interaction (FCI) calculations in six Gaussian basis sets yield unambiguous results for (1) the effect of uncontracting the O s and p basis sets; (2) that of adding diffuse s and p functions; and (3) that of adding polarization functions to O. The size-extensive modified coupled-pair functional method yields b_F values which are in fair agreement with FCI results.
Aziz, Omar; Hussain, Saddam; Rizwan, Muhammad; Riaz, Muhammad; Bashir, Saqib; Lin, Lirong; Mehmood, Sajid; Imran, Muhammad; Yaseen, Rizwan; Lu, Guoan
2018-06-01
Looming water scarcity worldwide necessitates the development of water-saving technologies in rice production. An open greenhouse experiment was conducted on rice during the summer season of 2016 at Huazhong Agricultural University, Wuhan, China, in order to study the influence of irrigation methods and nitrogen (N) inputs on water productivity, N economy, and grain yield of rice. Two irrigation methods, viz. conventional irrigation (CI) and "thin-shallow-moist-dry" irrigation (TSMDI), and three levels of nitrogen, viz. 0 kg N ha⁻¹ (N0), 90 kg N ha⁻¹ (N1), and 180 kg N ha⁻¹ (N2), were examined with three replications. The data indicated no significant water-by-nitrogen interaction on grain yield, biomass, water productivity, N uptake, NUE, or fertilizer N balance. Results revealed that the TSMDI method showed significantly higher water productivity, and irrigation water applications were reduced by 17.49% in TSMDI compared to CI. Thus, TSMDI enhanced root growth and offered significantly greater water savings along with higher grain yield compared to CI. The nitrogen tracer (¹⁵N) technique accurately assessed the absorption and distribution of added N in the soil-crop environment and revealed higher nitrogen use efficiency (NUE) under TSMDI. At the same N inputs, TSMDI was the optimal method to minimize nitrogen leaching loss, decreasing water leakage by about 18.63%, which is beneficial for the ecological environment.
Lei, Huan; Yang, Xiu; Zheng, Bin; ...
2015-11-05
Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational "active space" random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
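The compressive-sensing step amounts to an l1-regularized fit of polynomial-chaos coefficients. A minimal one-dimensional sketch with probabilists' Hermite polynomials and a Lasso solver; this is a generic illustration, not the authors' active-space construction, and the target function is a stand-in:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import Lasso

# Sparse 1D Hermite (probabilists') polynomial-chaos surrogate of f(xi),
# xi ~ N(0,1), fitted by l1-regularized regression.
rng = np.random.default_rng(1)
f = lambda x: np.exp(0.3 * x) + 0.1 * x ** 3        # stand-in target property
xi = rng.standard_normal(200)                        # sampled "conformational" states
P = 15                                               # polynomial order
# Design matrix: column k holds He_k evaluated at the samples.
Psi = np.column_stack([hermeval(xi, np.eye(P + 1)[k]) for k in range(P + 1)])

model = Lasso(alpha=1e-3, max_iter=50000).fit(Psi, f(xi))
coeff = model.coef_                                  # sparse gPC coefficients
print(np.flatnonzero(np.abs(coeff) > 1e-4))          # retained modes
```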
Finite difference elastic wave modeling with an irregular free surface using ADER scheme
NASA Astrophysics Data System (ADS)
Almuhaidib, Abdulaziz M.; Nafi Toksöz, M.
2015-06-01
In numerical modeling of seismic wave propagation in the earth, we encounter two important issues: the free surface and the topography of the surface (i.e. irregularities). In this study, we develop a 2D finite difference solver for the elastic wave equation that combines a 4th-order ADER scheme (Arbitrary high-order accuracy using DERivatives), which is widely used in aeroacoustics, with the characteristic variable method at the free surface boundary. The idea is to treat the free surface boundary explicitly by using ghost values of the solution for points beyond the free surface to impose the physical boundary condition. The method is based on the velocity-stress formulation. The ultimate goal is to develop a numerical solver for the elastic wave equation that is stable, accurate and computationally efficient. The solver treats smooth arbitrary-shaped boundaries as simple plane boundaries. The computational cost added by treating the topography is negligible compared to a flat free surface because only a small number of grid points near the boundary need to be computed. In the presence of topography, using 10 grid points per shortest shear-wavelength, the solver yields accurate results. Benchmark numerical tests using several complex models that are solved by our method and other independent accurate methods show an excellent agreement, confirming the validity of the method for modeling elastic waves with an irregular free surface.
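The ghost-value treatment of a traction-free surface is easiest to see in 1D: the stress is taken antisymmetric across the boundary so that the traction vanishes there. A minimal velocity-stress sketch; this 1D toy uses second-order differences rather than the paper's 4th-order ADER scheme:

```python
import numpy as np

# 1D velocity-stress finite differences with a free surface at the left end,
# imposed via a ghost stress value (stress antisymmetric across the boundary).
nx, dx, dt, nt = 300, 1.0, 2e-4, 2000
rho, mu = 2000.0, 2000.0 * 1500.0 ** 2        # density, shear modulus (c = 1500 m/s)
v = np.zeros(nx)                              # velocity at integer nodes
s = np.zeros(nx - 1)                          # stress at half nodes
v[nx // 2] = 1.0                              # initial impulse in the interior

for _ in range(nt):
    # Ghost value s[-1] = -s[0] makes the surface traction vanish exactly.
    v[0] += dt / rho * (s[0] - (-s[0])) / dx
    v[1:-1] += dt / rho * (s[1:] - s[:-1]) / dx   # right end left fixed for simplicity
    s += dt * mu * (v[1:] - v[:-1]) / dx
# v now holds the wavefield reflected off the traction-free surface.
```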
Tubuxin, Bayaer; Rahimzadeh-Bajgiran, Parinaz; Ginnan, Yusaku; Hosoi, Fumiki; Omasa, Kenji
2015-01-01
This paper illustrates the possibility of measuring chlorophyll (Chl) content and Chl fluorescence parameters by the solar-induced Chl fluorescence (SIF) method using the Fraunhofer line depth (FLD) principle, and compares the results with the standard measurement methods. A high-spectral-resolution HR2000+ and an ordinary USB4000 spectrometer were used to measure leaf reflectance under solar and artificial light, respectively, to estimate Chl fluorescence. Using leaves of Capsicum annuum cv. ‘Sven’ (paprika), the relationships between the Chl content and the steady-state Chl fluorescence near the oxygen absorption bands O₂B (686 nm) and O₂A (760 nm), measured under artificial and solar light at different growing stages of leaves, were evaluated. The Chl fluorescence yield ratios ΦF686nm/ΦF760nm obtained from both methods correlated well with the Chl content (steady-state solar light: R² = 0.73; artificial light: R² = 0.94). The SIF method was less accurate for Chl content estimation when Chl content was high. The steady-state solar-induced Chl fluorescence yield ratio correlated very well with the artificial-light-induced one (R² = 0.84). A new methodology is then presented to estimate the photochemical yield of photosystem II (ΦPSII) from the SIF measurements, which was verified against the standard Chl fluorescence measurement method (pulse-amplitude modulated method). The high coefficient of determination (R² = 0.74) between the ΦPSII of the two methods shows that photosynthesis process parameters can be successfully estimated using the presented methodology. PMID:26071530
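The single-FLD retrieval solves two linear equations, one inside and one just outside the absorption line, for reflectance and fluorescence. A minimal sketch of the standard formula; the numbers are hypothetical O₂A-band values for illustration only:

```python
def fld_fluorescence(E_in, E_out, L_in, L_out):
    """Standard single Fraunhofer-line-depth retrieval.

    E: downwelling irradiance, L: upwelling radiance, measured inside ('in')
    and just outside ('out') the absorption line. Assumes reflectance and
    fluorescence are equal at the two wavelengths, giving
    F = (E_out * L_in - E_in * L_out) / (E_out - E_in).
    """
    return (E_out * L_in - E_in * L_out) / (E_out - E_in)

# Hypothetical O2A-band numbers:
F = fld_fluorescence(E_in=0.12, E_out=1.00, L_in=0.050, L_out=0.30)
print(F)   # ~0.016 in the same radiance units as L
```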
Yue, Zheng-Bo; Zhang, Meng-Lin; Sheng, Guo-Ping; Liu, Rong-Hua; Long, Ying; Xiang, Bing-Ren; Wang, Jin; Yu, Han-Qing
2010-04-01
A near-infrared-reflectance (NIR) spectroscopy-based method is established to determine the main components of aquatic plants as well as their anaerobic rumen biodegradability. The developed method is more rapid and accurate compared to the conventional chemical analysis and biodegradability tests. Moisture, volatile solid, Klason lignin and ash in entire aquatic plants could be accurately predicted using this method with coefficient of determination (r²) values of 0.952, 0.916, 0.939 and 0.950, respectively. In addition, the anaerobic rumen biodegradability of aquatic plants, represented as biogas and methane yields, could also be predicted well. The algorithm of continuous wavelet transform for the NIR spectral data pretreatment is able to greatly enhance the robustness and predictive ability of the NIR spectral analysis. These results indicate that NIR spectroscopy could be used to predict the main components of aquatic plants and their anaerobic biodegradability. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo
2017-09-01
Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis which belongs to the family of temporally weighted linear prediction (WLP) methods uses the conventional forward type of sample prediction. This may not be the best choice especially in computing WLP models with a hard-limiting weighting function. A sample selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted based on its past as well as future samples thereby utilizing the available number of samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach as well as natural speech utterances show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
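Temporally weighted linear prediction minimizes a weighted prediction-error energy, and the forward-backward variant predicts each sample from both past and future samples. A rough numpy sketch of that idea via weighted normal equations; the weighting and details differ from the paper's exact QCP-FB formulation:

```python
import numpy as np

def wlp_fb(x, p, w):
    """Weighted linear prediction, forward-backward error minimization.

    Minimizes sum_n w[n] * (e_fwd[n]^2 + e_bwd[n]^2) over order-p
    prediction coefficients a (a sketch of the FB idea only).
    """
    N = len(x)
    R = np.zeros((p, p)); r = np.zeros(p)
    for n in range(p, N - p):
        xf = x[n - 1: n - p - 1: -1]       # p past samples (forward model)
        xb = x[n + 1: n + p + 1]           # p future samples (backward model)
        R += w[n] * (np.outer(xf, xf) + np.outer(xb, xb))
        r += w[n] * x[n] * (xf + xb)
    return np.linalg.solve(R, r)

x = np.sin(0.3 * np.arange(400)) + 0.01 * np.random.default_rng(2).standard_normal(400)
a = wlp_fb(x, p=2, w=np.ones(400))
print(a)   # ~[2*cos(0.3), -1] for a sinusoid
```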
Estimation of Wheat Plant Density at Early Stages Using High Resolution Imagery
Liu, Shouyang; Baret, Fred; Andrieu, Bruno; Burger, Philippe; Hemmerlé, Matthieu
2017-01-01
Crop density is a key agronomical trait used to manage wheat crops and estimate yield. Visual counting of plants in the field is currently the most common method used. However, it is tedious and time consuming. The main objective of this work is to develop a machine vision based method to automate the density survey of wheat at early stages. RGB images taken with a high-resolution RGB camera are classified to identify the green pixels corresponding to the plants. Crop rows are extracted and the connected components (objects) are identified. A neural network is then trained to estimate the number of plants in the objects using the object features. The method was evaluated over three experiments showing contrasted conditions with sowing densities ranging from 100 to 600 seeds·m⁻². Results demonstrate that the density is accurately estimated with an average relative error of 12%. The pipeline developed here provides an efficient and accurate estimate of wheat plant density at early stages. PMID:28559901
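The segmentation-and-objects part of such a pipeline can be sketched with a vegetation mask and connected-component labeling; per-object features would then feed the trained neural network. A minimal sketch with random stand-in data and a crude greenness rule:

```python
import numpy as np
from scipy import ndimage

# Segment "green" pixels, label connected components (objects), and compute
# a per-object feature (area) of the kind a count regressor would consume.
rng = np.random.default_rng(3)
img = rng.random((200, 300, 3))                       # stand-in RGB image
r, g, b = img[..., 0], img[..., 1], img[..., 2]
green = (g > r) & (g > b) & (g > 0.6)                 # crude vegetation mask

labels, n_obj = ndimage.label(green)                  # connected components
areas = ndimage.sum(green, labels, index=np.arange(1, n_obj + 1))
print(n_obj, areas[:5])   # object count and example per-object features
```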
Accurate van der Waals coefficients from density functional theory
Tao, Jianmin; Perdew, John P.; Ruzsinszky, Adrienn
2012-01-01
The van der Waals interaction is a weak, long-range correlation, arising from quantum electronic charge fluctuations. This interaction affects many properties of materials. A simple and yet accurate estimate of this effect will facilitate computer simulation of complex molecular materials and drug design. Here we develop a fast approach for accurate evaluation of dynamic multipole polarizabilities and van der Waals (vdW) coefficients of all orders from the electron density and static multipole polarizabilities of each atom or other spherical object, without empirical fitting. Our dynamic polarizabilities (dipole, quadrupole, octupole, etc.) are exact in the zero- and high-frequency limits, and exact at all frequencies for a metallic sphere of uniform density. Our theory predicts dynamic multipole polarizabilities in excellent agreement with more expensive many-body methods, and yields therefrom vdW coefficients C₆, C₈, C₁₀ for atom pairs with a mean absolute relative error of only 3%. PMID:22205765
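The leading vdW coefficient follows from the Casimir-Polder integral over imaginary-frequency polarizabilities. A minimal sketch using a one-pole model α(iω) = α(0)/(1 + (ω/ω₀)²), which reduces analytically to the London formula; the effective ω₀ below is a hand-picked value for hydrogen, not a result of the paper:

```python
import numpy as np

def c6_casimir_polder(alpha0_a, w0_a, alpha0_b, w0_b, n=4000):
    """C6 = (3/pi) * Int_0^inf alpha_A(i w) alpha_B(i w) dw (atomic units),
    with one-pole polarizabilities alpha(i w) = alpha0 / (1 + (w/w0)^2)."""
    w = np.linspace(0.0, 50.0, n)                   # truncated frequency grid (a.u.)
    aa = alpha0_a / (1.0 + (w / w0_a) ** 2)
    ab = alpha0_b / (1.0 + (w / w0_b) ** 2)
    return 3.0 / np.pi * np.trapz(aa * ab, w)

# Hydrogen-like illustration: alpha0 = 4.5 a.u., effective w0 ~ 0.428 a.u.
print(c6_casimir_polder(4.5, 0.428, 4.5, 0.428))    # ~6.5 a.u. (accurate H-H: 6.499)
```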
Calculations of steady and transient channel flows with a time-accurate L-U factorization scheme
NASA Technical Reports Server (NTRS)
Kim, S.-W.
1991-01-01
Calculations of steady and unsteady, transonic, turbulent channel flows with a time accurate, lower-upper (L-U) factorization scheme are presented. The L-U factorization scheme is formally second-order accurate in time and space, and it is an extension of the steady state flow solver (RPLUS) used extensively to solve compressible flows. A time discretization method and the implementation of a consistent boundary condition specific to the L-U factorization scheme are also presented. The turbulence is described by the Baldwin-Lomax algebraic turbulence model. The present L-U scheme yields stable numerical results with the use of much smaller artificial dissipations than those used in the previous steady flow solver for steady and unsteady channel flows. The capability to solve time dependent flows is shown by solving very weakly excited and strongly excited, forced oscillatory, channel flows.
Using an analytical geometry method to improve tiltmeter data presentation
Su, W.-J.
2000-01-01
The tiltmeter is a useful tool for geologic and geotechnical applications. To obtain full benefit from the tiltmeter, easy and accurate data presentations should be used. Unfortunately, the most commonly used method for tilt data reduction may yield inaccurate and low-resolution results. This article describes a simple, accurate, and high-resolution approach developed at the Illinois State Geological Survey for data reduction and presentation. The orientation of tiltplates is determined first by using a trigonometric relationship, followed by a matrix transformation, to obtain the true amount of rotation change of the tiltplate at any given time. The mathematical derivations used for the determination and transformation are then coded into an integrated PC application by adapting the capabilities of commercial spreadsheet, database, and graphics software. Examples of data presentation from tiltmeter applications in studies of landfill covers, characterizations of mine subsidence, and investigations of slope stability are also discussed.
A Kosloff/Basal method, 3D migration program implemented on the CYBER 205 supercomputer
NASA Technical Reports Server (NTRS)
Pyle, L. D.; Wheat, S. R.
1984-01-01
Conventional finite difference migration has relied on approximations to the acoustic wave equation which allow energy to propagate only downwards. Although generally reliable, such approaches usually do not yield an accurate migration for geological structures with strong lateral velocity variations or with steeply dipping reflectors. An earlier study by D. Kosloff and E. Baysal (Migration with the Full Acoustic Wave Equation) examined an alternative approach based on the full acoustic wave equation. The 2D, Fourier type algorithm which was developed was tested by Kosloff and Baysal against synthetic data and against physical model data. The results indicated that such a scheme gives accurate migration for complicated structures. This paper describes the development and testing of a vectorized, 3D migration program for the CYBER 205 using the Kosloff/Baysal method. The program can accept as many as 65,536 zero offset (stacked) traces.
Accurate upwind methods for the Euler equations
NASA Technical Reports Server (NTRS)
Huynh, Hung T.
1993-01-01
A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
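The median function mentioned above packages the slope constraint compactly: median(a, b, c) = a + minmod(b - a, c - a), and median(0, Δ⁻, Δ⁺) is exactly the minmod slope. A minimal sketch of that identity in use; this shows only the basic monotone slope, not the paper's full constraint with smoothness detection and steepening:

```python
import numpy as np

def minmod(a, b):
    """0 where the signs differ; otherwise the smaller-magnitude argument."""
    return 0.5 * (np.sign(a) + np.sign(b)) * np.minimum(np.abs(a), np.abs(b))

def median3(a, b, c):
    """Middle value of three arguments: median(a, b, c) = a + minmod(b-a, c-a)."""
    return a + minmod(b - a, c - a)

def limited_slopes(u):
    """Monotone cell slopes for piecewise-linear reconstruction:
    median(0, backward diff, forward diff), i.e., the minmod slope."""
    dm, dp = u[1:-1] - u[:-2], u[2:] - u[1:-1]
    return median3(np.zeros_like(dm), dm, dp)

u = np.array([0.0, 0.1, 0.5, 2.0, 2.1, 2.0])
print(limited_slopes(u))   # slope is zeroed at the local extremum
```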
Kroonblawd, Matthew P; Pietrucci, Fabio; Saitta, Antonino Marco; Goldman, Nir
2018-04-10
We demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol⁻¹.
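Force matching itself is a least-squares fit of model forces to reference forces over sampled configurations. A toy sketch of that objective structure with a synthetic pair force; the paper fits DFTB interactions to DFT data, whereas every name and value below is a stand-in:

```python
import numpy as np
from scipy.optimize import least_squares

# Generic force matching: fit parameters p of a cheap model so its forces
# reproduce reference (e.g., DFT) forces over sampled configurations.
rng = np.random.default_rng(4)
r = rng.uniform(2.0, 5.0, 500)                      # sampled pair distances

def model_force(p, r):
    """Toy repulsive pair force a * exp(-b * r)."""
    a, b = p
    return a * np.exp(-b * r)

# Synthetic "reference" forces with a little noise:
f_ref = model_force([800.0, 2.2], r) + 0.01 * rng.standard_normal(r.size)

fit = least_squares(lambda p: model_force(p, r) - f_ref, x0=[500.0, 2.0])
print(fit.x)                                        # recovers ~[800, 2.2]
```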
NASA Astrophysics Data System (ADS)
Franch, B.; Vermote, E.; Roger, J. C.; Skakun, S.; Becker-Reshef, I.; Justice, C. O.
2017-12-01
Accurate and timely crop yield forecasts are critical for making informed agricultural policies and investments, as well as increasing market efficiency and stability. In Becker-Reshef et al. (2010) and Franch et al. (2015) we developed an empirical generalized model for forecasting winter wheat yield. It is based on the relationship between the Normalized Difference Vegetation Index (NDVI) at the peak of the growing season and the Growing Degree Day (GDD) information extracted from NCEP/NCAR reanalysis data. These methods were applied to MODIS CMG data in Ukraine, the US and China with errors around 10%. However, the NDVI is saturated for yield values higher than 4 MT/ha. As a consequence, the model had to be re-calibrated in each country and the validation of the national yields showed low correlation coefficients. In this study we present a new model based on the extrapolation of the pure wheat signal (100% of wheat within the pixel) from MODIS data at 1km resolution and using the Difference Vegetation Index (DVI). The model has been applied to monitor the national yield of winter wheat in the United States and Ukraine from 2001 to 2016.
A fast complex integer convolution using a hybrid transform
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1978-01-01
It is shown that the Winograd transform can be combined with a complex integer transform over the Galois field GF(q-squared) to yield a new algorithm for computing the discrete cyclic convolution of complex number points. By this means a fast method for accurately computing the cyclic convolution of a sequence of complex numbers for long convolution lengths can be obtained. This new hybrid algorithm requires fewer multiplications than previous algorithms.
Conducting Meta-Analyses Based on p Values
van Aert, Robbie C. M.; Wicherts, Jelte M.; van Assen, Marcel A. L. M.
2016-01-01
Because of overwhelming evidence of publication bias in psychology, techniques to correct meta-analytic estimates for such bias are greatly needed. The methodology on which the p-uniform and p-curve methods are based has great promise for providing accurate meta-analytic estimates in the presence of publication bias. However, in this article, we show that in some situations, p-curve behaves erratically, whereas p-uniform may yield implausible estimates of negative effect size. Moreover, we show that (and explain why) p-curve and p-uniform result in overestimation of effect size under moderate-to-large heterogeneity and may yield unpredictable bias when researchers employ p-hacking. We offer hands-on recommendations on applying and interpreting results of meta-analyses in general and p-uniform and p-curve in particular. Both methods as well as traditional methods are applied to a meta-analysis on the effect of weight on judgments of importance. We offer guidance for applying p-uniform or p-curve using R and a user-friendly web application for applying p-uniform. PMID:27694466
Non-Contact Laser Based Ultrasound Evaluation of Canned Foods
NASA Astrophysics Data System (ADS)
Shelton, David
2005-03-01
Laser-Based Ultrasound detection was used to measure the velocity of compression waves transmitted through canned foods. Condensed broth, canned pasta, and non-condensed soup were evaluated in these experiments. Homodyne adaptive optics resulted in measurements that were more accurate than the traditional heterodyne method, as well as yielding a 10 dB gain in signal-to-noise ratio. A-Scans measured the velocity of ultrasound sent through the center of the can and were able to distinguish the quantity of foodstuff in its path, as well as distinguish between meat and potato. B-Scans investigated the heterogeneity of the sample’s contents. The evaluation of canned foods was completely non-contact and would be suitable for continuous monitoring in production. These results were verified by conducting the same experiments with a contact piezo transducer. Although the contact method yields a higher signal-to-noise ratio than the non-contact method, Laser-Based Ultrasound was able to detect surface waves the contact transducer could not.
An Investigation into the Relationship Between Distillate Yield and Stable Isotope Fractionation
NASA Astrophysics Data System (ADS)
Sowers, T.; Wagner, A. J.
2016-12-01
Recent breakthroughs in laser spectrometry have allowed for faster, more efficient analyses of stable isotopic ratios in water samples. Commercially available instruments from Los Gatos Research and Picarro allow users to quickly analyze a wide range of samples, from seawater to groundwater, with accurate isotope ratios of D/H to within ± 0.2 ‰ and ¹⁸O/¹⁶O to within ± 0.03 ‰. While these instruments have increased the efficiency of stable isotope laboratories, they come with some major limitations, such as not being able to analyze hypersaline waters. The Los Gatos Research Liquid Water Isotope Analyzer (LWIA) can accurately and consistently measure the stable isotope ratios in waters with salinities ranging from 0 to 4 grams per liter (0 to 40 parts per thousand). In order to analyze water samples with salinities greater than 4 grams per liter, however, it was necessary to develop a consistent method through which to reduce salinity while causing as little fractionation as possible. Using a consistent distillation method, predictable fractionation of δ¹⁸O and δ²H values was found to occur. This fractionation occurs according to a linear relationship with respect to the percent yield of the water in the sample. Using this method, samples with high salinity can be analyzed using laser spectrometry instruments, thereby enabling laboratories with Los Gatos or Picarro instruments to analyze those samples in house without having to dilute them using labor-intensive in-house standards or expensive premade standards.
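Because the fractionation varies linearly with percent yield, a correction can be fit on standards of known composition and then applied to unknowns. A minimal sketch with hypothetical calibration values:

```python
import numpy as np

# Fit the linear offset between measured and true delta values as a function
# of distillate percent yield, then invert it to correct unknown samples.
yield_pct = np.array([55.0, 65.0, 75.0, 85.0, 95.0])       # hypothetical standards
d18o_offset = np.array([-1.9, -1.4, -1.0, -0.5, -0.1])     # measured - true (per mil)

m, b = np.polyfit(yield_pct, d18o_offset, 1)               # offset = m*yield + b

def correct_d18o(measured, pct_yield):
    """Remove the yield-dependent fractionation from a measured delta value."""
    return measured - (m * pct_yield + b)

print(correct_d18o(measured=-8.6, pct_yield=70.0))
```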
NASA Astrophysics Data System (ADS)
Jain, M.; Singh, B.; Srivastava, A.; Lobell, D. B.
2015-12-01
Food security will be challenged over the upcoming decades due to increased food demand, natural resource degradation, and climate change. In order to identify potential solutions to increase food security in the face of these changes, tools that can rapidly and accurately assess farm productivity are needed. With this aim, we have developed generalizable methods to map crop yields at the field scale using a combination of satellite imagery and crop models, and implement this approach within Google Earth Engine. We use these methods to examine wheat yield trends in Northern India, which provides over 15% of the global wheat supply and where over 80% of farmers rely on wheat as a staple food source. In addition, we identify the extent to which farmers are shifting sow date in response to heat stress, and how well shifting sow date reduces the negative impacts of heat stress on yield. To identify local-level decision-making, we map wheat sow date and yield at a high spatial resolution (30 m) using Landsat satellite imagery from 1980 to the present. This unique dataset allows us to examine sow date decisions at the field scale over 30 years, and by relating these decisions to weather experienced over the same time period, we can identify how farmers learn and adapt cropping decisions based on weather through time.
Design space exploration for early identification of yield limiting patterns
NASA Astrophysics Data System (ADS)
Li, Helen; Zou, Elain; Lee, Robben; Hong, Sid; Liu, Square; Wang, JinYan; Du, Chunshan; Zhang, Recco; Madkour, Kareem; Ali, Hussein; Hsu, Danny; Kabeel, Aliaa; ElManhawy, Wael; Kwan, Joe
2016-03-01
In order to resolve the causality dilemma of which comes first, accurate design rules or real designs, this paper presents a flow for exploring the layout design space to identify, early in the design cycle, problematic patterns that will negatively affect yield. A new random layout generation method called the Layout Schema Generator (LSG) is reported in this paper; this method generates realistic design-like layouts without any design rule violation. Lithography simulation is then used on the generated layout to discover potentially problematic patterns (hotspots). These hotspot patterns are further explored by randomly inducing feature and context variations in the identified hotspots through a flow called the Hotspot Variation Flow (HSV). Simulation is then performed on this expanded set of layout clips to further identify more problematic patterns. These patterns are then classified into design forbidden patterns that should be included in the design rule checker and legal patterns that need better handling in the RET recipes and processes.
Simard, Valérie; Bernier, Annie; Bélanger, Marie-Ève; Carrier, Julie
2013-06-01
The aims were to investigate relations between children's attachment and sleep, using objective and subjective sleep measures, and, secondarily, to identify the most accurate actigraphy algorithm for toddlers. Fifty-five mother-child dyads took part in the Strange Situation Procedure (18 months) to assess attachment. At 2 years, children wore an Actiwatch for a 72-hr period, and their mothers completed a sleep diary. The high-sensitivity (80) and smoothed actigraphy algorithms provided the most plausible sleep data. Maternal diaries yielded longer estimated sleep duration and shorter wake duration at night and showed poor agreement with actigraphy. More resistant attachment behavior was not associated with actigraphy-assessed sleep, but was associated with longer nocturnal wake duration as estimated by mothers, and with a reduced actigraphy-diary discrepancy. Mothers of children with resistant attachment are more aware of their child's nocturnal awakenings. Researchers and clinicians should select the best sleep measurement method for their specific needs.
Soler, Miguel A; de Marco, Ario; Fortuna, Sara
2016-10-10
Nanobodies (VHHs) have proved to be valuable substitutes of conventional antibodies for molecular recognition. Their small size represents a precious advantage for rational mutagenesis based on modelling. Here we address the problem of predicting how Camelidae nanobody sequences can tolerate mutations by developing a simulation protocol based on all-atom molecular dynamics and whole-molecule docking. The method was tested on two sets of nanobodies characterized experimentally for their biophysical features. One set contained point mutations introduced to humanize a wild type sequence, in the second the CDRs were swapped between single-domain frameworks with Camelidae and human hallmarks. The method resulted in accurate scoring approaches to predict experimental yields and enabled to identify the structural modifications induced by mutations. This work is a promising tool for the in silico development of single-domain antibodies and opens the opportunity to customize single functional domains of larger macromolecules.
A study on the plasticity of soda-lime silica glass via molecular dynamics simulations.
Urata, Shingo; Sato, Yosuke
2017-11-07
Molecular dynamics (MD) simulations were applied to construct a plasticity model, which enables one to simulate deformations of soda-lime silica glass (SLSG) by using continuum methods. To model the plasticity, stress induced by uniaxial and a variety of biaxial deformations was measured by MD simulations. We found that the surfaces of yield and maximum stresses, which are evaluated from the equivalent stress-strain curves, are reasonably represented by the Mohr-Coulomb ellipsoid. Comparing a finite element model using the constructed plasticity model to a large scale atomistic model on a nanoindentation simulation of SLSG reveals that the empirical method is accurate enough to evaluate the SLSG mechanical responses. Furthermore, the effect of ion-exchange on the SLSG plasticity was examined by using MD simulations. As a result, it was demonstrated that the effects of the initial compressive stress on the yield and maximum stresses are anisotropic contrary to our expectations.
Identification of hydraulic conductivity structure in sand and gravel aquifers: Cape Cod data set
Eggleston, J.R.; Rojstaczer, S.A.; Peirce, J.J.
1996-01-01
This study evaluates commonly used geostatistical methods to assess reproduction of hydraulic conductivity (K) structure and sensitivity under limiting amounts of data. Extensive conductivity measurements from the Cape Cod sand and gravel aquifer are used to evaluate two geostatistical estimation methods, conditional mean as an estimate and ordinary kriging, and two stochastic simulation methods, simulated annealing and sequential Gaussian simulation. Our results indicate that for relatively homogeneous sand and gravel aquifers such as the Cape Cod aquifer, neither estimation methods nor stochastic simulation methods give highly accurate point predictions of hydraulic conductivity despite the high density of collected data. Although the stochastic simulation methods yielded higher errors than the estimation methods, the stochastic simulation methods yielded better reproduction of the measured ln(K) distribution and better reproduction of local contrasts in ln(K). The inability of kriging to reproduce high ln(K) values, as reaffirmed by this study, provides a strong instigation for choosing stochastic simulation methods to generate conductivity fields when performing fine-scale contaminant transport modeling. Results also indicate that estimation error is relatively insensitive to the number of hydraulic conductivity measurements so long as more than a threshold number of data are used to condition the realizations. This threshold occurs for the Cape Cod site when there are approximately three conductivity measurements per integral volume. The lack of improvement with additional data suggests that although fine-scale hydraulic conductivity structure is evident in the variogram, it is not accurately reproduced by geostatistical estimation methods. If the Cape Cod aquifer spatial conductivity characteristics are indicative of other sand and gravel deposits, then the results on predictive error versus data collection obtained here have significant practical consequences for site characterization. Heavily sampled sand and gravel aquifers, such as Cape Cod and Borden, may have large amounts of redundant data, while in more common real world settings, our results suggest that denser data collection will likely improve understanding of permeability structure.
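The variogram structure referred to above is computed directly from the data. A minimal sketch of an empirical semivariogram of ln(K) over isotropic lag bins, with random stand-in coordinates and values:

```python
import numpy as np

def semivariogram(coords, lnK, lags, tol):
    """Empirical semivariogram gamma(h) = 0.5 * mean[(z_i - z_j)^2] over
    point pairs whose separation falls within each lag bin h +/- tol."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    dz2 = (lnK[:, None] - lnK[None, :]) ** 2
    iu = np.triu_indices(len(lnK), k=1)       # each pair counted once
    d, dz2 = d[iu], dz2[iu]
    return np.array([0.5 * dz2[(d > h - tol) & (d <= h + tol)].mean()
                     for h in lags])

rng = np.random.default_rng(5)
coords = rng.uniform(0, 100, (300, 2))        # stand-in measurement locations
lnK = rng.standard_normal(300)                # stand-in ln(K) data
print(semivariogram(coords, lnK, lags=np.arange(5, 50, 5), tol=2.5))
```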
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu Shioumin; Kruijs, Robbert van de; Zoethout, Erwin
Ion sputtering yields for Ru, Mo, and Si under Ar⁺ ion bombardment in the near-threshold energy range have been studied using an in situ weight-loss method with a Kaufman ion source, Faraday cup, and quartz crystal microbalance. The results are compared to theoretical models. The accuracy of the in situ weight-loss method was verified by thickness-decrease measurements using grazing incidence x-ray reflectometry, and results from both methods are in good agreement. These results provide accurate data sets for theoretical modeling in the near-threshold sputter regime and are of relevance for (optical) surfaces exposed to plasmas, as, for instance, in extreme ultraviolet photolithography.
Blainey, Joan B.; Ferré, Ty P.A.; Cordova, Jeffrey T.
2007-01-01
Pumping of an unconfined aquifer can cause local desaturation detectable with high‐resolution gravimetry. A previous study showed that signal‐to‐noise ratios could be predicted for gravity measurements based on a hydrologic model. We show that although changes should be detectable with gravimeters, estimations of hydraulic conductivity and specific yield based on gravity data alone are likely to be unacceptably inaccurate and imprecise. In contrast, a transect of low‐quality drawdown data alone resulted in accurate estimates of hydraulic conductivity and inaccurate and imprecise estimates of specific yield. Combined use of drawdown and gravity data, or use of high‐quality drawdown data alone, resulted in unbiased and precise estimates of both parameters. This study is an example of the value of a staged assessment regarding the likely significance of a new measurement method or monitoring scenario before collecting field data.
NASA Astrophysics Data System (ADS)
Du, J.; Kimball, J. S.; Jones, L. A.; Watts, J. D.
2016-12-01
Climate is one of the key drivers of crop suitability and productivity in a region. The influence of climate and weather on the growing season determines the amount of time crops spend in each growth phase, which in turn impacts productivity and, more importantly, yields. Planting date can have a strong influence on yields, with earlier planting generally resulting in higher yields, a sensitivity that is also present in some crop models. Furthermore, planting dates are already changing and may continue to change, especially if longer growing seasons caused by future climate change drive early (or late) planting decisions. Crop models need an accurate method to predict planting date to allow these models to: 1) capture changes in crop management to adapt to climate change, 2) accurately model the timing of crop phenology, and 3) improve simulated crop influences on carbon, nutrient, energy, and water cycles. Previous studies have used climate as a predictor of planting date, which has advantages over fixed planting dates. For example, crop expansion and other changes in land use (e.g., due to changing temperature conditions) can be accommodated without additional model inputs. As such, a new methodology to implement a predictive planting date based on climate inputs is added to the Accelerated Climate Model for Energy (ACME) Land Model (ALM). The model considers two main sources of climate data important for planting: precipitation and temperature. This method expands the current temperature-threshold planting trigger and improves the estimated planting date in ALM. Furthermore, the precipitation metric for planting, which synchronizes the crop growing season with the wettest months, allows tropical crops to be introduced to the model. This presentation will demonstrate how the improved model enhances the ability of ALM to capture planting date compared with observations. More importantly, the impact of changing the planting date and introducing tropical crops will be explored, including impacts on productivity, yield, and influences on carbon and energy fluxes.
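A climate-driven planting trigger of this kind can be expressed as a simple rule over running means. A minimal sketch; the thresholds, window length, and synthetic forcing below are hypothetical, not ALM's actual criteria:

```python
import numpy as np

def planting_day(tmean, precip, t_thresh=10.0, p_thresh=20.0, win=10):
    """First day index whose trailing `win`-day mean temperature exceeds
    t_thresh (deg C) and whose trailing precipitation total exceeds
    p_thresh (mm); returns None if no day qualifies."""
    t_run = np.convolve(tmean, np.ones(win) / win, mode="valid")
    p_run = np.convolve(precip, np.ones(win), mode="valid")
    ok = np.flatnonzero((t_run > t_thresh) & (p_run > p_thresh))
    return int(ok[0]) + win - 1 if ok.size else None

# Synthetic mid-latitude forcing for one year:
rng = np.random.default_rng(8)
doy = np.arange(365)
tmean = 15 - 12 * np.cos(2 * np.pi * doy / 365) + rng.normal(0, 2, 365)
precip = rng.gamma(0.4, 6.0, 365)
print(planting_day(tmean, precip))   # roughly day 70 for this forcing
```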
Arrival-time picking method based on approximate negentropy for microseismic data
NASA Astrophysics Data System (ADS)
Li, Yue; Ni, Zhuo; Tian, Yanan
2018-05-01
Accurate and dependable picking of the first arrival time for microseismic data is an important part of microseismic monitoring, as it directly affects the results of post-processing analysis. This paper presents a new method based on approximate negentropy (AN) theory for microseismic arrival-time picking under conditions of low signal-to-noise ratio (SNR). According to the differences in information characteristics between microseismic data and random noise, an appropriate approximation of the negentropy function is selected to minimize the effect of SNR. At the same time, a weighted function of the differences between the maximum and minimum values of the AN spectrum curve is designed to obtain a proper threshold function. In this way, the signal and noise regions are distinguished and the first arrival time is picked accurately. To demonstrate the effectiveness of the AN method, we conducted experiments on a series of synthetic data with SNR from -1 dB to -12 dB and compared it with the previously published Akaike information criterion (AIC) and short/long time average ratio (STA/LTA) methods. Experimental results indicate that all three methods pick well when the SNR is between -1 dB and -8 dB. However, when the SNR is as low as -8 dB to -12 dB, the proposed AN method yields more accurate and stable picking results than the AIC and STA/LTA methods. Furthermore, application to real three-component microseismic data also shows that the new method is superior to the other two methods in accuracy and stability.
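A widely used approximation of negentropy (from the ICA literature) contrasts E[G(y)] with its Gaussian expectation for G(u) = log cosh(u); sliding it along the trace gives an AN spectrum whose departure from the noise-only level marks the onset. A rough sketch under that assumption; the paper's exact contrast and threshold function may differ:

```python
import numpy as np

def approx_negentropy(y):
    """Hyvarinen-style approximation J(y) ~ (E[G(y)] - E[G(nu)])^2 with
    G(u) = log cosh(u) and nu ~ N(0,1); y is standardized first."""
    y = (y - y.mean()) / (y.std() + 1e-12)
    g_gauss = 0.3746          # E[log cosh(nu)] for a standard normal nu
    return (np.mean(np.log(np.cosh(y))) - g_gauss) ** 2

def an_spectrum(trace, win):
    """Sliding-window approximate negentropy along a trace."""
    return np.array([approx_negentropy(trace[i:i + win])
                     for i in range(len(trace) - win)])

rng = np.random.default_rng(9)
trace = rng.standard_normal(2000)
trace[1200:] += np.sin(0.2 * np.arange(800)) * np.exp(-np.arange(800) / 300.0)
spec = an_spectrum(trace, win=200)
print(int(np.argmax(spec > 0.5 * spec.max())))   # crude onset pick near sample 1200
```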
Loomis, E N; Grim, G P; Wilde, C; Wilson, D C; Morgan, G; Wilke, M; Tregillis, I; Merrill, F; Clark, D; Finch, J; Fittinghoff, D; Bower, D
2010-10-01
Development of analysis techniques for neutron imaging at the National Ignition Facility is an important and difficult task for the detailed understanding of high-neutron-yield inertial confinement fusion implosions. Once developed, these methods must provide accurate images of the hot and cold fuels so that information about the implosion, such as symmetry and areal density, can be extracted. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations. In this article we present results of source reconstructions based on simulated images that test the method's effectiveness with regard to pinhole misalignment.
NASA Astrophysics Data System (ADS)
Cai, Y.
2017-12-01
Accurately forecasting crop yields has broad implications for economic trading, food production monitoring, and global food security. However, variation in environmental variables makes it challenging to model yields accurately, especially when the lack of highly accurate measurements creates difficulties in building models that succeed across space and time. In 2016, we developed a sequence of machine-learning based models forecasting end-of-season corn yields for the US at both the county and national levels. We combined machine learning algorithms in a hierarchical way, and used an understanding of physiological processes in temporal feature selection, to achieve high precision in our intra-season forecasts, including in very anomalous seasons. During the live run, we predicted the national corn yield within 1.40% of the final USDA number as early as August. In backtesting over the 2000-2015 period, our model predicts national yield within 2.69% of the actual yield on average by mid-August. At the county level, our model predicts 77% of the variation in final yield using data through the beginning of August and improves to 80% by the beginning of October, with the percentage of counties predicted within 10% of the average yield increasing from 68% to 73%. Further, the lowest errors are in the most significant producing regions, resulting in very high precision national-level forecasts. In addition, we identify the changes in important variables throughout the season, specifically early-season land surface temperature, and mid-season land surface temperature and vegetation index. For the 2017 season, we feed 2016 data into the training set, together with additional geospatial data sources, aiming to make the current model even more precise. We will show how our 2017 US corn yield forecasts converge in time, which factors affect the yield the most, and present our plans for 2018 model adjustments.
Molecular determinants of blood-brain barrier permeation.
Geldenhuys, Werner J; Mohammad, Afroz S; Adkins, Chris E; Lockman, Paul R
2015-01-01
The blood-brain barrier (BBB) is a microvascular unit which selectively regulates the permeability of drugs to the brain. With the rise in CNS drug targets and diseases, there is a need to be able to accurately predict a priori which compounds in a company database should be pursued for favorable properties. In this review, we will explore the different computational tools available today, as well as underpin these to the experimental methods used to determine BBB permeability. These include in vitro models and the in vivo models that yield the dataset we use to generate predictive models. Understanding of how these models were experimentally derived determines our accurate and predicted use for determining a balance between activity and BBB distribution.
NASA Technical Reports Server (NTRS)
Iyer, V.; Harris, J. E.
1987-01-01
The three-dimensional boundary-layer equations in the limit as the normal coordinate tends to infinity are called the surface Euler equations. The present paper describes an accurate method for generating edge conditions for three-dimensional boundary-layer codes using these equations. The inviscid pressure distribution is first interpolated to the boundary-layer grid. The surface Euler equations are then solved with this pressure field and a prescribed set of initial and boundary conditions to yield the velocities along the two surface coordinate directions. Results for typical wing and fuselage geometries are presented. The smoothness and accuracy of the edge conditions obtained are found to be superior to the conventional interpolation procedures.
NASA Technical Reports Server (NTRS)
1994-01-01
General Purpose Boundary Element Solution Technology (GPBEST) software employs the boundary element method of mechanical engineering analysis, as opposed to finite element. It is, according to one of its developers, 10 times faster in data preparation and more accurate than other methods. Its use results in less expensive products because the time between design and manufacturing is shortened. A commercial derivative of a NASA-developed computer code, it is marketed by Best Corporation to solve problems in stress analysis, heat transfer, fluid analysis and yielding and cracking of solids. Other applications include designing tractor and auto parts, household appliances and acoustic analysis.
Are artificial opals non-close-packed fcc structures?
NASA Astrophysics Data System (ADS)
García-Santamaría, F.; Braun, P. V.
2007-06-01
The authors report a simple experimental method to accurately measure the volume fraction of artificial opals. The results are modeled using several methods, and they find that some of the most common methods yield very inaccurate results. Both finite size and substrate effects play an important role in calculations of the volume fraction. The experimental results show that the interstitial pore volume is 4%-15% larger than expected for close-packed structures. Consequently, calculations performed in previous work relating the amount of material synthesized in the opal interstices with the optical properties may need revision, especially in the case of high refractive index materials.
Precise calculation of the local pressure tensor in Cartesian and spherical coordinates in LAMMPS
NASA Astrophysics Data System (ADS)
Nakamura, Takenobu; Kawamoto, Shuhei; Shinoda, Wataru
2015-05-01
An accurate and efficient algorithm for calculating the 3D pressure field has been developed and implemented in the open-source molecular dynamics package, LAMMPS. Additionally, an algorithm to compute the pressure profile along the radial direction in spherical coordinates has also been implemented. The latter is particularly useful for systems showing a spherical symmetry such as micelles and vesicles. These methods yield precise pressure fields based on the Irving-Kirkwood contour integration and are particularly useful for biomolecular force fields. The present methods are applied to several systems including a buckled membrane and a vesicle.
Simulating large-scale crop yield by using perturbed-parameter ensemble method
NASA Astrophysics Data System (ADS)
Iizumi, T.; Yokozawa, M.; Sakurai, G.; Nishimori, M.
2010-12-01
One concerning issue for food security under a changing climate is predicting the inter-annual variation of crop production induced by climate extremes and modulated climate. To secure the food supply for a growing world population, a methodology that can accurately predict crop yield on a large scale is needed. However, in developing a process-based large-scale crop model at the scale of general circulation models (GCMs), 100 km in latitude and longitude, researchers encounter difficulties with the spatial heterogeneity of available information on crop production, such as cultivated cultivars and management. This study proposed an ensemble-based simulation method that uses a process-based crop model and a systematic parameter perturbation procedure, taking maize in the U.S., China, and Brazil as examples. The crop model was developed by modifying the fundamental structure of the Soil and Water Assessment Tool (SWAT) to incorporate the effect of heat stress on yield. We call the new model PRYSBI: the Process-based Regional-scale Yield Simulator with Bayesian Inference. The posterior probability density function (PDF) of 17 parameters, which represents the crop- and grid-specific features of the crop and its uncertainty under the given data, was estimated by Bayesian inversion analysis. We then took 1500 ensemble members of simulated yield values, based on parameter sets sampled from the posterior PDF, to describe yearly changes of the yield, i.e. the perturbed-parameter ensemble method. The ensemble median for 27 years (1980-2006) was compared with data aggregated from county yields. On a country scale, the ensemble median of the simulated yield showed a good correspondence with the reported yield: the Pearson's correlation coefficient is over 0.6 for all countries. On a grid scale, the correspondence remains high in most grids regardless of the country. However, the model showed comparatively low reproducibility in slope areas, such as around the Rocky Mountains in South Dakota, around the Great Xing'anling Mountains in Heilongjiang, and around the Brazilian Plateau. Because local climate conditions vary widely in complex terrain, such as mountain slopes, the GCM grid-scale weather inputs are likely one of the major sources of error. The results of this study highlight the benefits of the perturbed-parameter ensemble method in simulating crop yield on a GCM grid scale: (1) the posterior PDF of the parameters quantifies the uncertainty of the crop model's parameter values associated with local crop production; (2) the method can explicitly account for parameter uncertainty in the crop model simulations; (3) the method achieves a Monte Carlo approximation of the probability of sub-grid scale yield, accounting for the nonlinear response of crop yield to weather and management; (4) the method is therefore appropriate for aggregating the simulated sub-grid scale yields to a grid-scale yield, which may explain the model's high performance in capturing inter-annual variation of yield.
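A hedged sketch of the ensemble step described above (the crop model below is a toy placeholder, not PRYSBI/SWAT):

# Perturbed-parameter ensemble: sample parameter sets from a posterior and take the median.
import numpy as np

def crop_model(params, weather):
    # toy placeholder: linear rain response with a quadratic heat-stress penalty
    base, sens, t_crit = params
    return base + sens * weather["rain"] - max(weather["temp"] - t_crit, 0.0) ** 2

rng = np.random.default_rng(1)
posterior = rng.normal(loc=[6.0, 0.02, 30.0], scale=[0.5, 0.005, 1.0], size=(1500, 3))
weather = {"rain": 450.0, "temp": 32.0}

ensemble = np.array([crop_model(p, weather) for p in posterior])
print("ensemble median yield:", np.median(ensemble))
print("90% interval:", np.percentile(ensemble, [5, 95]))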
Accurate formula for gaseous transmittance in the infrared.
Gibson, G A; Pierluissi, J H
1971-07-01
By considering the infrared transmittance model of Zachor as the equation for an elliptic cone, a quadratic generalization is proposed that yields significantly greater computational accuracy. The strong-band parameters are obtained by iterative nonlinear curve-fitting methods using a digital computer. The remaining parameters are determined with a linear least-squares technique and a weighting function that yields better results than the one adopted by Zachor. The model is applied to CO₂ over intervals of 50 cm⁻¹ between 550 cm⁻¹ and 9150 cm⁻¹ and to water vapor over similar intervals between 1050 cm⁻¹ and 9950 cm⁻¹, with mean rms deviations from the original data being 2.30 × 10⁻³ and 1.83 × 10⁻³, respectively.
Meyer, Andreas L S; Wiens, John J
2018-01-01
Estimates of diversification rates are invaluable for many macroevolutionary studies. Recently, an approach called BAMM (Bayesian Analysis of Macro-evolutionary Mixtures) has become widely used for estimating diversification rates and rate shifts. At the same time, several articles have concluded that estimates of net diversification rates from the method-of-moments (MS) estimators are inaccurate. Yet, no studies have compared the ability of these two methods to accurately estimate clade diversification rates. Here, we use simulations to compare their performance. We found that BAMM yielded relatively weak relationships between true and estimated diversification rates. This occurred because BAMM underestimated the number of rate shifts across each tree and assigned high rates to small clades with low rates. Errors in both speciation and extinction rates contributed to these errors, showing that using BAMM to estimate only speciation rates is also problematic. In contrast, the MS estimators (particularly using stem group ages) yielded stronger relationships between true and estimated diversification rates, by roughly twofold. Furthermore, the MS approach remained relatively accurate when diversification rates were heterogeneous within clades, despite the widespread assumption that it requires constant rates within clades. Overall, we caution that BAMM may be problematic for estimating diversification rates and rate shifts. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
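For context, the method-of-moments estimators discussed here have simple closed forms; with n extant species, stem age t_s, and relative extinction fraction \varepsilon = \mu/\lambda, a commonly used stem-age estimator (a hedged restatement of the Magallón-Sanderson form, not quoted from this paper) is

\hat{r} = \frac{1}{t_s}\,\ln\!\left[\,n(1-\varepsilon) + \varepsilon\,\right],

which reduces to \hat{r} = \ln(n)/t_s at \varepsilon = 0; the crown-age analogue at \varepsilon = 0 is \hat{r} = (\ln n - \ln 2)/t_c.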
Shackelford, S D; Wheeler, T L; Koohmaraie, M
2003-01-01
The present experiment was conducted to evaluate the ability of the U.S. Meat Animal Research Center's beef carcass image analysis system to predict calculated yield grade, longissimus muscle area, preliminary yield grade, adjusted preliminary yield grade, and marbling score under commercial beef processing conditions. In two commercial beef-processing facilities, image analysis was conducted on 800 carcasses on the beef-grading chain immediately after the conventional USDA beef quality and yield grades were applied. Carcasses were blocked by plant and observed calculated yield grade. The carcasses were then separated, with 400 carcasses assigned to a calibration data set that was used to develop regression equations, and the remaining 400 carcasses assigned to a prediction data set used to validate the regression equations. Prediction equations, which included image analysis variables and hot carcass weight, accounted for 90, 88, 90, 88, and 76% of the variation in calculated yield grade, longissimus muscle area, preliminary yield grade, adjusted preliminary yield grade, and marbling score, respectively, in the prediction data set. In comparison, the official USDA yield grade as applied by online graders accounted for 73% of the variation in calculated yield grade. The technology described herein could be used by the beef industry to more accurately determine beef yield grades; however, this system does not provide an accurate enough prediction of marbling score to be used without USDA grader interaction for USDA quality grading.
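For reference, the calculated yield grade mentioned above conventionally follows the standard USDA equation (a textbook restatement, not quoted from the paper):

\mathrm{YG} = 2.50 + 2.50\,\mathrm{AFT} + 0.20\,\mathrm{KPH} + 0.0038\,\mathrm{HCW} - 0.32\,\mathrm{LMA},

with AFT the adjusted fat thickness (in), KPH the kidney-pelvic-heart fat (%), HCW the hot carcass weight (lb), and LMA the longissimus muscle area (sq in).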
Shearlet-based edge detection: flame fronts and tidal flats
NASA Astrophysics Data System (ADS)
King, Emily J.; Reisenhofer, Rafael; Kiefer, Johannes; Lim, Wang-Q.; Li, Zhen; Heygster, Georg
2015-09-01
Shearlets are wavelet-like systems which are better suited for handling geometric features in multi-dimensional data than traditional wavelets. A novel method for edge and line detection which is in the spirit of phase congruency but is based on a complex shearlet transform will be presented. This approach to detection yields an approximate tangent direction of detected discontinuities as a byproduct of the computation, which then yields local curvature estimates. Two applications of the edge detection method will be discussed. First, the tracking and classification of flame fronts is a critical component of research in technical thermodynamics. Quite often, the flame fronts are transient or weak and the images are noisy. The standard methods used in the field for the detection of flame fronts do not handle such data well. Fortunately, using the shearlet-based edge measure yields good results as well as an accurate approximation of local curvature. Furthermore, a modification of the method will yield line detection, which is important for certain imaging modalities. Second, the Wadden tidal flats are a biodiverse region along the North Sea coast. One approach to surveying the delicate region and tracking the topographical changes is to use pre-existing Synthetic Aperture Radar (SAR) images. Unfortunately, SAR data suffers from multiplicative noise as well as sensitivity to environmental factors. The first large-scale mapping project of that type showed good results but only with a tremendous amount of manual interaction because there are many edges in the data which are not boundaries of the tidal flats but are edges of features like fields or islands. Preliminary results will be presented.
Generalized Buneman Pruning for Inferring the Most Parsimonious Multi-state Phylogeny
NASA Astrophysics Data System (ADS)
Misra, Navodit; Blelloch, Guy; Ravi, R.; Schwartz, Russell
Accurate reconstruction of phylogenies remains a key challenge in evolutionary biology. Most biologically plausible formulations of the problem are formally NP-hard, with no known efficient solution. The standard in practice is fast heuristic methods that are empirically known to work very well in general, but can yield results arbitrarily far from optimal. Practical exact methods, which yield exponential worst-case running times but generally much better times in practice, provide an important alternative. We report progress in this direction by introducing a provably optimal method for the weighted multi-state maximum parsimony phylogeny problem. The method is based on generalizing the notion of the Buneman graph, a construction key to efficient exact methods for binary sequences, so as to apply to sequences with arbitrary finite numbers of states with arbitrary state transition weights. We implement an integer linear programming (ILP) method for the multi-state problem using this generalized Buneman graph and demonstrate that the resulting method is able to solve data sets that are intractable for prior exact methods in run times comparable with popular heuristics. Our work provides the first method for provably optimal maximum parsimony phylogeny inference that is practical for multi-state data sets of more than a few characters.
Evaluating the sensitivity of agricultural model performance to different climate inputs
Glotter, Michael J.; Moyer, Elisabeth J.; Ruane, Alex C.; Elliott, Joshua W.
2017-01-01
Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled to observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections, but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely-used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources – reanalysis, reanalysis bias-corrected with observed climate, and a control dataset – and compared to observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by un-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. However, some issues persist for all choices of climate inputs: crop yields appear oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves. PMID:29097985
Magney, Troy S; Frankenberg, Christian; Fisher, Joshua B; Sun, Ying; North, Gretchen B; Davis, Thomas S; Kornfeld, Ari; Siebke, Katharina
2017-09-01
Recent advances in the retrieval of Chl fluorescence from space using passive methods (solar-induced Chl fluorescence, SIF) promise improved mapping of plant photosynthesis globally. However, unresolved issues related to the spatial, spectral, and temporal dynamics of vegetation fluorescence complicate our ability to interpret SIF measurements. We developed an instrument to measure leaf-level gas exchange simultaneously with pulse-amplitude modulation (PAM) and spectrally resolved fluorescence over the same field of view, allowing us to investigate the relationships between active and passive fluorescence with photosynthesis. Strongly correlated, slope-dependent relationships were observed between measured spectra across all wavelengths (F_λ, 670-850 nm) and PAM fluorescence parameters under a range of actinic light intensities (steady-state fluorescence yields, F_t) and saturation pulses (maximal fluorescence yields, F_m). Our results suggest that this method can accurately reproduce the full Chl emission spectra, capturing the spectral dynamics associated with changes in the yields of fluorescence, photochemical (ΦPSII), and nonphotochemical quenching (NPQ). We discuss how this method may establish a link between photosynthetic capacity and the mechanistic drivers of wavelength-specific fluorescence emission during changes in environmental conditions (light, temperature, humidity). Our emphasis is on future research directions linking spectral fluorescence to photosynthesis, ΦPSII, and NPQ. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.
Geologic Carbon Sequestration Leakage Detection: A Physics-Guided Machine Learning Approach
NASA Astrophysics Data System (ADS)
Lin, Y.; Harp, D. R.; Chen, B.; Pawar, R.
2017-12-01
One of the risks of large-scale geologic carbon sequestration is the potential migration of fluids out of the storage formations. Accurate and fast detection of this fluid migration is not only important but also challenging, due to the large subsurface uncertainty and complex governing physics. Traditional leakage detection and monitoring techniques rely on geophysical observations, including pressure. However, the accuracy of these methods is limited because the information they provide is indirect and requires expert interpretation, yielding inaccurate estimates of leakage rates and locations. In this work, we develop a novel machine-learning technique based on support vector regression to effectively and efficiently predict the leakage locations and leakage rates from a limited number of pressure observations. In contrast to conventional data-driven approaches, which can usually be seen as "black box" procedures, we develop a physics-guided machine learning method that incorporates the governing physics into the learning procedure. To validate the performance of our proposed leakage detection method, we apply it to both 2D and 3D synthetic subsurface models. Our novel CO2 leakage detection method has shown high detection accuracy in the example problems.
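A minimal sketch of the regression step, assuming synthetic monitoring-well data (the physics-guided features of the paper are not reproduced here):

# Support vector regression mapping pressure observations to a leakage rate (toy data).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
n_obs, n_wells = 500, 8
leak_rate = rng.uniform(0.1, 5.0, size=n_obs)                    # hypothetical kg/s
pressure = leak_rate[:, None] * rng.uniform(0.5, 1.5, (1, n_wells)) \
           + rng.normal(scale=0.05, size=(n_obs, n_wells))       # well responses + noise

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(pressure[:400], leak_rate[:400])
print("mean abs error:", np.abs(model.predict(pressure[400:]) - leak_rate[400:]).mean())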
EBIC: an evolutionary-based parallel biclustering algorithm for pattern discovery.
Orzechowski, Patryk; Sipper, Moshe; Huang, Xiuzhen; Moore, Jason H
2018-05-22
Biclustering algorithms are commonly used for gene expression data analysis. However, accurate identification of meaningful structures is very challenging, and state-of-the-art methods are incapable of discovering, with high accuracy, the different patterns of high biological relevance. In this paper a novel biclustering algorithm based on evolutionary computation, a subfield of artificial intelligence (AI), is introduced. The method, called EBIC, aims to detect order-preserving patterns in complex data. EBIC is capable of discovering multiple complex patterns with unprecedented accuracy in real gene expression datasets. It is also one of the very few biclustering methods designed for parallel environments with multiple graphics processing units (GPUs). We demonstrate that EBIC greatly outperforms state-of-the-art biclustering methods, in terms of recovery and relevance, on both synthetic and genetic datasets. EBIC also yields results over 12 times faster than the most accurate reference algorithms. EBIC source code is available on GitHub at https://github.com/EpistasisLab/ebic. Correspondence and requests for materials should be addressed to P.O. (email: patryk.orzechowski@gmail.com) and J.H.M. (email: jhmoore@upenn.edu). Supplementary Data with results of analyses and additional information on the method is available at Bioinformatics online.
[Comparison of Z-Pinch X-Ray Yield Measurement Techniques].
Li, Mo; Wang, Liang-ping; Sheng, Liang; Lu, Yi
2015-03-01
Resistive bolometers and scintillator detection systems are the two main Z-pinch X-ray yield measurement techniques, based on different diagnostic principles. Comparing the results from the two methods can help increase the precision of X-ray yield measurements. Experiments with different load materials and shapes were carried out on the "QiangGuang-I" facility. For Al wire arrays, X-ray yields measured by the two techniques were largely consistent. However, for insulating-coated W wire arrays, X-ray yields from the bolometer changed with load parameters while data from the scintillator detection system hardly changed. Simulation and analysis support the following conclusions: (1) The scintillator detection system is much more sensitive to low-energy X-ray photons, and its spectral response is wider than that of the resistive bolometer; thus, results from the former method are always larger than the latter. (2) The responses of the two systems are both flat to Al plasma radiation; thus, their results are consistent for Al wire array loads. (3) Radiation from planar W wire arrays is mainly composed of sub-keV soft X-rays. X-ray yields measured by the bolometer are expected to be accurate because the nickel foil absorbs almost all of the soft X-rays. (4) By contrast, using planar W wire arrays, data from the scintillator detection system hardly change with load parameters. A possible explanation is that as the distance between wires increases, the plasma temperature at stagnation drops and the spectrum moves toward the soft X-ray region. The scintillator is much more sensitive to soft X-rays below 200 eV; thus, although the total X-ray yield decreases with large-diameter loads, the signal from the scintillator detection system is almost the same. (5) Both techniques are affected by electron beams produced by the loads.
NASA Astrophysics Data System (ADS)
Vidic, Nataša. J.; TenPas, Jeff D.; Verosub, Kenneth L.; Singer, Michael J.
2000-08-01
Magnetic susceptibility variations in the Chinese loess/palaeosol sequences have been used extensively for palaeoclimatic interpretations. The magnetic signal of these sequences must be divided into lithogenic and pedogenic components because the palaeoclimatic record is primarily reflected in the pedogenic component. In this paper we compare two methods for separating the pedogenic and lithogenic components of the magnetic susceptibility signal: the citrate-bicarbonate-dithionite (CBD) extraction procedure, and a mixing analysis. Both methods yield good estimates of the pedogenic component, especially for the palaeosols. The CBD procedure underestimates the lithogenic component and overestimates the pedogenic component. The magnitude of this effect is moderately high in loess layers but almost negligible in palaeosols. The mixing model overestimates the lithogenic component and underestimates the pedogenic component. Both methods can be adjusted to yield better estimates of both components. The lithogenic susceptibility, as determined by either method, suggests that palaeoclimatic interpretations based only on total susceptibility will be in error and that a single estimate of the average lithogenic susceptibility is not an accurate basis for adjusting the total susceptibility. A long-term decline in lithogenic susceptibility with depth in the section suggests more intense or prolonged periods of weathering associated with the formation of the older palaeosols. The CBD procedure provides the most comprehensive information on the magnitude of the components and magnetic mineralogy of loess and palaeosols. However, the mixing analysis provides a sensitive, rapid, and easily applied alternative to the CBD procedure. A combination of the two approaches provides the most powerful and perhaps the most accurate way of separating the magnetic susceptibility components.
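The underlying decomposition both methods target can be written, in hedged simplified form, as

\chi_{\mathrm{total}} = \chi_{\mathrm{lith}} + \chi_{\mathrm{ped}},

with the CBD procedure estimating \chi_{\mathrm{ped}} as the susceptibility removed by extraction, and the mixing analysis modelling each measured sample as a linear combination of lithogenic and pedogenic end members.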
Evaluation of digestion methods for analysis of trace metals in mammalian tissues and NIST 1577c.
Binder, Grace A; Metcalf, Rainer; Atlas, Zachary; Daniel, Kenyon G
2018-02-15
Digestion techniques for ICP analysis have been poorly studied for biological samples. This report describes an optimized method for analysis of trace metals that can be used across a variety of sample types. Digestion methods were tested and optimized with the analysis of trace metals in cancerous as compared to normal tissue as the end goal. Anthropological, forensic, oncological and environmental research groups can employ this method reasonably cheaply and safely whilst still being able to compare between laboratories. We examined combined HNO₃ and H₂O₂ digestion at 170 °C for human, porcine and bovine samples, whether frozen, fresh or lyophilized powder. Little discrepancy is found between microwave digestion and PFA Teflon pressure vessels. The elements of interest (Cu, Zn, Fe and Ni) yielded consistently higher and more accurate values on standard reference material than samples heated to 75 °C or samples that utilized HNO₃ alone. Use of H₂SO₄ does not improve homogeneity of the sample and lowers precision during ICP analysis. High temperature digestions (>165 °C) using a combination of HNO₃ and H₂O₂ as outlined are proposed as a standard technique for all mammalian tissues, specifically human tissues, and yield greater than 300% higher values than samples digested at 75 °C regardless of the acid or acid combinations used. The proposed standardized technique is designed to accurately quantify potential discrepancies in metal loads between cancerous and healthy tissues and applies to numerous tissue studies requiring quick, effective and safe digestions. Copyright © 2017 Elsevier Inc. All rights reserved.
Growth and yield in Eucalyptus globulus
James A. Rinehart; Richard B. Standiford
1983-01-01
A study of the major Eucalyptus globulus stands throughout California conducted by Woodbridge Metcalf in 1924 provides a complete and accurate data set for generating variable site-density yield models. Two models were developed using linear regression techniques. Model I depicts a linear relationship between age and yield best used for stands between five and fifteen...
A cross-correlation-based estimate of the galaxy luminosity function
NASA Astrophysics Data System (ADS)
van Daalen, Marcel P.; White, Martin
2018-06-01
We extend existing methods for using cross-correlations to derive redshift distributions for photometric galaxies, without using photometric redshifts. The model presented in this paper simultaneously yields highly accurate and unbiased redshift distributions and, for the first time, redshift-dependent luminosity functions, using only clustering information and the apparent magnitudes of the galaxies as input. In contrast to many existing techniques for recovering unbiased redshift distributions, the output of our method is not degenerate with the galaxy bias b(z), which is achieved by modelling the shape of the luminosity bias. We successfully apply our method to a mock galaxy survey and discuss improvements to be made before applying our model to real data.
Blood vessels segmentation of hatching eggs based on fully convolutional networks
NASA Astrophysics Data System (ADS)
Geng, Lei; Qiu, Ling; Wu, Jun; Xiao, Zhitao
2018-04-01
FCNs, trained end-to-end, pixels-to-pixels, predict a result for each pixel and have been widely used for semantic segmentation. To realize blood vessel segmentation of hatching eggs, a method based on FCNs is proposed in this paper. The training datasets are composed of patches extracted from very few images to augment the data. The network combines lower-layer features with deconvolution to enable precise segmentation. The proposed method avoids the problem that training deep networks requires large-scale samples. Experimental results on hatching eggs demonstrate that this method yields more accurate segmentation outputs than previous approaches. It provides a convenient reference for subsequent fertility detection.
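An illustrative encoder-decoder in the FCN spirit (layer sizes, patch size, and the absence of skip connections are assumptions of this sketch, not the paper's architecture):

# Tiny FCN-style network: per-pixel logits for binary vessel segmentation.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 1/2 resolution
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 1/4 resolution
        )
        self.dec = nn.Sequential(                 # deconvolution restores resolution
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

net = TinyFCN()
patches = torch.randn(4, 1, 64, 64)               # image patches, as used for training
print(net(patches).shape)                         # torch.Size([4, 1, 64, 64])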
Constructing a Watts-Strogatz network from a small-world network with symmetric degree distribution.
Menezes, Mozart B C; Kim, Seokjin; Huang, Rongbing
2017-01-01
Though the small-world phenomenon is widespread in many real networks, it is still challenging to replicate a large network at full scale for further study of its structure and dynamics when sufficient data are not readily available. We propose a method to construct a Watts-Strogatz network using a sample from a small-world network with symmetric degree distribution. Our method yields an estimated degree distribution which fits closely with that of a Watts-Strogatz network and leads to accurate estimates of network metrics such as clustering coefficient and degree of separation. We observe that the accuracy of our method increases as network size increases.
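A short usage sketch with networkx (parameter values are illustrative; the paper's estimation-from-sample step is not reproduced):

# Build a connected Watts-Strogatz graph and check small-world metrics.
import networkx as nx

n, k, p = 1000, 6, 0.1                  # nodes, nearest neighbours, rewiring probability
G = nx.connected_watts_strogatz_graph(n, k, p, seed=42)

print("clustering coefficient:", nx.average_clustering(G))
print("degree of separation:", nx.average_shortest_path_length(G))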
NASA Astrophysics Data System (ADS)
Tsalamengas, John L.
2018-07-01
We study plane-wave electromagnetic scattering by radially and strongly inhomogeneous dielectric cylinders at oblique incidence. The method of analysis relies on an exact reformulation of the underlying field equations as a first-order 4 × 4 system of differential equations and on the ability to restate the associated initial-value problem in the form of a system of coupled linear Volterra integral equations of the second kind. The integral equations so derived are discretized via a sophisticated variant of the Nyström method. The proposed method yields results accurate up to machine precision without relying on approximations. Numerical results and case studies ably demonstrate the efficiency and high accuracy of the algorithms.
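For reference, a linear Volterra integral equation of the second kind has the generic form

y(t) = f(t) + \int_0^t K(t,s)\, y(s)\, ds,

and a Nyström discretization replaces the integral with a quadrature rule, turning the equation into a linear system for the unknown values y(t_i) at the quadrature nodes.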
NASA Astrophysics Data System (ADS)
Wang, Xu; Le, Anh-Thu; Zhou, Zhaoyan; Wei, Hui; Lin, C. D.
2017-08-01
We provide a unified theoretical framework for recently emerging experiments that retrieve fixed-in-space molecular information through time-domain rotational coherence spectroscopy. Unlike a previous approach by Makhija et al. (V. Makhija et al., arXiv:1611.06476), our method can be applied to the retrieval of both real-valued (e.g., ionization yield) and complex-valued (e.g., induced dipole moment) molecular response information. It is also a direct retrieval method without using iterations. We also demonstrate that experimental parameters, such as the fluence of the aligning laser pulse and the rotational temperature of the molecular ensemble, can be quite accurately determined using a statistical method.
NASA Astrophysics Data System (ADS)
Tirani, M. D.; Maleki, M.; Kajani, M. T.
2014-11-01
A numerical method for solving the Lane-Emden equations of the polytropic index α when 4.75 ≤ α ≤ 5 is introduced. The method is based upon nonclassical Gauss-Radau collocation points and Freud type weights. Nonclassical orthogonal polynomials, nonclassical Radau points and weighted interpolation are introduced and are utilized in the interval [0,1]. A smooth, strictly monotonic transformation is used to map the infinite domain x ∈ [0,∞) onto a half-open interval t ∈ [0,1). The resulting problem on the finite interval is then transcribed to a system of nonlinear algebraic equations using collocation. The method is easy to implement and yields very accurate results.
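For reference, the Lane-Emden equation of polytropic index \alpha reads

\frac{1}{x^2}\frac{d}{dx}\!\left(x^2\frac{d\theta}{dx}\right) + \theta^{\alpha} = 0, \qquad \theta(0) = 1, \quad \theta'(0) = 0,

posed on x \in [0,\infty), which is what motivates the strictly monotonic map onto t \in [0,1) described above.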
Direct transesterification of fresh microalgal cells.
Liu, Jiao; Liu, Yanan; Wang, Haitao; Xue, Song
2015-01-01
Transesterification of lipids is a vital step in both biodiesel production and fatty acid analysis. By comparing the yields and fatty acid profiles obtained from microalgal oil and dry microalgal cells, the reliability of the method for transesterification of micro-scale samples was tested. The minimum amount of microalgal cells needed for accurate analysis was found to be approximately 300 μg dry cells. This direct transesterification method for fresh cells was applied to eight microalgal species, and the results indicate that the efficiency of the developed method is identical to that of the conventional method, except for Spirulina, whose lipid content is very low, which means the total lipid content should be considered. Copyright © 2014 Elsevier Ltd. All rights reserved.
Hrabok, Marianne; Brooks, Brian L; Fay-McClymont, Taryn B; Sherman, Elisabeth M S
2014-01-01
The purpose of this article was to investigate the accuracy of the WISC-IV short forms in estimating Full Scale Intelligence Quotient (FSIQ) and General Ability Index (GAI) in pediatric epilepsy. One hundred and four children with epilepsy completed the WISC-IV as part of a neuropsychological assessment at a tertiary-level children's hospital. The clinical accuracy of eight short forms was assessed in two ways: (a) accuracy within +/- 5 index points of FSIQ and (b) the clinical classification rate according to Wechsler conventions. The sample was further subdivided into low FSIQ (≤ 80) and high FSIQ (> 80). All short forms were significantly correlated with FSIQ. Seven-subtest (Crawford et al. [2010] FSIQ) and 5-subtest (BdSiCdVcLn) short forms yielded the highest clinical accuracy rates (77%-89%). Overall, a 2-subtest (VcMr) short form yielded the lowest clinical classification rates for FSIQ (35%-63%). The short form yielding the most accurate estimate of GAI was VcSiMrBd (73%-84%). Short forms show promise as useful estimates. The 7-subtest (Crawford et al., 2010) and 5-subtest (BdSiVcLnCd) short forms yielded the most accurate estimates of FSIQ. VcSiMrBd yielded the most accurate estimate of GAI. Clinical recommendations are provided for use of short forms in pediatric epilepsy.
Tree decomposition based fast search of RNA structures including pseudoknots in genomes.
Song, Yinglei; Liu, Chunmei; Malmberg, Russell; Pan, Fangfang; Cai, Liming
2005-01-01
Searching genomes for RNA secondary structure with computational methods has become an important approach to the annotation of non-coding RNAs. However, due to the lack of efficient algorithms for accurate RNA structure-sequence alignment, computer programs capable of fast and effective genome search for RNA secondary structures have not been available. In this paper, a novel RNA structure profiling model is introduced based on the notion of a conformational graph to specify the consensus structure of an RNA family. Tree decomposition yields a small tree width t for such conformational graphs (e.g., t = 2 for stem loops and only a slight increase for pseudoknots). Within this modelling framework, the optimal alignment of a sequence to the structure model corresponds to finding a maximum-valued isomorphic subgraph and consequently can be accomplished through dynamic programming on the tree decomposition of the conformational graph in time O(k^t N^2), where k is a small parameter and N is the size of the profiled RNA structure. Experiments show that applying the alignment algorithm to genome search yields the same search accuracy as methods based on a Covariance model, with a significant reduction in computation time. In particular, very accurate searches for tmRNAs in bacterial genomes and for telomerase RNAs in yeast genomes can be accomplished in days, as opposed to the months required by other methods. The tree decomposition based search tool is free upon request and can be downloaded at http://w.uga.edu/RNA-informatics/software/index.php.
Application of Pyrosequencing® in Food Biodefense.
Amoako, Kingsley Kwaku
2015-01-01
The perpetration of a bioterrorism attack poses a significant risk for public health with potential socioeconomic consequences. It is imperative that we possess reliable assays for the rapid and accurate identification of biothreat agents to make rapid risk-informed decisions on emergency response. The development of advanced methodologies for the detection of biothreat agents has been evolving rapidly since the release of the anthrax spores in the mail in 2001, and recent advances in detection and identification techniques could prove to be an essential component in the defense against biological attacks. Sequence-based approaches such as Pyrosequencing®, which has the capability to determine short DNA stretches in real time using biotinylated PCR amplicons, have potential biodefense applications. Using markers from the virulence plasmids and chromosomal regions, my laboratory has demonstrated the power of this technology in the rapid, specific, and sensitive detection of B. anthracis spores and Yersinia pestis in food. These are the first applications for the detection of the two organisms in food. Furthermore, my lab has developed a rapid assay to characterize the antimicrobial resistance (AMR) gene profiles for Y. pestis using Pyrosequencing. Pyrosequencing is completed in about 60 min (following PCR amplification) and yields accurate and reliable results with an added layer of confidence, thus enabling rapid risk-informed decisions to be made. A typical run yields 40-84 bp reads with 94-100% identity to the expected sequence. It also provides a rapid method for determining the AMR profile as compared to the conventional plate method which takes several days. The method described is proposed as a novel detection system for potential application in food biodefense.
Laurens, Lieve M L; Van Wychen, Stefanie; McAllister, Jordan P; Arrowsmith, Sarah; Dempster, Thomas A; McGowen, John; Pienkos, Philip T
2014-05-01
Accurate compositional analysis in biofuel feedstocks is imperative; the yields of individual components can define the economics of an entire process. In the nascent industry of algal biofuels and bioproducts, analytical methods that have been deemed acceptable for decades are suddenly critical for commercialization. We tackled the question of how the strain and biochemical makeup of algal cells affect chemical measurements. We selected a set of six procedures (two each for lipids, protein, and carbohydrates): three rapid fingerprinting methods and three advanced chromatography-based methods. All methods were used to measure the composition of 100 samples from three strains: Scenedesmus sp., Chlorella sp., and Nannochloropsis sp. The data presented point not only to species-specific discrepancies but also to cell biochemistry-related discrepancies. There are cases where two respective methods agree but the differences are often significant with over- or underestimation of up to 90%, likely due to chemical interferences with the rapid spectrophotometric measurements. We provide background on the chemistry of interfering reactions for the fingerprinting methods and conclude that for accurate compositional analysis of algae and process and mass balance closure, emphasis should be placed on unambiguous characterization using methods where individual components are measured independently. Copyright © 2014 Elsevier Inc. All rights reserved.
Rydzy, M; Deslauriers, R; Smith, I C; Saunders, J K
1990-08-01
A systematic study was performed to optimize the accuracy of kinetic parameters derived from magnetization transfer measurements. Three techniques were investigated: time-dependent saturation transfer (TDST), saturation recovery (SRS), and inversion recovery (IRS). In the last two methods, one of the resonances undergoing exchange is saturated throughout the experiment. The three techniques were compared with respect to the accuracy of the kinetic parameters derived from experiments performed in a given, fixed, amount of time. Stochastic simulation of magnetization transfer experiments was performed to optimize experimental design. General formulas for the relative accuracies of the unidirectional rate constant (k) were derived for each of the three experimental methods. It was calculated that for k values between 0.1 and 1.0 s⁻¹, T1 values between 1 and 10 s, and relaxation delays appropriate for the creatine kinase reaction, the SRS method yields more accurate values of k than does the IRS method. The TDST method is more accurate than the SRS method for reactions where T1 is long and k is large, within the range of k and T1 values examined. Experimental verification of the method was carried out on a solution in which the forward (PCr → ATP) rate constant (k_f) of the creatine kinase reaction was measured.
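In the standard two-site exchange formulation underlying these techniques (a textbook restatement, not the paper's full treatment), saturating the B resonance gives

\frac{dM_A}{dt} = \frac{M_A^0 - M_A}{T_{1A}} - k_f M_A,

so M_A approaches the steady state M_A^{ss} = M_A^0/(1 + k_f T_{1A}) with apparent rate 1/\tau = 1/T_{1A} + k_f; fitting the transient or the steady state yields the forward rate constant k_f.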
Comparison of forward flight effects theory of A. Michalke and U. Michel with measured data
NASA Technical Reports Server (NTRS)
Rawls, J. W., Jr.
1983-01-01
The scaling laws of A. Michalke and U. Michel predict flyover noise of a single-stream shock-free circular jet from static data or static predictions. The theory is based on a far-field solution to Lighthill's equation and includes density terms which are important for heated jets. This theory is compared with measured data using two static jet noise prediction methods. The comparisons indicate the theory yields good results when the static noise levels are accurately predicted.
Atomic Oxygen Erosion Yield Prediction for Spacecraft Polymers in Low Earth Orbit
NASA Technical Reports Server (NTRS)
Banks, Bruce A.; Backus, Jane A.; Manno, Michael V.; Waters, Deborah L.; Cameron, Kevin C.; deGroh, Kim K.
2009-01-01
The ability to predict the atomic oxygen erosion yield of polymers based on their chemistry and physical properties has been only partially successful because of a lack of reliable low Earth orbit (LEO) erosion yield data. Unfortunately, many of the early experiments did not utilize dehydrated mass loss measurements for erosion yield determination, and the resulting mass loss due to atomic oxygen exposure may have been compromised because samples were often not in consistent states of dehydration during the pre-flight and post-flight mass measurements. This is a particular problem for short duration mission exposures or low erosion yield materials. However, as a result of the retrieval of the Polymer Erosion and Contamination Experiment (PEACE) flown as part of the Materials International Space Station Experiment 2 (MISSE 2), the erosion yields of 38 polymers and pyrolytic graphite were accurately measured. The experiment was exposed to the LEO environment for 3.95 years from August 16, 2001 to July 30, 2005 and was successfully retrieved during a space walk on July 30, 2005 during Discovery's STS-114 Return to Flight mission. The 40 different materials tested (including Kapton H fluence witness samples) were selected specifically to represent a variety of polymers used in space as well as a wide variety of polymer chemical structures. The MISSE 2 PEACE Polymers experiment used carefully dehydrated mass measurements, as well as accurate density measurements, to obtain accurate erosion yield data for a high fluence (8.43 × 10²¹ atoms/cm²). The resulting data were used to develop an erosion yield predictive tool with a correlation coefficient of 0.895 and uncertainty of ±6.3 × 10⁻²⁵ cm³/atom. The predictive tool utilizes the chemical structures and physical properties of polymers to predict in-space atomic oxygen erosion yields. A predictive tool concept (September 2009 version) is presented which represents an improvement over an earlier (December 2008) version.
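For reference, the erosion yield used throughout is conventionally computed from dehydrated mass loss as

E_y = \frac{\Delta M}{A\,\rho\,F},

where \Delta M is the mass loss, A the exposed area, \rho the polymer density, and F the atomic oxygen fluence (atoms/cm²), which is why consistent dehydration and accurate density and fluence measurements are all critical.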
NASA Astrophysics Data System (ADS)
Brückner, Charlotte; Engels, Bernd
2017-01-01
Vertical and adiabatic singlet and triplet excitation energies of molecular p-type semiconductors calculated with various DFT functionals and wave-function based approaches are benchmarked against MS-CASPT2/cc-pVTZ reference values. A special focus lies on the singlet-triplet gaps that are very important in the process of singlet fission. Singlet fission has the potential to boost device efficiencies of organic solar cells, but the scope of existing singlet-fission compounds is still limited. A computational prescreening of candidate molecules could enlarge it; yet it requires efficient methods accurately predicting singlet and triplet excitation energies. Different DFT formulations (Tamm-Dancoff approximation, linear response time-dependent DFT, Δ-SCF) and spin scaling schemes along with several ab initio methods (CC2, ADC(2)/MP2, CIS(D), CIS) are evaluated. While wave-function based methods yield rather reliable singlet-triplet gaps, many DFT functionals are shown to systematically underestimate triplet excitation energies. To gain insight, the impact of exact exchange and correlation is in detail addressed.
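The reason the singlet-triplet gap is the critical screening quantity is the (leading-order) energetic condition for singlet fission,

E(S_1) \gtrsim 2\,E(T_1),

so a functional that systematically underestimates triplet excitation energies will systematically misclassify candidate molecules.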
Elsayed, Mustafa M A; Vierl, Ulrich; Cevc, Gregor
2009-06-01
Potentiometric lipid membrane-water partition coefficient studies have neglected electrostatic interactions to date; this leads to incorrect results. We herein show how to account properly for such interactions in potentiometric data analysis. We conducted potentiometric titration experiments to determine lipid membrane-water partition coefficients of four illustrative drugs: bupivacaine, diclofenac, ketoprofen and terbinafine. We then analyzed the results conventionally and with an improved analytical approach that considers Coulombic electrostatic interactions. The new analytical approach delivers robust partition coefficient values. In contrast, the conventional data analysis yields apparent partition coefficients of the ionized drug forms that depend on experimental conditions (mainly the lipid-drug ratio and the bulk ionic strength). This is due to changing electrostatic effects originating from bound drug and/or lipid charges. A membrane comprising 10 mol-% mono-charged molecules in a 150 mM (monovalent) electrolyte solution yields results that differ by a factor of 4 from those for uncharged membranes. Allowance for Coulombic electrostatic interactions is a prerequisite for accurate and reliable determination of lipid membrane-water partition coefficients of ionizable drugs from potentiometric titration data. The same conclusion applies to all analytical methods involving drug binding to a surface.
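A hedged sketch of the electrostatics involved: a membrane surface at potential \psi rescales the local proton activity by a Boltzmann factor, shifting the apparent dissociation constant of a membrane-bound drug by

\Delta\mathrm{p}K_a = \mathrm{p}K_a^{\mathrm{app}} - \mathrm{p}K_a^{\mathrm{int}} = -\frac{F\psi}{2.303\,RT},

so neglecting \psi, which varies with the lipid-drug ratio and ionic strength, folds this shift into a condition-dependent apparent partition coefficient.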
Surface effect investigation on multipactor in microwave components using the EM-PIC method
NASA Astrophysics Data System (ADS)
Li, Yun; Ye, Ming; He, Yong-Ning; Cui, Wan-Zhao; Wang, Dan
2017-11-01
Multipactor poses a great risk to microwave components in space, and accurate, controllable suppression is still lacking. To evaluate the effect of secondary electron emission (SEE) from arbitrary surface states on multipactor, metal samples fabricated with ideal smoothness, random roughness, and surface micro-structures are investigated through SEE experiments and multipactor simulations. An accurate quantitative relationship between the SEE parameters and the multipactor discharge threshold in practical components has been established through Electromagnetic Particle-In-Cell (EM-PIC) simulation. Simulation results for microwave components, including an impedance transformer and a coaxial filter, exhibit an intuitive correlation between the critical SEE parameters, varied due to different surface states, and multipactor thresholds. It is demonstrated that surface micro-structures with certain depths and morphologies, rather than random surface reliefs, determine the average yield of secondaries. Both the random surface reliefs and the micro-structures scatter secondary electrons, and the yield tends to be identical across different elevation angles of incident electrons. This holds great potential for optimizing and improving suppression technology without exhausting the technological parameter space.
Kelly, Nicola; McGarry, J Patrick
2012-05-01
The inelastic pressure dependent compressive behaviour of bovine trabecular bone is investigated through experimental and computational analysis. Two loading configurations are implemented, uniaxial and confined compression, providing two distinct loading paths in the von Mises-pressure stress plane. Experimental results reveal distinctive yielding followed by a constant nominal stress plateau for both uniaxial and confined compression. Computational simulation of the experimental tests using the Drucker-Prager and Mohr-Coulomb plasticity models fails to capture the confined compression behaviour of trabecular bone. The high pressure developed during confined compression does not result in plastic deformation using these formulations, and a near elastic response is computed. In contrast, the crushable foam plasticity models provide accurate simulation of the confined compression tests, with distinctive yield and plateau behaviour being predicted. The elliptical yield surfaces of the crushable foam formulations in the von Mises-pressure stress plane accurately characterise the plastic behaviour of trabecular bone. Results reveal that the hydrostatic yield stress is equal to the uniaxial yield stress for trabecular bone, demonstrating the importance of accurate characterisation and simulation of the pressure dependent plasticity. It is also demonstrated in this study that a commercially available trabecular bone analogue material, cellular rigid polyurethane foam, exhibits similar pressure dependent yield behaviour, despite having a lower stiffness and strength than trabecular bone. This study provides a novel insight into the pressure dependent yield behaviour of trabecular bone, demonstrating the inadequacy of uniaxial testing alone. For the first time, crushable foam plasticity formulations are implemented for trabecular bone. The enhanced understanding of the inelastic behaviour of trabecular bone established in this study will allow for more realistic simulation of orthopaedic device implantation and failure. Copyright © 2011 Elsevier Ltd. All rights reserved.
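A hedged statement of the yield-surface family referred to above: crushable-foam plasticity models use an ellipse in the pressure (p) - von Mises (q) plane of the general form

F = \sqrt{q^2 + \alpha^2 (p - p_0)^2} - B = 0,

where \alpha sets the ellipse's aspect ratio and p_0 and B centre and size it; the observation that the hydrostatic yield stress equals the uniaxial yield stress constrains these parameters for trabecular bone.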
Determination of Time Dependent Virus Inactivation Rates
NASA Astrophysics Data System (ADS)
Chrysikopoulos, C. V.; Vogler, E. T.
2003-12-01
A methodology is developed for estimating temporally variable virus inactivation rate coefficients from experimental virus inactivation data. The methodology consists of a technique for slope estimation of normalized virus inactivation data in conjunction with a resampling parameter estimation procedure. The slope estimation technique is based on a relatively flexible geostatistical method known as universal kriging. Drift coefficients are obtained by nonlinear fitting of bootstrap samples and the corresponding confidence intervals are obtained by bootstrap percentiles. The proposed methodology yields more accurate time dependent virus inactivation rate coefficients than those estimated by fitting virus inactivation data to a first-order inactivation model. The methodology is successfully applied to a set of poliovirus batch inactivation data. Furthermore, the importance of accurate inactivation rate coefficient determination on virus transport in water saturated porous media is demonstrated with model simulations.
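A minimal numerical sketch of the underlying idea (synthetic data; the universal-kriging slope estimation and bootstrap machinery are not reproduced):

# Recover a time-dependent inactivation rate lambda(t) from normalized survival data,
# using d ln[C(t)/C0] / dt = -lambda(t).
import numpy as np

t = np.linspace(0.0, 10.0, 50)                    # time (hypothetical units)
lam_true = 0.2 + 0.05 * t                         # rate coefficient increasing with time
logC = -np.cumsum(np.gradient(t) * lam_true)      # ln[C/C0] from the survival ODE
logC += np.random.default_rng(3).normal(scale=0.02, size=t.size)  # measurement noise

lam_est = -np.gradient(logC, t)                   # slope of ln-survival gives lambda(t)
print(np.round(lam_est[:5], 3))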
Modeling central metabolism and energy biosynthesis across microbial life.
Edirisinghe, Janaka N; Weisenhorn, Pamela; Conrad, Neal; Xia, Fangfang; Overbeek, Ross; Stevens, Rick L; Henry, Christopher S
2016-08-08
Automatically generated bacterial metabolic models, and even some curated models, lack accuracy in predicting energy yields due to poor representation of key pathways in energy biosynthesis and the electron transport chain (ETC). Further compounding the problem, complex interlinking pathways in genome-scale metabolic models, and the need for extensive gapfilling to support complex biomass reactions, often results in predicting unrealistic yields or unrealistic physiological flux profiles. To overcome this challenge, we developed methods and tools ( http://coremodels.mcs.anl.gov ) to build high quality core metabolic models (CMM) representing accurate energy biosynthesis based on a well studied, phylogenetically diverse set of model organisms. We compare these models to explore the variability of core pathways across all microbial life, and by analyzing the ability of our core models to synthesize ATP and essential biomass precursors, we evaluate the extent to which the core metabolic pathways and functional ETCs are known for all microbes. 6,600 (80 %) of our models were found to have some type of aerobic ETC, whereas 5,100 (62 %) have an anaerobic ETC, and 1,279 (15 %) do not have any ETC. Using our manually curated ETC and energy biosynthesis pathways with no gapfilling at all, we predict accurate ATP yields for nearly 5586 (70 %) of the models under aerobic and anaerobic growth conditions. This study revealed gaps in our knowledge of the central pathways that result in 2,495 (30 %) CMMs being unable to produce ATP under any of the tested conditions. We then established a methodology for the systematic identification and correction of inconsistent annotations using core metabolic models coupled with phylogenetic analysis. We predict accurate energy yields based on our improved annotations in energy biosynthesis pathways and the implementation of diverse ETC reactions across the microbial tree of life. We highlighted missing annotations that were essential to energy biosynthesis in our models. We examine the diversity of these pathways across all microbial life and enable the scientific community to explore the analyses generated from this large-scale analysis of over 8000 microbial genomes.
Biswas, Kristi; Taylor, Michael W.; Gear, Kim
2017-01-01
The application of high-throughput, next-generation sequencing technologies has greatly improved our understanding of the human oral microbiome. While deciphering this diverse microbial community using such approaches is more accurate than traditional culture-based methods, experimental bias introduced during critical steps such as DNA extraction may compromise the results obtained. Here, we systematically evaluate four commonly used microbial DNA extraction methods (MoBio PowerSoil® DNA Isolation Kit, QIAamp® DNA Mini Kit, Zymo Bacterial/Fungal DNA Mini Prep™, phenol:chloroform-based DNA isolation) based on the following criteria: DNA quality and yield, and microbial community structure based on Illumina amplicon sequencing of the V3–V4 region of the 16S rRNA gene of bacteria and the internal transcribed spacer (ITS) 1 region of fungi. Our results indicate that DNA quality and yield varied significantly with DNA extraction method. Representation of bacterial genera in plaque and saliva samples did not significantly differ across DNA extraction methods, and DNA extraction method showed no effect on the recovery of fungal genera from plaque. By contrast, fungal diversity from saliva was affected by DNA extraction method, suggesting that not all protocols are suitable to study the salivary mycobiome. PMID:28099455
Accurate Critical Stress Intensity Factor Griffith Crack Theory Measurements by Numerical Techniques
Petersen, Richard C.
2014-01-01
Critical stress intensity factor (KIc) has been an approximation for fracture toughness using only load-cell measurements. However, artificial man-made cracks several orders of magnitude longer and wider than natural flaws have required a correction factor term (Y) that can be up to about 3 times the recorded experimental value [1-3]. In fact, over 30 years ago a National Academy of Sciences advisory board stated that empirical KIc testing was of serious concern and further requested that an accurate bulk fracture toughness method be found [4]. Now that fracture toughness can be calculated accurately by numerical integration from the load/deflection curve as resilience, work of fracture (WOF) and strain energy release (SIc) [5, 6], KIc appears to be unnecessary. However, the large body of previous KIc experimental test results found in the literature offers the opportunity for continued meta-analysis with other more practical and accurate fracture toughness results using energy methods and numerical integration. Therefore, KIc is derived from the classical Griffith Crack Theory [6] to include SIc as a more accurate term for strain energy release rate (𝒢Ic), along with crack surface energy (γ), crack length (a), modulus (E), applied stress (σ), Y, crack-tip plastic zone defect region (rp) and yield strength (σys), all of which can be determined from load and deflection data. To accentuate toughness differences, polymer-matrix discontinuous quartz fiber-reinforced composites were prepared for flexural mechanical testing, comprising 3 mm fibers at volume percentages from 0-54.0 vol% and, at 28.2 vol%, fiber lengths from 0.0-6.0 mm. Results provided a new correction factor and regression analyses between several numerical integration fracture toughness test methods to support KIc results. Further, accurate bulk KIc experimental values are compared with empirical test results found in the literature. Also, several fracture toughness mechanisms are discussed, especially for fiber-reinforced composites. PMID:25620817
Calibrating genomic and allelic coverage bias in single-cell sequencing.
Zhang, Cheng-Zhong; Adalsteinsson, Viktor A; Francis, Joshua; Cornils, Hauke; Jung, Joonil; Maire, Cecile; Ligon, Keith L; Meyerson, Matthew; Love, J Christopher
2015-04-16
Artifacts introduced in whole-genome amplification (WGA) make it difficult to derive accurate genomic information from single-cell genomes and require different analytical strategies from bulk genome analysis. Here, we describe statistical methods to quantitatively assess the amplification bias resulting from whole-genome amplification of single-cell genomic DNA. Analysis of single-cell DNA libraries generated by different technologies revealed universal features of the genome coverage bias predominantly generated at the amplicon level (1-10 kb). The magnitude of coverage bias can be accurately calibrated from low-pass sequencing (∼0.1 × ) to predict the depth-of-coverage yield of single-cell DNA libraries sequenced at arbitrary depths. We further provide a benchmark comparison of single-cell libraries generated by multi-strand displacement amplification (MDA) and multiple annealing and looping-based amplification cycles (MALBAC). Finally, we develop statistical models to calibrate allelic bias in single-cell whole-genome amplification and demonstrate a census-based strategy for efficient and accurate variant detection from low-input biopsy samples.
Calibrating genomic and allelic coverage bias in single-cell sequencing
Francis, Joshua; Cornils, Hauke; Jung, Joonil; Maire, Cecile; Ligon, Keith L.; Meyerson, Matthew; Love, J. Christopher
2016-01-01
Artifacts introduced in whole-genome amplification (WGA) make it difficult to derive accurate genomic information from single-cell genomes and require different analytical strategies from bulk genome analysis. Here, we describe statistical methods to quantitatively assess the amplification bias resulting from whole-genome amplification of single-cell genomic DNA. Analysis of single-cell DNA libraries generated by different technologies revealed universal features of the genome coverage bias predominantly generated at the amplicon level (1–10 kb). The magnitude of coverage bias can be accurately calibrated from low-pass sequencing (~0.1 ×) to predict the depth-of-coverage yield of single-cell DNA libraries sequenced at arbitrary depths. We further provide a benchmark comparison of single-cell libraries generated by multi-strand displacement amplification (MDA) and multiple annealing and looping-based amplification cycles (MALBAC). Finally, we develop statistical models to calibrate allelic bias in single-cell whole-genome amplification and demonstrate a census-based strategy for efficient and accurate variant detection from low-input biopsy samples. PMID:25879913
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhen, X; Chen, H; Zhou, L
2014-06-15
Purpose: To propose and validate a novel and accurate deformable image registration (DIR) scheme to facilitate dose accumulation among treatment fractions of high-dose-rate (HDR) gynecological brachytherapy. Method: We have developed a method to adapt DIR algorithms to gynecologic anatomies with HDR applicators by incorporating a segmentation step and a point-matching step into an existing DIR framework. In the segmentation step, a random walks algorithm is used to accurately segment and remove the applicator region (AR) in the HDR CT image. A semi-automatic seed point generation approach is developed to obtain the incremented foreground and background point sets to feed the random walks algorithm. In the subsequent point-matching step, a feature-based thin-plate spline-robust point matching (TPS-RPM) algorithm is employed for AR surface point matching. With the resulting mapping, a DVF characteristic of the deformation between the two AR surfaces is generated by B-spline approximation, which serves as the initial DVF for the following Demons DIR between the two AR-free HDR CT images. Finally, the calculated DVF via Demons combined with the initial one serves as the final DVF to map doses between HDR fractions. Results: The segmentation and registration accuracy are quantitatively assessed in nine clinical HDR cases from three gynecological cancer patients. The quantitative results as well as the visual inspection of the DIR indicate that our proposed method can suppress the interference of the applicator with the DIR algorithm, and accurately register HDR CT images as well as deform and add interfractional HDR doses. Conclusions: We have developed a novel and robust DIR scheme that can perform registration between HDR gynecological CT images and yield accurate registration results. This new DIR scheme has potential for accurate interfractional HDR dose accumulation. This work is supported in part by the National Natural Science Foundation of China (nos. 30970866 and 81301940).
Multiple-Instance Regression with Structured Data
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Lane, Terran; Roper, Alex
2008-01-01
We present a multiple-instance regression algorithm that models internal bag structure to identify the items most relevant to the bag labels. Multiple-instance regression (MIR) operates on a set of bags with real-valued labels, each containing a set of unlabeled items, in which the relevance of each item to its bag label is unknown. The goal is to predict the labels of new bags from their contents. Unlike previous MIR methods, MI-ClusterRegress can operate on bags that are structured in that they contain items drawn from a number of distinct (but unknown) distributions. MI-ClusterRegress simultaneously learns a model of the bag's internal structure, the relevance of each item, and a regression model that accurately predicts labels for new bags. We evaluated this approach on the challenging MIR problem of crop yield prediction from remote sensing data. MI-ClusterRegress provided predictions that were more accurate than those obtained with non-multiple-instance approaches or MIR methods that do not model the bag structure.
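MI-ClusterRegress itself is not reproduced here, but the idea it describes (learn the bags' internal cluster structure, then regress on bag-level summaries) can be sketched in a few lines of scikit-learn. This is a conceptual stand-in under synthetic data, not the authors' algorithm:

```python
# Minimal sketch of the MI-ClusterRegress idea: model bag structure by clustering
# items, summarize each bag per cluster, then fit a bag-level regressor.
# Illustration of the concept only, not the paper's implementation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

def bag_features(bags, kmeans):
    """Represent each bag by the mean of its items assigned to each cluster."""
    k, d = kmeans.n_clusters, bags[0].shape[1]
    feats = np.zeros((len(bags), k * d))
    for i, items in enumerate(bags):
        labels = kmeans.predict(items)
        for c in range(k):
            if np.any(labels == c):
                feats[i, c * d:(c + 1) * d] = items[labels == c].mean(axis=0)
    return feats

rng = np.random.default_rng(0)
bags = [rng.normal(size=(rng.integers(5, 15), 3)) for _ in range(40)]  # toy bags
y = np.array([b.mean() for b in bags])  # toy bag labels (e.g. crop yield)

kmeans = KMeans(n_clusters=4, n_init=10).fit(np.vstack(bags))
model = Ridge().fit(bag_features(bags, kmeans), y)
```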
Remaining Useful Life Prediction for Lithium-Ion Batteries Based on Gaussian Processes Mixture
Li, Lingling; Wang, Pengchong; Chao, Kuei-Hsiang; Zhou, Yatong; Xie, Yang
2016-01-01
The remaining useful life (RUL) prediction of Lithium-ion batteries is closely related to the capacity degeneration trajectories. Due to the self-charging and the capacity regeneration, the trajectories have the property of multimodality. Traditional prediction models such as the support vector machines (SVM) or the Gaussian Process regression (GPR) cannot accurately characterize this multimodality. This paper proposes a novel RUL prediction method based on the Gaussian Process Mixture (GPM). It can process multimodality by fitting different segments of trajectories with different GPR models separately, such that the tiny differences among these segments can be revealed. The method is demonstrated to be effective for prediction by the excellent predictive result of the experiments on two commercial, rechargeable 18650 Lithium-ion batteries provided by NASA. The performance comparison among the models illustrates that the GPM is more accurate than the SVM and the GPR. In addition, GPM can yield the predictive confidence interval, which makes the prediction more reliable than that of traditional models. PMID:27632176
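As a rough illustration of why a GP-based predictor is attractive here, the sketch below fits a single Gaussian process to a synthetic capacity-fade curve and reads off an RUL at a failure threshold, with a predictive interval available from the posterior. Note that the paper's contribution is a mixture of GPs to handle multimodal trajectories; a lone GPR is shown only to illustrate the interval-producing idea, and all data and thresholds below are invented:

```python
# Hedged sketch: RUL from a single Gaussian process fit to synthetic capacity
# data. The paper uses a GP mixture; plain GPR is shown only for the
# predictive-interval idea. All numbers are synthetic stand-ins.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

cycles = np.arange(0, 120, 5, dtype=float)[:, None]
capacity = (2.0 - 0.004 * cycles.ravel()
            + 0.01 * np.random.default_rng(1).normal(size=len(cycles)))

gpr = GaussianProcessRegressor(kernel=RBF(30.0) + WhiteKernel(1e-4)).fit(cycles, capacity)

future = np.arange(0, 300, 5, dtype=float)[:, None]
mean, std = gpr.predict(future, return_std=True)  # predictive interval for free

threshold = 2.0 * 0.8  # e.g. end of life at 80% of rated capacity
eol = future[mean <= threshold]
print("Predicted RUL (cycles):", eol[0, 0] if eol.size else "beyond horizon")
```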
Predicting β-Turns in Protein Using Kernel Logistic Regression
Elbashir, Murtada Khalafallah; Sheng, Yu; Wang, Jianxin; Wu, FangXiang; Li, Min
2013-01-01
A β-turn is a secondary protein structure type that plays a significant role in protein configuration and function. On average 25% of amino acids in protein structures are located in β-turns. It is therefore important to develop an accurate and efficient method for β-turn prediction. Most current successful β-turn prediction methods use support vector machines (SVMs) or neural networks (NNs). Kernel logistic regression (KLR) is a powerful classification technique that has been applied successfully in many classification problems. However, it is seldom used for β-turn classification, mainly because it is computationally expensive. In this paper, we used KLR to obtain sparse β-turn prediction in short evolution time. Secondary structure information and position-specific scoring matrices (PSSMs) are utilized as input features. We achieved a Qtotal of 80.7% and an MCC of 50% on the BT426 dataset. These results show that the KLR method with the right algorithm can yield performance equivalent to or even better than NNs and SVMs in β-turn prediction. In addition, KLR yields a probabilistic outcome and has a well-defined extension to the multiclass case. PMID:23509793
Predicting β-turns in protein using kernel logistic regression.
Elbashir, Murtada Khalafallah; Sheng, Yu; Wang, Jianxin; Wu, Fangxiang; Li, Min
2013-01-01
A β-turn is a secondary protein structure type that plays a significant role in protein configuration and function. On average 25% of amino acids in protein structures are located in β-turns. It is therefore important to develop an accurate and efficient method for β-turn prediction. Most current successful β-turn prediction methods use support vector machines (SVMs) or neural networks (NNs). Kernel logistic regression (KLR) is a powerful classification technique that has been applied successfully in many classification problems. However, it is seldom used for β-turn classification, mainly because it is computationally expensive. In this paper, we used KLR to obtain sparse β-turn prediction in short evolution time. Secondary structure information and position-specific scoring matrices (PSSMs) are utilized as input features. We achieved a Qtotal of 80.7% and an MCC of 50% on the BT426 dataset. These results show that the KLR method with the right algorithm can yield performance equivalent to or even better than NNs and SVMs in β-turn prediction. In addition, KLR yields a probabilistic outcome and has a well-defined extension to the multiclass case.
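Kernel logistic regression as described can be approximated compactly by fitting an ordinary logistic model on an explicit kernel matrix (the empirical kernel map). The sketch below uses synthetic features in place of the PSSM and secondary-structure encodings; the gamma value and regularization strength are illustrative assumptions:

```python
# Minimal sketch of kernel logistic regression: fit a linear logistic model on
# an explicit RBF kernel matrix. Feature encoding of PSSMs and secondary
# structure is omitted; X and y here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 20)), rng.integers(0, 2, 200)
X_test = rng.normal(size=(50, 20))

K_train = rbf_kernel(X_train, X_train, gamma=0.05)  # kernel map on training points
K_test = rbf_kernel(X_test, X_train, gamma=0.05)

klr = LogisticRegression(C=1.0, max_iter=1000).fit(K_train, y_train)
p_turn = klr.predict_proba(K_test)[:, 1]  # probabilistic outcome, as in KLR
```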
Enhanced sequencing coverage with digital droplet multiple displacement amplification
Sidore, Angus M.; Lan, Freeman; Lim, Shaun W.; Abate, Adam R.
2016-01-01
Sequencing small quantities of DNA is important for applications ranging from the assembly of uncultivable microbial genomes to the identification of cancer-associated mutations. To obtain sufficient quantities of DNA for sequencing, the small amount of starting material must be amplified significantly. However, existing methods often yield errors or non-uniform coverage, reducing sequencing data quality. Here, we describe digital droplet multiple displacement amplification, a method that enables massive amplification of low-input material while maintaining sequence accuracy and uniformity. The low-input material is compartmentalized as single molecules in millions of picoliter droplets. Because the molecules are isolated in compartments, they amplify to saturation without competing for resources; this yields uniform representation of all sequences in the final product and, in turn, enhances the quality of the sequence data. We demonstrate the ability to uniformly amplify the genomes of single Escherichia coli cells, comprising just 4.7 fg of starting DNA, and obtain sequencing coverage distributions that rival those of unamplified material. Digital droplet multiple displacement amplification provides a simple and effective method for amplifying minute amounts of DNA for accurate and uniform sequencing. PMID:26704978
Simulated yields for managed northern hardwood stands
Dale S. Solomon; William B. Leak
1986-01-01
Board-foot and cubic-foot yields developed with the forest growth model SIMTIM are presented for northern hardwood stands grown with and without management. SIMTIM has been modified to include more accurate growth rates by species, a new stocking chart, and yields that reflect species values and quality classes. Treatments range from no thinning to intensive quality...
NASA Astrophysics Data System (ADS)
Park, Dong-Kiu; Kim, Hyun-Sok; Seo, Moo-Young; Ju, Jae-Wuk; Kim, Young-Sik; Shahrjerdy, Mir; van Leest, Arno; Soco, Aileen; Miceli, Giacomo; Massier, Jennifer; McNamara, Elliott; Hinnen, Paul; Böcker, Paul; Oh, Nang-Lyeom; Jung, Sang-Hoon; Chai, Yvon; Lee, Jun-Hyung
2018-03-01
This paper demonstrates the improvement achieved using the YieldStar S-1250D small-spot, high-NA, after-etch overlay in-device measurements in a DRAM HVM environment. It is demonstrated that in-device metrology (IDM) captures after-etch device fingerprints more accurately than the industry-standard CDSEM. Also, IDM measurements (acquiring both CD and overlay) can be executed significantly faster, increasing the wafer sampling density that is possible within a realistic metrology budget. The improvements to both speed and accuracy open the possibility of extended modeling and correction capabilities for control. The proof-book data of this paper show a 36% improvement of device overlay after switching to control using in-device metrology in a DRAM HVM environment.
Simulation-Based Height of Burst Map for Asteroid Airburst Damage Prediction
NASA Technical Reports Server (NTRS)
Aftosmis, Michael J.; Mathias, Donovan L.; Tarano, Ana M.
2017-01-01
Entry and breakup models predict that airburst in the Earth's atmosphere is likely for asteroids up to approximately 200 meters in diameter. Objects of this size can deposit over 250 megatons of energy into the atmosphere. Fast-running ground damage prediction codes for such events rely heavily upon methods developed from nuclear weapons research to estimate the damage potential for an airburst at altitude. (Collins, 2005; Mathias, 2017; Hills and Goda, 1993). In particular, these tools rely upon the powerful yield scaling laws developed for point-source blasts that are used in conjunction with a Height of Burst (HOB) map to predict ground damage for an airburst of a specific energy at a given altitude. While this approach works extremely well for yields as large as tens of megatons, it becomes less accurate as yields increase to the hundreds of megatons potentially released by larger airburst events. This study revisits the assumptions underlying this approach and shows how atmospheric buoyancy becomes important as yield increases beyond a few megatons. We then use large-scale three-dimensional simulations to construct numerically generated height of burst maps that are appropriate at the higher energy levels associated with the entry of asteroids with diameters of hundreds of meters. These numerically generated HOB maps can then be incorporated into engineering methods for damage prediction, significantly improving their accuracy for asteroids with diameters greater than 80-100 m.
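The yield scaling referred to above is the classical cube-root (Hopkinson-Cranz) law: a burst of yield W at height h produces the same blast field as a 1 kt reference burst at coordinates scaled by W^(1/3). A minimal sketch, with the reference HOB curve left as a hypothetical lookup function:

```python
# Hedged sketch of the point-source cube-root scaling behind HOB maps: a burst
# of yield W at height h produces the same overpressure at ground range r as a
# 1 kt reference burst at coordinates (r, h) / W**(1/3). The reference curve
# ref_overpressure_1kt is a hypothetical lookup, not real data.
def scaled_coordinates(r_m, h_m, yield_kt):
    s = yield_kt ** (1.0 / 3.0)  # cube-root scale factor
    return r_m / s, h_m / s

def overpressure(r_m, h_m, yield_kt, ref_overpressure_1kt):
    """Overpressure via a 1 kt reference HOB map; as the abstract notes, this
    breaks down at very high yields, where atmospheric buoyancy matters."""
    r1, h1 = scaled_coordinates(r_m, h_m, yield_kt)
    return ref_overpressure_1kt(r1, h1)
```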
A Method for Generating Reduced Order Linear Models of Supersonic Inlets
NASA Technical Reports Server (NTRS)
Chicatelli, Amy; Hartley, Tom T.
1997-01-01
For the modeling of high speed propulsion systems, there are at least two major categories of models. One is based on computational fluid dynamics (CFD), and the other is based on design and analysis of control systems. CFD is accurate and gives a complete view of the internal flow field, but it typically has many states and runs much slower than real-time. Models based on control design typically run near real-time but do not always capture the fundamental dynamics. To provide improved control models, methods are needed that are based on CFD techniques but yield models that are small enough for control analysis and design.
Gorrell, Jamieson C; Boutin, Stan; Raveh, Shirley; Neuhaus, Peter; Côté, Steeve D; Coltman, David W
2012-09-01
We determined the sequence of the male-specific minor histocompatibility complex antigen (Smcy) from the Y chromosome of seven squirrel species (Sciuridae, Rodentia). Based on conserved regions inside the Smcy intron sequence, we designed PCR primers for sex determination in these species that can be co-amplified with nuclear loci as controls. PCR co-amplification yields two products for males and one for females that are easily visualized as bands by agarose gel electrophoresis. Our method provides simple and reliable sex determination across a wide range of squirrel species. © 2012 Blackwell Publishing Ltd.
Application of the superposition principle to solar-cell analysis
NASA Technical Reports Server (NTRS)
Lindholm, F. A.; Fossum, J. G.; Burgess, E. L.
1979-01-01
The superposition principle of differential-equation theory - which applies if and only if the relevant boundary-value problems are linear - is used to derive the widely used shifting approximation that the current-voltage characteristic of an illuminated solar cell is the dark current-voltage characteristic shifted by the short-circuit photocurrent. Analytical methods are presented to treat cases where shifting is not strictly valid. Well-defined conditions necessary for superposition to apply are established. For high injection in the base region, the method of analysis accurately yields the dependence of the open-circuit voltage on the short-circuit current (or the illumination level).
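In symbols, the shifting approximation derived in the paper is conventionally written as follows (standard diode notation assumed here, not taken verbatim from the source): the illuminated current-voltage characteristic is the dark characteristic translated by the short-circuit photocurrent.

```latex
% Shifting approximation, standard notation (assumed):
I(V) = I_{\mathrm{dark}}(V) - I_{\mathrm{sc}},
\qquad
I_{\mathrm{dark}}(V) = I_0\left(e^{qV/nkT} - 1\right)
```

The paper's point is that this shift is exact only when the relevant boundary-value problems are linear; under high injection in the base region the analytical corrections described in the abstract are needed.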
Frameless robotically targeted stereotactic brain biopsy: feasibility, diagnostic yield, and safety.
Bekelis, Kimon; Radwan, Tarek A; Desai, Atman; Roberts, David W
2012-05-01
Frameless stereotactic brain biopsy has become an established procedure in many neurosurgical centers worldwide. Robotic modifications of image-guided frameless stereotaxy hold promise for making these procedures safer, more effective, and more efficient. The authors hypothesized that robotic brain biopsy is a safe, accurate procedure, with a high diagnostic yield and a safety profile comparable to other stereotactic biopsy methods. This retrospective study included 41 patients undergoing frameless stereotactic brain biopsy of lesions (mean size 2.9 cm) for diagnostic purposes. All patients underwent image-guided, robotic biopsy in which the SurgiScope system was used in conjunction with scalp fiducial markers and a preoperatively selected target and trajectory. Forty-five procedures, with 50 supratentorial targets selected, were performed. The mean operative time was 44.6 minutes for the robotic biopsy procedures. This decreased over the second half of the study by 37%, from 54.7 to 34.5 minutes (p < 0.025). The diagnostic yield was 97.8% per procedure, with a second procedure being diagnostic in the single nondiagnostic case. Complications included one transient worsening of a preexisting deficit (2%) and another deficit that was permanent (2%). There were no infections. Robotic biopsy involving a preselected target and trajectory is safe, accurate, efficient, and comparable to other procedures employing either frame-based stereotaxy or frameless, nonrobotic stereotaxy. It permits biopsy in all patients, including those with small target lesions. Robotic biopsy planning facilitates careful preoperative study and optimization of needle trajectory to avoid sulcal vessels, bridging veins, and ventricular penetration.
Khoomrung, Sakda; Chumnanpuen, Pramote; Jansa-ard, Suwanee; Nookaew, Intawat; Nielsen, Jens
2012-06-01
We present a fast and accurate method for preparation of fatty acid methyl esters (FAMEs) using microwave-assisted derivatization of fatty acids present in yeast samples. The esterification of free/bound fatty acids to FAMEs was completed within 5 min, which is 24 times faster than with conventional heating methods. The developed method was validated in two ways: (1) through comparison with a conventional method (hot plate) and (2) through validation against the standard reference material (SRM) 3275-2, omega-3 and omega-6 fatty acids in fish oil (from the National Institute of Standards and Technology, USA). There were no significant differences (P>0.05) in yields of FAMEs with either validation. By performing a simple modification of closed-vessel microwave heating, it was possible to carry out the esterification in Pyrex glass tubes kept inside the closed vessel. Hereby, we are able to increase the number of sample preparations to several hundred samples per day, as the time for preparation of reused vessels was eliminated. Pretreated cell disruption steps are not required, since the direct FAME preparation provides equally quantitative results. The new microwave-assisted derivatization method facilitates the preparation of FAMEs directly from yeast cells, but the method is likely to also be applicable to other biological samples.
Meshless Local Petrov-Galerkin Method for Bending Problems
NASA Technical Reports Server (NTRS)
Phillips, Dawn R.; Raju, Ivatury S.
2002-01-01
Recent literature shows extensive research work on meshless or element-free methods as alternatives to the versatile Finite Element Method. One such meshless method is the Meshless Local Petrov-Galerkin (MLPG) method. In this report, the method is developed for bending of beams - C1 problems. A generalized moving least squares (GMLS) interpolation is used to construct the trial functions, and spline and power weight functions are used as the test functions. The method is applied to problems for which exact solutions are available to evaluate its effectiveness. The accuracy of the method is demonstrated for problems with load discontinuities and continuous beam problems. A Petrov-Galerkin implementation of the method is shown to greatly reduce computational time and effort and is thus preferable over the previously developed Galerkin approach. The MLPG method for beam problems yields very accurate deflections and slopes and continuous moment and shear forces without the need for elaborate post-processing techniques.
NASA Astrophysics Data System (ADS)
Balla, Vamsi Krishna; Coox, Laurens; Deckers, Elke; Plyumers, Bert; Desmet, Wim; Marudachalam, Kannan
2018-01-01
The vibration response of a component or system can be predicted using the finite element method after ensuring numerical models represent realistic behaviour of the actual system under study. One of the methods to build high-fidelity finite element models is through a model updating procedure. In this work, a novel model updating method of deep-drawn components is demonstrated. Since the component is manufactured with a high draw ratio, significant deviations in both profile and thickness distributions occurred in the manufacturing process. A conventional model updating, involving Young's modulus, density and damping ratios, does not lead to a satisfactory match between simulated and experimental results. Hence a new model updating process is proposed, where geometry shape variables are incorporated, by carrying out morphing of the finite element model. This morphing process imitates the changes that occurred during the deep drawing process. An optimization procedure that uses the Global Response Surface Method (GRSM) algorithm to maximize diagonal terms of the Modal Assurance Criterion (MAC) matrix is presented. This optimization results in a more accurate finite element model. The advantage of the proposed methodology is that the CAD surface of the updated finite element model can be readily obtained after optimization. This CAD model can be used for carrying out analysis, as it represents the manufactured part more accurately. Hence, simulations performed using this updated model with an accurate geometry, will therefore yield more reliable results.
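The MAC objective mentioned above has a standard closed form: the normalized squared projection between simulated and experimental mode shapes. A small sketch follows (the mode-shape matrices are stand-ins; the paper's GRSM optimizer and morphing step are not reproduced):

```python
# Sketch of the Modal Assurance Criterion that the optimization maximizes: the
# normalized squared projection between simulated and experimental mode shapes.
# Mode-shape data would come from the FE model and test; arrays are stand-ins.
import numpy as np

def mac(phi_sim, phi_exp):
    """MAC matrix between columns of two mode-shape matrices (n_dofs x n_modes)."""
    num = np.abs(phi_sim.T @ phi_exp) ** 2
    den = np.outer(np.sum(phi_sim * phi_sim, axis=0),
                   np.sum(phi_exp * phi_exp, axis=0))
    return num / den  # diagonal terms near 1.0 indicate well-matched modes
```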
Jalaleddini, Kian; Tehrani, Ehsan Sobhani; Kearney, Robert E
2017-06-01
The purpose of this paper is to present a structural decomposition subspace (SDSS) method for decomposition of the joint torque to intrinsic, reflexive, and voluntary torques and identification of joint dynamic stiffness. First, it formulates a novel state-space representation for the joint dynamic stiffness modeled by a parallel-cascade structure with a concise parameter set that provides a direct link between the state-space representation matrices and the parallel-cascade parameters. Second, it presents a subspace method for the identification of the new state-space model that involves two steps: 1) the decomposition of the intrinsic and reflex pathways and 2) the identification of an impulse response model of the intrinsic pathway and a Hammerstein model of the reflex pathway. Extensive simulation studies demonstrate that SDSS has significant performance advantages over some other methods. Thus, SDSS was more robust under high noise conditions, converging where others failed; it was more accurate, giving estimates with lower bias and random errors. The method also worked well in practice and yielded high-quality estimates of intrinsic and reflex stiffnesses when applied to experimental data at three muscle activation levels. The simulation and experimental results demonstrate that SDSS accurately decomposes the intrinsic and reflex torques and provides accurate estimates of physiologically meaningful parameters. SDSS will be a valuable tool for studying joint stiffness under functionally important conditions. It has important clinical implications for the diagnosis, assessment, objective quantification, and monitoring of neuromuscular diseases that change the muscle tone.
ICE-COLA: fast simulations for weak lensing observables
NASA Astrophysics Data System (ADS)
Izard, Albert; Fosalba, Pablo; Crocce, Martin
2018-01-01
Approximate methods to full N-body simulations provide a fast and accurate solution to the development of mock catalogues for the modelling of galaxy clustering observables. In this paper we extend ICE-COLA, based on an optimized implementation of the approximate COLA method, to produce weak lensing maps and halo catalogues in the light-cone using an integrated and self-consistent approach. We show that despite the approximate dynamics, the catalogues thus produced enable an accurate modelling of weak lensing observables one decade beyond the characteristic scale where the growth becomes non-linear. In particular, we compare ICE-COLA to the MICE Grand Challenge N-body simulation for some fiducial cases representative of upcoming surveys and find that, for sources at redshift z = 1, their convergence power spectra agree to within 1 per cent up to high multipoles (i.e. of order 1000). The corresponding shear two point functions, ξ+ and ξ-, yield similar accuracy down to 2 and 20 arcmin respectively, while tangential shear around a z = 0.5 lens sample is accurate down to 4 arcmin. We show that such accuracy is stable against an increased angular resolution of the weak lensing maps. Hence, this opens the possibility of using approximate methods for the joint modelling of galaxy clustering and weak lensing observables and their covariance in ongoing and future galaxy surveys.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwon, Deukwoo; Little, Mark P.; Miller, Donald L.
Purpose: To determine more accurate regression formulas for estimating peak skin dose (PSD) from reference air kerma (RAK) or kerma-area product (KAP). Methods: After grouping the data from 21 procedures into 13 clinically similar groups, assessments were made of optimal clustering using the Bayesian information criterion to obtain the optimal linear regressions of (log-transformed) PSD vs RAK, PSD vs KAP, and PSD vs RAK and KAP. Results: Three clusters of clinical groups were optimal in the regression of PSD vs RAK, seven clusters were optimal in the regression of PSD vs KAP, and six clusters were optimal in the regression of PSD vs RAK and KAP. Prediction of PSD using both RAK and KAP is significantly better than prediction of PSD with either RAK or KAP alone. The regression of PSD vs RAK provided better predictions of PSD than the regression of PSD vs KAP. The partial-pooling (clustered) method yields smaller mean squared errors compared with the complete-pooling method. Conclusion: PSD distributions for interventional radiology procedures are log-normal. Estimates of PSD derived from RAK and KAP jointly are most accurate, followed closely by estimates derived from RAK alone. Estimates of PSD derived from KAP alone are the least accurate. Using a stochastic search approach, it is possible to cluster together certain dissimilar types of procedures to minimize the total error sum of squares.
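The regression structure described (log-normal PSD, linear in log RAK and log KAP) can be sketched as an ordinary least-squares fit in log space. All numbers below are synthetic; the paper's clustered, partial-pooling estimates are not reproduced:

```python
# Hedged sketch of the joint regression form in the abstract: log(PSD) modeled
# linearly in log(RAK) and log(KAP). Data and coefficients are synthetic.
import numpy as np

rng = np.random.default_rng(0)
rak = rng.lognormal(mean=0.0, sigma=0.5, size=100)   # reference air kerma (Gy)
kap = rng.lognormal(mean=3.0, sigma=0.6, size=100)   # kerma-area product (Gy cm^2)
psd = np.exp(0.2 + 0.8 * np.log(rak) + 0.1 * np.log(kap)
             + rng.normal(scale=0.1, size=100))      # synthetic peak skin dose

A = np.column_stack([np.ones_like(rak), np.log(rak), np.log(kap)])
beta, *_ = np.linalg.lstsq(A, np.log(psd), rcond=None)
print("log PSD = %.2f + %.2f log RAK + %.2f log KAP" % tuple(beta))
```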
Viles, C L; Sieracki, M E
1992-01-01
Accurate measurement of the biomass and size distribution of picoplankton cells (0.2 to 2.0 microns) is paramount in characterizing their contribution to the oceanic food web and global biogeochemical cycling. Image-analyzed fluorescence microscopy, usually based on video camera technology, allows detailed measurements of individual cells to be taken. The application of an imaging system employing a cooled, slow-scan charge-coupled device (CCD) camera to automated counting and sizing of individual picoplankton cells from natural marine samples is described. A slow-scan CCD-based camera was compared to a video camera and was superior for detecting and sizing very small, dim particles such as fluorochrome-stained bacteria. Several edge detection methods for accurately measuring picoplankton cells were evaluated. Standard fluorescent microspheres and a Sargasso Sea surface water picoplankton population were used in the evaluation. Global thresholding was inappropriate for these samples. Methods used previously in image analysis of nanoplankton cells (2 to 20 microns) also did not work well with the smaller picoplankton cells. A method combining an edge detector and an adaptive edge strength operator worked best for rapidly generating accurate cell sizes. A complete sample analysis of more than 1,000 cells averages about 50 min and yields size, shape, and fluorescence data for each cell. With this system, the entire size range of picoplankton can be counted and measured. PMID:1610183
Particle sizing by weighted measurements of scattered light
NASA Technical Reports Server (NTRS)
Buchele, Donald R.
1988-01-01
A description is given of a measurement method, applicable to a polydispersion of particles, in which the intensity of scattered light at any angle is weighted by a factor proportional to that angle. Determination is then made of four angles at which the weighted intensity is four fractions of the maximum intensity. These yield four characteristic diameters: the volume/area mean diameter (D32, the Sauter mean) and the volume/diameter mean (D31), and the diameters at cumulative volume fractions of 0.5 (Dv0.5, the volume median) and 0.75 (Dv0.75). They also yield the volume dispersion of diameters. Mie scattering computations show that an average diameter less than three micrometers cannot be accurately measured. The results are relatively insensitive to extraneous background light and to the nature of the diameter distribution. Also described is an experimental method of verifying the conclusions by using two microscope slides coated with polystyrene microspheres to simulate the particles and the background.
Gillette, William K; Esposito, Dominic; Abreu Blanco, Maria; Alexander, Patrick; Bindu, Lakshman; Bittner, Cammi; Chertov, Oleg; Frank, Peter H; Grose, Carissa; Jones, Jane E; Meng, Zhaojing; Perkins, Shelley; Van, Que; Ghirlando, Rodolfo; Fivash, Matthew; Nissley, Dwight V; McCormick, Frank; Holderfield, Matthew; Stephen, Andrew G
2015-11-02
Prenylated proteins play key roles in several human diseases including cancer, atherosclerosis and Alzheimer's disease. KRAS4b, which is frequently mutated in pancreatic, colon and lung cancers, is processed by farnesylation, proteolytic cleavage and carboxymethylation at the C-terminus. Plasma membrane localization of KRAS4b requires this processing as does KRAS4b-dependent RAF kinase activation. Previous attempts to produce modified KRAS have relied on protein engineering approaches or in vitro farnesylation of bacterially expressed KRAS protein. The proteins produced by these methods do not accurately replicate the mature KRAS protein found in mammalian cells and the protein yield is typically low. We describe a protocol that yields 5-10 mg/L highly purified, farnesylated, and methylated KRAS4b from insect cells. Farnesylated and methylated KRAS4b is fully active in hydrolyzing GTP, binds RAF-RBD on lipid Nanodiscs and interacts with the known farnesyl-binding protein PDEδ.
NASA Astrophysics Data System (ADS)
Kessedjian, G.; Chebboubi, A.; Faust, H.; Köster, U.; Materna, T.; Sage, C.; Serot, O.
2013-03-01
The accurate knowledge of the fission of actinides is necessary for studies of innovative nuclear reactor concepts. The fission yields have a direct influence on the evaluation of the fuel inventory or the reactor residual power after shutdown. A collaboration between the ILL, LPSC and CEA has developed a measurement program on fission fragment distributions at the ILL in order to measure isotopic and isomeric yields. The method is illustrated using the 233U(n,f)98Y reaction. However, the beam extracted from the Lohengrin spectrometer is not purely isobaric, which limits low-yield measurements. Presently, the coupling of the Lohengrin spectrometer with a Gas Filled Magnet (GFM) is being studied at the ILL in order to define and validate the enhanced purification of the extracted beam. This work presents the results of the spectrometer characterisation, along with a comparison with a dedicated Monte Carlo simulation developed especially for this purpose.
Gillette, William K.; Esposito, Dominic; Abreu Blanco, Maria; Alexander, Patrick; Bindu, Lakshman; Bittner, Cammi; Chertov, Oleg; Frank, Peter H.; Grose, Carissa; Jones, Jane E.; Meng, Zhaojing; Perkins, Shelley; Van, Que; Ghirlando, Rodolfo; Fivash, Matthew; Nissley, Dwight V.; McCormick, Frank; Holderfield, Matthew; Stephen, Andrew G.
2015-01-01
Prenylated proteins play key roles in several human diseases including cancer, atherosclerosis and Alzheimer’s disease. KRAS4b, which is frequently mutated in pancreatic, colon and lung cancers, is processed by farnesylation, proteolytic cleavage and carboxymethylation at the C-terminus. Plasma membrane localization of KRAS4b requires this processing as does KRAS4b-dependent RAF kinase activation. Previous attempts to produce modified KRAS have relied on protein engineering approaches or in vitro farnesylation of bacterially expressed KRAS protein. The proteins produced by these methods do not accurately replicate the mature KRAS protein found in mammalian cells and the protein yield is typically low. We describe a protocol that yields 5–10 mg/L highly purified, farnesylated, and methylated KRAS4b from insect cells. Farnesylated and methylated KRAS4b is fully active in hydrolyzing GTP, binds RAF-RBD on lipid Nanodiscs and interacts with the known farnesyl-binding protein PDEδ. PMID:26522388
Liu, Xiaojun; Ferguson, Richard B.; Zheng, Hengbiao; Cao, Qiang; Tian, Yongchao; Cao, Weixing; Zhu, Yan
2017-01-01
The successful development of an optimal canopy vegetation index dynamic model for obtaining higher yield can offer a technical approach for real-time and nondestructive diagnosis of rice (Oryza sativa L.) growth and nitrogen (N) nutrition status. In this study, multiple rice cultivars and N treatments of experimental plots were carried out to obtain: normalized difference vegetation index (NDVI), leaf area index (LAI), above-ground dry matter (DM), and grain yield (GY) data. The quantitative relationships between NDVI and these growth indices (e.g., LAI, DM and GY) were analyzed, showing positive correlations. Using the normalized modeling method, an appropriate NDVI simulation model of rice was established based on the normalized NDVI (RNDVI) and relative accumulative growing degree days (RAGDD). The NDVI dynamic model for high-yield production in rice can be expressed by a double logistic model: RNDVI = (1 + e^(−15.2829 × (RAGDD_i − 0.1944)))^(−1) − (1 + e^(−11.6517 × (RAGDD_i − 1.0267)))^(−1) (R² = 0.8577**), which can be used to accurately predict canopy NDVI dynamic changes during the entire growth period. Considering variation among rice cultivars, we constructed two relative NDVI (RNDVI) dynamic models for Japonica and Indica rice types, with R² reaching 0.8764** and 0.8874**, respectively. Furthermore, independent experimental data were used to validate the RNDVI dynamic models. The results showed that during the entire growth period, the accuracy (k), precision (R²), and standard deviation of the RNDVI dynamic models for the Japonica and Indica cultivars were 0.9991, 1.0170; 0.9084**, 0.8030**; and 0.0232, 0.0170, respectively. These results indicated that RNDVI dynamic models could accurately reflect crop growth and predict dynamic changes in high-yield crop populations, providing a rapid approach for monitoring rice growth status. PMID:28338637
Liu, Xiaojun; Ferguson, Richard B; Zheng, Hengbiao; Cao, Qiang; Tian, Yongchao; Cao, Weixing; Zhu, Yan
2017-03-24
The successful development of an optimal canopy vegetation index dynamic model for obtaining higher yield can offer a technical approach for real-time and nondestructive diagnosis of rice (Oryza sativa L.) growth and nitrogen (N) nutrition status. In this study, multiple rice cultivars and N treatments of experimental plots were carried out to obtain: normalized difference vegetation index (NDVI), leaf area index (LAI), above-ground dry matter (DM), and grain yield (GY) data. The quantitative relationships between NDVI and these growth indices (e.g., LAI, DM and GY) were analyzed, showing positive correlations. Using the normalized modeling method, an appropriate NDVI simulation model of rice was established based on the normalized NDVI (RNDVI) and relative accumulative growing degree days (RAGDD). The NDVI dynamic model for high-yield production in rice can be expressed by a double logistic model: RNDVI = (1 + e^(−15.2829 × (RAGDD_i − 0.1944)))^(−1) − (1 + e^(−11.6517 × (RAGDD_i − 1.0267)))^(−1) (R² = 0.8577**), which can be used to accurately predict canopy NDVI dynamic changes during the entire growth period. Considering variation among rice cultivars, we constructed two relative NDVI (RNDVI) dynamic models for Japonica and Indica rice types, with R² reaching 0.8764** and 0.8874**, respectively. Furthermore, independent experimental data were used to validate the RNDVI dynamic models. The results showed that during the entire growth period, the accuracy (k), precision (R²), and standard deviation of the RNDVI dynamic models for the Japonica and Indica cultivars were 0.9991, 1.0170; 0.9084**, 0.8030**; and 0.0232, 0.0170, respectively. These results indicated that RNDVI dynamic models could accurately reflect crop growth and predict dynamic changes in high-yield crop populations, providing a rapid approach for monitoring rice growth status.
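Since the abstract gives the fitted coefficients, the double-logistic RNDVI model can be transcribed directly; the function below is that transcription (the coefficient values come from the abstract, the code itself is ours):

```python
# The paper's fitted double-logistic model for relative NDVI as a function of
# relative accumulated growing degree days (coefficients as reported in the
# abstract; the function form is a direct transcription).
import numpy as np

def rndvi(ragdd):
    """Relative NDVI over the season; ragdd is relative accumulated GDD."""
    rise = 1.0 / (1.0 + np.exp(-15.2829 * (ragdd - 0.1944)))
    fall = 1.0 / (1.0 + np.exp(-11.6517 * (ragdd - 1.0267)))
    return rise - fall

print(rndvi(np.array([0.2, 0.6, 1.0])))  # low early, peak mid-season, declining late
```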
Goldberg, Tony L; Gillespie, Thomas R; Singer, Randall S
2006-09-01
Repetitive-element PCR (rep-PCR) is a method for genotyping bacteria based on the selective amplification of repetitive genetic elements dispersed throughout bacterial chromosomes. The method has great potential for large-scale epidemiological studies because of its speed and simplicity; however, objective guidelines for inferring relationships among bacterial isolates from rep-PCR data are lacking. We used multilocus sequence typing (MLST) as a "gold standard" to optimize the analytical parameters for inferring relationships among Escherichia coli isolates from rep-PCR data. We chose 12 isolates from a large database to represent a wide range of pairwise genetic distances, based on the initial evaluation of their rep-PCR fingerprints. We conducted MLST with these same isolates and systematically varied the analytical parameters to maximize the correspondence between the relationships inferred from rep-PCR and those inferred from MLST. Methods that compared the shapes of densitometric profiles ("curve-based" methods) yielded consistently higher correspondence values between data types than did methods that calculated indices of similarity based on shared and different bands (maximum correspondences of 84.5% and 80.3%, respectively). Curve-based methods were also markedly more robust in accommodating variations in user-specified analytical parameter values than were "band-sharing coefficient" methods, and they enhanced the reproducibility of rep-PCR. Phylogenetic analyses of rep-PCR data yielded trees with high topological correspondence to trees based on MLST and high statistical support for major clades. These results indicate that rep-PCR yields accurate information for inferring relationships among E. coli isolates and that accuracy can be enhanced with the use of analytical methods that consider the shapes of densitometric profiles.
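The distinction the study draws between curve-based and band-sharing similarity can be made concrete in a few lines: correlate whole densitometric traces, versus a Dice coefficient on called bands. These are generic formulations under synthetic inputs, not the specific coefficients implemented in commercial fingerprinting software:

```python
# Sketch contrasting the two similarity families compared in the study: a
# curve-based coefficient (correlation of whole densitometric traces) versus a
# band-sharing (Dice) coefficient on called bands. Inputs are stand-ins for
# gel-lane densitometry.
import numpy as np

def curve_similarity(trace_a, trace_b):
    """Curve-based: Pearson correlation of the full intensity profiles."""
    return np.corrcoef(trace_a, trace_b)[0, 1]

def dice_similarity(bands_a, bands_b):
    """Band-sharing: 2 * shared / (total bands), on boolean band calls."""
    shared = np.sum(bands_a & bands_b)
    return 2.0 * shared / (np.sum(bands_a) + np.sum(bands_b))
```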
NASA Astrophysics Data System (ADS)
Ketcha, M. D.; De Silva, T.; Uneri, A.; Jacobson, M. W.; Goerres, J.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.
2017-06-01
A multi-stage image-based 3D-2D registration method is presented that maps annotations in a 3D image (e.g. point labels annotating individual vertebrae in preoperative CT) to an intraoperative radiograph in which the patient has undergone non-rigid anatomical deformation due to changes in patient positioning or due to the intervention itself. The proposed method (termed msLevelCheck) extends a previous rigid registration solution (LevelCheck) to provide an accurate mapping of vertebral labels in the presence of spinal deformation. The method employs a multi-stage series of rigid 3D-2D registrations performed on sets of automatically determined and increasingly localized sub-images, with the final stage achieving a rigid mapping for each label to yield a locally rigid yet globally deformable solution. The method was evaluated first in a phantom study in which a CT image of the spine was acquired followed by a series of 7 mobile radiographs with increasing degree of deformation applied. Second, the method was validated using a clinical data set of patients exhibiting strong spinal deformation during thoracolumbar spine surgery. Registration accuracy was assessed using projection distance error (PDE) and failure rate (PDE > 20 mm—i.e. label registered outside vertebra). The msLevelCheck method was able to register all vertebrae accurately for all cases of deformation in the phantom study, improving the maximum PDE of the rigid method from 22.4 mm to 3.9 mm. The clinical study demonstrated the feasibility of the approach in real patient data by accurately registering all vertebral labels in each case, eliminating all instances of failure encountered in the conventional rigid method. The multi-stage approach demonstrated accurate mapping of vertebral labels in the presence of strong spinal deformation. The msLevelCheck method maintains other advantageous aspects of the original LevelCheck method (e.g. compatibility with standard clinical workflow, large capture range, and robustness against mismatch in image content) and extends capability to cases exhibiting strong changes in spinal curvature.
NASA Astrophysics Data System (ADS)
Saito, Toru; Nishihara, Satomichi; Yamanaka, Shusuke; Kitagawa, Yasutaka; Kawakami, Takashi; Okumura, Mitsutaka; Yamaguchi, Kizashi
2010-10-01
Mukherjee's type of multireference coupled-cluster (MkMRCC), approximate spin-projected spin-unrestricted CC (APUCC), and AP spin-unrestricted Brueckner's (APUBD) methods were applied to didehydronated ethylene, allyl cation, cis-butadiene, and naphthalene. The focus is on descriptions of magnetic properties of these diradical species, such as S-T gaps and diradical characters. Several types of orbital sets were examined as reference orbitals for the MkMRCC calculations, and it was found that changing the orbital set does not significantly affect the computational results for these species. Comparison of the MkMRCC results with the APUCC and APUBD results shows that these two types of methods yield similar results, indicating that the quantum spin corrected UCC and UBD methods can effectively account for both the nondynamical and dynamical correlation effects that are covered by the MkMRCC methods. It was also shown that appropriately parameterized hybrid density functional theory with AP corrections (APUDFT) yielded very accurate data that qualitatively agree with those of the MRCC and APUBD methods. This hierarchy of methods, MRCC, APUCC, and APUDFT, is expected to constitute a series of standard ab initio approaches toward radical systems, from which one can choose depending on the size of the system and the required accuracy.
A singular-value method for reconstruction of nonradial and lossy objects.
Jiang, Wei; Astheimer, Jeffrey; Waag, Robert
2012-03-01
Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.
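The core numerical device, a reduced-rank representation of the scattering operator via singular-value decomposition, can be sketched generically. The toy matrix below stands in for a discretized scattering operator; the paper's focusing and local-segregation steps are not reproduced:

```python
# Minimal sketch of the core numerical step: a reduced-rank representation of a
# scattering operator via truncated SVD, usable where an orthonormal
# eigenfunction decomposition does not exist (nonradial, lossy objects).
import numpy as np

def reduced_rank(S, rank):
    """Best rank-`rank` approximation of operator matrix S (Eckart-Young)."""
    U, s, Vh = np.linalg.svd(S, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vh[:rank, :]

# Toy stand-in for a discretized scattering operator with decaying spectrum.
S = np.random.default_rng(0).normal(size=(64, 64)) @ np.diag(0.9 ** np.arange(64))
S_lo = reduced_rank(S, rank=12)  # retains the dominant singular subspace
```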
Jin, Jae Hwa; Kim, Junho; Lee, Jeong-Yil; Oh, Young Min
2016-07-22
One of the main interests in petroleum geology and reservoir engineering is to quantify the porosity of reservoir beds as accurately as possible. A variety of direct measurements, including methods of mercury intrusion, helium injection and petrographic image analysis, have been developed; however, their application frequently yields equivocal results because these methods differ in theoretical basis, means of measurement, and causes of measurement error. Here, we present a set of porosities measured in Berea Sandstone samples by multiple methods, in particular with the adoption of a new method using computed tomography and reference samples. The multiple porosimetric data show a marked correlation among the different methods, suggesting that these methods are compatible with each other. The new method of reference-sample-guided computed tomography is more effective than the previous methods when accompanying merits such as experimental convenience are taken into account.
Jin, Jae Hwa; Kim, Junho; Lee, Jeong-Yil; Oh, Young Min
2016-01-01
One of the main interests in petroleum geology and reservoir engineering is to quantify the porosity of reservoir beds as accurately as possible. A variety of direct measurements, including methods of mercury intrusion, helium injection and petrographic image analysis, have been developed; however, their application frequently yields equivocal results because these methods differ in theoretical basis, means of measurement, and causes of measurement error. Here, we present a set of porosities measured in Berea Sandstone samples by multiple methods, in particular with the adoption of a new method using computed tomography and reference samples. The multiple porosimetric data show a marked correlation among the different methods, suggesting that these methods are compatible with each other. The new method of reference-sample-guided computed tomography is more effective than the previous methods when accompanying merits such as experimental convenience are taken into account. PMID:27445105
Li, Fangbing; Wang, Hui; Xin, Huaxia; Cai, Jianfeng; Fu, Qing; Jin, Yu
2016-12-01
Purified standards of xylooligosaccharides (XOSs; DP2-6) were first prepared from a mixture of XOSs using solid phase extraction (SPE) followed by semi-preparative liquid chromatography, both under hydrophilic interaction liquid chromatography (HILIC) conditions. Then, an accurate quantitative analysis method based on hydrophilic interaction liquid chromatography-evaporative light scattering detection (HILIC-ELSD) was developed and validated for simultaneous determination of xylose (X1), xylobiose (X2), xylotriose (X3), xylotetraose (X4), xylopentaose (X5), and xylohexaose (X6). This developed HILIC-ELSD method was applied to the comparison of different hydrolysis methods for xylans and the assessment of XOSs contents from different agricultural wastes. The results indicated that enzymatic hydrolysis was preferable, with fewer by-products and a high XOSs yield. The XOSs yield (48.40%) from sugarcane bagasse xylan was the highest, showing conversions of 11.21 g X2, 12.75 g X3, 4.54 g X4, 13.31 g X5, and 6.78 g X6 from 100 g xylan. Copyright © 2016 Elsevier Ltd. All rights reserved.
Identification of saline soils with multi-year remote sensing of crop yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lobell, D; Ortiz-Monasterio, I; Gurrola, F C
2006-10-17
Soil salinity is an important constraint to agricultural sustainability, but accurate information on its variation across agricultural regions or its impact on regional crop productivity remains sparse. We evaluated the relationships between remotely sensed wheat yields and salinity in an irrigation district in the Colorado River Delta Region. The goals of this study were to (1) document the relative importance of salinity as a constraint to regional wheat production and (2) develop techniques to accurately identify saline fields. Estimates of wheat yield from six years of Landsat data agreed well with ground-based records on individual fields (R² = 0.65). Salinity measurements on 122 randomly selected fields revealed that average 0-60 cm salinity levels > 4 dS m⁻¹ reduced wheat yields, but the relative scarcity of such fields resulted in less than 1% regional yield loss attributable to salinity. Moreover, low yield was not a reliable indicator of high salinity, because many other factors contributed to yield variability in individual years. However, temporal analysis of yield images showed a significant fraction of fields exhibited consistently low yields over the six year period. A subsequent survey of 60 additional fields, half of which were consistently low yielding, revealed that this targeted subset had significantly higher salinity at 30-60 cm depth than the control group (p = 0.02). These results suggest that high subsurface salinity is associated with consistently low yields in this region, and that multi-year yield maps derived from remote sensing therefore provide an opportunity to map salinity across agricultural regions.
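The targeting logic in the last step (flag fields that are low-yielding in every year, since one bad year is an unreliable salinity signal) reduces to a simple mask over a fields-by-years yield matrix. A sketch with synthetic yields standing in for the Landsat-derived estimates:

```python
# Sketch of the targeting logic in the abstract: flag fields that are
# consistently low-yielding across years, since a single low-yield year is an
# unreliable salinity indicator. The yield matrix is a synthetic stand-in for
# Landsat-derived per-field estimates.
import numpy as np

rng = np.random.default_rng(0)
yields = rng.normal(loc=5.0, scale=1.0, size=(500, 6))  # fields x years (t/ha)

low_each_year = yields < np.percentile(yields, 25, axis=0)  # bottom quartile per year
consistently_low = low_each_year.all(axis=1)  # low in every one of the 6 years
print(f"{consistently_low.sum()} fields flagged for salinity survey")
```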
Representing winter wheat in the Community Land Model (version 4.5)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Yaqiong; Williams, Ian N.; Bagley, Justin E.
Winter wheat is a staple crop for global food security, and is the dominant vegetation cover for a significant fraction of Earth's croplands. As such, it plays an important role in carbon cycling and land–atmosphere interactions in these key regions. Accurate simulation of winter wheat growth is not only crucial for future yield prediction under a changing climate, but also for accurately predicting the energy and water cycles for winter wheat dominated regions. We modified the winter wheat model in the Community Land Model (CLM) to better simulate winter wheat leaf area index, latent heat flux, net ecosystem exchange of CO2, and grain yield. These included schemes to represent vernalization as well as frost tolerance and damage. We calibrated three key parameters (minimum planting temperature, maximum crop growth days, and initial value of leaf carbon allocation coefficient) and modified the grain carbon allocation algorithm for simulations at the US Southern Great Plains ARM site (US-ARM), and validated the model performance at eight additional sites across North America. We found that the new winter wheat model improved the prediction of monthly variation in leaf area index, reduced latent heat flux, and net ecosystem exchange root mean square error (RMSE) by 41 and 35 % during the spring growing season. The model accurately simulated the interannual variation in yield at the US-ARM site, but underestimated yield at sites and in regions (northwestern and southeastern US) with historically greater yields by 35 %.
Representing winter wheat in the Community Land Model (version 4.5)
NASA Astrophysics Data System (ADS)
Lu, Yaqiong; Williams, Ian N.; Bagley, Justin E.; Torn, Margaret S.; Kueppers, Lara M.
2017-05-01
Winter wheat is a staple crop for global food security, and is the dominant vegetation cover for a significant fraction of Earth's croplands. As such, it plays an important role in carbon cycling and land-atmosphere interactions in these key regions. Accurate simulation of winter wheat growth is not only crucial for future yield prediction under a changing climate, but also for accurately predicting the energy and water cycles of winter wheat-dominated regions. We modified the winter wheat model in the Community Land Model (CLM) to better simulate winter wheat leaf area index, latent heat flux, net ecosystem exchange of CO2, and grain yield. These modifications included schemes to represent vernalization as well as frost tolerance and damage. We calibrated three key parameters (minimum planting temperature, maximum crop growth days, and initial value of the leaf carbon allocation coefficient) and modified the grain carbon allocation algorithm for simulations at the US Southern Great Plains ARM site (US-ARM), and validated the model performance at eight additional sites across North America. We found that the new winter wheat model improved the prediction of monthly variation in leaf area index and reduced the root mean square error (RMSE) of latent heat flux and net ecosystem exchange by 41% and 35%, respectively, during the spring growing season. The model accurately simulated the interannual variation in yield at the US-ARM site, but underestimated yield by 35% at sites and in regions (northwestern and southeastern US) with historically greater yields.
Valiev, R R; Cherepanov, V N; Baryshnikov, G V; Sundholm, D
2018-02-28
A method for calculating the rate constants for internal-conversion (k_IC) and intersystem-crossing (k_ISC) processes within the adiabatic and Franck-Condon (FC) approximations is proposed. The applicability of the method is demonstrated by calculation of k_IC and k_ISC for a set of organic and organometallic compounds with experimentally known spectroscopic properties. The studied molecules were pyrromethene-567 dye, psoralene, hetero[8]circulenes, free-base porphyrin, naphthalene, and larger polyacenes. We also studied fac-Alq3 and fac-Ir(ppy)3, which are important molecules in organic light emitting diodes (OLEDs). The excitation energies were calculated at the multi-configuration quasi-degenerate second-order perturbation theory (XMC-QDPT2) level, which is found to yield excitation energies in good agreement with experimental data. Spin-orbit coupling matrix elements, non-adiabatic coupling matrix elements, Huang-Rhys factors, and vibrational energies were calculated at the time-dependent density functional theory (TDDFT) and complete active space self-consistent field (CASSCF) levels. The computed fluorescence quantum yields for the pyrromethene-567 dye, psoralene, hetero[8]circulenes, fac-Alq3 and fac-Ir(ppy)3 agree well with experimental data, whereas for the free-base porphyrin, naphthalene, and the polyacenes, the obtained quantum yields significantly differ from the experimental values, because the FC and adiabatic approximations are not accurate for these molecules.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bacvarov, D.C.
1981-01-01
A new method for probabilistic risk assessment of transmission line insulation flashovers caused by lightning strokes is presented. The approach of applying the finite element method to probabilistic risk assessment is demonstrated to be very powerful, for two reasons. First, the finite element method is inherently suitable for analysis of three-dimensional spaces in which the parameters, such as trivariate probability densities of the lightning currents, are non-uniformly distributed. Second, the finite element method permits non-uniform discretization of the three-dimensional probability spaces, yielding high accuracy in critical regions, such as the area of the low-probability events, while at the same time maintaining coarse discretization in the non-critical areas to keep the number of grid points and the size of the problem at a manageable level. The finite element probabilistic risk assessment method presented here is based on a new multidimensional search algorithm. It utilizes an efficient iterative technique for finite element interpolation of the transmission line insulation flashover criteria computed with an electromagnetic transients program. Compared to other available methods, the new finite element probabilistic risk assessment method is significantly more accurate and approximately two orders of magnitude more computationally efficient. The method is especially suited for accurate assessment of rare, very low probability events.
Exact exchange-correlation potentials of singlet two-electron systems
NASA Astrophysics Data System (ADS)
Ryabinkin, Ilya G.; Ospadov, Egor; Staroverov, Viktor N.
2017-10-01
We suggest a non-iterative analytic method for constructing the exchange-correlation potential, v_XC(r), of any singlet ground-state two-electron system. The method is based on a convenient formula for v_XC(r) in terms of quantities determined only by the system's electronic wave function, exact or approximate, and is essentially different from the Kohn-Sham inversion technique. When applied to Gaussian-basis-set wave functions, the method yields finite-basis-set approximations to the corresponding basis-set-limit v_XC(r), whereas the Kohn-Sham inversion produces physically inappropriate (oscillatory and divergent) potentials. The effectiveness of the procedure is demonstrated by computing accurate exchange-correlation potentials of several two-electron systems (helium isoelectronic series, H2, H3+) using common ab initio methods and Gaussian basis sets.
Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Tilton, James C.
2012-01-01
A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machines classification is applied. Then, at each iteration the two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and classification probabilities are recomputed. An important contribution of this work is the estimation of the DC between regions as a function of statistical, classification, and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results than previously proposed methods.
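The merging loop at the heart of such hierarchical optimization is easy to sketch. Below, a toy greedy merge over a handful of regions, each carrying a mean spectrum and an area; the dissimilarity combines a spectral distance with a mild area term, loosely echoing the DC described above (the weighting, the stopping rule, and the neglect of spatial adjacency are simplifying assumptions, not the published criterion):

    import numpy as np

    rng = np.random.default_rng(1)
    regions = [{"mean": rng.normal(size=4), "area": 1} for _ in range(6)]

    def dc(a, b):
        # Statistical term (spectral distance) plus a penalty on large areas.
        return np.linalg.norm(a["mean"] - b["mean"]) + 0.01 * (a["area"] + b["area"])

    while len(regions) > 2:
        # Merge the pair of regions with the smallest dissimilarity criterion.
        pairs = [(i, j) for i in range(len(regions)) for j in range(i + 1, len(regions))]
        i, j = min(pairs, key=lambda p: dc(regions[p[0]], regions[p[1]]))
        a, b = regions[i], regions[j]
        w = a["area"] + b["area"]
        merged = {"mean": (a["mean"] * a["area"] + b["mean"] * b["area"]) / w, "area": w}
        regions = [r for k, r in enumerate(regions) if k not in (i, j)] + [merged]

    print(len(regions), "regions remain")

In the method itself, only spatially neighboring regions are candidates for merging, and the classification probabilities are recomputed after each merge.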
Smiga, Szymon; Fabiano, Eduardo
2017-11-15
We have developed a simplified coupled cluster (SCC) methodology, using the basic idea of scaled MP2 methods. The scheme has been applied to the coupled cluster double equations and implemented in three different non-iterative variants. This new method (especially the SCCD[3] variant, which utilizes a spin-resolved formalism) has been found to be very efficient and to yield an accurate approximation of the reference CCD results for both total and interaction energies of different atoms and molecules. Furthermore, we demonstrate that the equations determining the scaling coefficients for the SCCD[3] approach can generate non-empirical SCS-MP2 scaling coefficients which are in good agreement with previous theoretical investigations.
Determination of Material Strengths by Hydraulic Bulge Test.
Wang, Hankui; Xu, Tong; Shou, Binan
2016-12-30
The hydraulic bulge test (HBT) method is proposed for determining material tensile strengths. The basic idea of HBT is similar to the small punch test (SPT) but is inspired by the manufacturing process of rupture discs: high-pressure hydraulic oil is used instead of a punch to cause specimen deformation. Compared with the SPT method, the HBT method avoids several influencing factors, such as punch dimensions, punch material, and the friction between punch and specimen. A calculation procedure based entirely on theoretical derivation is proposed for estimating yield strength and ultimate tensile strength. Both conventional tensile tests and hydraulic bulge tests were carried out for several ferrous alloys, and the results showed that the hydraulic bulge test results are reliable and accurate.
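The abstract does not reproduce the derivation, but the usual starting point for relating bulge pressure to stress is thin-shell membrane theory: at the apex of a pressurized circular diaphragm of current thickness t and curvature radius R, the equibiaxial membrane stress is

    \sigma = \frac{p R}{2 t}

so recording the pressures at which yielding and bursting occur, together with the evolving dome geometry, allows yield and ultimate tensile strengths to be estimated. This is the generic membrane relation, offered for orientation, not necessarily the authors' exact procedure.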
Computer controlled fluorometer device and method of operating same
Kolber, Zbigniew; Falkowski, Paul
1990-01-01
A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means.
Single-step methods for predicting orbital motion considering its periodic components
NASA Astrophysics Data System (ADS)
Lavrov, K. N.
1989-01-01
Modern numerical methods for the integration of ordinary differential equations can provide accurate and universal solutions to celestial mechanics problems. The implicit single-sequence algorithms of Everhart can be combined with multistep computational schemes that use a priori information on periodic components, producing implicit single-sequence algorithms that retain the advantages of both. The construction and properties of such algorithms are studied using trigonometric approximation of the solutions of differential equations containing periodic components. The algorithms require 10 percent more machine memory than the Everhart algorithms but run twice as fast, and they produce short-term predictions with good accuracy over five to ten orbits, five to six times faster than algorithms based on other methods.
X-ray power and yield measurements at the refurbished Z machine
Jones, M. C.; Ampleford, D. J.; Cuneo, M. E.; ...
2014-08-04
Advancements have been made in the diagnostic techniques used to measure the total radiated x-ray yield and power from z-pinch loads at the Z Machine with high accuracy. The Z-accelerator is capable of outputting 2 MJ and 330 TW of x-ray yield and power, and accurately measuring these quantities is imperative. We describe work over the past several years, including the development of new diagnostics, improvements to existing diagnostics, and implementation of automated data analysis routines. A set of experiments was conducted on the Z machine in which the load and machine configuration were held constant. During this shot series, it was observed that the total z-pinch x-ray emission power determined from the two common techniques for inferring x-ray power, the Kimfol-filtered x-ray diode diagnostic and the Total Power and Energy diagnostic, gave 450 TW and 327 TW, respectively. Our analysis shows the latter to be the more accurate interpretation. More broadly, the comparison demonstrates the necessity of considering spectral response and field of view when inferring x-ray powers from z-pinch sources.
Random vs. systematic sampling from administrative databases involving human subjects.
Hagino, C; Lo, R J
1998-09-01
Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes (n = 50, 100, 150, 200, 250, 300, 500, 800). From the profile summaries of the four known factors [gender, average age, number (%) of chiropractors in each province, and years in practice], between- and within-method χ² tests and unpaired t-tests were performed to determine whether any of the differences (descriptively greater than 7% or 7 yr) were also statistically significant. The strengths of the agreements between the provincial distributions were quantified by calculating the percent agreement for each (provincial pairwise-comparison methods). Any percent agreement less than 70% was judged to be unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yielded acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient, it can be recommended for sampling from large databases in which the data are listed without any inherent order biases other than alphabetical listing by surname.
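The two designs are easy to contrast in code. A minimal sketch (the membership list and sample size are synthetic stand-ins for the association database):

    import random

    random.seed(42)
    members = [f"member_{i:05d}" for i in range(40000)]  # alphabetical frame

    n = 250

    # Simple random sampling: every subset of size n is equally likely.
    srs = random.sample(members, n)

    # Systematic sampling: a random start, then every k-th record.
    k = len(members) // n
    start = random.randrange(k)
    ss = members[start::k][:n]

    print(len(srs), len(ss))

Comparing the factor distributions of srs and ss against the full frame reproduces the kind of between-method checks described above.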
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, Corey; Holmes, Joshua; Nibler, Joseph W.
2013-05-16
Combined high-resolution spectroscopic, electron-diffraction, and quantum theoretical methods are particularly advantageous for small molecules of high symmetry and can yield accurate structures that reveal subtle effects of electron delocalization on molecular bonds. The smallest of the radialene compounds, trimethylenecyclopropane, [3]-radialene, has been synthesized and examined in the gas phase by these methods. The first high-resolution infrared spectra have been obtained for this molecule of D3h symmetry, leading to an accurate B0 rotational constant value of 0.1378629(8) cm⁻¹, within 0.5% of the value obtained from electronic structure calculations (density functional theory (DFT) B3LYP/cc-pVTZ). This result is employed in an analysis of electron-diffraction data to obtain the r_z bond lengths (in Å): C-H = 1.072(17), C-C = 1.437(4), and C=C = 1.330(4). The analysis does not lead to an accurate value of the HCH angle; however, from comparisons of theoretical and experimental angles for similar compounds, the theoretical prediction of 117.5° is believed to be reliable to within 2°. The effect of electron delocalization in radialene is to reduce the single C-C bond length by 0.07 Å compared to that in cyclopropane.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swaja, R.E.; Greene, R.T.; Sims, C.S.
1985-04-01
An international intercomparison of nuclear accident dosimetry systems was conducted during September 12-16, 1983, at Oak Ridge National Laboratory (ORNL) using the Health Physics Research Reactor operated in the pulse mode to simulate criticality accidents. This study marked the twentieth in a series of annual accident dosimetry intercomparisons conducted at ORNL. Participants from ten organizations attended this intercomparison and measured neutron and gamma doses at area monitoring stations and on phantoms for three different shield conditions. Results of this study indicate that foil activation techniques are the most popular and accurate method of determining accident-level neutron doses at area monitoring stations. For personnel monitoring, foil activation, blood sodium activation, and thermoluminescent (TL) methods are all capable of providing accurate dose estimates in a variety of radiation fields. All participants in this study used TLDs to determine gamma doses, with very good results on the average. Chemical dosimeters were also shown to be capable of yielding accurate estimates of total neutron plus gamma doses in a variety of radiation fields. While 83% of all neutron measurements satisfied regulatory standards relative to reference values, only 39% of all gamma results satisfied corresponding guidelines for gamma measurements. These results indicate that continued improvement in accident dosimetry evaluation and measurement techniques is needed.
Method and apparatus for automatically tracking a workpiece surface [Patent]
Not Available
1981-02-03
Laser cutting concepts and apparatus have been developed for cutting the shroud of the core fuel subassemblies. However, much care must be taken with the accuracy of the cutting, since the fuel rods within the shroud often become warped and are forced into direct contact with the shroud in random regions. Thus, in order to cut the nuclear fuel rod shroud accurately so as not to puncture the cladding of the fuel rods, and to ensure optimal cutting efficiency and performance, the focal point of the beam must be maintained accurately at the workpiece surface. It is therefore necessary to detect deviations in the level of the workpiece surface accurately during the cutting process. A method and apparatus are disclosed for tracking the surface of a workpiece being cut by a laser beam coming from a focus head assembly, in which two collimated laser beams are directed onto the workpiece surface at spaced points by beam-directing optics in generally parallel planes of incidence. A shift in the spacing between the two points is detected by means of a video camera system and processed by a computer to yield a workpiece surface displacement signal, which is input to a motor that raises or lowers the beam focus head accordingly.
Reducing misfocus-related motion artefacts in laser speckle contrast imaging.
Ringuette, Dene; Sigal, Iliya; Gad, Raanan; Levi, Ofer
2015-01-01
Laser Speckle Contrast Imaging (LSCI) is a flexible, easy-to-implement technique for measuring blood flow speeds in vivo. In order to obtain reliable quantitative data from LSCI, the object must remain in the focal plane of the imaging system for the duration of the measurement session. However, since LSCI suffers from inherent frame-to-frame noise, it often requires a moving average filter to produce quantitative results. This frame-to-frame noise also makes the implementation of a rapid autofocus system challenging. In this work, we demonstrate an autofocus method and system based on a novel measure of misfocus which serves as an accurate and noise-robust feedback mechanism. This measure of misfocus is shown to enable localization of best focus with sub-depth-of-field sensitivity, yielding more accurate estimates of blood flow speeds and blood vessel diameters.
Simple and accurate sum rules for highly relativistic systems
NASA Astrophysics Data System (ADS)
Cohen, Scott M.
2005-03-01
In this paper, I consider the Bethe and Thomas-Reiche-Kuhn sum rules, which together form the foundation of Bethe's theory of energy loss from fast charged particles to matter. For nonrelativistic target systems, the use of closure leads directly to simple expressions for these quantities. In the case of relativistic systems, on the other hand, the calculation of sum rules is fraught with difficulties. Various perturbative approaches have been used over the years to obtain relativistic corrections, but these methods fail badly when the system in question is very strongly bound. Here, I present an approach that leads to relatively simple expressions yielding accurate sums, even for highly relativistic many-electron systems. I also offer an explanation for the difference between relativistic and nonrelativistic sum rules in terms of the Zitterbewegung of the electrons.
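For reference, the nonrelativistic Thomas-Reiche-Kuhn sum rule discussed above states that the dipole oscillator strengths of an N-electron system sum to the electron number (a textbook result, quoted here for orientation rather than taken from the paper):

    \sum_n f_{n0} = N, \qquad f_{n0} = \frac{2m}{\hbar^2}\,(E_n - E_0)\,\Bigl|\langle n|\sum_i z_i|0\rangle\Bigr|^2

It is the breakdown of such closure-based results for strongly bound relativistic systems that the approach described in the abstract is designed to remedy.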
A vector scanning processing technique for pulsed laser velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Edwards, Robert V.
1989-01-01
Pulsed-laser-sheet velocimetry yields two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high-precision (1-percent) velocity estimates, but can require hours of processing time on specialized array processors. Sometimes, however, a less accurate (about 5 percent) data-reduction technique which also gives unambiguous velocity vector information is acceptable. Here, a direct space-domain processing technique is described and shown to be far superior to previous methods in achieving these objectives. It uses a novel data coding and reduction technique and has no 180-deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 min on an 80386-based PC, producing a two-dimensional velocity-vector map of the flowfield. Pulsed-laser velocimetry data can thus be reduced quickly and reasonably accurately, without specialized array processing hardware.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zerouali, K; Aubry, J; Doucet, R
2016-06-15
Purpose: To implement the new EBT-XD Gafchromic films for accurate dosimetric and geometric validation of stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT) CyberKnife (CK) patient-specific QA. Methods: Film calibration was performed using triple-channel film analysis on an Epson 10000XL scanner. Calibration films were irradiated using a Varian Clinac 21EX flattened beam (0 to 20 Gy) to ensure sufficient dose homogeneity. Films were scanned at a resolution of 0.3 mm, 24 hours post-irradiation, following a well-defined protocol. Twelve QA measurements were performed for several types of CK plans: trigeminal neuralgia, brain metastasis, prostate, and lung tumors. A custom-made insert for the CK head phantom was manufactured to yield an accurate measured-to-calculated dose registration. When the high-dose region was large enough, absolute dose was also measured with an ionization chamber. Dose calculation was performed using the MultiPlan Ray-tracing algorithm for all cases, since the phantom is mostly made of near-water-equivalent plastic. Results: Good agreement (<2%) was found between the dose to the chamber and the film when a chamber measurement was possible. The average dose difference and standard deviation between film measurements and TPS calculations were 1.75% and 3%, respectively. The geometric accuracy was estimated to be <1 mm, combining robot positioning uncertainty and registration of the film to the calculated dose. Conclusion: Patient-specific QA measurements using EBT-XD films yielded a full 2D dose plane with high spatial resolution and acceptable dose accuracy. This method is particularly promising for trigeminal neuralgia plan QA, where the positioning of the spatial dose distribution is equally or more important than the absolute delivered dose for achieving clinical goals.
Activity coefficients from molecular simulations using the OPAS method
NASA Astrophysics Data System (ADS)
Kohns, Maximilian; Horsch, Martin; Hasse, Hans
2017-10-01
A method for determining activity coefficients by molecular dynamics simulations is presented. It is an extension of the OPAS (osmotic pressure for the activity of the solvent) method in previous work for studying the solvent activity in electrolyte solutions. That method is extended here to study activities of all components in mixtures of molecular species. As an example, activity coefficients in liquid mixtures of water and methanol are calculated for 298.15 K and 323.15 K at 1 bar using molecular models from the literature. These dense and strongly interacting mixtures pose a significant challenge to existing methods for determining activity coefficients by molecular simulation. It is shown that the new method yields accurate results for the activity coefficients which are in agreement with results obtained with a thermodynamic integration technique. As the partial molar volumes are needed in the proposed method, the molar excess volume of the system water + methanol is also investigated.
NASA Technical Reports Server (NTRS)
Lintilhac, P. M.; Wei, C.; Tanguay, J. J.; Outwater, J. O.
2000-01-01
In this article we describe a new method for the determination of turgor pressures in living plant cells. Based on the treatment of growing plant cells as thin-walled pressure vessels, we find that pressures can be accurately determined by observing and measuring the area of the contact patch formed when a spherical glass probe is lowered onto the cell surface with a known force. Within the limits we have described, we can show that the load (determined by precalibration of the device) divided by the projected area of the contact patch (determined by video microscopy) provides a direct, rapid, and accurate measure of the internal turgor pressure of the cell. We demonstrate, by parallel measurements with the pressure probe, that our method yields pressure data that are consistent with those from the pressure probe. Also, by incubating target tissues in stepped concentrations of mannitol to incrementally reduce the turgor pressure, we show that the pressures measured by tonometry accurately reflect the predicted changes from the osmotic potential of the bathing medium. The advantages of this new method over the pressure probe are considerable, however, in that we can move rapidly from cell to cell, taking measurements every 20 s. In addition, the nondestructive nature of the method means that we can return to the same cell repeatedly for periodic pressure measurements. The limitations of the method lie in the fact that it is suitable only for superficial cells that are directly accessible to the probe and to cells that are relatively thin walled and not heavily decorated with surface features. It is also not suitable for measuring pressures in flaccid cells.
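The measurement principle reduces to a one-line force balance: the applied probe load F is supported by the cell's internal pressure P acting over the projected area A of the contact patch, so

    P = \frac{F}{A} = \frac{F}{\pi r^2}

for a circular patch of radius r. As an illustrative example (numbers assumed, not from the study), a 10 μN load producing a patch of radius 4 μm gives P ≈ 10⁻⁵ / (π × (4 × 10⁻⁶)²) ≈ 0.2 MPa, within the range typical of turgid plant cells.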
Analysis of operator splitting errors for near-limit flame simulations
NASA Astrophysics Data System (ADS)
Lu, Zhen; Zhou, Hua; Li, Shan; Ren, Zhuyin; Lu, Tianfeng; Law, Chung K.
2017-04-01
High-fidelity simulations of ignition, extinction and oscillatory combustion processes are of practical interest in a broad range of combustion applications. Splitting schemes, widely employed in reactive flow simulations, could fail for stiff reaction-diffusion systems exhibiting near-limit flame phenomena. The present work first employs a model perfectly stirred reactor (PSR) problem with an Arrhenius reaction term and a linear mixing term to study the effects of splitting errors on the near-limit combustion phenomena. Analysis shows that the errors induced by decoupling of the fractional steps may result in unphysical extinction or ignition. The analysis is then extended to the prediction of ignition, extinction and oscillatory combustion in unsteady PSRs of various fuel/air mixtures with a 9-species detailed mechanism for hydrogen oxidation and an 88-species skeletal mechanism for n-heptane oxidation, together with a Jacobian-based analysis for the time scales. The tested schemes include the Strang splitting, the balanced splitting, and a newly developed semi-implicit midpoint method. Results show that the semi-implicit midpoint method can accurately reproduce the dynamics of the near-limit flame phenomena and it is second-order accurate over a wide range of time step size. For the extinction and ignition processes, both the balanced splitting and midpoint method can yield accurate predictions, whereas the Strang splitting can lead to significant shifts on the ignition/extinction processes or even unphysical results. With an enriched H radical source in the inflow stream, a delay of the ignition process and the deviation on the equilibrium temperature are observed for the Strang splitting. On the contrary, the midpoint method that solves reaction and diffusion together matches the fully implicit accurate solution. The balanced splitting predicts the temperature rise correctly but with an over-predicted peak. For the sustainable and decaying oscillatory combustion from cool flames, both the Strang splitting and the midpoint method can successfully capture the dynamic behavior, whereas the balanced splitting scheme results in significant errors.
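The splitting construction itself is compact enough to sketch. Below, Strang splitting for a scalar reaction-mixing toy model dT/dt = R(T) + M(T), with an Arrhenius-like source and linear relaxation toward an inflow temperature; the rate parameters are arbitrary toy values, not the paper's configuration, and SciPy's solve_ivp integrates each fractional step:

    import numpy as np
    from scipy.integrate import solve_ivp

    Tin, tau = 300.0, 1e-3           # inflow temperature, mixing time scale
    A, Ta, Tb = 5e4, 8000.0, 2000.0  # toy Arrhenius parameters

    reaction = lambda t, T: A * np.exp(-Ta / T) * (Tb + Tin - T)
    mixing = lambda t, T: (Tin - T) / tau

    def strang_step(T, dt):
        # Half step of mixing, full step of reaction, half step of mixing.
        T = solve_ivp(mixing, (0, dt / 2), [T]).y[0, -1]
        T = solve_ivp(reaction, (0, dt), [T]).y[0, -1]
        T = solve_ivp(mixing, (0, dt / 2), [T]).y[0, -1]
        return T

    T, dt = 1500.0, 2e-4
    for _ in range(50):
        T = strang_step(T, dt)
    print(f"T after 10 ms: {T:.1f} K")

Near ignition or extinction turning points, the decoupling of the fractional steps is exactly where the shifts described above originate; a semi-implicit midpoint method advances reaction and mixing together instead.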
Harris, Adrian L; Ullah, Roshan; Fountain, Michelle T
2017-08-01
Tetranychus urticae is a widespread polyphagous mite found on a variety of fruit crops. Tetranychus urticae feeds on the underside of the leaves, perforating plant cells and sucking out the cell contents. Foliar damage and the excess webbing produced by T. urticae can reduce fruit yield. Assessing T. urticae populations while they are still small provides a reliable and accurate basis for targeting control strategies and recording their efficacy against T. urticae. The aim of this study was to evaluate four methods for extracting low levels of T. urticae from leaf samples representative of developing infestations. These methods, ethanol washing, a modified paraffin/ethanol meniscus technique, Tullgren funnel extraction, and the Henderson and McBurnie mite-brushing machine, were compared to direct counting of mites on leaves under a dissecting microscope, with consideration of accuracy, precision, and simplicity. In addition, two physically different leaf morphologies were compared: glabrous Prunus leaves and setaceous Malus leaves. Ethanol extraction consistently yielded the highest numbers of mites and was the most rapid method for recovering T. urticae from leaf samples, irrespective of leaf structure. In addition, the samples could be processed and stored before final counting. The advantages and disadvantages of each method are discussed in detail.
Flip-avoiding interpolating surface registration for skull reconstruction.
Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye
2018-03-30
Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.
Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models
Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby
2017-01-01
Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibrations methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
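In outline, the sub-model idea can be reproduced with any regression library. A sketch using scikit-learn's PLSRegression on synthetic spectra (the composition split, component count, and blending weight are illustrative assumptions, not the ChemCam calibration):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 50))                # synthetic spectra
    y = 20 + 8 * X[:, 0] + rng.normal(0, 1, 300)  # synthetic oxide wt.%

    # Sub-models trained on restricted composition ranges, plus a full-range model.
    masks = {"low": y < 20, "high": y >= 20, "full": np.ones_like(y, dtype=bool)}
    models = {k: PLSRegression(n_components=5).fit(X[m], y[m]) for k, m in masks.items()}

    def blended_predict(x):
        x = x.reshape(1, -1)
        ref = models["full"].predict(x).item()    # full model picks the regime
        sub = models["low"] if ref < 20 else models["high"]
        w = min(abs(ref - 20) / 5.0, 1.0)         # fade toward the full model near the boundary
        return w * sub.predict(x).item() + (1 - w) * ref

    print(blended_predict(X[0]))

Training each sub-model on a limited composition range keeps the matrix effects within that range roughly constant, which is what lets the blend outperform a single global regression.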
Elastic Moduli of Pyrolytic Boron Nitride Measured Using 3-Point Bending and Ultrasonic Testing
NASA Technical Reports Server (NTRS)
Kaforey, M. L.; Deeb, C. W.; Matthiesen, D. H.; Roth, D. J.
1999-01-01
Three-point bending and ultrasonic testing were performed on a flat plate of PBN. In the bending experiment, the deformation mechanism was believed to be shear between the pyrolytic layers, which yielded a shear modulus, c_44, of 2.60 ± 0.31 GPa. Calculations based on the longitudinal and shear wave velocity measurements yielded values of 0.341 ± 0.006 for Poisson's ratio, 10.34 ± 0.30 GPa for the elastic modulus (c_33), and 3.85 ± 0.02 GPa for the shear modulus (c_44). Since free basal dislocations have been reported to affect the value of c_44 found using ultrasonic methods, the value from the bending experiment was assumed to be the more accurate value.
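The conversion from measured wave speeds to stiffnesses follows the standard elastic-wave relations: with density ρ and longitudinal and shear wave speeds v_L and v_S along the deposition direction,

    c_{33} = \rho\, v_L^2, \qquad c_{44} = \rho\, v_S^2

and the quoted Poisson's ratio is consistent with the isotropic identity ν = E/(2G) − 1 applied to the quoted moduli: 10.34/(2 × 3.85) − 1 ≈ 0.34. These are generic relations, offered for orientation; the abstract does not spell out the authors' exact reduction, and pyrolytic BN is strongly anisotropic.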
Ray Effect Mitigation Through Reference Frame Rotation
Tencer, John
2016-05-01
The discrete ordinates method is a popular and versatile technique for solving the radiative transport equation, a major drawback of which is the presence of ray effects. Mitigation of ray effects can yield significantly more accurate results and enhanced numerical stability for combined-mode codes. Moreover, when ray effects are present, the solution is seen to be highly dependent upon the relative orientation of the geometry and the global reference frame, which is an undesirable property. A novel ray effect mitigation technique is proposed: averaging the computed solution over various reference frame orientations.
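Schematically, the technique wraps any existing solve in a loop over reference-frame orientations. In the sketch below, solve_rte is a hypothetical stand-in for a discrete ordinates solve over a given set of ordinate directions; the real work lies inside it:

    import numpy as np

    def rotate_z(dirs, theta):
        # Rotate unit ordinate directions about the z-axis by angle theta.
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        return dirs @ R.T

    def solve_rte(ordinates):
        # Hypothetical placeholder: a real implementation would sweep the
        # transport equation over these directions and return the solution.
        return np.cos(ordinates[:, 0]).sum()

    ordinates = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                          [-1.0, 0.0, 0.0], [0.0, -1.0, 0.0]])
    angles = np.linspace(0.0, np.pi / 2, 8, endpoint=False)
    solution = np.mean([solve_rte(rotate_z(ordinates, a)) for a in angles])
    print(solution)

Averaging over orientations smooths out the artificial dependence of the solution on how the geometry happens to align with the quadrature set.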
Research on the frequency hopping bistatic sonar system
NASA Astrophysics Data System (ADS)
Liang, Guo-long; Zhang, Yao; Zhang, Guang-pu; Liu, Kai
2011-10-01
A new model for a bistatic sonar system is established, in which frequency hopping (FH) signals are used for target detection according to defined rules. This model decreases the time between adjacent signals and obtains more information per unit time. The receiving system receives and processes the signals at the different frequencies according to the FH pattern in order to detect and locate targets. By exploiting the characteristics of the FH signals, this method helps yield more stable and accurate outputs and improves resistance to detection and to partial-band jamming.
The application of biofilm science to the study and control of chronic bacterial infections
Costerton, William; Veeh, Richard; Shirtliff, Mark; Pasmore, Mark; Post, Christopher; Ehrlich, Garth
2003-01-01
Unequivocal direct observations have established that the bacteria that cause device-related and other chronic infections grow in matrix-enclosed biofilms. The diagnostic and therapeutic strategies that have served us so well in the partial eradication of acute epidemic bacterial diseases have not yielded accurate data or favorable outcomes when applied to these biofilm diseases. We discuss the potential benefits of the application of the new methods and concepts developed by biofilm science and engineering to the clinical management of infectious diseases. PMID:14617746
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reiche, Helmut Matthias; Vogel, Sven C.
New in situ data for the U-C system are presented, with the goal of improving knowledge of the phase diagram to enable production of new ceramic fuels. The non-quenchable cubic δ-phase, which is fundamental to computational methods, was identified. Rich datasets from the formation synthesis of uranium carbide yield kinetics data that allow benchmarking of models, thermodynamic parameters, and related quantities. The order-disorder transition (carbon sublattice melting) was observed, owing to the equal sensitivity of neutrons to both elements. This dynamic has not been accurately described in some recent simulation-based publications.
Methods for analysis of cracks in three-dimensional solids
NASA Technical Reports Server (NTRS)
Raju, I. S.; Newman, J. C., Jr.
1984-01-01
Various analytical and numerical methods used to evaluate the stress intensity factors for cracks in three-dimensional (3-D) solids are reviewed. Classical exact solutions and many of the approximate methods used in 3-D analyses of cracks are reviewed. The exact solutions for embedded elliptic cracks in infinite solids are discussed. The approximate methods reviewed are the finite element methods, the boundary integral equation (BIE) method, the mixed methods (superposition of analytical and finite element method, stress difference method, discretization-error method, alternating method, finite element-alternating method), and the line-spring model. The finite element method with singularity elements is the most widely used method. The BIE method only needs modeling of the surfaces of the solid and so is gaining popularity. The line-spring model appears to be the quickest way to obtain good estimates of the stress intensity factors. The finite element-alternating method appears to yield the most accurate solution at the minimum cost.
34 CFR 300.304 - Evaluation procedures.
Code of Federal Regulations, 2014 CFR
2014-07-01
... of communication and in the form most likely to yield accurate information on what the child knows..., manual, or speaking skills, the assessment results accurately reflect the child's aptitude or achievement... impaired sensory, manual, or speaking skills (unless those skills are the factors that the test purports to...
34 CFR 300.304 - Evaluation procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... or other mode of communication and in the form most likely to yield accurate information on what the... with impaired sensory, manual, or speaking skills, the assessment results accurately reflect the child... reflecting the child's impaired sensory, manual, or speaking skills (unless those skills are the factors that...
34 CFR 300.304 - Evaluation procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... or other mode of communication and in the form most likely to yield accurate information on what the... with impaired sensory, manual, or speaking skills, the assessment results accurately reflect the child... reflecting the child's impaired sensory, manual, or speaking skills (unless those skills are the factors that...
Biaxial Testing of 2219-T87 Aluminum Alloy Using Cruciform Specimens
NASA Technical Reports Server (NTRS)
Dawicke, D. S.; Pollock, W. D.
1997-01-01
A cruciform biaxial test specimen was designed and seven biaxial tensile tests were conducted on 2219-T87 aluminum alloy. An elastic-plastic finite element analysis was used to simulate each test and predict the yield stresses. The elastic-plastic finite element analysis accurately simulated the measured load-strain behavior for each test. The yield stresses predicted by the finite element analyses indicated that the yield behavior of the 2219-T87 aluminum alloy agrees with the von Mises yield criterion.
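For the plane-stress state in the gauge region of a cruciform specimen, the von Mises criterion referenced above reads

    \sigma_1^2 - \sigma_1 \sigma_2 + \sigma_2^2 = \sigma_Y^2

where σ1 and σ2 are the in-plane principal stresses and σY is the uniaxial yield stress. Under equibiaxial loading (σ1 = σ2), for example, the criterion predicts yielding when each in-plane stress reaches σY, which is the kind of prediction the finite element analyses were checked against.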
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nozirov, Farhod; Stachów, Michał; Kupka, Teobald
2014-04-14
A theoretical prediction of nuclear magnetic shieldings and indirect spin-spin coupling constants in 1,1-, cis- and trans-1,2-difluoroethylenes is reported. The results obtained using density functional theory (DFT) combined with large basis sets and gauge-independent atomic orbital calculations were critically compared with experiment and with conventional, higher-level correlated electronic structure methods. Accurate structural, vibrational, and NMR parameters of difluoroethylenes were obtained using several density functionals combined with dedicated basis sets. B3LYP/6-311++G(3df,2pd) optimized structures of difluoroethylenes closely reproduced experimental geometries and earlier reported benchmark coupled cluster results, while BLYP/6-311++G(3df,2pd) produced accurate harmonic vibrational frequencies. The most accurate vibrations were obtained using B3LYP/6-311++G(3df,2pd) with correction for anharmonicity. The Becke half-and-half (BHandH) density functional predicted more accurate ¹⁹F isotropic shieldings, and van Voorhis and Scuseria's τ-dependent gradient-corrected correlation functional yielded better carbon shieldings than B3LYP. A surprisingly good performance of the Hartree-Fock (HF) method in predicting nuclear shieldings in these molecules was observed. Inclusion of the zero-point vibrational correction markedly improved agreement with experiment for nuclear shieldings calculated by the HF, MP2, CCSD, and CCSD(T) methods but worsened the DFT results. A threefold improvement in accuracy when predicting ²J(FF) in 1,1-difluoroethylene was observed for the BHandH density functional compared to B3LYP (the deviations from experiment were −46 vs. −115 Hz).
Construction and application of a new dual-hybrid random phase approximation.
Mezei, Pál D; Csonka, Gábor I; Ruzsinszky, Adrienn; Kállay, Mihály
2015-10-13
The direct random phase approximation (dRPA) combined with Kohn-Sham reference orbitals is among the most promising tools in computational chemistry and applicable in many areas of chemistry and physics. The reason for this is that it scales as N^4 with the system size, which is a considerable advantage over the accurate ab initio wave function methods like standard coupled-cluster. dRPA also yields a considerably more accurate description of thermodynamic and electronic properties than standard density-functional theory methods. It is also able to describe strong static electron correlation effects even in large systems with a small or vanishing band gap missed by common single-reference methods. However, dRPA has several flaws due to its self-correlation error. In order to obtain accurate and precise reaction energies, barriers and noncovalent intra- and intermolecular interactions, we construct a new dual-hybrid dRPA (hybridization of exact and semilocal exchange in both the energy and the orbitals) and test the performance of this new functional on isogyric, isodesmic, hypohomodesmotic, homodesmotic, and hyperhomodesmotic reaction classes. We also use a test set of 14 Diels-Alder reactions, six atomization energies (AE6), 38 hydrocarbon atomization energies, and 100 reaction barrier heights (DBH24, HT-BH38, and NHT-BH38). For noncovalent complexes, we use the NCCE31 and S22 test sets. To test the intramolecular interactions, we use a set of alkane, cysteine, phenylalanine-glycine-glycine tripeptide, and monosaccharide conformers. We also discuss the delocalization and static correlation errors. We show that a universally accurate description of chemical properties can be provided by a large, 75% exact exchange mixing both in the calculation of the reference orbitals and the final energy.
Projection-free approximate balanced truncation of large unstable systems
NASA Astrophysics Data System (ADS)
Flinois, Thibault L. B.; Morgans, Aimee S.; Schmid, Peter J.
2015-08-01
In this article, we show that the projection-free, snapshot-based, balanced truncation method can be applied directly to unstable systems. We prove that even for unstable systems, the unmodified balanced proper orthogonal decomposition algorithm theoretically yields a converged transformation that balances the Gramians (including the unstable subspace). We then apply the method to a spatially developing unstable system and show that it results in reduced-order models of similar quality to the ones obtained with existing methods. Due to the unbounded growth of unstable modes, a practical restriction on the final impulse response simulation time appears, which can be adjusted depending on the desired order of the reduced-order model. Recommendations are given to further reduce the cost of the method if the system is large and to improve the performance of the method if it does not yield acceptable results in its unmodified form. Finally, the method is applied to the linearized flow around a cylinder at Re = 100 to show that it actually is able to accurately reproduce impulse responses for more realistic unstable large-scale systems in practice. The well-established approximate balanced truncation numerical framework therefore can be safely applied to unstable systems without any modifications. Additionally, balanced reduced-order models can readily be obtained even for large systems, where the computational cost of existing methods is prohibitive.
Eike, David M; Maginn, Edward J
2006-04-28
A method recently developed to rigorously determine solid-liquid equilibrium using a free-energy-based analysis has been extended to analyze multiatom molecular systems. This method is based on using a pseudosupercritical transformation path to reversibly transform between solid and liquid phases. Integration along this path yields the free energy difference at a single state point, which can then be used to determine the free energy difference as a function of temperature and therefore locate the coexistence temperature at a fixed pressure. The primary extension reported here is the introduction of an external potential field capable of inducing center of mass order along with secondary orientational order for molecules. The method is used to calculate the melting point of 1-H-1,2,4-triazole and benzene. Despite the fact that the triazole model gives accurate bulk densities for the liquid and crystal phases, it is found to do a poor job of reproducing the experimental crystal structure and heat of fusion. Consequently, it yields a melting point that is 100 K lower than the experimental value. On the other hand, the benzene model has been parametrized extensively to match a wide range of properties and yields a melting point that is only 20 K lower than the experimental value. Previous work in which a simple "direct heating" method was used actually found that the melting point of the benzene model was 50 K higher than the experimental value. This demonstrates the importance of using proper free energy methods to compute phase behavior. It also shows that the melting point is a very sensitive measure of force field quality that should be considered in parametrization efforts. The method described here provides a relatively simple approach for computing melting points of molecular systems.
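The last step, converting a free energy difference at one state point into a coexistence temperature, is the standard Gibbs-Helmholtz construction (quoted here in generic form; the paper's notation may differ): given ΔG_sl(T_0) between solid and liquid at temperature T_0 and the enthalpy difference ΔH_sl(T) from simulation,

    \frac{\Delta G_{sl}(T_1)}{T_1} = \frac{\Delta G_{sl}(T_0)}{T_0} - \int_{T_0}^{T_1} \frac{\Delta H_{sl}(T)}{T^2}\,dT

and the melting point is the temperature at which ΔG_sl crosses zero at the fixed pressure.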
Improved accuracy for finite element structural analysis via an integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Aiello, R. A.; Berke, L.
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; an MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that, on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
Goff, Ben M; Moore, Kenneth J; Fales, Steven L; Pedersen, Jeffery F
2011-06-01
Sorghum [Sorghum bicolor (L.) Moench] has been shown to contain the cyanogenic glycoside dhurrin, which is responsible for the disorder known as prussic acid poisoning in livestock. The current standard method for estimating hydrogen cyanide (HCN) uses spectrophotometry to measure the aglycone, p-hydroxybenzaldehyde (p-HB), after hydrolysis. Errors may occur due to the inability of this method to estimate solely the absorbance of p-HB at a given wavelength. The objective of this study was to compare gas chromatography (GC) and near infrared spectroscopy (NIRS) methods, along with a spectrophotometry method, for estimating the prussic acid potential (HCNp) of sorghums and sudangrasses over three stages of maturity. The GC produced higher HCNp estimates than the spectrophotometer for the grain sorghums, but lower concentrations for the sudangrass. Based on what is known about the analytical process of each method, the GC data are likely closer to the true HCNp concentrations of the forages. Both the GC and spectrophotometry methods yielded robust NIRS calibration equations; however, using GC as the calibration method resulted in more accurate and repeatable estimates. The HCNp values obtained using the GC quantification method are believed to be closer to the actual values of the forage, and use of this method will provide a more accurate and easily automated means of quantifying prussic acid. Copyright © 2011 Society of Chemical Industry.
Estimating the remaining useful life of bearings using a neuro-local linear estimator-based method.
Ahmad, Wasim; Ali Khan, Sheraz; Kim, Jong-Myon
2017-05-01
Estimating the remaining useful life (RUL) of a bearing is required for maintenance scheduling. While the degradation behavior of a bearing changes during its lifetime, it is usually assumed to follow a single model. In this letter, bearing degradation is modeled by a monotonically increasing function that is globally non-linear and locally linearized. The model is generated using historical data that is smoothed with a local linear estimator. A neural network learns this model and then predicts future levels of vibration acceleration to estimate the RUL of a bearing. The proposed method yields reasonably accurate estimates of the RUL of a bearing at different points during its operational life.
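A stripped-down version of the pipeline: smooth the vibration history with a local linear estimator, fit the smoothed degradation trend, and extrapolate to a failure threshold. Here the neural-network stage is replaced by a plain polynomial extrapolation so the sketch stays self-contained; the data, bandwidth, and threshold are synthetic assumptions:

    import numpy as np

    rng = np.random.default_rng(3)
    t = np.arange(500.0)                                 # hours in service
    acc = 0.1 + 1e-6 * t**2 + rng.normal(0, 0.02, 500)   # rising vibration + noise

    def local_linear(t, y, h=40.0):
        # Local linear estimator: a weighted least-squares line at each point.
        out = np.empty_like(y)
        for i, ti in enumerate(t):
            w = np.exp(-0.5 * ((t - ti) / h) ** 2)
            b, a = np.polyfit(t, y, 1, w=w)
            out[i] = a + b * ti
        return out

    smooth = local_linear(t, acc)

    # Extrapolate the smoothed degradation curve to the failure threshold.
    coeff = np.polyfit(t, smooth, 2)
    threshold = 0.6
    future = np.arange(t[-1], t[-1] + 2000.0)
    hits = future[np.polyval(coeff, future) >= threshold]
    print(f"estimated RUL: {hits[0] - t[-1]:.0f} h" if hits.size else "no failure in horizon")

The local smoothing is what makes the extrapolation usable: fitting the raw, noisy acceleration directly would let measurement noise leak into the predicted crossing time.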
Garbarino, J.R.; Jones, B.E.; Stein, G.P.
1985-01-01
In an interlaboratory test, inductively coupled plasma atomic emission spectrometry (ICP-AES) was compared with flame atomic absorption spectrometry and molecular absorption spectrophotometry for the determination of 17 major and trace elements in 100 filtered natural water samples. No unacceptable biases were detected. The analysis precision of ICP-AES was found to be equal to or better than alternative methods. Known-addition recovery experiments demonstrated that the ICP-AES determinations are accurate to between ±2 and ±10 percent; four-fifths of the tests yielded average recoveries of 95-105 percent, with an average relative standard deviation of about 5 percent.
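The recovery figures quoted follow the usual known-addition (spike) definition, stated here for orientation rather than taken from the report:

    \text{Recovery}\,(\%) = \frac{C_{\text{spiked}} - C_{\text{unspiked}}}{C_{\text{added}}} \times 100

so average recoveries of 95-105 percent indicate that essentially all of the analyte added to the natural-water matrix was returned by the ICP-AES determinations.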
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennink, Ryan S.; Ferragut, Erik M.; Humble, Travis S.
Modeling and simulation are essential for predicting and verifying the behavior of fabricated quantum circuits, but existing simulation methods are either impractically costly or require an unrealistic simplification of error processes. In this paper, we present a method of simulating noisy Clifford circuits that is both accurate and practical in experimentally relevant regimes. In particular, the cost is weakly exponential in the size and the degree of non-Cliffordness of the circuit. Our approach is based on the construction of exact representations of quantum channels as quasiprobability distributions over stabilizer operations, which are then sampled, simulated, and weighted to yield unbiased statistical estimates of circuit outputs and other observables. As a demonstration of these techniques, we simulate a Steane [[7,1,3]] code circuit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersen, David R.; Bershady, Matthew A., E-mail: david.andersen@nrc-cnrc.gc.ca, E-mail: mab@astro.wisc.edu
2013-05-01
Using the integral field unit DensePak on the WIYN 3.5 m telescope we have obtained Hα velocity fields of 39 nearly face-on disks at echelle resolutions. High-quality, uniform kinematic data and a new modeling technique enabled us to derive accurate and precise kinematic inclinations with mean i_kin = 23° for 90% of these galaxies. Modeling the kinematic data as single, inclined disks in circular rotation improves upon the traditional tilted-ring method. We measure kinematic inclinations with a precision in sin i of 25% at 20° and 6% at 30°. Kinematic inclinations are consistent with photometric and inverse Tully-Fisher inclinations when the sample is culled of galaxies with kinematic asymmetries, for which we give two specific prescriptions. Kinematic inclinations can therefore be used in statistical "face-on" Tully-Fisher studies. A weighted combination of multiple, independent inclination measurements yields the most precise and accurate inclination. Combining inverse Tully-Fisher inclinations with kinematic inclinations yields joint probability inclinations with a precision in sin i of 10% at 15° and 5% at 30°. This level of precision makes accurate mass decompositions of galaxies possible even at low inclination. We find scaling relations between rotation speed and disk-scale length identical to results from more inclined samples. We also observe the trend of more steeply rising rotation curves with increased rotation speed and light concentration. This trend appears to be uncorrelated with disk surface brightness.
The 400 microsphere per piece "rule" does not apply to all blood flow studies.
Polissar, N L; Stanford, D C; Glenny, R W
2000-01-01
Microsphere experiments are useful in measuring regional organ perfusion as well as heterogeneity of blood flow within organs and correlation of perfusion between organ pieces at different time points. A 400 microspheres/piece "rule" is often used in planning experiments or to determine whether experiments are valid. This rule is based on the statement that 400 microspheres must lodge in a region for 95% confidence that the observed flow in the region is within 10% of the true flow. The 400 microspheres precision rule, however, only applies to measurements of perfusion to a single region or organ piece. Examples, simulations, and an animal experiment were carried out to show that good precision for measurements of heterogeneity and correlation can be obtained from many experiments with <400 microspheres/piece. Furthermore, methods were developed and tested for correcting the observed heterogeneity and correlation to remove the Poisson "noise" due to discrete microsphere measurements. The animal experiment shows adjusted values of heterogeneity and correlation that are in close agreement for measurements made with many or few microspheres/piece. Simulations demonstrate that the adjusted values are accurate for a variety of experiments with far fewer than 400 microspheres/piece. Thus the 400 microspheres rule does not apply to many experiments. A "rule of thumb" is that experiments with a total of at least 15,000 microspheres, for all pieces combined, are very likely to yield accurate estimates of heterogeneity. Experiments with a total of at least 25,000 microspheres are very likely to yield accurate estimates of correlation coefficients.
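The noise-removal idea can be sketched numerically; the correction shown (subtracting 1/n̄ from the observed squared coefficient of variation) is the generic form for Poisson counts, and the paper's exact adjustments may differ.

```python
# Sketch: simulate microsphere counts per piece, then remove the Poisson
# component from the observed heterogeneity (CV) estimate.
import numpy as np

rng = np.random.default_rng(1)
true_flow = rng.gamma(shape=25.0, scale=1.0, size=1000)   # relative flows, CV = 0.2
true_flow /= true_flow.mean()

n_bar = 150                                   # mean microspheres/piece, below 400
counts = rng.poisson(n_bar * true_flow)       # observed spheres per piece
                                              # total ~150,000 spheres overall

cv_obs2 = counts.var() / counts.mean() ** 2
cv_adj2 = cv_obs2 - 1.0 / counts.mean()       # subtract Poisson "noise" term
print(f"true CV    : {true_flow.std() / true_flow.mean():.3f}")
print(f"observed CV: {np.sqrt(cv_obs2):.3f}")
print(f"adjusted CV: {np.sqrt(max(cv_adj2, 0.0)):.3f}")
```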
Regional crop gross primary production and yield estimation using fused Landsat-MODIS data
NASA Astrophysics Data System (ADS)
He, M.; Kimball, J. S.; Maneta, M. P.; Maxwell, B. D.; Moreno, A.
2017-12-01
Accurate crop yield assessments using satellite-based remote sensing are of interest for the design of regional policies that promote agricultural resiliency and food security. However, current vegetation productivity algorithms derived from global satellite observations are generally too coarse to capture cropland heterogeneity. Merging information from sensors with complementary spatial and temporal resolutions can improve the accuracy of these retrievals. In this study, we estimate annual crop yields for seven important crop types - alfalfa, barley, corn, durum wheat, peas, spring wheat, and winter wheat - over Montana, United States (U.S.) from 2008 to 2015. Yields are estimated as the product of gross primary production (GPP) and a crop-specific harvest index (HI) at 30 m spatial resolution. To calculate GPP we used a modified form of the MOD17 LUE algorithm driven by a 30 m, 8-day fused NDVI dataset constructed by blending Landsat (5 or 7) and MODIS Terra reflectance data. The fused 30 m NDVI record shows good consistency with the original Landsat and MODIS data, but provides better spatiotemporal information on cropland vegetation growth. The resulting GPP estimates capture characteristic cropland patterns and seasonal variations, while the estimated annual 30 m crop yield results correspond favorably with county-level crop yield data (r=0.96, p<0.05). The estimated crop yield performance was generally lower, but still favorable, in relation to field-scale crop yield surveys (r=0.42, p<0.01). Our methods and results are suitable for operational applications at regional scales.
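A minimal sketch of the yield bookkeeping described above follows: annual yield as growing-season GPP times a crop-specific harvest index. The HI values are placeholders, and any carbon-to-grain-mass conversion is folded into HI for brevity; the paper's calibrated parameters differ.

```python
# Sketch: yield = annual GPP x harvest index, per pixel.
import numpy as np

HARVEST_INDEX = {"corn": 0.53, "spring wheat": 0.45}     # illustrative values

def annual_yield_t_ha(gpp_8day_gC_m2, crop):
    """Sum the 8-day GPP composites for one pixel-year and apply the HI.

    1 g m-2 equals 0.01 t ha-1, hence the 0.01 conversion factor.
    """
    return np.sum(gpp_8day_gC_m2) * HARVEST_INDEX[crop] * 0.01   # t ha-1

gpp = np.random.default_rng(2).uniform(5.0, 50.0, size=46)  # fake 46-composite year
print(f"corn yield ~ {annual_yield_t_ha(gpp, 'corn'):.1f} t ha-1")
```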
NASA Astrophysics Data System (ADS)
Hibino, Daisuke; Hsu, Mingyi; Shindo, Hiroyuki; Izawa, Masayuki; Enomoto, Yuji; Lin, J. F.; Hu, J. R.
2013-04-01
Yield loss due to systematic defects that remain after Optical Proximity Correction (OPC) modeling has increased, and achieving an acceptable yield has become more difficult in leading-edge production beyond the 20 nm node. Furthermore, the process window has narrowed because of the complexity of IC design and reduced process margins. In the past, systematic defects were inspected by human eyes. However, judgment by eye is sometimes unstable and inaccurate. Moreover, an enormous amount of time and labor must be expended on one-by-one judgment of several thousands of hot-spot defects. To overcome these difficulties and improve yield and manufacturability, an automated system that can quantify shape differences with high accuracy and speed is needed. If the automated system achieves our goal, the number of inspection points could be increased to obtain higher yield. The Defect Window Analysis (DWA) system developed by Hitachi High-Technologies, which uses high-precision contour extraction from SEM images on real silicon and a quantifying method that automatically calculates the difference between defect and non-defect patterns, has been applied to defect judgment in place of judgment by human eyes. The DWA result, which describes process behavior, can be fed back to design, OPC, or mask. This new methodology and evaluation results are presented in detail in this paper.
Robert-Peillard, Fabien; Boudenne, Jean-Luc; Coulomb, Bruno
2014-05-01
This paper presents a simple, accurate and multi-sample method for the determination of proline in wines thanks to a 96-well microplate technique. Proline is the most abundant amino acid in wine and is an important parameter related to wine characteristics or maturation processes of grape. In the current study, an improved application of the general method based on sodium hypochlorite oxidation and o-phthaldialdehyde (OPA)-thiol spectrofluorometric detection is described. The main interfering compounds for specific proline detection in wines are strongly reduced by selective reaction with OPA in a preliminary step under well-defined pH conditions. Application of the protocol after a 500-fold dilution of wine samples provides a working range between 0.02 and 2.90 g L-1, with a limit of detection of 7.50 mg L-1. Comparison and validation on real wine samples by ion-exchange chromatography prove that this procedure yields accurate results. Simplicity of the protocol used, with no need for centrifugation or filtration, organic solvents or high temperature enables its full implementation in plastic microplates and efficient application for routine analysis of proline in wines. Copyright © 2013 Elsevier Ltd. All rights reserved.
Bisgin, Halil; Bera, Tanmay; Ding, Hongjian; Semey, Howard G; Wu, Leihong; Liu, Zhichao; Barnes, Amy E; Langley, Darryl A; Pava-Ripoll, Monica; Vyas, Himansu J; Tong, Weida; Xu, Joshua
2018-04-25
Insect pests, such as pantry beetles, are often associated with food contaminations and public health risks. Machine learning has the potential to provide a more accurate and efficient solution in detecting their presence in food products, which is currently done manually. In our previous research, we demonstrated such feasibility where Artificial Neural Network (ANN) based pattern recognition techniques could be implemented for species identification in the context of food safety. In this study, we present a Support Vector Machine (SVM) model which improved the average accuracy to 85%. By contrast, the ANN method yielded ~80% accuracy after extensive parameter optimization. Both methods showed excellent genus level identification, but SVM showed slightly better accuracy for most species. Highly accurate species level identification remains a challenge, especially in distinguishing between species from the same genus, which may require improvements in both imaging and machine learning techniques. In summary, our work does illustrate a new SVM based technique and provides a good comparison with the ANN model in our context. We believe such insights will pave a better way forward for the application of machine learning towards species identification and food safety.
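The comparison can be sketched in a few lines; the features and labels below are synthetic stand-ins for the study's image-derived beetle-fragment descriptors, and the model settings are illustrative.

```python
# Sketch: an SVM and a small ANN evaluated on the same feature vectors.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=40, n_informative=12,
                           n_classes=3, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
ann = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000,
                                  random_state=0))

for name, model in [("SVM", svm), ("ANN", ann)]:
    acc = cross_val_score(model, X, y, cv=5).mean()   # 5-fold CV accuracy
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```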
Quantification of intensity variations in functional MR images using rotated principal components
NASA Astrophysics Data System (ADS)
Backfrieder, W.; Baumgartner, R.; Sámal, M.; Moser, E.; Bergmann, H.
1996-08-01
In functional MRI (fMRI), the changes in cerebral haemodynamics related to stimulated neural brain activity are measured using standard clinical MR equipment. Small intensity variations in fMRI data have to be detected and distinguished from non-neural effects by careful image analysis. Based on multivariate statistics we describe an algorithm involving oblique rotation of the most significant principal components for an estimation of the temporal and spatial distribution of the stimulated neural activity over the whole image matrix. This algorithm takes advantage of strong local signal variations. A mathematical phantom was designed to generate simulated data for the evaluation of the method. In simulation experiments, the potential of the method to quantify small intensity changes, especially when processing data sets containing multiple sources of signal variations, was demonstrated. In vivo fMRI data collected in both visual and motor stimulation experiments were analysed, showing a proper location of the activated cortical regions within well known neural centres and an accurate extraction of the activation time profile. The suggested method yields accurate absolute quantification of in vivo brain activity without the need of extensive prior knowledge and user interaction.
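A compact sketch of the pipeline follows. The paper uses an oblique rotation of the leading components; the varimax routine below is an orthogonal stand-in chosen to keep the example short, and the data are random placeholders for voxel time series.

```python
# Sketch: PCA on voxel time series, then rotation of the leading components.
import numpy as np

def varimax(loadings, tol=1e-8, max_iter=200):
    """Orthogonal varimax rotation of a (voxels x components) loading matrix."""
    p, k = loadings.shape
    R = np.eye(k)
    var_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(loadings.T @ (L**3 - L * (L**2).mean(0)))
        R = u @ vt
        if s.sum() - var_old < tol:
            break
        var_old = s.sum()
    return loadings @ R

rng = np.random.default_rng(3)
data = rng.normal(size=(200, 64))           # 200 time points x 64 voxels
data -= data.mean(0)                        # centre each voxel series
_, _, vt = np.linalg.svd(data, full_matrices=False)
components = vt[:4].T                       # 4 most significant PCs (voxel loadings)
rotated = varimax(components)               # sharpen spatial structure
print(rotated.shape)                        # (64, 4)
```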
Visell, Yon
2015-04-01
This paper proposes a fast, physically accurate method for synthesizing multimodal, acoustic and haptic, signatures of distributed fracture in quasi-brittle heterogeneous materials, such as wood, granular media, or other fiber composites. Fracture processes in these materials are challenging to simulate with existing methods, due to the prevalence of large numbers of disordered, quasi-random spatial degrees of freedom, representing the complex physical state of a sample over the geometric volume of interest. Here, I develop an algorithm for simulating such processes, building on a class of statistical lattice models of fracture that have been widely investigated in the physics literature. This algorithm is enabled through a recently published mathematical construction based on the inverse transform method of random number sampling. It yields a purely time domain stochastic jump process representing stress fluctuations in the medium. The latter can be readily extended by a mean field approximation that captures the averaged constitutive (stress-strain) behavior of the material. Numerical simulations and interactive examples demonstrate the ability of these algorithms to generate physically plausible acoustic and haptic signatures of fracture in complex, natural materials interactively at audio sampling rates.
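A generic sketch of the inverse-transform construction mentioned above: draw heavy-tailed avalanche (jump) sizes by inverting a CDF and accumulate them into a time-domain stress signal. The exponent, rate, and distributions are illustrative, not the paper's fitted model.

```python
# Sketch: inverse-transform sampling driving a stochastic jump process.
import numpy as np

rng = np.random.default_rng(4)

def sample_powerlaw(u, alpha=2.5, s_min=1.0):
    """Invert the Pareto CDF F(s) = 1 - (s_min/s)^(alpha-1)."""
    return s_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

n_events = 2000
jumps = sample_powerlaw(rng.uniform(size=n_events))      # jump magnitudes
waits = rng.exponential(scale=1e-3, size=n_events)       # inter-event times (s)
times = np.cumsum(waits)
stress = np.cumsum(jumps)            # piecewise-constant, monotone jump process
print(f"{n_events} events over {times[-1]:.2f} s, final level {stress[-1]:.1f}")
```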
Orun, A B; Seker, H; Uslan, V; Goodyer, E; Smith, G
2017-06-01
The textural structure of 'skin age'-related subskin components enables us to identify and analyse their unique characteristics, thus making substantial progress towards establishing an accurate skin age model. This is achieved by a two-stage process. First, textural analysis is applied using laser speckle imaging, which is sensitive to textural effects within the λ = 650 nm spectral band region. In the second stage, a Bayesian inference method is used to select attributes from which a predictive model is built. This technique enables us to contrast different skin age models, such as the laser speckle effect against the more widely used normal-light (LED) imaging method, whereby it is shown that our laser speckle-based technique yields better results. The method introduced here is non-invasive, low cost and capable of operating in real time, having the potential to compete against high-cost instrumentation such as confocal microscopy or similar imaging devices used for skin age identification purposes. © 2016 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
Novel Automated Blood Separations Validate Whole Cell Biomarkers
Burger, Douglas E.; Wang, Limei; Ban, Liqin; Okubo, Yoshiaki; Kühtreiber, Willem M.; Leichliter, Ashley K.; Faustman, Denise L.
2011-01-01
Background Progress in clinical trials in infectious disease, autoimmunity, and cancer is stymied by a dearth of successful whole cell biomarkers for peripheral blood lymphocytes (PBLs). Successful biomarkers could help to track drug effects at early time points in clinical trials to prevent costly trial failures late in development. One major obstacle is the inaccuracy of Ficoll density centrifugation, the decades-old method of separating PBLs from the abundant red blood cells (RBCs) of fresh blood samples. Methods and Findings To replace the Ficoll method, we developed and studied a novel blood-based magnetic separation method. The magnetic method strikingly surpassed Ficoll in viability, purity and yield of PBLs. To reduce labor, we developed an automated platform and compared two magnet configurations for cell separations. These more accurate and labor-saving magnet configurations allowed the lymphocytes to be tested in bioassays for rare antigen-specific T cells. The automated method succeeded at identifying 79% of patients with the rare PBLs of interest as compared with Ficoll's uniform failure. We validated improved upfront blood processing and show accurate detection of rare antigen-specific lymphocytes. Conclusions Improving, automating and standardizing lymphocyte detections from whole blood may facilitate development of new cell-based biomarkers for human diseases. Improved upfront blood processes may lead to broad improvements in monitoring early trial outcome measurements in human clinical trials. PMID:21799852
NASA Astrophysics Data System (ADS)
Prastuti, M.; Suhartono; Salehah, NA
2018-04-01
The need for energy supply, especially electricity, in Indonesia has been increasing in recent years. Furthermore, high electricity usage by people at different times leads to heteroscedasticity. Estimating the electricity supply that can fulfill the community's need is very important, but the heteroscedasticity issue often makes electricity forecasting difficult. An accurate forecast of electricity consumption is one of the key challenges for an energy provider in making better resource and service planning and taking control actions to balance electricity supply and demand. In this paper, a hybrid ARIMAX Quantile Regression (ARIMAX-QR) approach is proposed to predict short-term electricity consumption in East Java. This method is also compared to time series regression using RMSE, MAPE, and MdAPE criteria. The data used in this research were half-hourly electricity consumption during the period of September 2015 to April 2016. The results show that the proposed approach can be a competitive alternative for forecasting short-term electricity in East Java. ARIMAX-QR using lag values and dummy variables as predictors yields more accurate predictions on both in-sample and out-of-sample data. Moreover, both the time series regression and ARIMAX-QR methods with the addition of lag values as predictors capture the patterns in the data accurately. Hence, they produce better predictions compared to models that do not use additional lag variables.
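The quantile-regression-with-lags component can be sketched as follows; the synthetic series, lag choices, and quantiles are illustrative (calendar dummies are omitted), not the paper's specification.

```python
# Sketch: regress half-hourly load on its own lags at several quantiles.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 48 * 60                                    # 60 days of half-hourly data
t = np.arange(n)
load = (100 + 30 * np.sin(2 * np.pi * t / 48)
        + rng.normal(0, 5 + 3 * (t % 48 > 36), n))   # evening variance bump

df = pd.DataFrame({"y": load})
df["lag1"] = df["y"].shift(1)
df["lag48"] = df["y"].shift(48)                # same half-hour, previous day
df = df.dropna()

for q in (0.1, 0.5, 0.9):
    fit = smf.quantreg("y ~ lag1 + lag48", df).fit(q=q)
    print(f"q={q}: intercept={fit.params['Intercept']:.1f}, "
          f"lag1={fit.params['lag1']:.2f}, lag48={fit.params['lag48']:.2f}")
```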
Robust numerical solution of the reservoir routing equation
NASA Astrophysics Data System (ADS)
Fiorentini, Marcello; Orlandini, Stefano
2013-09-01
The robustness of numerical methods for the solution of the reservoir routing equation is evaluated. The methods considered in this study are: (1) the Laurenson-Pilgrim method, (2) the fourth-order Runge-Kutta method, and (3) the fixed order Cash-Karp method. Method (1) is unable to handle nonmonotonic outflow rating curves. Method (2) is found to fail under critical conditions occurring, especially at the end of inflow recession limbs, when large time steps (greater than 12 min in this application) are used. Method (3) is computationally intensive and it does not solve the limitations of method (2). The limitations of method (2) can be efficiently overcome by reducing the time step in the critical phases of the simulation so as to ensure that water level remains inside the domains of the storage function and the outflow rating curve. The incorporation of a simple backstepping procedure implementing this control into the method (2) yields a robust and accurate reservoir routing method that can be safely used in distributed time-continuous catchment models.
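A minimal sketch of method (2) plus the backstepping control described above: an RK4 step is retried with a halved time step whenever the trial solution leaves the domain of the storage and rating functions. The reservoir geometry, rating curve, and inflow hydrograph are illustrative assumptions.

```python
# Sketch: RK4 reservoir routing, dS/dt = I(t) - O(h(S)), with backstepping.
import numpy as np

H_MAX = 5.0                                    # rating-curve domain limit (m), assumed

def inflow(t):                                 # decaying recession limb (m3/s)
    return 50.0 * np.exp(-t / 3600.0)

def outflow(h):                                # illustrative rating curve (m3/s)
    return 30.0 * max(h, 0.0) ** 1.5

def level(s):                                  # prismatic reservoir, A = 1e4 m2
    return s / 1.0e4

def f(t, s):
    return inflow(t) - outflow(level(s))

def rk4_step(t, s, dt):
    k1 = f(t, s)
    k2 = f(t + dt / 2, s + dt / 2 * k1)
    k3 = f(t + dt / 2, s + dt / 2 * k2)
    k4 = f(t + dt, s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

t, s, dt_base, t_end = 0.0, 2.0e4, 720.0, 12 * 3600.0
while t < t_end:
    dt = min(dt_base, t_end - t)
    s_new = rk4_step(t, s, dt)
    while not 0.0 <= level(s_new) <= H_MAX:    # backstepping: halve the step
        dt /= 2.0
        if dt < 1.0:                           # safety floor (s)
            raise RuntimeError("time step collapsed")
        s_new = rk4_step(t, s, dt)
    t, s = t + dt, s_new
print(f"final level: {level(s):.3f} m")
```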
Using Smartphone Sensors for Improving Energy Expenditure Estimation.
Pande, Amit; Zhu, Jindan; Das, Aveek K; Zeng, Yunze; Mohapatra, Prasant; Han, Jay J
2015-01-01
Energy expenditure (EE) estimation is an important factor in tracking personal activity and preventing chronic diseases, such as obesity and diabetes. Accurate and real-time EE estimation utilizing small wearable sensors is a difficult task, primarily because most existing schemes work offline or use heuristics. In this paper, we focus on accurate EE estimation for tracking ambulatory activities (walking, standing, climbing upstairs, or downstairs) of a typical smartphone user. We used built-in smartphone sensors (accelerometer and barometer sensor), sampled at low frequency, to accurately estimate EE. Using a barometer sensor, in addition to an accelerometer sensor, greatly increases the accuracy of EE estimation. Using bagged regression trees, a machine learning technique, we developed a generic regression model for EE estimation that yields up to 96% correlation with actual EE. We compare our results against the state-of-the-art calorimetry equations and consumer electronics devices (Fitbit and Nike+ FuelBand). The newly developed EE estimation algorithm demonstrated superior accuracy compared with currently available methods. The results were calibrated against COSMED K4b2 calorimeter readings.
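The modelling step can be sketched with bagged regression trees; the features and EE targets below are synthetic placeholders for the paper's sensor data, and the model sizes are illustrative.

```python
# Sketch: bagged regression trees mapping accelerometer and barometer
# features to energy expenditure.
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(6)
n = 2000
accel_var = rng.uniform(0.0, 2.0, n)           # accelerometer activity level
baro_slope = rng.normal(0.0, 0.3, n)           # pressure trend (stairs up/down)
ee = 3.0 + 2.5 * accel_var + 4.0 * np.abs(baro_slope) + rng.normal(0, 0.3, n)

X = np.column_stack([accel_var, baro_slope])
X_tr, X_te, y_tr, y_te = train_test_split(X, ee, random_state=0)

model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50, random_state=0)
model.fit(X_tr, y_tr)
print("correlation with held-out EE:",
      np.corrcoef(model.predict(X_te), y_te)[0, 1].round(3))
```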
Wilmoth, Daniel R
2015-12-01
The prescription drug user fee program provides additional resources to the U.S. Food and Drug Administration at the expense of regulated firms. Those resources accelerate the review of new drugs. Faster approvals allow firms to realize profits sooner, and the program is supported politically by industry. However, published estimates of the value to firms of reduced regulatory delay vary dramatically. It is shown here that this variation is driven largely by differences in methods that correspond to differences in implicit assumptions about the effects of reduced delay. Theoretical modeling is used to derive an equation describing the relationship between estimates generated using different methods. The method likely to yield the most accurate results is identified. A reconciliation of published estimates yields a value to a firm for a one-year reduction in regulatory delay at the time of approval of about $60 million for a typical drug. Published 2015. This article is a U.S. Government work and is in the public domain in the U.S.A.
Exchange inlet optimization by genetic algorithm for improved RBCC performance
NASA Astrophysics Data System (ADS)
Chorkawy, G.; Etele, J.
2017-09-01
A genetic algorithm based on real parameter representation using a variable selection pressure and variable probability of mutation is used to optimize an annular air-breathing rocket inlet called the Exchange Inlet. A rapid and accurate design method which provides estimates for air-breathing, mixing, and isentropic flow performance is used as the engine of the optimization routine. Comparison to detailed numerical simulations shows that the design method yields desired exit Mach numbers to within approximately 1% over 75% of the annular exit area and predicts entrained air massflows to within 1% to 9% of numerically simulated values, depending on the flight condition. Optimum designs are obtained within approximately 8000 fitness function evaluations in a search space on the order of 10^6. The method is also shown to be able to identify beneficial values for particular alleles when they exist, while handling cases where physical and aphysical designs co-exist at particular values of a subset of alleles within a gene. For an air-breathing engine based on a hydrogen-fuelled rocket, an exchange inlet is designed which yields a predicted air entrainment ratio within 95% of the theoretical maximum.
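A toy real-parameter GA in the spirit described above is sketched below: tournament selection whose pressure grows over generations and a mutation probability that decays. The schedules, operators, and test function are illustrative, not the paper's.

```python
# Sketch: real-coded GA with variable selection pressure and mutation rate.
import numpy as np

rng = np.random.default_rng(7)

def fitness(x):                                # maximise -> minimise a bowl
    return -np.sum((x - 0.3) ** 2, axis=-1)

pop = rng.uniform(-1, 1, size=(60, 6))
for gen in range(200):
    k = 2 + gen // 50                          # tournament size ramps up
    p_mut = 0.20 * (1 - gen / 200) + 0.01      # mutation probability decays
    fit = fitness(pop)
    # tournament selection
    idx = rng.integers(0, len(pop), size=(len(pop), k))
    parents = pop[idx[np.arange(len(pop)), np.argmax(fit[idx], axis=1)]]
    # blend crossover between consecutive parents
    a = rng.uniform(size=(len(pop), 1))
    children = a * parents + (1 - a) * np.roll(parents, 1, axis=0)
    # Gaussian mutation
    mask = rng.uniform(size=children.shape) < p_mut
    children[mask] += rng.normal(0, 0.1, size=mask.sum())
    pop = children
print("best solution:", pop[np.argmax(fitness(pop))].round(3))
```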
Microscopic predictions of fission yields based on the time dependent GCM formalism
NASA Astrophysics Data System (ADS)
Regnier, D.; Dubray, N.; Schunck, N.; Verrière, M.
2016-03-01
Accurate knowledge of fission fragment yields is an essential ingredient of numerous applications ranging from the formation of elements in the r-process to fuel cycle optimization in nuclear energy. The need for a predictive theory applicable where no data is available, together with the variety of potential applications, is an incentive to develop a fully microscopic approach to fission dynamics. One of the most promising theoretical frameworks is the time-dependent generator coordinate method (TDGCM) applied under the Gaussian overlap approximation (GOA). Previous studies reported promising results by numerically solving the TDGCM+GOA equation with a finite difference technique. However, the computational cost of this method makes it difficult to properly control numerical errors. In addition, it prevents one from performing calculations with more than two collective variables. To overcome these limitations, we developed the new code FELIX-1.0 that solves the TDGCM+GOA equation based on the Galerkin finite element method. In this article, we briefly illustrate the capabilities of the solver FELIX-1.0, in particular its validation for n+239Pu low energy induced fission. This work is the result of a collaboration between CEA,DAM,DIF and LLNL on nuclear fission theory.
A Remote Sensing-Derived Corn Yield Assessment Model
NASA Astrophysics Data System (ADS)
Shrestha, Ranjay Man
Agricultural studies and food security have become critical research topics due to continuous growth in human population and simultaneous shrinkage in agricultural land. In spite of modern technological advancements to improve agricultural productivity, more studies on crop yield assessments and food productivity are still necessary to fulfill the constantly increasing food demands. Besides human activities, natural disasters such as flood and drought, along with rapid climate changes, also inflict adverse effects on food productivity. Understanding the impact of these disasters on crop yield and making early impact estimations could help planning for any national or international food crisis. Similarly, the United States Department of Agriculture (USDA) Risk Management Agency (RMA) insurance management utilizes appropriately estimated crop yield and damage assessment information to sustain farmers' practice through timely and proper compensations. Through the County Agricultural Production Survey (CAPS), the USDA National Agricultural Statistical Service (NASS) uses traditional methods of field interviews and farmer-reported survey data to perform annual crop condition monitoring and production estimations at the regional and state levels. As these manual approaches to yield estimation are highly inefficient and produce very limited samples to represent the entire area, NASS requires supplemental spatial data that provide continuous and timely information on crop production and annual yield. Compared to traditional methods, remote sensing data and products offer wider spatial extent, more accurate location information, higher temporal resolution and data distribution, and lower data cost--thus providing a complementary option for estimation of crop yield information. Remote sensing derived vegetation indices such as the Normalized Difference Vegetation Index (NDVI) provide measurable statistics of potential crop growth based on spectral reflectance and can be further associated with the actual yield. Utilizing satellite remote sensing products, such as daily NDVI derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 250 m pixel size, crop yield estimation can be performed at a very fine spatial resolution. Therefore, this study examined the potential of these daily NDVI products within agricultural studies and crop yield assessments. In this study, a regression-based approach was proposed to estimate annual corn yield through changes in the MODIS daily NDVI time series. The relationship between daily NDVI and corn yield was well defined and established, and as changes in corn phenology and yield were directly reflected by changes in NDVI within the growing season, these two entities were combined to develop a relational model. The model was trained using 15 years (2000-2014) of historical NDVI and county-level corn yield data for four major corn producing states: Kansas, Nebraska, Iowa, and Indiana, representing four climatic regions (South, West North Central, East North Central, and Central, respectively) within the U.S. Corn Belt area. The model's goodness of fit was well defined with a high coefficient of determination (R² > 0.81). Similarly, using 2015 yield data for validation, an average accuracy of 92% demonstrated the model's performance in estimating corn yield at the county level. Besides providing county-level corn yield estimations, the derived model was also accurate enough to estimate yield at finer spatial resolution (field level).
The model's assessment accuracy was evaluated using randomly selected field-level corn yields within the study area for 2014, 2015, and 2016. A total of over 120 plot-level corn yield records were used for validation, and the overall average accuracy was 87%, which statistically justified the model's capability to estimate plot-level corn yield. Additionally, the proposed model was applied to impact estimation by examining the changes in corn yield due to flood events during the growing season. Using the 2011 Missouri River flood event as a case study, a field-level map of flood impact on corn yield throughout the flooded regions was produced, and an overall agreement of over 82.2% was achieved when compared with the reference impact map. A future direction of this dissertation research would be to examine other major crops outside the Corn Belt region of the U.S.
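The regression idea can be sketched by summarising a within-season NDVI series into features and regressing yield on them. The data below are synthetic, and the two features (peak NDVI and integrated NDVI) are generic choices, not the dissertation's exact model form.

```python
# Sketch: county-year yield regressed on NDVI-derived seasonal features.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
n_county_years = 300
peak_ndvi = rng.uniform(0.5, 0.9, n_county_years)       # seasonal maximum
season_sum = rng.uniform(40, 90, n_county_years)        # integrated NDVI
yield_bu_ac = (40 + 120 * (peak_ndvi - 0.5) + 0.8 * (season_sum - 40)
               + rng.normal(0, 8, n_county_years))      # synthetic truth

X = np.column_stack([peak_ndvi, season_sum])
model = LinearRegression().fit(X, yield_bu_ac)
print(f"R^2 = {model.score(X, yield_bu_ac):.2f}")
```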
DOE Office of Scientific and Technical Information (OSTI.GOV)
Townsley, Dean M.; Miles, Broxton J.; Timmes, F. X.
2016-07-01
We refine our previously introduced parameterized model for explosive carbon–oxygen fusion during thermonuclear Type Ia supernovae (SNe Ia) by adding corrections to post-processing of recorded Lagrangian fluid-element histories to obtain more accurate isotopic yields. Deflagration and detonation products are verified for propagation in a medium of uniform density. A new method is introduced for reconstructing the temperature–density history within the artificially thick model deflagration front. We obtain better than 5% consistency between the electron capture computed by the burning model and yields from post-processing. For detonations, we compare to a benchmark calculation of the structure of driven steady-state planar detonations performed with a large nuclear reaction network and error-controlled integration. We verify that, for steady-state planar detonations down to a density of 5 × 10^6 g cm^-3, our post-processing matches the major abundances in the benchmark solution typically to better than 10% for times greater than 0.01 s after the passage of the shock front. As a test case to demonstrate the method, presented here with post-processing for the first time, we perform a two-dimensional simulation of a SN Ia in the scenario of a Chandrasekhar-mass deflagration–detonation transition (DDT). We find that reconstruction of deflagration tracks leads to slightly more complete silicon burning than without reconstruction. The resulting abundance structure of the ejecta is consistent with inferences from spectroscopic studies of observed SNe Ia. We confirm the absence of a central region of stable Fe-group material for the multi-dimensional DDT scenario. Detailed isotopic yields are tabulated and change only modestly when using deflagration reconstruction.
Model-independent determination of the astrophysical S factor in laser-induced fusion plasmas
NASA Astrophysics Data System (ADS)
Lattuada, D.; Barbarino, M.; Bonasera, A.; Bang, W.; Quevedo, H. J.; Warren, M.; Consoli, F.; De Angelis, R.; Andreoli, P.; Kimura, S.; Dyer, G.; Bernstein, A. C.; Hagel, K.; Barbui, M.; Schmidt, K.; Gaul, E.; Donovan, M. E.; Natowitz, J. B.; Ditmire, T.
2016-04-01
In this work, we present a new and general method for measuring the astrophysical S factor of nuclear reactions in laser-induced plasmas and we apply it to 2H(d,n)3He. The experiment was performed with the Texas Petawatt Laser, which delivered 150-270 fs pulses of energy ranging from 90 to 180 J to D2 or CD4 molecular clusters (where D denotes 2H). After removing the background noise, we used the measured time-of-flight data of energetic deuterium ions to obtain their energy distribution. We derive the S factor using the measured energy distribution of the ions, the measured volume of the fusion plasma, and the measured fusion yields. This method is model independent in the sense that no assumption on the state of the system is required, but it requires an accurate measurement of the ion energy distribution, especially at high energies, and of the relevant fusion yields. In the 2H(d,n)3He and 3He(d,p)4He cases discussed here, it is very important to apply the background subtraction for the energetic ions and to measure the fusion yields with high precision. While the available data on both ion distribution and fusion yields allow us to determine with good precision the S factor in the d+d case (lower Gamow energies), for the d+3He case the data are not precise enough to obtain the S factor using this method. Our results agree with other experiments within the experimental error, even though smaller values of the S factor were obtained. This might be due to the plasma environment differing from the beam target conditions in a conventional accelerator experiment.
Model-independent determination of the astrophysical S factor in laser-induced fusion plasmas
Lattuada, D.; Barbarino, M.; Bonasera, A.; ...
2016-04-19
In this paper, we present a new and general method for measuring the astrophysical S factor of nuclear reactions in laser-induced plasmas and we apply it to 2H(d,n)3He. The experiment was performed with the Texas Petawatt Laser, which delivered 150–270 fs pulses of energy ranging from 90 to 180 J to D2 or CD4 molecular clusters (where D denotes 2H). After removing the background noise, we used the measured time-of-flight data of energetic deuterium ions to obtain their energy distribution. We derive the S factor using the measured energy distribution of the ions, the measured volume of the fusion plasma, and the measured fusion yields. This method is model independent in the sense that no assumption on the state of the system is required, but it requires an accurate measurement of the ion energy distribution, especially at high energies, and of the relevant fusion yields. In the 2H(d,n)3He and 3He(d,p)4He cases discussed here, it is very important to apply the background subtraction for the energetic ions and to measure the fusion yields with high precision. While the available data on both ion distribution and fusion yields allow us to determine with good precision the S factor in the d+d case (lower Gamow energies), for the d+3He case the data are not precise enough to obtain the S factor using this method. Our results agree with other experiments within the experimental error, even though smaller values of the S factor were obtained. This might be due to the plasma environment differing from the beam target conditions in a conventional accelerator experiment.
Demuynck, Ruben; Rogge, Sven M J; Vanduyfhuys, Louis; Wieme, Jelle; Waroquier, Michel; Van Speybroeck, Veronique
2017-12-12
In order to reliably predict and understand the breathing behavior of highly flexible metal-organic frameworks from thermodynamic considerations, an accurate estimation of the free energy difference between their different metastable states is a prerequisite. Herein, a variety of free energy estimation methods are thoroughly tested for their ability to construct the free energy profile as a function of the unit cell volume of MIL-53(Al). The methods comprise free energy perturbation, thermodynamic integration, umbrella sampling, metadynamics, and variationally enhanced sampling. A series of molecular dynamics simulations have been performed in the frame of each of the five methods to describe structural transformations in flexible materials with the volume as the collective variable, which offers a unique opportunity to assess their computational efficiency. Subsequently, the most efficient method, umbrella sampling, is used to construct an accurate free energy profile at different temperatures for MIL-53(Al) from first principles at the PBE+D3(BJ) level of theory. This study yields insight into the importance of the different aspects such as entropy contributions and anharmonic contributions on the resulting free energy profile. As such, this thorough study provides unparalleled insight in the thermodynamics of the large structural deformations of flexible materials.
Zhang, Ying; Li, Yan; Zhu, Xiao-Juan; Li, Min; Chen, Hao-Yu; Lv, Xiao-Ling; Zhang, Jian
2017-07-01
A reliable and accurate method for the determination of seven biogenic amines (BAs) was developed and validated with Chinese rice wine samples. The BAs were derivatised with dansyl chloride, cleaned up using solid-phase extraction (SPE) and separated by high-performance liquid chromatography (HPLC) coupled with ultraviolet (UV) detection. The optimised derivatisation reaction, conducted at pH 9.6 and 60°C for 30 min, ensured baseline separation and peak symmetry for each BA. SPE clean-up using Oasis MCX cartridges yielded good recovery rates for all BAs and effectively reduced matrix effects. The developed method shows good linearity with determination coefficients of more than 0.9989 over a concentration range of 0.1-100 mg L-1. The limits of detection (LODs) for the investigated BAs ranged from 2.07 to 5.56 µg L-1. The intra- and inter-day relative standard deviations (RSDs) ranged from 0.86% to 3.81% and from 2.13% to 3.82%, respectively. Spiking experiments showed that the overall recovery rates ranged from 85% to 113%. Thus, the proposed method was demonstrated as being suitable for simultaneous detection, with accurate and precise quantification, of BAs in Chinese rice wine.
Reyes-Montes, M del R; Taylor, M L; Curiel-Quesada, E; Mesa-Arango, A C
2000-12-01
The classification of microbial strains is currently based on different typing methods, which must meet certain criteria in order to be widely used. Phenotypic and genotypic methods are being employed in the epidemiology of several fungal diseases. However, some problems associated to the phenotypic methods have fostered genotyping procedures, from DNA polymorphic diversity to gene sequencing studies, all aiming to differentiate and to relate fungal isolates or strains. Through these studies, it is possible to identify outbreaks, to detect nosocomial infection transmission, and to determine the source of infection, as well as to recognize virulent isolates. This paper is aimed at analyzing the methods recently used to type Histoplasma capsulatum, causative agent of the systemic mycosis known as histoplasmosis, in order to recommend those that yield reproducible and accurate results.
Box, Stephen E.; Bookstrom, Arthur A.; Ikramuddin, Mohammed; Lindsay, James
2001-01-01
(Fe), manganese (Mn), arsenic (As), and cadmium (Cd). In general inter-laboratory correlations are better for samples within the compositional range of the Standard Reference Materials (SRMs) from the National Institute of Standards and Technology (NIST). Analyses by EWU are the most accurate relative to the NIST standards (mean recoveries within 1% for Pb, Fe, Mn, and As, 3% for Zn and 5% for Cd) and are the most precise (within 7% of the mean at the 95% confidence interval). USGS-EDXRF is similarly accurate for Pb and Zn. XRAL and ACZ are relatively accurate for Pb (within 5-8% of certified NIST values), but were considerably less accurate for the other 5 elements of concern (10-25% of NIST values). However, analyses of sample splits by more than one laboratory reveal that, for some elements, XRAL (Pb, Mn, Cd) and ACZ (Pb, Mn, Zn, Fe) analyses were comparable to EWU analyses of the same samples (when values are within the range of NIST SRMs). These results suggest that, for some elements, XRAL and ACZ dissolutions are more effective on the matrix of the CdA samples than on the matrix of the NIST samples (obtained from soils around Butte, Montana). Splits of CdA samples analyzed by CHEMEX were the least accurate, yielding values 10-25% less than those of EWU.
NASA Astrophysics Data System (ADS)
Kirshman, David
A numerical method for the solution of inviscid compressible flow using an array of embedded Cartesian meshes in conjunction with gridless surface boundary conditions is developed. The gridless boundary treatment is implemented by means of a least squares fitting of the conserved flux variables using a cloud of nodes in the vicinity of the surface geometry. The method allows for accurate treatment of the surface boundary conditions using a grid resolution an order of magnitude coarser than required of typical Cartesian approaches. Additionally, the method does not suffer from issues associated with thin body geometry or extremely fine cut cells near the body. Unlike some methods that consider a gridless (or "meshless") treatment throughout the entire domain, multi-grid acceleration can be effectively incorporated and issues associated with global conservation are alleviated. The "gridless" surface boundary condition provides for efficient and simple problem set up since definition of the body geometry is generated independently from the field mesh, and automatically incorporated into the field discretization of the domain. The applicability of the method is first demonstrated for steady flow of single and multi-element airfoil configurations. Using this method, comparisons with traditional body-fitted grid simulations reveal that steady flow solutions can be obtained accurately with minimal effort associated with grid generation. The method is then extended to unsteady flow predictions. In this application, flow field simulations for the prescribed oscillation of an airfoil indicate excellent agreement with experimental data. Furthermore, it is shown that the phase lag associated with shock oscillation is accurately predicted without the need for a deformable mesh. Lastly, the method is applied to the prediction of transonic flutter using a two-dimensional wing model, in which comparisons with moving mesh simulations yield nearly identical results. As a result, applicability of the method to transient and vibrating fluid-structure interaction problems is established in which the requirement for a deformable mesh is eliminated.
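The gridless boundary idea can be sketched as a weighted least-squares reconstruction of a field (and its gradient) at a surface point from a scattered cloud of nearby nodes. The linear basis and inverse-distance weights below are assumptions for illustration, not the dissertation's exact formulation.

```python
# Sketch: weighted least-squares fit over a node cloud near a wall point.
import numpy as np

rng = np.random.default_rng(10)
cloud = rng.uniform(-0.1, 0.1, size=(12, 2))          # nodes near the origin
u = 1.0 + 2.0 * cloud[:, 0] - 0.5 * cloud[:, 1]       # sampled field values

# linear basis [1, dx, dy] about the surface point at the origin
A = np.hstack([np.ones((12, 1)), cloud])
w = 1.0 / (np.linalg.norm(cloud, axis=1) + 1e-3)      # inverse-distance weights
coef, *_ = np.linalg.lstsq(A * w[:, None], u * w, rcond=None)
print("value, du/dx, du/dy at the wall:", coef.round(3))   # ~ [1, 2, -0.5]
```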
Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al.
Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang
2015-09-21
A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.
Classification of Aerial Photogrammetric 3d Point Clouds
NASA Astrophysics Data System (ADS)
Becker, C.; Häni, N.; Rosinskaya, E.; d'Angelo, E.; Strecha, C.
2017-05-01
We present a powerful method to extract per-point semantic class labels from aerial photogrammetry data. Labelling this kind of data is important for tasks such as environmental modelling, object classification and scene understanding. Unlike previous point cloud classification methods that rely exclusively on geometric features, we show that incorporating color information yields a significant increase in accuracy in detecting semantic classes. We test our classification method on three real-world photogrammetry datasets that were generated with Pix4Dmapper Pro, with varying point densities. We show that off-the-shelf machine learning techniques coupled with our new features allow us to train highly accurate classifiers that generalize well to unseen data, processing point clouds containing 10 million points in less than 3 minutes on a desktop computer.
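The feature idea can be sketched as follows: per-point geometric descriptors (here, covariance eigenvalue features over a neighbourhood) augmented with color and fed to an off-the-shelf classifier. The descriptors, labels, and data are simplified placeholders; the paper's feature set is richer.

```python
# Sketch: geometry + color features for per-point classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(9)
pts = rng.uniform(0, 10, size=(3000, 3))      # synthetic point cloud (x, y, z)
rgb = rng.uniform(0, 1, size=(3000, 3))
labels = (rgb[:, 1] > 0.5).astype(int)        # fake "vegetation" from greenness

nn = NearestNeighbors(n_neighbors=10).fit(pts)
_, idx = nn.kneighbors(pts)

geom = []
for nbrs in idx:
    cov = np.cov(pts[nbrs].T)
    w = np.sort(np.linalg.eigvalsh(cov))[::-1]        # eigenvalue features
    geom.append([w[0], w[1] / w[0], w[2] / w[0], pts[nbrs][:, 2].std()])
X = np.hstack([np.asarray(geom), rgb])                # geometry + color

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print("train accuracy:", clf.score(X, labels).round(3))
```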
Extraction of gravitational waves in numerical relativity.
Bishop, Nigel T; Rezzolla, Luciano
2016-01-01
A numerical-relativity calculation yields in general a solution of the Einstein equations including also a radiative part, which is in practice computed in a region of finite extent. Since gravitational radiation is properly defined only at null infinity and in an appropriate coordinate system, the accurate estimation of the emitted gravitational waves represents an old and non-trivial problem in numerical relativity. A number of methods have been developed over the years to "extract" the radiative part of the solution from a numerical simulation and these include: quadrupole formulas, gauge-invariant metric perturbations, Weyl scalars, and characteristic extraction. We review and discuss each method, in terms of both its theoretical background as well as its implementation. Finally, we provide a brief comparison of the various methods in terms of their inherent advantages and disadvantages.
High-resolution comparative modeling with RosettaCM.
Song, Yifan; DiMaio, Frank; Wang, Ray Yu-Ruei; Kim, David; Miles, Chris; Brunette, Tj; Thompson, James; Baker, David
2013-10-08
We describe an improved method for comparative modeling, RosettaCM, which optimizes a physically realistic all-atom energy function over the conformational space defined by homologous structures. Given a set of sequence alignments, RosettaCM assembles topologies by recombining aligned segments in Cartesian space and building unaligned regions de novo in torsion space. The junctions between segments are regularized using a loop closure method combining fragment superposition with gradient-based minimization. The energies of the resulting models are optimized by all-atom refinement, and the most representative low-energy model is selected. The CASP10 experiment suggests that RosettaCM yields models with more accurate side-chain and backbone conformations than other methods when the sequence identity to the templates is greater than ∼15%. Copyright © 2013 Elsevier Ltd. All rights reserved.
Physics-based enzyme design: predicting binding affinity and catalytic activity.
Sirin, Sarah; Pearlman, David A; Sherman, Woody
2014-12-01
Computational enzyme design is an emerging field that has yielded promising success stories, but where numerous challenges remain. Accurate methods to rapidly evaluate possible enzyme design variants could provide significant value when combined with experimental efforts by reducing the number of variants needed to be synthesized and speeding the time to reach the desired endpoint of the design. To that end, extending our computational methods to model the fundamental physical-chemical principles that regulate activity in a protocol that is automated and accessible to a broad population of enzyme design researchers is essential. Here, we apply a physics-based implicit solvent MM-GBSA scoring approach to enzyme design and benchmark the computational predictions against experimentally determined activities. Specifically, we evaluate the ability of MM-GBSA to predict changes in affinity for a steroid binder protein, catalytic turnover for a Kemp eliminase, and catalytic activity for α-Gliadin peptidase variants. Using the enzyme design framework developed here, we accurately rank the most experimentally active enzyme variants, suggesting that this approach could provide enrichment of active variants in real-world enzyme design applications. © 2014 Wiley Periodicals, Inc.
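For orientation, the MM-GBSA score referred to above is conventionally an end-state, ensemble-averaged difference; the generic textbook form is shown below, and the authors' exact protocol may add or drop terms (e.g., the entropy contribution):

```latex
\Delta G_{\mathrm{bind}} \approx
  \langle G_{\mathrm{complex}} \rangle
  - \langle G_{\mathrm{receptor}} \rangle
  - \langle G_{\mathrm{ligand}} \rangle,
\qquad
G = E_{\mathrm{MM}} + G_{\mathrm{GB}} + G_{\mathrm{SA}} - T S_{\mathrm{conf}}
```

where E_MM is the molecular-mechanics energy, G_GB and G_SA are the polar and nonpolar implicit-solvent terms, and the configurational-entropy term is often omitted when only relative rankings are needed.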
Classification of HCV and HIV-1 Sequences with the Branching Index
Hraber, Peter; Kuiken, Carla; Waugh, Mark; Geer, Shaun; Bruno, William J.; Leitner, Thomas
2009-01-01
Classification of viral sequences should be fast, objective, accurate, and reproducible. Most methods that classify sequences use either pairwise distances or phylogenetic relations, but cannot discern when a sequence is unclassifiable. The branching index (BI) combines distance and phylogeny methods to compute a ratio that quantifies how closely a query sequence clusters with a subtype clade. In the hypothesis-testing framework of statistical inference, the BI is compared with a threshold to test whether sufficient evidence exists for the query sequence to be classified among known sequences. If above the threshold, the null hypothesis of no support for the subtype relation is rejected and the sequence is taken as belonging to the subtype clade with which it clusters on the tree. This study evaluates statistical properties of the branching index for subtype classification in HCV and HIV-1. Pairs of BI values with known positive and negative test results were computed from 10,000 random fragments of reference alignments. Sampled fragments were of sufficient length to contain phylogenetic signal that groups reference sequences together properly into subtype clades. For HCV, a threshold BI of 0.71 yields 95.1% agreement with reference subtypes, with equal false positive and false negative rates. For HIV-1, a threshold of 0.66 yields 93.5% agreement. Higher thresholds can be used where lower false positive rates are required. In synthetic recombinants, regions without breakpoints are recognized accurately; regions with breakpoints do not uniquely represent any known subtype. Web-based services for viral subtype classification with the branching index are available online. PMID:18753218
Illias, Hazlee Azil; Chai, Xin Rui; Abu Bakar, Ab Halim; Mokhlis, Hazlie
2015-01-01
It is important to predict the incipient fault in transformer oil accurately so that maintenance of the transformer oil can be performed correctly, reducing the cost of maintenance and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict incipient faults in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions of the incipient fault in transformer oil because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligence methods to predict transformer faults. However, it is believed that the accuracy of the previously proposed methods can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques have never been used in the previously reported work, this work proposes a combination of ANN and various PSO techniques to predict the transformer incipient fault. The advantages of PSO are simplicity and easy implementation. The effectiveness of various PSO techniques in combination with ANN is validated by comparison with the results from the actual fault diagnosis, an existing diagnosis method, and ANN alone. Comparison of the results from the proposed methods with previously reported work was also performed to show the improvement of the proposed methods. It was found that the proposed ANN-Evolutionary PSO method yields a higher percentage of correct transformer fault type identification than the existing diagnosis method and previously reported works.
Design, Fabrication and Test of Composite Curved Frames for Helicopter Fuselage Structure
NASA Technical Reports Server (NTRS)
Lowry, D. W.; Krebs, N. E.; Dobyns, A. L.
1984-01-01
Aspects of curved beam effects and their importance in designing composite frame structures are discussed. The curved beam effect induces radial flange loadings, which in turn cause flange curling; this curling increases the axial flange stresses and induces transverse bending. These effects are more important in composite structures because of their general inability to redistribute stresses by general yielding, as metal structures do. A detailed finite element analysis was conducted and used in the design of composite curved frame specimens. Five specimens were statically tested, and measured strains were compared with predictions. Curved frame effects must be accurately accounted for to avoid premature fracture; finite element methods can accurately predict most of the stresses, and no elastic relief from curved beam effects occurred in the composite frames tested. Finite element studies are presented comparing curved beam effects in composite and metal frames.
Rapid, Reliable Shape Setting of Superelastic Nitinol for Prototyping Robots
Gilbert, Hunter B.; Webster, Robert J.
2016-01-01
Shape setting Nitinol tubes and wires in a typical laboratory setting for use in superelastic robots is challenging. Obtaining samples that remain superelastic and exhibit desired precurvatures currently requires many iterations, which is time consuming and consumes a substantial amount of Nitinol. To provide a more accurate and reliable method of shape setting, in this paper we propose an electrical technique that uses Joule heating to attain the necessary shape setting temperatures. The resulting high power heating prevents unintended aging of the material and yields consistent and accurate results for the rapid creation of prototypes. We present a complete algorithm and system together with an experimental analysis of temperature regulation. We experimentally validate the approach on Nitinol tubes that are shape set into planar curves. We also demonstrate the feasibility of creating general space curves by shape setting a helical tube. The system demonstrates a mean absolute temperature error of 10°C. PMID:27648473
The Torsion of Members Having Sections Common in Aircraft Construction
NASA Technical Reports Server (NTRS)
Trayer, George W; March, H W
1930-01-01
Within recent years a great variety of approximate torsion formulas and drafting-room processes have been advocated. In some of these, especially where mathematical considerations are involved, the results are extremely complex and are not generally intelligible to engineers. The principal object of this investigation was to determine, by experiment and theoretical analysis, how accurate the more common of these formulas are and on what assumptions they are founded, and, if none of the proposed methods proved to be reasonably accurate in practice, to produce simple, practical formulas from reasonably correct assumptions, backed by experiment. A second object was to collect in readily accessible form the most useful of the known results for the more common sections. Formulas for all the important solid sections that have yielded to mathematical treatment are listed. Then follows a discussion of the torsion of tubular rods, with formulas both rigorous and approximate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judd, R.C.; Caldwell, H.D.
1985-01-01
The objective of this study was to determine whether in-gel chloramine-T radioiodination adequately labels OM proteins to allow for accurate and precise structural comparison of these molecules. Intrinsically ¹⁴C-amino-acid-labeled proteins and ¹²⁵I-labeled proteins were therefore cleaved with two endopeptidic reagents and the peptide fragments separated by HPLC. A comparison of the retention times of the fragments, as determined by differential radiation counting, thus indicated whether ¹²⁵I labeling identified all of the peptide peaks seen in the ¹⁴C-labeled proteins. The results demonstrated that radioiodination yields complete and accurate information about the primary structure of outer membrane proteins. In addition, it permits the use of extremely small amounts of protein, allowing for method optimization and multiple separations to ensure reproducibility.
Segmentation of cortical bone using fast level sets
NASA Astrophysics Data System (ADS)
Chowdhury, Manish; Jörgens, Daniel; Wang, Chunliang; Smedby, Örjan; Moreno, Rodrigo
2017-02-01
Cortical bone plays a major role in the mechanical competence of bone, and its analysis requires accurate segmentation methods. Level set methods are among the state of the art for segmenting medical images; however, traditional implementations of this method are computationally expensive. This drawback was recently tackled through the so-called coherent propagation extension of the classical algorithm, which has decreased computation times dramatically. In this study, we assess the potential of this technique for segmenting cortical bone in interactive time in 3D images acquired through High Resolution peripheral Quantitative Computed Tomography (HR-pQCT). The obtained segmentations are used to estimate the cortical thickness and cortical porosity of the investigated images; cortical thickness and porosity are computed using sphere fitting and mathematical morphological operations, respectively. Qualitative comparison between the segmentations of the proposed algorithm and a previously published approach on six image volumes reveals superior smoothness properties of the level set approach. While the proposed method yields results similar to previous approaches in regions where the boundary between trabecular and cortical bone is well defined, it yields more stable segmentations in challenging regions, which results in more stable estimates of cortical bone parameters. The proposed technique takes a few seconds to compute, which makes it suitable for clinical settings.
HIV-1 protease cleavage site prediction based on two-stage feature selection method.
Niu, Bing; Yuan, Xiao-Cheng; Roeper, Preston; Su, Qiang; Peng, Chun-Rong; Yin, Jing-Yuan; Ding, Juan; Li, HaiPeng; Lu, Wen-Cong
2013-03-01
Knowledge of the mechanism of HIV protease cleavage specificity is critical to the design of specific and effective HIV inhibitors. An accurate, robust, and rapid method to correctly predict the cleavage sites in proteins is crucial when searching for possible HIV inhibitors. In this article, HIV-1 protease specificity was studied using the correlation-based feature subset (CfsSubset) selection method combined with a genetic algorithm. Thirty important biochemical features were found based on a jackknife test from the original data set containing 4,248 features. Using the AdaBoost method with the thirty selected features, the prediction model yields an accuracy of 96.7% for the jackknife test and 92.1% for an independent set test, an increase over the original feature set of 6.7% and 77.4%, respectively. Our feature selection scheme could be a useful technique for finding effective competitive inhibitors of HIV protease.
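A hedged sketch of the classification step (Python, scikit-learn): AdaBoost evaluated on a 30-feature subset. The CfsSubset + genetic algorithm selection itself is not reproduced here; a random index set stands in for it, and the data are synthetic.

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    X = rng.random((300, 4248))                          # stand-in for the 4,248 raw features
    selected = rng.choice(4248, size=30, replace=False)  # stands in for CfsSubset + GA output
    # Synthetic labels that depend on a few of the selected features,
    # so the pipeline has signal to find.
    y = (X[:, selected[:5]].sum(axis=1) > 2.5).astype(int)

    clf = AdaBoostClassifier(n_estimators=100, random_state=0)
    print("CV accuracy on the 30-feature subset:",
          cross_val_score(clf, X[:, selected], y, cv=5).mean())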
NASA Astrophysics Data System (ADS)
Liu, Q.; Jing, L.; Li, Y.; Tang, Y.; Li, H.; Lin, Q.
2016-04-01
For forest management purposes, high-resolution LiDAR and optical remote sensing imagery are used for treetop detection, tree crown delineation, and classification. The purpose of this study is to develop a self-adjusting dominant-scale calculation method and a new crown horizontal cutting method for the tree canopy height model (CHM) to detect and delineate tree crowns from LiDAR, under the hypothesis that a treetop is a radiometric or altitudinal maximum and that tree crowns consist of multi-scale branches. The core of the method is an automatic feature-scale selection strategy on the CHM and a multi-scale morphological reconstruction-open crown decomposition (MRCD) that extracts morphological multi-scale features of the CHM by cutting the CHM from treetop to ground, analysing and refining the dominant scales with differential horizontal profiles to obtain treetops, and segmenting the LiDAR CHM using a watershed segmentation approach marked with MRCD treetops. This method solves the false-detection problems of CHM side surfaces in the traditional morphological opening canopy segment (MOCS) method. The novel MRCD delineates more accurate and quantitative multi-scale features of the CHM and enables more accurate detection and segmentation of treetops and crowns. The MRCD method can also be extended to tree crown extraction from high-resolution optical remote sensing imagery. In an experiment on an aerial LiDAR CHM of a forest with multi-scale tree crowns, the proposed method yielded high-quality tree crown maps.
A SPH elastic-viscoplastic model for granular flows and bed-load transport
NASA Astrophysics Data System (ADS)
Ghaïtanellis, Alex; Violeau, Damien; Ferrand, Martin; Abderrezzak, Kamal El Kadi; Leroy, Agnès; Joly, Antoine
2018-01-01
An elastic-viscoplastic model (Ulrich, 2013) is combined with a multi-phase SPH formulation (Hu and Adams, 2006; Ghaitanellis et al., 2015) to model granular flows and non-cohesive sediment transport. The soil is treated as a continuum exhibiting viscoplastic behaviour: below a critical shear stress (the yield stress), the soil is assumed to behave as an isotropic linear-elastic solid; when the yield stress is exceeded, the soil flows and behaves as a shear-thinning fluid. A liquid-solid transition threshold based on the granular material properties is proposed, so as to make the model free of tunable numerical parameters. The yield stress is obtained from the Drucker-Prager criterion, which requires an accurate computation of the effective stress in the soil; a novel method is proposed to compute the effective stress in SPH by solving a Laplace equation. The model is applied to a two-dimensional soil collapse (Bui et al., 2008) and a dam break over mobile beds (Spinewine and Zech, 2007), and the results are in good agreement with experimental data.
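A minimal sketch of the liquid-solid switch described above (Python; illustrative only, not the authors' SPH formulation — the Drucker-Prager form and all numbers are assumptions):

    import numpy as np

    def drucker_prager_yield(p, cohesion, phi):
        # tau_y = c*cos(phi) + p*sin(phi); a common 2D Drucker-Prager form
        # (assumed here; the paper's exact form may differ).
        return cohesion * np.cos(phi) + p * np.sin(phi)

    def shear_response(tau_trial, tau_y, gamma_dot, mu):
        # Below yield: isotropic linear-elastic solid (elastic trial stress holds).
        # Above yield: shear-thinning flow with a Bingham-like effective viscosity.
        if tau_trial < tau_y:
            return "elastic", tau_trial
        mu_eff = mu + tau_y / max(gamma_dot, 1e-9)
        return "viscoplastic", mu_eff * gamma_dot

    tau_y = drucker_prager_yield(p=2.0e3, cohesion=0.0, phi=np.radians(30.0))
    print(tau_y, shear_response(1.5e3, tau_y, gamma_dot=5.0, mu=1.0))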
NASA Technical Reports Server (NTRS)
Freedman, M. I.; Sipcic, S.; Tseng, K.
1985-01-01
A frequency domain Green's Function Method for unsteady supersonic potential flow around complex aircraft configurations is presented. The focus is on the supersonic range, wherein the linear potential flow assumption is valid; in this range the effects of the nonlinear terms in the unsteady supersonic compressible velocity potential equation are negligible and are therefore omitted. The Green's function method is employed in order to convert the potential flow differential equation into an integral equation. This integral equation is then discretized, through a standard finite element technique, to yield a linear algebraic system of equations relating the unknown potential to its prescribed co-normalwash (boundary condition) on the surface of the aircraft. The arbitrary complex aircraft configuration (e.g., finite-thickness wing, wing-body-tail) is discretized into hyperboloidal (twisted quadrilateral) panels. The potential and co-normalwash are assumed to vary linearly within each panel. The long-range goal is to develop a comprehensive theory of unsteady supersonic potential aerodynamics that is capable of yielding accurate results even in the low supersonic (i.e., high transonic) range.
A comparison of approaches for estimating bottom-sediment mass in large reservoirs
Juracek, Kyle E.
2006-01-01
Estimates of sediment and sediment-associated constituent loads and yields from drainage basins are necessary for the management of reservoir-basin systems to address important issues such as reservoir sedimentation and eutrophication. One method for the estimation of loads and yields requires a determination of the total mass of sediment deposited in a reservoir. This method involves a sediment volume-to-mass conversion using bulk-density information. A comparison of four computational approaches (partition, mean, midpoint, strategic) for using bulk-density information to estimate total bottom-sediment mass in four large reservoirs indicated that the differences among the approaches were not statistically significant. However, the lack of statistical significance may be a result of the small sample size. Compared to the partition approach, which was presumed to provide the most accurate estimates of bottom-sediment mass, the results achieved using the strategic, mean, and midpoint approaches differed by as much as ±4, ±20, and ±44 percent, respectively. It was concluded that the strategic approach may merit further investigation as a less time consuming and less costly alternative to the partition approach.
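The volume-to-mass conversion at the heart of these approaches is simple; a sketch (Python) contrasting the partition approach (per-zone bulk densities) with the mean approach (one basin-wide average), with invented numbers:

    # Illustrative values only, not from the study.
    volumes = [1.2e6, 0.8e6, 0.5e6]      # m^3, sediment volume per reservoir zone
    densities = [1100.0, 950.0, 780.0]   # kg/m^3, zone bulk densities from cores

    mass_partition = sum(v * d for v, d in zip(volumes, densities))
    mass_mean = sum(volumes) * (sum(densities) / len(densities))
    pct_diff = 100 * (mass_mean / mass_partition - 1)
    print(mass_partition, mass_mean, pct_diff)

Because denser zones tend to hold more (or less) of the volume, the basin-wide average density can bias the total mass, which is why the partition approach is presumed most accurate.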
Rising temperatures reduce global wheat production
NASA Astrophysics Data System (ADS)
Asseng, S.; Ewert, F.; Martre, P.; Rötter, R. P.; Lobell, D. B.; Cammarano, D.; Kimball, B. A.; Ottman, M. J.; Wall, G. W.; White, J. W.; Reynolds, M. P.; Alderman, P. D.; Prasad, P. V. V.; Aggarwal, P. K.; Anothai, J.; Basso, B.; Biernath, C.; Challinor, A. J.; de Sanctis, G.; Doltra, J.; Fereres, E.; Garcia-Vila, M.; Gayler, S.; Hoogenboom, G.; Hunt, L. A.; Izaurralde, R. C.; Jabloun, M.; Jones, C. D.; Kersebaum, K. C.; Koehler, A.-K.; Müller, C.; Naresh Kumar, S.; Nendel, C.; O'Leary, G.; Olesen, J. E.; Palosuo, T.; Priesack, E.; Eyshi Rezaei, E.; Ruane, A. C.; Semenov, M. A.; Shcherbak, I.; Stöckle, C.; Stratonovitch, P.; Streck, T.; Supit, I.; Tao, F.; Thorburn, P. J.; Waha, K.; Wang, E.; Wallach, D.; Wolf, J.; Zhao, Z.; Zhu, Y.
2015-02-01
Crop models are essential tools for assessing the threat of climate change to local and global food production. Present models used to predict wheat grain yield are highly uncertain when simulating how crops respond to temperature. Here we systematically tested 30 different wheat crop models of the Agricultural Model Intercomparison and Improvement Project against field experiments in which growing season mean temperatures ranged from 15 °C to 32 °C, including experiments with artificial heating. Many models simulated yields well, but were less accurate at higher temperatures. The model ensemble median was consistently more accurate in simulating the crop temperature response than any single model, regardless of the input information used. Extrapolating the model ensemble temperature response indicates that warming is already slowing yield gains at a majority of wheat-growing locations. Global wheat production is estimated to fall by 6% for each °C of further temperature increase and become more variable over space and time.
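The ensemble-median idea is easy to illustrate (Python; synthetic numbers, not the study's models):

    import numpy as np

    rng = np.random.default_rng(42)
    true_yield = 6.0                               # t/ha at one site-season (invented)
    models = true_yield + rng.normal(0, 0.8, 30)   # 30 noisy single-model predictions

    print("ensemble median:", np.median(models))
    print("median error:", abs(np.median(models) - true_yield))
    print("worst single-model error:", np.max(np.abs(models - true_yield)))

The median discards the outlying models in each direction, which is why it tends to track the true response more consistently than any individual member.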
Application of Eyring's thermal activation theory to constitutive equations for polymers
NASA Astrophysics Data System (ADS)
Zerilli, Frank J.; Armstrong, Ronald W.
2000-04-01
The application of a constitutive model based on the thermal activation theory of Eyring to the yield stress of polymethylmethacrylate at various temperatures and strain rates, as measured by Bauwens-Crowet, shows that the yield stress may be reasonably well described by a thermal activation equation in which the activation volume is inversely proportional to the yield stress. It is found that, to obtain an accurate model, the dependence of the cold (T = 0 K) yield stress on the shear modulus must be taken into account.
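For reference, the generic Eyring-type expression (the textbook form, written here as an assumption, not necessarily the exact equation used by the authors) relates strain rate and yield stress through

    \dot{\varepsilon} = \dot{\varepsilon}_0
        \exp\!\left(-\frac{\Delta H}{kT}\right)
        \sinh\!\left(\frac{\sigma_y V^{*}}{2kT}\right)
    \quad\Longrightarrow\quad
    \sigma_y = \frac{2kT}{V^{*}}
        \operatorname{arcsinh}\!\left[\frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}
        \exp\!\left(\frac{\Delta H}{kT}\right)\right],

where ΔH is the activation enthalpy and V* the activation volume; the abstract's finding corresponds to taking V* inversely proportional to σy and scaling the T = 0 K yield stress with the shear modulus.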
Simulation of FIB-SEM images for analysis of porous microstructures.
Prill, Torben; Schladitz, Katja
2013-01-01
Focused ion beam-scanning electron microscopy (FIB-SEM) tomography yields high-quality three-dimensional images of material microstructures at the nanometer scale by combining serial sectioning with a focused ion beam and SEM imaging. However, FIB-SEM tomography of highly porous media leads to shine-through artifacts that prevent automatic segmentation of the solid component. We simulate the SEM process in order to generate synthetic FIB-SEM image data for developing and validating segmentation methods. Monte-Carlo techniques yield accurate results but are too slow for the simulation of FIB-SEM tomography, which requires hundreds of SEM images for one dataset alone. Nevertheless, a quasi-analytic description of the specimen and various acceleration techniques, including a track compression algorithm and an acceleration for the simulation of secondary electrons, cut down the computing time by orders of magnitude, allowing FIB-SEM tomography to be simulated for the first time. © Wiley Periodicals, Inc.
Computed potential energy surfaces for chemical reactions
NASA Technical Reports Server (NTRS)
Walch, Stephen P.; Levin, Eugene
1993-01-01
A new global potential energy surface (PES) is being generated for O(³P) + H₂ → OH + H. This surface is being fit using the rotated Morse oscillator method, which was used to fit the previous POL-CI surface. The new surface is expected to be more accurate and also includes a much more complete sampling of bent geometries. A new study has been undertaken of the reaction N + O₂ → NO + O. The new studies have focused on the region of the surface near a possible minimum corresponding to the peroxy form of NOO. A large portion of the PES for this second reaction has been mapped out. Since state-to-state cross sections for the reaction are important in the chemistry of high-temperature air, these studies will probably be extended to permit generation of a new global potential for the reaction.
Mining Social Media and Web Searches For Disease Detection
Yang, Y. Tony; Horneffer, Michael; DiLisio, Nicole
2013-01-01
Web-based social media is increasingly being used across different settings in the health care industry. The increased frequency of Internet use via computers and mobile devices provides an opportunity for social media to be the medium through which people are provided with valuable health information quickly and directly. While traditional methods of detection relied predominantly on hierarchical or bureaucratic lines of communication, these often failed to yield timely and accurate epidemiological intelligence. New web-based platforms promise increased opportunities for more timely and accurate spreading of information and analysis. This article provides an overview and discussion of the availability of timely and accurate information, which is especially useful for the rapid identification of infectious disease outbreaks, a prerequisite for prompt and effective public health responses. These web-based platforms include search queries, data mining of web and social media, processing and analysis of blogs containing epidemic keywords, text mining, and geographical information system data analyses. These new sources of analysis and information are intended to complement traditional sources of epidemic intelligence. Despite the attractiveness of these new approaches, further study is needed to determine the accuracy of blogger statements, as increases in public participation may not necessarily mean the information provided is more accurate. PMID:25170475
Using artificial neural network and satellite data to predict rice yield in Bangladesh
NASA Astrophysics Data System (ADS)
Akhand, Kawsar; Nizamuddin, Mohammad; Roytman, Leonid; Kogan, Felix; Goldberg, Mitch
2015-09-01
Rice production in Bangladesh is a crucial part of the national economy, providing about 70 percent of an average citizen's total calorie intake. The demand for rice rises constantly as the population grows each year, while the area of land under cultivation decreases. In addition, Bangladesh faces production constraints such as drought, flooding, salinity, lack of irrigation facilities, and lack of modern technology. To maintain self-sufficiency in rice, Bangladesh will have to continue to expand rice production by increasing yield at a rate at least equal to population growth until the demand for rice has stabilized. Accurate rice yield prediction is therefore one of the most important challenges in managing rice supply and demand and in decision-making processes. An artificial neural network (ANN) is used to construct a model to predict Aus rice yield in Bangladesh. Advanced Very High Resolution Radiometer (AVHRR)-based remote sensing vegetation health (VH) indices (the Vegetation Condition Index (VCI) and the Temperature Condition Index (TCI)) are used as input variables, and official statistics of Aus rice yield are used as the target variable for the ANN prediction model. The results obtained with the ANN method are encouraging, with a prediction error of less than 10%. Such predictions can therefore play an important role in planning and storing sufficient rice to face any future uncertainty.
Chain Ends and the Ultimate Tensile Strength of Polyethylene Fibers
NASA Astrophysics Data System (ADS)
O'Connor, Thomas C.; Robbins, Mark O.
Determining the tensile yield mechanisms of oriented polymer fibers remains a challenging problem in polymer mechanics. By maximizing the alignment and crystallinity of polyethylene (PE) fibers, tensile strengths of σ ≈ 6-7 GPa have been achieved. While impressive, first-principles calculations predict that carbon backbone bonds would allow strengths four times higher (σ ≈ 20 GPa) before breaking. The reduction in strength is caused by crystal defects such as chain ends, which allow fibers to yield by chain slip in addition to bond breaking. We use large-scale molecular dynamics (MD) simulations to determine the tensile yield mechanism of orthorhombic PE crystals with finite chains spanning 10²-10⁴ carbons in length. The yield stress σy saturates for long chains at ~6.3 GPa, agreeing well with experiments. Chains do not break but always yield by slip, after nucleation of 1D dislocations at chain ends. The dislocations are accurately described by a Frenkel-Kontorova model parametrized by the mechanical properties of an ideal crystal. We compute a dislocation core size ξ = 25.24 Å and determine the high and low strain rate limits of σy. Our results suggest that characterizing such 1D dislocations is an efficient method for predicting fiber strength. This research was performed within the Center for Materials in Extreme Dynamic Environments (CMEDE) under the Hopkins Extreme Materials Institute at Johns Hopkins University. Financial support was provided by Grant W911NF-12-2-0022.
Gruendling, Till; Guilhaus, Michael; Barner-Kowollik, Christopher
2008-09-15
We report on the successful application of size exclusion chromatography (SEC) combined with electrospray ionization mass spectrometry (ESI-MS) and refractive index (RI) detection for the determination of accurate molecular weight distributions of synthetic polymers, corrected for chromatographic band broadening. The presented method makes use of the ability of ESI-MS to accurately depict the peak profiles and retention volumes of individual oligomers eluting from the SEC column, whereas quantitative information on the absolute concentration of oligomers is obtained from the RI-detector only. A sophisticated computational algorithm based on the maximum entropy principle is used to process the data gained by both detectors, yielding an accurate molecular weight distribution, corrected for chromatographic band broadening. Poly(methyl methacrylate) standards with molecular weights up to 10 kDa serve as model compounds. Molecular weight distributions (MWDs) obtained by the maximum entropy procedure are compared to MWDs, which were calculated by a conventional calibration of the SEC-retention time axis with peak retention data obtained from the mass spectrometer. Comparison showed that for the employed chromatographic system, distributions below 7 kDa were only weakly influenced by chromatographic band broadening. However, the maximum entropy algorithm could successfully correct the MWD of a 10 kDa standard for band broadening effects. Molecular weight averages were between 5 and 14% lower than the manufacturer stated data obtained by classical means of calibration. The presented method demonstrates a consistent approach for analyzing data obtained by coupling mass spectrometric detectors and concentration sensitive detectors to polymer liquid chromatography.
Noise Power Spectrum Measurements in Digital Imaging With Gain Nonuniformity Correction.
Kim, Dong Sik
2016-08-01
The noise power spectrum (NPS) of an image sensor provides the spectral noise properties needed to evaluate sensor performance. Hence, measuring an accurate NPS is important. However, the fixed pattern noise from the sensor's nonuniform gain inflates the NPS, which is measured from images acquired by the sensor. Detrending the low-frequency fixed pattern is traditionally used to accurately measure NPS. However, detrending methods cannot remove high-frequency fixed patterns. In order to efficiently correct the fixed pattern noise, a gain-correction technique based on the gain map can be used. The gain map is generated using the average of uniformly illuminated images without any objects. Increasing the number of images n for averaging can reduce the remaining photon noise in the gain map and yield accurate NPS values. However, for practical finite n , the photon noise also significantly inflates NPS. In this paper, a nonuniform-gain image formation model is proposed and the performance of the gain correction is theoretically analyzed in terms of the signal-to-noise ratio (SNR). It is shown that the SNR is O(√n) . An NPS measurement algorithm based on the gain map is then proposed for any given n . Under a weak nonuniform gain assumption, another measurement algorithm based on the image difference is also proposed. For real radiography image detectors, the proposed algorithms are compared with traditional detrending and subtraction methods, and it is shown that as few as two images ( n=1 ) can provide an accurate NPS because of the compensation constant (1+1/n) .
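A sketch of a gain-corrected NPS estimate (Python/NumPy; the detector geometry and statistics are illustrative, not the paper's): build a gain map from n flat-field images, divide it out, and average 2D periodograms of zero-mean frames.

    import numpy as np

    rng = np.random.default_rng(0)
    px = 0.1                                          # mm, pixel pitch (assumed)
    gain = 1 + 0.05 * rng.random((256, 256))          # fixed-pattern (nonuniform) gain
    flats = gain * rng.poisson(1000, (16, 256, 256))  # 16 simulated flat fields

    # Gain map from n = 8 images; its residual photon noise falls as 1/sqrt(n).
    gain_map = flats[:8].mean(axis=0)

    nps = np.zeros((256, 256))
    for img in flats[8:]:
        roi = img / gain_map          # gain (fixed-pattern) correction
        roi = roi - roi.mean()        # remove the DC term
        # NPS = |FFT|^2 * (dx * dy) / (Nx * Ny), averaged over frames
        nps += np.abs(np.fft.fft2(roi)) ** 2 * (px * px) / roi.size
    nps /= 8
    print("NPS at Nyquist (5 cycles/mm):", nps[0, 128])

Without the division by gain_map, the fixed pattern from the 5% gain nonuniformity inflates the low-frequency NPS, which is the effect the paper's analysis quantifies.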
Calculation of vitrinite reflectance from thermal histories: A comparison of some methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morrow, D.W.; Issler, D.R.
1993-04-01
Vitrinite reflectance values (%Ro) calculated from commonly used methods are compared with respect to time-invariant temperatures and constant heating rates. Two monofunctional methods, one involving a time-temperature index to vitrinite reflectance (TTI-%Ro) correlation and the other a %Ro-to-depth correlation, yield vitrinite reflectance values that are similar to those calculated by recently published Arrhenius-based methods, such as EASY%Ro. The approximate agreement between these methods supports the perception that the EASY%Ro algorithm is the most accurate method for the prediction of vitrinite reflectance throughout the range of organic maturity normally encountered. However, calibration of these methods against vitrinite reflectance data from two basin sequences with well-documented geologic histories indicates that, although the EASY%Ro method has wide applicability, it slightly overestimates vitrinite reflectance in strata of low to medium maturity up to a %Ro value of 0.9%. The two monofunctional methods may be more accurate for prediction of vitrinite reflectance in similar sequences of low maturity. An older, but previously widely accepted, TTI-%Ro correlation consistently overestimates vitrinite reflectance with respect to the other methods. Underestimation of paleogeothermal gradients in the original calibration of time-temperature history to vitrinite reflectance may have introduced a systematic bias into the TTI-%Ro correlation used in this method. Also, incorporation of TAI (thermal alteration index) data and its conversion to %Ro-equivalent values may have introduced inaccuracies. 36 refs., 7 figs.
Evaluation of seeding depth and guage-wheel load effects on maize emergence and yield
USDA-ARS?s Scientific Manuscript database
Planting represents perhaps the most important field operation, with errors likely to negatively affect crop yield and thereby farm profitability. The performance of row-crop planters is evaluated by their ability to accurately place seeds into the soil at an adequate and pre-determined depth, the goal ...
Specific energy yield comparison between crystalline silicon and amorphous silicon based PV modules
NASA Astrophysics Data System (ADS)
Ferenczi, Toby; Stern, Omar; Hartung, Marianne; Mueggenburg, Eike; Lynass, Mark; Bernal, Eva; Mayer, Oliver; Zettl, Marcus
2009-08-01
As emerging thin-film PV technologies continue to penetrate the market and the number of utility-scale installations substantially increases, a detailed understanding of the performance of the various PV technologies becomes more important. An accurate database for each technology is essential for precise project planning, energy yield prediction, and project financing. However, recent publications have shown that it is very difficult to obtain accurate and reliable performance data for these technologies. This paper evaluates previously reported claims that amorphous silicon based PV modules have a higher annual energy yield, relative to their rated performance, than crystalline silicon modules. To acquire a detailed understanding of this effect, outdoor module tests were performed at the GE Global Research Center in Munich. In this study we examine closely two of the five reported factors that contribute to the enhanced energy yield of amorphous silicon modules. We find evidence to support each of these factors and evaluate their relative significance. We discuss aspects for improvement in how PV modules are sold and identify areas for further study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Genova, Alessandro, E-mail: alessandro.genova@rutgers.edu; Pavanello, Michele, E-mail: m.pavanello@rutgers.edu; Ceresoli, Davide, E-mail: davide.ceresoli@cnr.it
2016-06-21
In this work we achieve three milestones: (1) we present a subsystem DFT method capable of running ab-initio molecular dynamics simulations accurately and efficiently. (2) In order to rid the simulations of inter-molecular self-interaction error, we exploit the ability of the semilocal frozen density embedding formulation of subsystem DFT to represent the total electron density as a sum of localized subsystem electron densities that are constrained to integrate to a preset, constant number of electrons; the success of the method relies on the fact that the employed semilocal nonadditive kinetic energy functionals effectively cancel out errors in semilocal exchange–correlation potentials that are linked to static correlation effects and self-interaction. (3) We demonstrate this concept by simulating liquid water and the solvated OH• radical. While the bulk of our simulations have been performed on a periodic box containing 64 independent water molecules for 52 ps, we also simulated a box containing 256 water molecules for 22 ps. The results show that, provided one employs an accurate nonadditive kinetic energy functional, the dynamics of liquid water and the OH• radical are in semiquantitative agreement with experimental results or higher-level electronic structure calculations. Our assessments are based upon comparisons of radial and angular distribution functions as well as the diffusion coefficient of the liquid.
NASA Astrophysics Data System (ADS)
Blum, Volker
This talk describes recent advances in a general, efficient, accurate all-electron electronic structure theory approach based on numeric atom-centered orbitals; emphasis is placed on developments related to materials for energy conversion and their discovery. For total energies and electron band structures, we show that the overall accuracy is on par with the best benchmark-quality codes for materials, but scalable to large system sizes (1,000s of atoms) and amenable to both periodic and non-periodic simulations. A recent localized resolution-of-identity approach for the Coulomb operator enables O(N) hybrid-functional-based descriptions of the electronic structure of non-periodic and periodic systems, shown for supercell sizes up to 1,000 atoms; the same approach yields accurate results for many-body perturbation theory as well. For molecular systems, we also show how many-body perturbation theory for charged and neutral quasiparticle excitation energies can be applied efficiently yet accurately using basis sets of computationally manageable size. Finally, the talk highlights applications to the electronic structure of hybrid organic-inorganic perovskite materials, as well as to graphene-based substrates for possible future transition metal compound based electrocatalyst materials. All methods described here are part of the FHI-aims code. VB gratefully acknowledges contributions by numerous collaborators at Duke University, Fritz Haber Institute Berlin, TU Munich, USTC Hefei, Aalto University, and many others around the globe.
Contrast-enhanced spectral mammography in patients with MRI contraindications.
Richter, Vivien; Hatterman, Valerie; Preibsch, Heike; Bahrs, Sonja D; Hahn, Markus; Nikolaou, Konstantin; Wiesinger, Benjamin
2017-01-01
Background: Contrast-enhanced spectral mammography (CESM) is a novel breast imaging technique providing comparable diagnostic accuracy to breast magnetic resonance imaging (MRI). Purpose: To show that CESM in patients with MRI contraindications is feasible, accurate, and useful as a problem-solving tool, and to highlight its limitations. Material and Methods: A total of 118 patients with MRI contraindications were examined by CESM. Histology was obtained in 94 lesions and used as the gold standard for diagnostic accuracy calculations. Imaging data were reviewed retrospectively for feasibility, accuracy, and technical problems. The diagnostic yield of CESM as a problem-solving tool and for therapy response evaluation was reviewed separately. Results: CESM was more accurate than mammography (MG) for lesion categorization (r = 0.731, P < 0.0001 vs. r = 0.279, P = 0.006) and for lesion size estimation (r = 0.738 vs. r = 0.689, P < 0.0001). The negative predictive value of CESM was significantly higher than that of MG (85.71% vs. 30.77%, P < 0.0001). When used for problem-solving, CESM changed patient management in 2/8 (25%) cases. Superposition artifacts and timing problems affected diagnostic utility in 3/118 (2.5%) patients. Conclusion: CESM is a feasible and accurate alternative for patients with MRI contraindications, but it is necessary to be aware of the method's technical limitations.
Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens
We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element's emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple "sub-model" method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then "blending" these "sub-models" into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. Lastly, the sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
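A conceptual sketch of sub-model blending with PLS (Python, scikit-learn); the composition ranges, blend weights, and data below are invented, not ChemCam's calibration:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    X = rng.random((200, 50))                # stand-in LIBS spectra (200 targets)
    y = 100 * X[:, 0] + 5 * rng.random(200)  # stand-in composition labels (wt.%)

    full = PLSRegression(n_components=5).fit(X, y)            # full-range model
    low = PLSRegression(n_components=5).fit(X[y < 40], y[y < 40])
    high = PLSRegression(n_components=5).fit(X[y >= 40], y[y >= 40])

    def pred(model, x):
        return model.predict(x.reshape(1, -1)).item()

    def blended_predict(x):
        ref = pred(full, x)                           # full model picks the regime
        w = min(max((ref - 30.0) / 20.0, 0.0), 1.0)   # linear blend from 30 to 50 wt.%
        return (1 - w) * pred(low, x) + w * pred(high, x)

    print(blended_predict(X[0]), y[0])

Blending smoothly near the range boundary avoids discontinuities in predicted composition when a target sits between two sub-model ranges.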
Network geometry inference using common neighbors
NASA Astrophysics Data System (ADS)
Papadopoulos, Fragkiskos; Aldecoa, Rodrigo; Krioukov, Dmitri
2015-08-01
We introduce and explore a method for inferring hidden geometric coordinates of nodes in complex networks based on the number of common neighbors between the nodes. We compare this approach to the HyperMap method, which is based only on the connections (and disconnections) between the nodes, i.e., on the links that the nodes have (or do not have). We find that for high degree nodes, the common-neighbors approach yields a more accurate inference than the link-based method, unless heuristic periodic adjustments (or "correction steps") are used in the latter. The common-neighbors approach is computationally intensive, requiring O(t⁴) running time to map a network of t nodes, versus O(t³) in the link-based method. But we also develop a hybrid method with O(t³) running time, which combines the common-neighbors and link-based approaches, and we explore a heuristic that reduces its running time further to O(t²), without significant reduction in the mapping accuracy. We apply this method to the autonomous systems (ASs) Internet, and we reveal how soft communities of ASs evolve over time in the similarity space. We further demonstrate the method's predictive power by forecasting future links between ASs. Taken altogether, our results advance our understanding of how to efficiently and accurately map real networks to their latent geometric spaces, which is an important necessary step toward understanding the laws that govern the dynamics of nodes in these spaces, and the fine-grained dynamics of network connections.
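The basic quantity is cheap to compute even though the full inference is not: for an unweighted graph with adjacency matrix A, (A @ A)[i, j] counts the neighbors that nodes i and j share (Python/NumPy, toy graph):

    import numpy as np

    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 1],
                  [1, 1, 0, 0],
                  [0, 1, 0, 0]])
    common = A @ A              # common[i, j] = number of shared neighbors
    np.fill_diagonal(common, 0) # ignore self-pairs
    print(common[0, 1])         # nodes 0 and 1 share one neighbor (node 2)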
Sehl, Anthony; Couëdelo, Leslie; Fonseca, Laurence; Vaysse, Carole; Cansell, Maud
2018-06-15
Lipid transmethylation methods described in the literature are not always evaluated with enough care to ensure that they are effective, especially on food matrices or biological samples containing polyunsaturated fatty acids (PUFA). The aim of the present study was to select a method suitable for all lipid species rich in long-chain n-3 PUFA. Three published methods were adapted and applied to individual lipid classes. Lipid (trans)methylation efficiency was characterized in terms of reaction yield and gas chromatography (GC) analysis. The acid-catalyzed method was unable to convert triglycerides and sterol esters, while the method using incubation at a moderate temperature was ineffective on phospholipids and sterol esters. On the whole, only the method using sodium methoxide and sulfuric acid was effective on lipid classes taken individually or in a complex medium. This study highlights the importance of using an appropriate (trans)methylation method to ensure an accurate fatty acid composition. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Gnoffo, P. A.
1978-01-01
The ability of a method of integral relations to calculate inviscid, zero-degree-angle-of-attack radiative heating distributions over blunt, sonic-corner bodies for some representative outer planet entry conditions is investigated. Comparisons are made with a more detailed numerical method, a time-asymptotic technique, using the same equilibrium chemistry and radiation transport subroutines. An effort to produce a second-order approximation (two-strip) method of integral relations code to aid in this investigation is also described, and a modified two-strip routine is presented. Results indicate that the one-strip method of integral relations cannot be used to obtain accurate estimates of the radiative heating distribution because of its inability to resolve thermal gradients near the wall. The two-strip method can sometimes be used to improve these estimates; however, it yields significant improvement over the one-strip method only over a small range of conditions.
Application of the scalar and vector potentials to the aerodynamics of jets
NASA Technical Reports Server (NTRS)
Russell, H. L.; Skifstad, J. G.
1973-01-01
The applicability of a method based on the Stokes potentials (vector and scalar potentials) to computations associated with the aerodynamics of jets was examined. It was found that the aerodynamic field near the nozzle could be represented and that the influence of a nonuniform velocity profile at the nozzle exit plane could be determined. Computations were also made for an axisymmetric jet exhausting into a quiescent atmosphere; the velocity at the jet axis and the location of the half-velocity points along the jet indicate that the method yields accurate aerodynamic field computations. Inconsistencies among the different theoretical characterizations of jet flowfields are shown.
Kramers-Kronig relations in Laser Intensity Modulation Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuncer, Enis
2006-01-01
In this short paper, the Kramers-Kronig relations for the Laser Intensity Modulation Method (LIMM) are presented to check the self-consistency of experimentally obtained complex current densities. The numerical procedure yields well defined, precise estimates for the real and the imaginary parts of the LIMM current density calculated from its imaginary and real parts, respectively. The procedure also determines an accurate high-frequency real current value which appears to be an intrinsic material parameter similar to that of the dielectric permittivity at optical frequencies. Note that the problem considered here couples two different material properties, thermal and electrical; consequently, the validity of the Kramers-Kronig relation indicates that the problem is invariant and linear.
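A numerical sketch of such a self-consistency check (Python/NumPy; not the authors' exact procedure): reconstruct the real part of a causal response from its imaginary part via a crude principal-value Kramers-Kronig integral, and compare with the known real part. The Debye test function and grid are illustrative, not LIMM data.

    import numpy as np

    w = np.logspace(-2, 2, 2000)           # angular frequency grid
    tau = 1.0
    real = 1 / (1 + (w * tau) ** 2)        # Debye real part (relaxation strength 1)
    imag = w * tau / (1 + (w * tau) ** 2)  # Debye imaginary part

    def kk_real_from_imag(w, imag, w_eval):
        # real(w0) = (2/pi) * PV integral of w'*imag(w') / (w'^2 - w0^2) dw'
        out = np.empty_like(w_eval)
        for i, wi in enumerate(w_eval):
            denom = w ** 2 - wi ** 2
            mask = np.abs(denom) > 1e-12   # crude principal-value handling
            wm = w[mask]
            fm = (w * imag)[mask] / denom[mask]
            out[i] = (2 / np.pi) * np.sum(0.5 * (fm[1:] + fm[:-1]) * np.diff(wm))
        return out

    recon = kk_real_from_imag(w, imag, w[::200])
    print(np.c_[w[::200], real[::200], recon])   # reconstructed vs known real part

Large disagreement between the reconstructed and measured real parts would flag an inconsistent (non-causal or nonlinear) data set, which is the diagnostic use described above.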
Quantifying Void Ratio in Granular Materials Using Voronoi Tessellation
NASA Technical Reports Server (NTRS)
Alshibli, Khalid A.; El-Saidany, Hany A.; Rose, M. Franklin (Technical Monitor)
2000-01-01
The Voronoi technique was used to calculate the local void ratio distribution of granular materials. It was implemented in an application-oriented image processing and analysis algorithm capable of extracting object edges, separating adjacent particles, obtaining the centroid of each particle, generating Voronoi polygons, and calculating the local void ratio. Details of the algorithm's capabilities and features are presented. Verification calculations included manual digitization of synthetic images using Oda's method and the Voronoi polygon system. The developed algorithm yielded very accurate measurements of the local void ratio distribution. Voronoi tessellation has the advantage, compared to Oda's method, of offering a well-defined polygon generation criterion that can be implemented in an algorithm to automatically calculate the local void ratio of particulate materials.
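A minimal sketch of the Voronoi-based local void ratio in 2D (Python/SciPy), with synthetic centroids and idealized circular particles standing in for the segmented image:

    import numpy as np
    from scipy.spatial import Voronoi

    rng = np.random.default_rng(0)
    pts = rng.random((40, 2)) * 10.0       # particle centroids (synthetic)
    radii = 0.25 + 0.1 * rng.random(40)    # idealized circular particle radii

    def polygon_area(xy):
        # Shoelace formula for a simple polygon given ordered vertices.
        x, y = xy[:, 0], xy[:, 1]
        return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

    vor = Voronoi(pts)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if not region or -1 in region:     # skip unbounded boundary cells
            continue
        a_cell = polygon_area(vor.vertices[region])
        a_solid = np.pi * radii[i] ** 2
        if a_cell > a_solid:
            # local void ratio: void area over solid area within the cell
            print(f"particle {i}: e = {(a_cell - a_solid) / a_solid:.2f}")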
Unbiased simulation of near-Clifford quantum circuits
Bennink, Ryan S.; Ferragut, Erik M.; Humble, Travis S.; ...
2017-06-28
Modeling and simulation are essential for predicting and verifying the behavior of fabricated quantum circuits, but existing simulation methods are either impractically costly or require an unrealistic simplification of error processes. In this paper, we present a method of simulating noisy Clifford circuits that is both accurate and practical in experimentally relevant regimes. In particular, the cost is weakly exponential in the size and the degree of non-Cliffordness of the circuit. Our approach is based on the construction of exact representations of quantum channels as quasiprobability distributions over stabilizer operations, which are then sampled, simulated, and weighted to yield unbiased statistical estimates of circuit outputs and other observables. As a demonstration of these techniques, we simulate a Steane [[7,1,3]] code.
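The unbiased-sampling trick is easy to isolate in a toy form (Python/NumPy; this illustrates only the quasiprobability estimator, not the paper's stabilizer machinery): an expectation over a quasi-distribution q, which may have negative entries, is estimated without bias by sampling from |q|/||q||₁ and weighting each sample by sign(qᵢ)·||q||₁.

    import numpy as np

    rng = np.random.default_rng(0)
    q = np.array([0.7, 0.5, -0.2])   # quasiprobabilities, sum to 1 (invented)
    f = np.array([1.0, -1.0, 0.5])   # outcome value of each sampled operation
    exact = np.dot(q, f)

    p = np.abs(q) / np.abs(q).sum()  # sampling distribution
    norm = np.abs(q).sum()           # ||q||_1; >1 means negativity raises the cost
    idx = rng.choice(len(q), size=200_000, p=p)
    est = np.mean(np.sign(q[idx]) * norm * f[idx])
    print(exact, est)                # est converges to exact as samples grow

The sampling variance grows with ||q||₁², which is why the method's cost grows with the degree of non-Cliffordness (negativity) of the circuit.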
Complete Hexose Isomer Identification with Mass Spectrometry
NASA Astrophysics Data System (ADS)
Nagy, Gabe; Pohl, Nicola L. B.
2015-04-01
The first analytical method is presented for the identification and absolute configuration determination of all 24 aldohexose and 2-ketohexose isomers, including the D and L enantiomers of allose, altrose, galactose, glucose, gulose, idose, mannose, talose, fructose, psicose, sorbose, and tagatose. Two unique fixed-ligand kinetic method combinations were discovered that create energetic differences large enough to achieve chiral discrimination among all 24 hexoses. Each of these 24 hexoses yields a unique ratio of a specific pair of fragment ions, allowing simultaneous determination of identity and absolute configuration. This mass spectrometry-based methodology can be readily employed for accurate identification of any isolated monosaccharide from an unknown biological source. This work provides a key step towards the goal of complete de novo carbohydrate analysis.
Aleza, Koutchoukalo; Villamor, Grace B; Nyarko, Benjamin Kofi; Wala, Kperkouma; Akpagana, Koffi
2018-01-01
Vitellaria paradoxa (Gaertn C. F.), or shea tree, remains one of the most valuable trees for farmers in the Atacora district of northern Benin, where rural communities depend on shea products for both food and income. To optimize productivity and management of shea agroforestry systems, or "parklands," accurate and up-to-date data are needed. For this purpose, we monitored 120 fruiting shea trees for two years under three land-use scenarios and different soil groups in Atacora, coupled with a farm household survey to elicit information on decision making and management practices. To examine the local pattern of shea tree productivity and relationships between morphological factors and yields, we used a randomized branch sampling method and applied a regression analysis to build a shea yield model based on dendrometric, soil, and land-use variables. We also compared potential shea yields based on farm household socio-economic characteristics and management practices derived from the survey data. Soil and land-use variables were the most important determinants of shea fruit yield. In terms of land use, shea trees growing on farmland plots exhibited the highest yields (i.e., fruit quantity and mass), while trees growing on Lixisols performed better than those of the other soil group. Contrary to our expectations, dendrometric parameters had weak relationships with fruit yield regardless of land use and soil group. There is inter-annual variability in fruit yield in both soil groups and land-use types, and, in addition, a high degree of variability in production among individual shea trees. Furthermore, household socioeconomic characteristics such as road accessibility, landholding size, and gross annual income influence shea fruit yield. The use of fallow areas is an important land management practice in the study area that influences both conservation and shea yield.
Design of reinforced areas of concrete column using quadratic polynomials
NASA Astrophysics Data System (ADS)
Arif Gunadi, Tjiang; Parung, Herman; Rachman Djamaluddin, Abd; Arwin Amiruddin, A.
2017-11-01
The design of reinforced concrete columns is mostly carried out with a simple planning method that uses the column interaction diagram. However, the application of this method is limited because it is valid only for particular combinations of concrete compressive strength and reinforcement yield strength. Thus, a more broadly applicable method is still needed. An alternative is to use quadratic polynomials as the basis of approximation in designing reinforced concrete columns, where the ratio of the neutral axis depth to the effective height of a cross section (ξ) is assumed to vary as a quadratic polynomial when related to ξ in the same cross section with different reinforcement ratios. This is identical to the basic principle used in the Simpson rule for numerical integration with quadratic polynomials, and it achieves a sufficient level of accuracy. The approach is applied to both the normal-force equilibrium and the moment equilibrium; the abscissa of the intersection of the two resulting curves is the sought reinforcement ratio, since it fulfills both equilibrium conditions. The application of this method is relatively more complicated than the existing method, but when provided with tables and graphs (N vs ξN) and (M vs ξM) its use can be simplified. These tables are distinguished only by the compressive strength of the concrete, so in application they can be combined with the various reinforcement yield strengths available in the market. The method can be implemented in programming languages such as Fortran.
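As a rough illustration of the idea (not the authors' Fortran implementation), the sketch below fits a quadratic ξ(ρ) through three trial reinforcement ratios for each equilibrium condition and takes the abscissa of the intersection of the two parabolas; all numeric values are placeholders.

```python
import numpy as np

# Hypothetical tabulated points: for three trial reinforcement ratios rho,
# the neutral-axis ratio xi satisfying each equilibrium condition.
rho = np.array([0.01, 0.02, 0.03])
xi_N = np.array([0.52, 0.45, 0.40])   # from normal-force equilibrium (placeholders)
xi_M = np.array([0.38, 0.44, 0.49])   # from moment equilibrium (placeholders)

# Fit a quadratic polynomial xi(rho) through each set of three points,
# mirroring the Simpson-rule idea of a parabola through three samples.
pN = np.polyfit(rho, xi_N, 2)
pM = np.polyfit(rho, xi_M, 2)

# The design reinforcement ratio is the abscissa where the two parabolas
# intersect, i.e. a real root of the difference quadratic inside the range.
roots = np.roots(pN - pM)
real = roots[np.isreal(roots)].real
rho_design = real[(real > rho.min()) & (real < rho.max())]
print("reinforcement ratio satisfying both equilibria:", rho_design)
```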
Baskar, Gurunathan; Sathya, Shree Rajesh K
2011-01-01
Statistical and evolutionary optimization of media composition was employed for the production of medicinal exopolysaccharide (EPS) by the Lingzhi or Reishi medicinal mushroom Ganoderma lucidum MTCC 1039, using soya bean meal flour as a low-cost substrate. Soya bean meal flour, ammonium chloride, glucose, and pH were identified as the most important variables for EPS yield using the two-level Plackett-Burman design, and were further optimized using the central composite design (CCD) and an artificial neural network (ANN)-linked genetic algorithm (GA). The high coefficient of determination of the ANN (R² = 0.982) indicates that the ANN model was more accurate than the second-order polynomial model of the CCD (R² = 0.91) for representing the effect of media composition on EPS yield. The predicted optimum media composition using the ANN-linked GA was soybean meal flour 2.98%, glucose 3.26%, ammonium chloride 0.25%, and initial pH 7.5, for a maximum predicted EPS yield of 1005.55 mg/L. The experimental EPS yield obtained using the predicted optimum media composition was 1012.36 mg/L, which validates the high degree of accuracy of evolutionary optimization for enhanced production of EPS by submerged fermentation of G. lucidum.
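The response-surface side of such an optimization can be sketched compactly. The example below is a minimal two-factor stand-in for the study's four-factor CCD, with synthetic yields generated from a known quadratic; it fits the full second-order model and reports R², stopping short of the ANN-GA step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-factor design (soy flour %, glucose %); yields are generated
# from an assumed quadratic surface plus noise, not measured data.
x1, x2 = np.meshgrid(np.linspace(2.0, 4.0, 3), np.linspace(2.5, 4.0, 3))
X = np.column_stack([x1.ravel(), x2.ravel()])
y = (1000 - 120 * (X[:, 0] - 3.0)**2 - 90 * (X[:, 1] - 3.3)**2
     + 15 * X[:, 0] * X[:, 1] + rng.normal(0, 5, len(X)))

# Full second-order model matrix: intercept, linear, squared, interaction.
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1 - np.sum((y - A @ beta)**2) / np.sum((y - y.mean())**2)

# The fitted surface can then be handed to any optimizer (the study used a
# GA on an ANN surrogate) to locate the composition maximizing yield.
best = np.argmax(A @ beta)
print(f"R^2 = {r2:.3f}, grid optimum near soy = {X[best, 0]:.2f}%, "
      f"glucose = {X[best, 1]:.2f}%")
```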
Rohling, Heide; Sihver, Lembit; Priegnitz, Marlen; Enghardt, Wolfgang; Fiedler, Fine
2013-09-21
For quality assurance in particle therapy, a non-invasive, in vivo range verification is highly desired. Particle therapy positron-emission-tomography (PT-PET) is up to now the only clinically proven method for this purpose. It makes use of the β+-activity produced during the irradiation by the nuclear fragmentation processes between the therapeutic beam and the irradiated tissue. Since a direct comparison of β+-activity and dose is not feasible, a simulation of the expected β+-activity distribution is required. For this reason it is essential to have a quantitatively reliable code for the simulation of the yields of the β+-emitting nuclei at every position of the beam path. In this paper, results of the three-dimensional Monte-Carlo simulation codes PHITS and GEANT4 and the one-dimensional deterministic simulation code HIBRAC are compared to measurements of the yields of the most abundant β+-emitting nuclei for carbon, lithium, helium, and proton beams. In general, PHITS underestimates the yields of positron-emitters. With GEANT4 the overall most accurate results are obtained. HIBRAC and GEANT4 provide comparable results for carbon and proton beams. HIBRAC is considered a good candidate for implementation into clinical routine PT-PET.
Isotopic yield measurement in the heavy mass region for 239Pu thermal neutron induced fission
NASA Astrophysics Data System (ADS)
Bail, A.; Serot, O.; Mathieu, L.; Litaize, O.; Materna, T.; Köster, U.; Faust, H.; Letourneau, A.; Panebianco, S.
2011-09-01
Despite the huge number of fission yield data available in the different evaluated nuclear data libraries, such as JEFF-3.1.1, ENDF/B-VII.0, and JENDL-4.0, more accurate data are still needed both for nuclear energy applications and for our understanding of the fission process itself. It is within this framework that measurements on the recoil mass spectrometer Lohengrin (at the Institut Laue-Langevin, Grenoble, France) were undertaken to determine isotopic yields for the heavy fission products from the 239Pu(nth,f) reaction. To this end, a new experimental method based on γ-ray spectrometry was developed and validated by comparing our results with those obtained in the light mass region with completely different setups. About 65 fission product yields were measured, with an uncertainty reduced on average by a factor of 2 compared with that previously available in the nuclear data libraries. In addition, for some fission products, a strongly deformed ionic charge distribution compared to a normal Gaussian shape was found, which was interpreted as being caused by the presence of a nanosecond isomeric state. Finally, a nuclear charge polarization was observed, in agreement with that described for other similar fissioning systems.
Morrow, Linda; Hompesch, Marcus; Tideman, Ann M; Matson, Jennifer; Dunne, Nancy; Pardo, Scott; Parkes, Joan L; Schachner, Holly C; Simmons, David A
2011-01-01
Background This glucose clamp study assessed the performance of an electrochemical continuous glucose monitoring (CGM) system for monitoring levels of interstitial glucose. This novel system does not require use of a trocar or needle for sensor insertion. Method Continuous glucose monitoring sensors were inserted subcutaneously into the abdominal tissue of 14 adults with type 1 or type 2 diabetes. Subjects underwent an automated glucose clamp procedure with four consecutive post-steady-state glucose plateau periods (40 min each): (a) hypoglycemic (50 mg/dl), (b) hyperglycemic (250 mg/dl), (c) second hypoglycemic (50 mg/dl), and (d) euglycemic (90 mg/dl). Plasma glucose results obtained with YSI glucose analyzers were used for sensor calibration. Accuracy was assessed retrospectively for plateau periods and transition states, when glucose levels were changing rapidly (approximately 2 mg/dl/min). Results Mean absolute percent difference (APD) was lowest during hypoglycemic plateaus (11.68%, 14.15%) and the euglycemic-to-hypoglycemic transition (14.21%). Mean APD during the hyperglycemic plateau was 17.11%; mean APDs were 18.12% and 19.25% during the hypoglycemic-to-hyperglycemic and hyperglycemic-to-hypoglycemic transitions, respectively. Parkes (consensus) error grid analysis (EGA) and rate EGA of the plateaus and transition periods, respectively, yielded 86.8% and 68.6% accurate results (zone A) and 12.1% and 20.0% benign errors (zone B). Continuous EGA yielded 88.5%, 75.4%, and 79.3% accurate results and 8.3%, 14.3%, and 2.4% benign errors for the euglycemic, hyperglycemic, and hypoglycemic transition periods, respectively. Adverse events were mild and unlikely to be device related. Conclusion This novel CGM system was safe and accurate across the clinically relevant glucose range. PMID:21880226
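The headline accuracy metric used here, mean absolute percent difference between sensor and reference readings, is straightforward to compute; a minimal sketch with invented paired readings:

```python
import numpy as np

def mean_apd(cgm, ysi):
    """Mean absolute percent difference between paired CGM sensor
    readings and YSI plasma-glucose references (both in mg/dl)."""
    cgm, ysi = np.asarray(cgm, float), np.asarray(ysi, float)
    return 100.0 * np.mean(np.abs(cgm - ysi) / ysi)

# Hypothetical paired readings during a 50 mg/dl hypoglycemic plateau.
ysi = [51, 50, 49, 52, 50]
cgm = [55, 46, 53, 57, 44]
print(f"mean APD: {mean_apd(cgm, ysi):.2f}%")
```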
Strategy For Yield Control And Enhancement In VLSI Wafer Manufacturing
NASA Astrophysics Data System (ADS)
Neilson, B.; Rickey, D.; Bane, R. P.
1988-01-01
In most fully utilized integrated circuit (IC) production facilities, profit is very closely linked with yield. Even in the most controlled manufacturing environments, defects due to foreign material are still a major contributor to yield loss. Ideally, an IC manufacturer will have ample engineering resources to address any problem that arises. In the real world, staffing limitations require that some tasks be left undone and potential benefits left unrealized. Therefore, it is important to prioritize problems in a manner that will give the maximum benefit to the manufacturer. When offered a smorgasbord of problems to solve, most people (engineers included) will start with what is most interesting or most comfortable to work on. By providing a system that accurately predicts the impact of a wide variety of defect types, a rational method of prioritizing engineering effort can be established. To that end, a program was developed to determine and rank the major yield detractors in a mixed analog/digital FET manufacturing line. The two classical methods of determining yield detractors are chip failure analysis and defect monitoring on drop-in test die. Both of these methods have shortcomings: 1) chip failure analysis is painstaking and very time consuming, so the sample size is very small; 2) drop-in test die are usually designed for device parametric analysis rather than defect analysis, and providing enough wafer real estate to do meaningful defect analysis would render the wafer worthless for production. To avoid these problems, a defect monitor was designed that provided enough area to detect defects at the same rate or better than the NMOS product die whose yield was to be optimized. The defect monitor was comprehensive and electrically testable using equipment such as the Prometrix LM25 and other digital testers. This enabled the quick accumulation of data which could be handled statistically and mapped individually. By scaling the defect densities found on the monitors to the known sensitivities of the product wafer, the defect types were ranked by defect limiting yield. (Limiting yield is the resultant product yield if there were no failure mechanisms other than the type being considered.) These results were then compared to the product failure analysis results to verify that the monitor was finding the same types of defects, in the same proportion, that were troubling our product. Finally, the major defect types were isolated and reduced using the short-loop capability of the monitor.
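A minimal sketch of the limiting-yield ranking described above, assuming a simple Poisson yield model Y = exp(-D*A) and hypothetical defect densities and critical areas (the paper does not state its yield model, so this is illustrative only):

```python
import math

# Hypothetical monitor results: defect density D (per cm^2) scaled to the
# product die's known sensitivity (critical area A, cm^2) per defect type.
defects = {              # (D, A)
    "metal shorts":  (0.45, 0.60),
    "gate pinholes": (0.20, 0.35),
    "contact opens": (0.15, 0.50),
    "particles":     (0.80, 0.10),
}

# Poisson limiting yield: the product yield if this defect type were the
# only failure mechanism, Y = exp(-D * A). Lowest yield = worst detractor.
ranked = sorted(defects.items(), key=lambda kv: math.exp(-kv[1][0] * kv[1][1]))
for name, (d, a) in ranked:
    print(f"{name:14s} limiting yield = {math.exp(-d * a):.3f}")
```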
Nonempirical range-separated hybrid functionals for solids and molecules
Skone, Jonathan H.; Govoni, Marco; Galli, Giulia
2016-06-03
Dielectric-dependent hybrid (DDH) functionals were recently shown to yield accurate energy gaps and dielectric constants for a wide variety of solids, at a computational cost considerably less than that of GW calculations. The fraction of exact exchange included in the definition of DDH functionals depends (self-consistently) on the dielectric constant of the material. Here we introduce a range-separated (RS) version of DDH functionals where short and long-range components are matched using system dependent, non-empirical parameters. We show that RS DDHs yield accurate electronic properties of inorganic and organic solids, including energy gaps and absolute ionization potentials. Moreover, we show that these functionals may be generalized to finite systems.
NASA Astrophysics Data System (ADS)
Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.
2018-04-01
The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
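For readers wanting to experiment with the ingredients named above, here is a compact 1-D sketch combining a fifth-order WENO upwind derivative (standard Jiang-Shu weights) with third-order TVD Runge-Kutta time stepping for a level-set advection problem. This is a generic textbook implementation on a periodic grid, not the WRF-Fire code.

```python
import numpy as np

def weno5_minus(phi, h):
    """Fifth-order WENO approximation of the upwind derivative phi_x
    (for positive advection speed) on a periodic grid."""
    d = (phi - np.roll(phi, 1)) / h                 # backward differences D^-
    v1, v2, v3 = np.roll(d, 2), np.roll(d, 1), d
    v4, v5 = np.roll(d, -1), np.roll(d, -2)
    p1 = v1/3 - 7*v2/6 + 11*v3/6                    # three candidate stencils
    p2 = -v2/6 + 5*v3/6 + v4/3
    p3 = v3/3 + 5*v4/6 - v5/6
    s1 = 13/12*(v1 - 2*v2 + v3)**2 + 0.25*(v1 - 4*v2 + 3*v3)**2
    s2 = 13/12*(v2 - 2*v3 + v4)**2 + 0.25*(v2 - v4)**2
    s3 = 13/12*(v3 - 2*v4 + v5)**2 + 0.25*(3*v3 - 4*v4 + v5)**2
    eps = 1e-6
    a1, a2, a3 = 0.1/(eps + s1)**2, 0.6/(eps + s2)**2, 0.3/(eps + s3)**2
    return (a1*p1 + a2*p2 + a3*p3) / (a1 + a2 + a3)

def rk3_step(phi, speed, h, dt):
    """Third-order TVD Runge-Kutta step for phi_t + speed*phi_x = 0."""
    L = lambda p: -speed * weno5_minus(p, h)
    p1 = phi + dt * L(phi)
    p2 = 0.75 * phi + 0.25 * (p1 + dt * L(p1))
    return phi / 3 + 2.0 / 3.0 * (p2 + dt * L(p2))

# Advect a smooth periodic level-set profile and check the error.
n, speed = 400, 2.0
h = 1.0 / n
x = np.linspace(0.0, 1.0, n, endpoint=False)
phi = np.sin(2 * np.pi * (x - 0.2))
dt = 0.4 * h / speed                                # CFL-limited time step
steps = 200
for _ in range(steps):
    phi = rk3_step(phi, speed, h, dt)
exact = np.sin(2 * np.pi * (x - 0.2 - speed * steps * dt))
print(f"max abs error after {steps} steps: {np.abs(phi - exact).max():.2e}")
```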
Filament capturing with the multimaterial moment-of-fluid method*
Jemison, Matthew; Sussman, Mark; Shashkov, Mikhail
2015-01-15
A novel method for capturing two-dimensional, thin, under-resolved material configurations, known as “filaments,” is presented in the context of interface reconstruction. This technique uses a partitioning procedure to detect disconnected regions of material in the advective preimage of a cell (indicative of a filament) and makes use of the existing functionality of the Multimaterial Moment-of-Fluid interface reconstruction method to accurately capture the under-resolved feature, while exactly conserving volume. An algorithm for Adaptive Mesh Refinement in the presence of filaments is developed so that refinement is introduced only near the tips of filaments and where the Moment-of-Fluid reconstruction error is still large. Comparison to the standard Moment-of-Fluid method is made. It is demonstrated that filament capturing at a given resolution yields gains in accuracy comparable to introducing an additional level of mesh refinement, at significantly lower cost.
Delaunay Triangulation as a New Coverage Measurement Method in Wireless Sensor Network
Chizari, Hassan; Hosseini, Majid; Poston, Timothy; Razak, Shukor Abd; Abdullah, Abdul Hanan
2011-01-01
Sensing and communication coverage are among the most important trade-offs in Wireless Sensor Network (WSN) design. A minimum bound of sensing coverage is vital in scheduling, target tracking and redeployment phases, as well as providing communication coverage. Some methods measure the coverage as a percentage value, but detailed information has been missing. Two scenarios with equal coverage percentage may not have the same Quality of Coverage (QoC). In this paper, we propose a new coverage measurement method using Delaunay Triangulation (DT). This can provide the value for all coverage measurement tools. Moreover, it categorizes sensors as ‘fat’, ‘healthy’ or ‘thin’ to show the dense, optimal and scattered areas. It can also yield the largest empty area of sensors in the field. Simulation results show that the proposed DT method can achieve accurate coverage information, and provides many tools to compare QoC between different scenarios. PMID:22163792
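A minimal sketch of how Delaunay triangulation exposes coverage information: scipy's Delaunay is applied to hypothetical sensor positions, and triangle circumradii serve as a simple proxy for empty areas and coverage holes (the paper's fat/healthy/thin categorization of sensors is more elaborate than this).

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
sensors = rng.uniform(0, 100, size=(60, 2))   # hypothetical 100 m x 100 m field
tri = Delaunay(sensors)

def circumradius(a, b, c):
    """Circumradius R = abc / (4 * area) of the triangle abc."""
    la = np.linalg.norm(b - c)
    lb = np.linalg.norm(a - c)
    lc = np.linalg.norm(a - b)
    area = 0.5 * abs((b[0]-a[0]) * (c[1]-a[1]) - (b[1]-a[1]) * (c[0]-a[0]))
    return la * lb * lc / (4.0 * area) if area > 0 else np.inf

radii = np.array([circumradius(*sensors[s]) for s in tri.simplices])
sensing_range = 15.0                          # assumed sensing radius, m
print(f"largest empty-circle radius: {radii.max():.1f} m")
print(f"triangles exceeding sensing range: {(radii > sensing_range).sum()} "
      f"of {len(radii)}")
```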
Hunt, Alison C; Ek, Mattias; Schönbächler, Maria
2017-12-01
This study presents a new measurement procedure for the isolation of Pt from iron meteorite samples. The method also allows for the separation of Pd from the same sample aliquot. The separation entails a two-stage anion-exchange procedure. In the first stage, Pt and Pd are separated from each other and from major matrix constituents including Fe and Ni. In the second stage, Ir is reduced with ascorbic acid and eluted from the column before Pt collection. Platinum yields for the total procedure were typically 50-70%. After purification, high-precision Pt isotope determinations were performed by multi-collector ICP-MS. The precision of the new method was assessed using the IIAB iron meteorite North Chile. Replicate analyses of multiple digestions of this material yielded an intermediate precision for the measurement results of 0.73 for ε192Pt, 0.15 for ε194Pt and 0.09 for ε196Pt (2 standard deviations). The NIST SRM 3140 Pt solution reference material was passed through the measurement procedure and yielded an isotopic composition that is identical to the unprocessed Pt reference material. This indicates that the new technique is unbiased within the limit of the estimated uncertainties. Data for three iron meteorites support that Pt isotope variations in these samples are due to exposure to galactic cosmic rays in space.
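The ε-notation used for these results is a simple normalized ratio, the parts-per-ten-thousand deviation of a sample isotope ratio from a bracketing standard; a one-function sketch with illustrative numbers:

```python
def epsilon(ratio_sample, ratio_standard):
    """Parts-per-ten-thousand deviation of a sample isotope ratio from
    the bracketing standard: eps = (R_smp / R_std - 1) * 1e4."""
    return (ratio_sample / ratio_standard - 1.0) * 1e4

# Hypothetical 196Pt/195Pt ratios for a meteorite run and the bracketing
# NIST SRM 3140 standard (values illustrative only).
print(f"eps196Pt = {epsilon(0.253912, 0.253900):+.2f}")
```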
1980-09-01
group. Perhaps people in a more fully closed group would be more accurate. 7. The data we collected were essentially precognitive. Perhaps postcognitive...sets yield the following results: 1. Postcognitive data are (mainly) more accurate than precognitive, but not significantly so. 2. With the
Meseret, S.; Tamir, B.; Gebreyohannes, G.; Lidauer, M.; Negussie, E.
2015-01-01
The development of effective genetic evaluations and selection of sires requires accurate estimates of genetic parameters for all economically important traits in the breeding goal. The main objective of this study was to assess the relative performance of the traditional lactation average model (LAM) against the random regression test-day model (RRM) in the estimation of genetic parameters and prediction of breeding values for Holstein Friesian herds in Ethiopia. The data used consisted of 6,500 test-day (TD) records from 800 first-lactation Holstein Friesian cows that calved between 1997 and 2013. Covariance components were estimated using the average information restricted maximum likelihood method under a single-trait animal model. The estimate of heritability for first-lactation milk yield was 0.30 from LAM, whilst estimates from the RRM ranged from 0.17 to 0.29 for the different stages of lactation. Genetic correlations between different TDs in first-lactation Holstein Friesian ranged from 0.37 to 0.99. The observed genetic correlation was less than unity between milk yields at different TDs, which indicates that the assumption of LAM may not be optimal for accurate evaluation of the genetic merit of animals. A close look at estimated breeding values from both models showed that RRM had a higher standard deviation than LAM, indicating that the TD model makes efficient use of TD information. Correlations of breeding values between models ranged from 0.90 to 0.96 for different groups of sires and cows, and marked re-rankings were observed among top sires and cows in moving from the traditional LAM to RRM evaluations. PMID:26194217
Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J
2017-07-01
Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.
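Of the families named above, the SVD-based route is the easiest to show compactly. The sketch below applies a POD-style projection to a toy linear network with a fast-decaying spectrum; it is illustrative only and is not drawn from any model in the review.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear reaction network x' = A x with widely separated timescales,
# standing in for a large biochemical model (synthetic, illustrative only).
n = 50
A = -np.diag(np.logspace(0, 3, n)) + 0.1 * rng.standard_normal((n, n))
x0 = rng.uniform(0.5, 1.5, n)

# Collect solution snapshots with a simple implicit-Euler integrator.
dt, steps = 1e-3, 200
I = np.eye(n)
snaps, x = [x0], x0
for _ in range(steps):
    x = np.linalg.solve(I - dt * A, x)
    snaps.append(x)
S = np.array(snaps).T                     # n x (steps+1) snapshot matrix

# SVD-based reduction: keep the r leading left singular vectors capturing
# most of the snapshot energy and project the dynamics onto them.
U, sv, _ = np.linalg.svd(S, full_matrices=False)
r = np.searchsorted(np.cumsum(sv**2) / np.sum(sv**2), 0.9999) + 1
Ur = U[:, :r]
Ar = Ur.T @ A @ Ur                        # reduced r x r operator
z = Ur.T @ x0
for _ in range(steps):
    z = np.linalg.solve(np.eye(r) - dt * Ar, z)
err = np.linalg.norm(Ur @ z - x) / np.linalg.norm(x)
print(f"reduced dimension r = {r}, relative end-state error = {err:.2e}")
```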
Combined slope ratio analysis and linear-subtraction: An extension of the Pearce ratio method
NASA Astrophysics Data System (ADS)
De Waal, Sybrand A.
1996-07-01
A new technique, called combined slope ratio analysis, has been developed by extending the Pearce element ratio or conserved-denominator method (Pearce, 1968) to its logical conclusions. If two stoichiometric substances are mixed and certain chemical components are uniquely contained in one of the two mixing substances, then by treating these unique components as conserved, the composition of the substance not containing the relevant component can be accurately calculated within the limits allowed by analytical and geological error. The calculated composition can then be subjected to rigorous statistical testing using the linear-subtraction method recently advanced by Woronow (1994). Application of combined slope ratio analysis to the rocks of the Uwekahuna Laccolith, Hawaii, USA, and the lavas of the 1959 summit eruption of Kilauea Volcano, Hawaii, USA, yields results that are consistent with field observations.
Using electrical impedance to predict catheter-endocardial contact during RF cardiac ablation.
Cao, Hong; Tungjitkusolmun, Supan; Choy, Young Bin; Tsai, Jang-Zern; Vorperian, Vicken R; Webster, John G
2002-03-01
During radio-frequency (RF) cardiac catheter ablation, there is little information available to estimate the contact between the catheter tip electrode and the endocardium, because only the metal electrode shows up under fluoroscopy. We present a method that utilizes the electrical impedance between the catheter electrode and the dispersive electrode to predict the depth of catheter tip electrode insertion into the endocardium. Since the resistivity of blood differs from the resistivity of the endocardium, the impedance increases as the catheter tip lodges deeper in the endocardium. In vitro measurements yielded the impedance-depth relations at 1, 10, 100, and 500 kHz. We predict the depth by spline-curve interpolation using the obtained calibration curve. This impedance method gives reasonably accurate depth predictions. We also evaluated alternative methods, such as impedance difference and impedance ratio.
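A minimal sketch of the calibration-curve idea, using scipy's CubicSpline in place of the paper's spline interpolation, with invented impedance-depth pairs:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical 10 kHz calibration data: impedance (ohms) measured in vitro
# at known electrode insertion depths (mm); values illustrative only.
depth = np.array([0.0, 0.5, 1.0, 1.5, 2.0])          # mm into endocardium
impedance = np.array([95.0, 108.0, 123.0, 141.0, 162.0])  # ohms

# Fit depth as a smooth function of impedance, then invert a new reading.
calib = CubicSpline(impedance, depth)
measured = 130.0
print(f"predicted insertion depth: {calib(measured):.2f} mm")
```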
Measurement of the configuration of a concave surface by the interference of reflected light
NASA Technical Reports Server (NTRS)
Kumazawa, T.; Sakamoto, T.; Shida, S.
1985-01-01
A method whereby a concave surface is irradiated with coherent light and the resulting interference fringes yield information on the concave surface is described. This method can be applied to a surface which satisfies the following conditions: (1) the concave face has a mirror surface; (2) the profile of the face is expressed by a mathematical function with a point of inflection. In this interferometry, multiple light waves reflected from the concave surface interfere and make fringes wherever the reflected light propagates. Photographs of the fringe patterns for a uniformly loaded thin silicon plate clamped at the edge are shown. The experimental and theoretical values of the maximum optical path difference show good agreement. This simple method can be applied to obtain accurate information on concave surfaces.
High-order Path Integral Monte Carlo methods for solving strongly correlated fermion problems
NASA Astrophysics Data System (ADS)
Chin, Siu A.
2015-03-01
In solving for the ground state of a strongly correlated many-fermion system, the conventional second-order Path Integral Monte Carlo method is plagued by the sign problem. This is due to the large number of anti-symmetric free-fermion propagators that are needed to extract the square of the ground state wave function at large imaginary time. In this work, I show that optimized fourth-order Path Integral Monte Carlo methods, which use no more than five free-fermion propagators, in conjunction with the Hamiltonian energy estimator, can yield accurate ground state energies for quantum dots with up to 20 polarized electrons. The correlations are directly built in and no explicit wave functions are needed. This work is supported by the Qatar National Research Fund NPRP GRANT #5-674-1-114.
A fully Bayesian method for jointly fitting instrumental calibration and X-ray spectral models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Jin; Yu, Yaming; Van Dyk, David A.
2014-10-20
Owing to a lack of robust principled methods, systematic instrumental uncertainties have generally been ignored in astrophysical data analysis despite wide recognition of the importance of including them. Ignoring calibration uncertainty can cause bias in the estimation of source model parameters and can lead to underestimation of the variance of these estimates. We previously introduced a pragmatic Bayesian method to address this problem. The method is 'pragmatic' in that it introduced an ad hoc technique that simplified computation by neglecting the potential information in the data for narrowing the uncertainty for the calibration product. Following that work, we use a principal component analysis to efficiently represent the uncertainty of the effective area of an X-ray (or γ-ray) telescope. Here, however, we leverage this representation to enable a principled, fully Bayesian method that coherently accounts for the calibration uncertainty in high-energy spectral analysis. In this setting, the method is compared with standard analysis techniques and the pragmatic Bayesian method. The advantage of the fully Bayesian method is that it allows the data to provide information not only for estimation of the source parameters but also for the calibration product—here the effective area, conditional on the adopted spectral model. In this way, it can yield more accurate and efficient estimates of the source parameters along with valid estimates of their uncertainty. Provided that the source spectrum can be accurately described by a parameterized model, this method allows rigorous inference about the effective area by quantifying which possible curves are most consistent with the data.
Piao, Yongjun; Piao, Minghao; Ryu, Keun Ho
2017-01-01
Cancer classification has been a crucial topic of research in cancer treatment. In the last decade, messenger RNA (mRNA) expression profiles have been widely used to classify different types of cancers. With the discovery of a new class of small non-coding RNAs; known as microRNAs (miRNAs), various studies have shown that the expression patterns of miRNA can also accurately classify human cancers. Therefore, there is a great demand for the development of machine learning approaches to accurately classify various types of cancers using miRNA expression data. In this article, we propose a feature subset-based ensemble method in which each model is learned from a different projection of the original feature space to classify multiple cancers. In our method, the feature relevance and redundancy are considered to generate multiple feature subsets, the base classifiers are learned from each independent miRNA subset, and the average posterior probability is used to combine the base classifiers. To test the performance of our method, we used bead-based and sequence-based miRNA expression datasets and conducted 10-fold and leave-one-out cross validations. The experimental results show that the proposed method yields good results and has higher prediction accuracy than popular ensemble methods. The Java program and source code of the proposed method and the datasets in the experiments are freely available at https://sourceforge.net/projects/mirna-ensemble/. Copyright © 2016 Elsevier Ltd. All rights reserved.
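A condensed sketch of a feature-subset ensemble with averaged posterior probabilities, using scikit-learn on synthetic data. Note one simplification: the subsets here are random projections of the feature space, whereas the paper selects subsets by feature relevance and redundancy.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic stand-in for a miRNA expression matrix (samples x miRNAs).
X, y = make_classification(n_samples=120, n_features=200, n_informative=20,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# Feature-subset ensemble: each base classifier is trained on a different
# subset of the feature space; class posteriors are averaged at the end.
n_models, subset_size = 15, 40
proba = np.zeros((len(yte), 3))
for _ in range(n_models):
    idx = rng.choice(X.shape[1], size=subset_size, replace=False)
    clf = LogisticRegression(max_iter=1000).fit(Xtr[:, idx], ytr)
    proba += clf.predict_proba(Xte[:, idx])
accuracy = np.mean(proba.argmax(axis=1) == yte)
print(f"ensemble accuracy: {accuracy:.3f}")
```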
Max, Jean-Joseph; Meddeb-Mouelhi, Fatma; Beauregard, Marc; Chapados, Camille
2012-12-01
Enzymatic assays need robust, rapid colorimetric methods that can follow ongoing reactions. For this, we developed a highly accurate, multi-wavelength detection method that could be used for several systems. Here, it was applied to the detection of para-nitrophenol (pNP) in basic and acidic solutions. First, we confirmed by factor analysis that pNP has two forms with unique spectral characteristics in the 240 to 600 nm range: the phenol form in acidic conditions absorbs in the lower range, whereas the phenolate form in basic conditions absorbs in the higher range. Thereafter, the method was used for the determination of species concentration. For this, intensity measurements were made at only two wavelengths with a microtiter plate reader. This yielded the total dye concentration, the relative abundance of each species, and the solution pH. The method was applied to an enzymatic assay using a chromogenic substrate that generates pNP after hydrolysis catalyzed by a lipase from the fungus Yarrowia lipolytica. Over the pH range of 3-11, accurate amounts of acidic and basic pNP were determined at 340 and 405 nm, respectively. This method surpasses the commonly used single-wavelength assay at 405 nm, which does not detect the acidic pNP species and therefore underestimates activity. Moreover, alleviating this pH-related problem by neutralization is not necessary. On the whole, the method developed is readily applicable to rapid, high-throughput enzymatic activity measurements over a wide pH range.
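A two-wavelength, two-species determination of this kind reduces to solving a 2x2 Beer-Lambert system; a sketch with invented molar absorptivities (the paper does not report these values):

```python
import numpy as np

# Hypothetical molar absorptivities (M^-1 cm^-1) of the two pNP species at
# the two reading wavelengths; 1 cm path length assumed.
#                     340 nm   405 nm
eps_acid = np.array([5800.0,   200.0])   # phenol form (acidic conditions)
eps_base = np.array([1500.0, 18300.0])   # phenolate form (basic conditions)

def species_concentrations(a340, a405):
    """Solve the 2x2 Beer-Lambert system A = E @ c for the acidic and
    basic pNP concentrations (M) from absorbances at two wavelengths."""
    E = np.column_stack([eps_acid, eps_base])   # rows: wavelengths
    return np.linalg.solve(E, np.array([a340, a405]))

c_acid, c_base = species_concentrations(0.35, 0.62)
total = c_acid + c_base
print(f"acid {c_acid*1e6:.1f} uM, base {c_base*1e6:.1f} uM, "
      f"total {total*1e6:.1f} uM")
```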
Myocardial strains from 3D displacement encoded magnetic resonance imaging
2012-01-01
Background The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), makes detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and the ability to resolve transmural strain variations. Conclusions Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strain values agree with previously reported myocardial strains in normal human hearts. PMID:22533791
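The core computation, fitting a polynomial to the displacement field and differentiating it to obtain the strain tensor, can be sketched in a few lines. Here a first-order fit on synthetic 2-D data stands in for the paper's local 3-D polynomial models.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 2D displacement field sampled at scattered material points,
# u(X) = (F - I) X for a known deformation gradient F, plus noise,
# standing in for DENSE-encoded displacements.
F_true = np.array([[1.10, 0.05],
                   [0.02, 0.95]])
X = rng.uniform(-1, 1, size=(200, 2))
U = X @ (F_true - np.eye(2)).T + rng.normal(0, 1e-3, X.shape)

# Polynomial (here first-order) least-squares model of each displacement
# component: u_i = a_i + b_i1 X1 + b_i2 X2.
Amat = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(Amat, U, rcond=None)   # shape (3, 2)
grad_u = coef[1:].T                               # [i, j] = du_i / dX_j

# Deformation gradient and Green-Lagrange strain tensor.
F = np.eye(2) + grad_u
E = 0.5 * (F.T @ F - np.eye(2))
print("estimated Green-Lagrange strain:\n", np.round(E, 4))
```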
A new method of measuring lens refractive index.
Buckley, John
2008-07-01
A new clinical method for determining the refractive index of a lens is described. By measuring lens power in air and then immersing the lens in a liquid of known refractive index n, it is possible to calculate the refractive index of the lens material μ by using the formula μ = (n·Kv,1 − Kv,n)/(Kv,1 − Kv,n), where Kv,1 is the lens power determined in air and Kv,n is the lens power determined in the immersion liquid. The only materials required are a digital lensmeter and a wet cell for holding the lens in the liquid. The theoretical basis of the method is explained and a description given of its limitations. The optimal way of measuring different types of lenses is discussed. Sources of error include the thin-lens theory behind the method, the use of a wet cell and the digital lensmeter. The theoretical accuracy of the results is given as 0.02, but 0.01 is usually achieved. In all cases, measuring the front vertex power (FVP) yields a more accurate estimate of the refractive index of a lens than measuring the back vertex power (BVP). The author found that half the lenses measured attained values within 0.005 of the known material index. This method is usually sufficiently accurate to isolate which lens material has been used in manufacturing and to permit manufacturing spectacles that mimic the appearance of an earlier pair. Some suggestions for further refinement are given.
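The stated formula translates directly into code; the example powers below are invented, not from the paper:

```python
def lens_index(power_air, power_liquid, n_liquid):
    """Refractive index of the lens material from vertex powers measured
    in air and immersed in a liquid of known index n:
        mu = (n * K_air - K_liq) / (K_air - K_liq)."""
    return (n_liquid * power_air - power_liquid) / (power_air - power_liquid)

# Example: a lens reading +4.00 D in air and +1.25 D in water (n = 1.333).
print(f"estimated material index: {lens_index(4.00, 1.25, 1.333):.3f}")
```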
NASA Astrophysics Data System (ADS)
Luo, Ning; Zhao, Zhanfeng; Illman, Walter A.; Berg, Steven J.
2017-11-01
Transient hydraulic tomography (THT) is a robust method of aquifer characterization to estimate the spatial distributions (or tomograms) of both hydraulic conductivity (K) and specific storage (Ss). However, the highly parameterized nature of the geostatistical inversion approach renders it computationally intensive for large-scale investigations. In addition, geostatistics-based THT may produce overly smooth tomograms when the head data used to constrain the inversion are limited. Therefore, alternative model conceptualizations for THT need to be examined. To investigate this, we simultaneously calibrated different groundwater models with varying parameterizations and zonations using two cases of different pumping and monitoring data densities from a laboratory sandbox. Specifically, one effective parameter model, four geology-based zonation models of varying accuracy and resolution, and five geostatistical models with different prior information were calibrated. Model performance is quantitatively assessed by examining the calibration and validation results. Our study reveals that highly parameterized geostatistical models perform the best among the models compared, while the zonation model with excellent knowledge of stratigraphy also yields comparable results. When few pumping tests with sparse monitoring intervals are available, the incorporation of accurate or simplified geological information into geostatistical models reveals more details in heterogeneity and yields more robust validation results. However, results deteriorate when inaccurate geological information is incorporated. Finally, our study reveals that transient inversions are necessary to obtain reliable K and Ss estimates for making accurate predictions of transient drawdown events.
Benchmarking Attosecond Physics with Atomic Hydrogen
2015-05-25
theoretical simulations are available in this regime. We provided accurate reference data on the photoionization yield and the CEP-dependent...this difficulty. This experiment claimed to show that, contrary to current understanding, the photoionization of an atomic electron is not an... photoion yield and transferable intensity calibration. The dependence of photoionization probability on laser intensity is one of the most
Improved accuracy for finite element structural analysis via a new integrated force method
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Aiello, Robert A.; Berke, Laszlo
1992-01-01
A comparative study was carried out to determine the accuracy of finite element analyses based on the stiffness method, a mixed method, and the new integrated force and dual integrated force methods. The numerical results were obtained with the following software: MSC/NASTRAN and ASKA for the stiffness method; the MHOST implementation for the mixed method; and GIFT for the integrated force methods. The results indicate that, on an overall basis, the stiffness and mixed methods present some limitations. The stiffness method generally requires a large number of elements in the model to achieve acceptable accuracy. The MHOST method tends to achieve a higher degree of accuracy for coarse models than does the stiffness method implemented by MSC/NASTRAN and ASKA. The two integrated force methods, which bestow simultaneous emphasis on stress equilibrium and strain compatibility, yield accurate solutions with fewer elements in a model. The full potential of these new integrated force methods remains largely unexploited, and they hold the promise of spawning new finite element structural analysis tools.
Salissou, Yacoubou; Panneton, Raymond
2010-11-01
Several methods for measuring the complex wave number and the characteristic impedance of sound absorbers have been proposed in the literature. These methods can be classified into single frequency and wideband methods. In this paper, the main existing methods are revisited and discussed. An alternative method which is not well known or discussed in the literature while exhibiting great potential is also discussed. This method is essentially an improvement of the wideband method described by Iwase et al., rewritten so that the setup is more ISO 10534-2 standard-compliant. Glass wool, melamine foam and acoustical/thermal insulator wool are used to compare the main existing wideband non-iterative methods with this alternative method. It is found that, in the middle and high frequency ranges the alternative method yields results that are comparable in accuracy to the classical two-cavity method and the four-microphone transfer-matrix method. However, in the low frequency range, the alternative method appears to be more accurate than the other methods, especially when measuring the complex wave number.
Deformation field correction for spatial normalization of PET images
Bilgel, Murat; Carass, Aaron; Resnick, Susan M.; Wong, Dean F.; Prince, Jerry L.
2015-01-01
Spatial normalization of positron emission tomography (PET) images is essential for population studies, yet the current state of the art in PET-to-PET registration is limited to the application of conventional deformable registration methods that were developed for structural images. A method is presented for the spatial normalization of PET images that improves their anatomical alignment over the state of the art. The approach works by correcting the deformable registration result using a model that is learned from training data having both PET and structural images. In particular, viewing the structural registration of training data as ground truth, correction factors are learned by using a generalized ridge regression at each voxel given the PET intensities and voxel locations in a population-based PET template. The trained model can then be used to obtain more accurate registration of PET images to the PET template without the use of a structural image. A cross validation evaluation on 79 subjects shows that the proposed method yields more accurate alignment of the PET images compared to deformable PET-to-PET registration as revealed by 1) a visual examination of the deformed images, 2) a smaller error in the deformation fields, and 3) a greater overlap of the deformed anatomical labels with ground truth segmentations. PMID:26142272
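A per-voxel sketch of the correction model, assuming plain ridge regression on synthetic features and residuals; the paper's generalized ridge regression differs in detail, and all names and values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Training data for one voxel: rows are subjects; features are local PET
# intensities plus template voxel coordinates; the target is the residual
# between the structural (ground-truth) and PET-to-PET deformations there.
Xtr = rng.standard_normal((79, 6))
w_true = np.array([0.8, -0.3, 0.0, 0.5, 0.1, -0.2])
ytr = Xtr @ w_true + rng.normal(0, 0.05, 79)

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w = ridge_fit(Xtr, ytr)
x_new = rng.standard_normal(6)      # features for a new test subject
correction = x_new @ w              # predicted deformation correction
print(f"predicted correction at this voxel: {correction:+.3f} mm")
```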
Automatic 3D liver segmentation based on deep learning and globally optimized surface evolution
NASA Astrophysics Data System (ADS)
Hu, Peijun; Wu, Fa; Peng, Jialin; Liang, Ping; Kong, Dexing
2016-12-01
The detection and delineation of the liver from abdominal 3D computed tomography (CT) images are fundamental tasks in computer-assisted liver surgery planning. However, automatic and accurate segmentation, especially liver detection, remains challenging due to complex backgrounds, ambiguous boundaries, heterogeneous appearances and highly varied shapes of the liver. To address these difficulties, we propose an automatic segmentation framework based on a 3D convolutional neural network (CNN) and globally optimized surface evolution. First, a deep 3D CNN is trained to learn a subject-specific probability map of the liver, which gives the initial surface and acts as a shape prior in the following segmentation step. Then, both global and local appearance information from the prior segmentation are adaptively incorporated into a segmentation model, which is globally optimized in a surface evolution framework. The proposed method has been validated on 42 CT images from the public Sliver07 database and local hospitals. On the Sliver07 online testing set, the proposed method achieves an overall score of 80.3 ± 4.5, yielding a mean Dice similarity coefficient of 97.25 ± 0.65% and an average symmetric surface distance of 0.84 ± 0.25 mm. The quantitative validations and comparisons show that the proposed method is accurate and effective for clinical application.
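The Dice similarity coefficient used in this evaluation is easy to reproduce; a minimal sketch on toy 3D masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two hypothetical liver masks on a small voxel grid.
seg = np.zeros((20, 20, 20), bool); seg[4:16, 4:16, 4:16] = True
gt = np.zeros((20, 20, 20), bool); gt[5:16, 4:17, 4:16] = True
print(f"Dice = {100 * dice(seg, gt):.2f}%")
```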
Computation of mass-density images from x-ray refraction-angle images.
Wernick, Miles N; Yang, Yongyi; Mondal, Indrasis; Chapman, Dean; Hasnah, Moumen; Parham, Christopher; Pisano, Etta; Zhong, Zhong
2006-04-07
In this paper, we investigate the possibility of computing quantitatively accurate images of mass density variations in soft tissue. This is a challenging task, because density variations in soft tissue, such as the breast, can be very subtle. Beginning from an image of refraction angle created by either diffraction-enhanced imaging (DEI) or multiple-image radiography (MIR), we estimate the mass-density image using a constrained least squares (CLS) method. The CLS algorithm yields accurate density estimates while effectively suppressing noise. Our method improves on an analytical method proposed by Hasnah et al (2005 Med. Phys. 32 549-52), which can produce significant artefacts when even a modest level of noise is present. We present a quantitative evaluation study to determine the accuracy with which mass density can be determined in the presence of noise. Based on computer simulations, we find that the mass-density estimation error can be as low as a few per cent for typical density variations found in the breast. Example images computed from less-noisy real data are also shown to illustrate the feasibility of the technique. We anticipate that density imaging may have application in assessment of water content of cartilage resulting from osteoarthritis, in evaluation of bone density, and in mammographic interpretation.
NASA Astrophysics Data System (ADS)
Khalili, Ashkan; Jha, Ratneshwar; Samaratunga, Dulip
2016-11-01
Wave propagation analysis in 2-D composite structures is performed efficiently and accurately through the formulation of a User-Defined Element (UEL) based on the wavelet spectral finite element (WSFE) method. The WSFE method is based on the first-order shear deformation theory which yields accurate results for wave motion at high frequencies. The 2-D WSFE model is highly efficient computationally and provides a direct relationship between system input and output in the frequency domain. The UEL is formulated and implemented in Abaqus (commercial finite element software) for wave propagation analysis in 2-D composite structures with complexities. Frequency domain formulation of WSFE leads to complex valued parameters, which are decoupled into real and imaginary parts and presented to Abaqus as real values. The final solution is obtained by forming a complex value using the real number solutions given by Abaqus. Five numerical examples are presented in this article, namely undamaged plate, impacted plate, plate with ply drop, folded plate and plate with stiffener. Wave motions predicted by the developed UEL correlate very well with Abaqus simulations. The results also show that the UEL largely retains computational efficiency of the WSFE method and extends its ability to model complex features.
Receptive Field Inference with Localized Priors
Park, Mijung; Pillow, Jonathan W.
2011-01-01
The linear receptive field describes a mapping from sensory stimuli to a one-dimensional variable governing a neuron's spike response. However, traditional receptive field estimators such as the spike-triggered average converge slowly and often require large amounts of data. Bayesian methods seek to overcome this problem by biasing estimates towards solutions that are more likely a priori, typically those with small, smooth, or sparse coefficients. Here we introduce a novel Bayesian receptive field estimator designed to incorporate locality, a powerful form of prior information about receptive field structure. The key to our approach is a hierarchical receptive field model that flexibly adapts to localized structure in both spacetime and spatiotemporal frequency, using an inference method known as empirical Bayes. We refer to our method as automatic locality determination (ALD), and show that it can accurately recover various types of smooth, sparse, and localized receptive fields. We apply ALD to neural data from retinal ganglion cells and V1 simple cells, and find it achieves error rates several times lower than standard estimators. Thus, estimates of comparable accuracy can be achieved with substantially less data. Finally, we introduce a computationally efficient Markov Chain Monte Carlo (MCMC) algorithm for fully Bayesian inference under the ALD prior, yielding accurate Bayesian confidence intervals for small or noisy datasets. PMID:22046110
Comparative study of quantitative phase imaging techniques for refractometry of optical fibers
NASA Astrophysics Data System (ADS)
de Dorlodot, Bertrand; Bélanger, Erik; Bérubé, Jean-Philippe; Vallée, Réal; Marquet, Pierre
2018-02-01
The refractive index difference profile of optical fibers is the key design parameter because it determines, among other properties, the insertion losses and propagating modes. Therefore, an accurate refractive index profiling method is of paramount importance to their development and optimization. Quantitative phase imaging (QPI) is one of the available tools to retrieve structural characteristics of optical fibers, including the refractive index difference profile. Having the advantage of being non-destructive, several different QPI methods have been developed over the last decades. Here, we present a comparative study of three different available QPI techniques, namely the transport-of-intensity equation, quadriwave lateral shearing interferometry and digital holographic microscopy. To assess the accuracy and precision of those QPI techniques, quantitative phase images of the core of a well-characterized optical fiber have been retrieved for each of them and a robust image processing procedure has been applied in order to retrieve their refractive index difference profiles. As a result, even if the raw images for all the three QPI methods were suffering from different shortcomings, our robust automated image-processing pipeline successfully corrected these. After this treatment, all three QPI techniques yielded accurate, reliable and mutually consistent refractive index difference profiles in agreement with the accuracy and precision of the refracted near-field benchmark measurement.
Ramirez-Sanchez, Israel; Maya, Lisandro; Ceballos, Guillermo; Villarreal, Francisco
2010-12-01
Polyphenolic compounds of the flavonoid family are abundantly present in cacao seeds and cocoa products. Results from studies using cocoa products indicate beneficial effects of flavanols on cardiovascular endpoints. Evidence indicates that (-)-epicatechin is the main cacao flavanol associated with cardiovascular effects, so the accurate quantification of its content in cacao seeds or cocoa products is important. Common methods for the quantification of phenolic content in cocoa products are based on the reaction of phenols with colorimetric reagents such as the Folin-Ciocalteu (FC) reagent. In this study, we compared FC determinations of phenolic content using two different standards (gallic acid and (-)-epicatechin) to construct calibration curves. We compared these results with those obtained from a simple fluorometric method (Ex 280/Em 320 nm) used to determine the catechin/(-)-epicatechin content of samples of cacao seeds and cocoa products. Values obtained from the FC determination of polyphenols yield an overestimation of phenol (flavonoid) content when gallic acid is used as the standard. Moreover, (-)-epicatechin is a more reliable standard because of its abundance in cacao seeds and cocoa products. The use of fluorometric spectra yields a simple and highly quantitative means for a more precise and rapid quantification of cacao catechins. Fluorometric values are essentially in agreement with those reported using more cumbersome methods. In conclusion, the use of fluorescence emission spectra is a quick, practical and suitable means of quantifying catechins in cacao seeds and cocoa products.
Stengel, Andreas; Keire, David; Goebel, Miriam; Evilevitch, Lena; Wiggins, Brian; Taché, Yvette; Reeve, Joseph R
2009-11-01
The correct identification of circulating molecular forms and measurement of peptide levels in blood requires that the endocrine peptide being studied be stable and recovered in good yield during blood processing. However, it is not clear whether this is achieved in studies using standard blood processing. Therefore, we compared the concentration and molecular form of 12 (125)I-labeled peptides using the standard procedure (EDTA-blood on ice) and a new method employing Reduced temperatures, Acidification, Protease inhibition, Isotopic exogenous controls, and Dilution (RAPID). During standard processing there was at least 80% loss for calcitonin-gene-related peptide and cholecystokinin-58 (CCK-58) and more than 35% loss for amylin, insulin, peptide YY forms (PYY(1-36) and PYY(3-36)), and somatostatin-28. In contrast, the RAPID method significantly improved the recovery for 11 of 12 peptides (P < 0.05) and eliminated the breakdown of endocrine peptides occurring after standard processing, as reflected in radically changed molecular forms for CCK-58, gastrin-releasing peptide, somatostatin-28, and ghrelin. For endogenous ghrelin, this led to an acyl/total ghrelin ratio of 1:5 instead of 1:19 by the standard method. These results show that the RAPID method enables accurate assessment of circulating gut peptide concentrations and forms such as CCK-58, acylated ghrelin, and somatostatin-28. Therefore, the RAPID method represents an efficacious means to detect variations in circulating peptide concentrations and forms relevant to understanding the physiological function of endocrine peptides.
Herreros, María Luisa; Tagarro, Alfredo; García-Pose, Araceli; Sánchez, Aida; Cañete, Alfonso; Gili, Pablo
2018-01-01
This study evaluated the use of urine dipstick tests on clean-catch samples to screen for urinary tract infection (UTI) in febrile infants under 90 days of age. We carried out a comparative diagnostic accuracy study of infants under 90 days old who were evaluated for unexplained fever without a source in the emergency room of a hospital in Madrid from January 2011 to January 2013. We obtained matched samples of urine using two different methods: a clean-catch, standardised stimulation technique and catheterisation. The results of the leukocyte esterase and nitrite tests were compared with the urine cultures. We obtained 60 pairs of matched samples. A combined analysis of leukocyte esterase and/or nitrites yielded a sensitivity of 86% and a specificity of 80% for the diagnosis of UTI in clean-catch samples. The sensitivity of leukocyte esterase and/or nitrites in samples obtained by catheterisation was not statistically different from that in the clean-catch samples (p = 0.592). Performing urine dipstick tests on urine samples obtained by the clean-catch method was an accurate screening test for diagnosing UTIs in febrile infants of less than 90 days old, providing a good alternative to bladder catheterisation when screening for UTIs. ©2017 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
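The reported operating characteristics follow directly from confusion-matrix counts; a sketch using a hypothetical breakdown of the 60 matched pairs that is consistent with the stated 86% sensitivity and 80% specificity:

```python
def screen_metrics(tp, fp, fn, tn):
    """Sensitivity and specificity of a screening test from its
    confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical breakdown (not reported in the abstract) of 60 samples.
sens, spec = screen_metrics(tp=12, fp=9, fn=2, tn=37)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")
```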
Nilsson, Markus; Szczepankiewicz, Filip; van Westen, Danielle; Hansson, Oskar
2015-01-01
Conventional motion and eddy-current correction, where each diffusion-weighted volume is registered to a non-diffusion-weighted reference, suffers from poor accuracy for high b-value data. An alternative approach is to extrapolate reference volumes from low b-value data. We aim to compare the performance of conventional and extrapolation-based correction of diffusional kurtosis imaging (DKI) data, and to demonstrate the impact of the correction approach on group comparison studies. DKI was performed in patients with Parkinson's disease dementia (PDD) and healthy age-matched controls, using b-values of up to 2750 s/mm². The accuracy of conventional and extrapolation-based correction methods was investigated. Parameters from DTI and DKI were compared between patients and controls in the cingulum and the anterior thalamic projection tract. Conventional correction resulted in systematic registration errors for high b-value data. The extrapolation-based methods did not exhibit such errors, yielding more accurate tractography and up to 50% lower standard deviation in DKI metrics. Statistically significant differences were found between patients and controls when using the extrapolation-based motion correction that were not detected when using the conventional method. We recommend that conventional motion and eddy-current correction be abandoned for high b-value data in favour of more accurate methods using extrapolation-based references.
NASA Astrophysics Data System (ADS)
McLaughlin, P. W.; Kaihatu, J. M.; Irish, J. L.; Taylor, N. R.; Slinn, D.
2013-12-01
Recent hurricane activity in the Gulf of Mexico has led to a need for accurate, computationally efficient prediction of hurricane damage so that communities can better assess the risk of local socio-economic disruption. This study focuses on developing robust, physics-based non-dimensional equations that accurately predict maximum significant wave height at different locations near a given hurricane track. These equations (denoted Wave Response Functions, or WRFs) were developed from presumed physical dependencies between wave heights and hurricane characteristics and fit with data from numerical models of waves and surge under hurricane conditions. After curve fitting, constraints that correct for fully developed sea state were used to limit wind-wave growth. When applied to the region near Gulfport, MS, back-prediction of maximum significant wave height yielded root mean square errors between 0.22 and 0.42 m at open-coast stations and between 0.07 and 0.30 m at bay stations when compared to the numerical model data. The WRF method was also applied to Corpus Christi, TX and Panama City, FL with similar results. Back-prediction errors will be included in uncertainty evaluations connected to risk calculations using joint probability methods. These methods require thousands of simulations to quantify extreme value statistics, thus requiring the use of reduced methods such as the WRF to represent the relevant physical processes.
Shimol, Eli Ben; Joskowicz, Leo; Eliahou, Ruth; Shoshan, Yigal
2018-02-01
Stereotactic radiosurgery (SRS) is a common treatment for intracranial meningiomas. SRS is planned on a pre-therapy gadolinium-enhanced T1-weighted MRI scan (Gd-T1w MRI) in which the meningioma contours have been delineated. Post-SRS serial Gd-T1w MRI scans are then acquired for longitudinal treatment evaluation. Accurate quantification of tumor volume change is required for evaluating treatment efficacy and for deciding on treatment continuation. We present a new algorithm for the automatic segmentation and volumetric assessment of meningioma in post-therapy Gd-T1w MRI scans. The inputs are the pre- and post-therapy Gd-T1w MRI scans and the meningioma delineation in the pre-therapy scan. The output is the meningioma delineations and volumes in the post-therapy scan. The algorithm uses the pre-therapy scan and its meningioma delineation to initialize an extended Chan-Vese active contour method and as a strong patient-specific intensity and shape prior for the post-therapy scan meningioma segmentation. The algorithm is automatic, obviates the need for independent tumor localization and segmentation initialization, and incorporates the same tumor delineation criteria in both the pre- and post-therapy scans. Our experimental results on retrospective pre- and post-therapy scans with a total of 32 meningiomas with volumes in the range 0.4-26.5 cm³ yield a Dice coefficient of [Formula: see text]% with respect to ground-truth delineations in post-therapy scans created by two clinicians. These results indicate a high correspondence to the ground-truth delineations. Our algorithm yields more reliable and accurate tumor volume change measurements than other stand-alone segmentation methods. It may be a useful tool for quantitative meningioma prognosis evaluation after SRS.
Classification of Ancient Mammal Individuals Using Dental Pulp MALDI-TOF MS Peptide Profiling
Tran, Thi-Nguyen-Ny; Aboudharam, Gérard; Gardeisen, Armelle; Davoust, Bernard; Bocquet-Appel, Jean-Pierre; Flaudrops, Christophe; Belghazi, Maya; Raoult, Didier; Drancourt, Michel
2011-01-01
Background The classification of ancient animal corpses at the species level remains a challenging task for forensic scientists and anthropologists. Severe damage and mixed, tiny pieces originating from several skeletons may render morphological classification virtually impossible. Standard approaches are based on sequencing mitochondrial and nuclear targets. Methodology/Principal Findings We present a method that can accurately classify mammalian species using dental pulp and mass spectrometry peptide profiling. Our work was organized into three successive steps. First, after extracting proteins from the dental pulp collected from 37 modern individuals representing 13 mammalian species, trypsin-digested peptides were used for matrix-assisted laser desorption/ionization time-of-flight mass spectrometry analysis. The resulting peptide profiles accurately classified every individual at the species level, in agreement with the parallel cytochrome b gene sequencing gold standard. Second, using a database of 279 modern spectra, we blindly classified 33 of 37 teeth collected from 37 modern individuals (89.1%). Third, we classified 10 of 18 teeth (56%) collected from 15 ancient individuals representing five mammal species including human, from five burial sites dating back 8,500 years. Further comparison with an upgraded database comprising ancient specimen profiles yielded 100% classification in ancient teeth. Peptide sequencing yielded 4 and 16 different non-keratin proteins including collagen (alpha-1 type I and alpha-2 type I) in human ancient and modern dental pulp, respectively. Conclusions/Significance Mass spectrometry peptide profiling of the dental pulp is a new approach that can be added to the arsenal of species classification tools for forensics and anthropology as a complementary method to DNA sequencing. The dental pulp is a new source for collagen and other proteins for the species classification of modern and ancient mammal individuals. PMID:21364886
DNA-barcoding of forensically important blow flies (Diptera: Calliphoridae) in the Caribbean Region
Agnarsson, Ingi
2017-01-01
Correct identification of forensically important insects, such as flies in the family Calliphoridae, is a crucial step for them to be used as evidence in legal investigations. Traditional identification based on morphology has been effective, but has some limitations when it comes to identifying immature stages of certain species. DNA-barcoding, using COI, has demonstrated potential for rapid and accurate identification of Calliphoridae; however, this gene does not reliably distinguish among some recently diverged species, raising questions about its use for delimitation of species of forensic importance. To facilitate DNA-based identification of Calliphoridae in the Caribbean, we developed a vouchered reference collection from across the region and a DNA sequence database, and further added the nuclear ITS2 as a second marker to increase the accuracy of identification through barcoding. We morphologically identified freshly collected specimens, performed phylogenetic analyses, and employed several species delimitation methods for a total of 468 individuals representing 19 described species. Our results show that the combination of the COI and ITS2 genes yields more accurate identification and diagnoses, and better agreement with morphological data, than the mitochondrial barcodes alone. All of our results from independent and concatenated trees and most of the species delimitation methods yield considerably higher diversity estimates than the distance-based approach and morphology. Molecular data support at least 24 distinct clades within Calliphoridae in this study, recovering substantial geographic variation for Lucilia eximia, Lucilia retroversa, Lucilia rica and Chloroprocta idioidea, probably indicating several cryptic species. In sum, our study demonstrates the importance of employing a second nuclear marker for barcoding analyses and species delimitation of calliphorids, and the power of molecular data in combination with a complete reference database to enable identification of taxonomically and geographically diverse insects of forensic importance. PMID:28761780
Tiecco, Matteo; Corte, Laura; Roscini, Luca; Colabella, Claudia; Germani, Raimondo; Cardinali, Gianluigi
2014-07-25
Conductometry is widely used to determine the critical micellar concentration and the surface properties of micellar aggregates of amphiphiles. Current conductivity experiments on surfactant solutions are typically carried out by manual pipetting, yielding some tens of reading points within a couple of hours. In order to study the properties of surfactant-cell interactions, each amphiphile must be tested under different conditions against several types of cells. This calls for complex experimental designs, making the application of current methods seriously time consuming, especially because long experiments risk inducing alterations of the cells independently of the surfactant action. In this paper we present a novel, accurate and rapid automated procedure to obtain conductometric curves with several hundred reading points within tens of minutes. The method was validated with surfactant solutions alone and in combination with Saccharomyces cerevisiae cells. An easy-to-use R script calculates conductometric parameters and their statistical significance, with a graphic interface to visualize data and results. The validations showed that the procedure works in the same manner with surfactants alone or in combination with cells, yielding around 1000 reading points within 20 min and with high accuracy, as determined by the regression analysis. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
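Although the authors' analysis tool is an R script, the core breakpoint logic for extracting a critical micellar concentration (CMC) from a conductometric curve can be sketched briefly. The two-segment linear fit below is a generic Python illustration, under the assumption that specific conductivity is piecewise linear around the CMC; it is not the published script.

import numpy as np

def cmc_from_conductivity(conc, kappa, min_pts=4):
    # Scan candidate breakpoints, fit a line to each segment, and keep the
    # split with the lowest total squared error; the CMC is the break point.
    best_sse, best_cmc = np.inf, None
    for i in range(min_pts, len(conc) - min_pts):
        sse = 0.0
        for seg in (slice(None, i), slice(i, None)):
            p = np.polyfit(conc[seg], kappa[seg], 1)
            sse += np.sum((np.polyval(p, conc[seg]) - kappa[seg]) ** 2)
        if sse < best_sse:
            best_sse, best_cmc = sse, conc[i]
    return best_cmc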
Tian, Xiumei; Zeng, Dong; Zhang, Shanli; Huang, Jing; Zhang, Hua; He, Ji; Lu, Lijun; Xi, Weiwen; Ma, Jianhua; Bian, Zhaoying
2016-11-22
Dynamic cerebral perfusion x-ray computed tomography (PCT) imaging has been advocated to quantitatively and qualitatively assess hemodynamic parameters in the diagnosis of acute stroke or chronic cerebrovascular diseases. However, the associated radiation dose is a significant concern to patients due to the dynamic scan protocol. To address this issue, in this paper we propose an image restoration method utilizing a coupled dictionary learning (CDL) scheme to yield clinically acceptable PCT images with low-dose data acquisition. Specifically, in the present CDL scheme, the 2D background information from the average of the baseline time frames of low-dose unenhanced CT images and the 3D enhancement information from normal-dose sequential cerebral PCT images are exploited to train the two sets of dictionary atoms, respectively. After obtaining the two trained dictionaries, we couple them to represent the desired PCT images as a spatio-temporal prior in the objective function construction. Finally, the low-dose dynamic cerebral PCT images are restored via general dictionary-learning-based image processing. To obtain a robust solution, the objective function is solved using a modified dictionary learning based image restoration algorithm. The experimental results on clinical data show that the present method can yield more accurate kinetic enhanced details and diagnostic hemodynamic parameter maps than the state-of-the-art methods.
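For orientation, a toy single-dictionary restoration in Python/scikit-learn is sketched below; it conveys the flavor of dictionary-learning-based restoration but deliberately omits the paper's coupling of 2D background and 3D enhancement dictionaries and its spatio-temporal prior, so all names and parameters are illustrative.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def restore_slice(noisy, train_img, patch=(8, 8), n_atoms=128):
    # learn atoms from a (normal-dose) training image
    train = extract_patches_2d(train_img, patch, max_patches=5000)
    train = train.reshape(len(train), -1)
    train -= train.mean(axis=1, keepdims=True)
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0).fit(train)
    # sparse-code the noisy patches and reconstruct
    patches = extract_patches_2d(noisy, patch).reshape(-1, patch[0] * patch[1])
    mean = patches.mean(axis=1, keepdims=True)
    recon = dico.transform(patches - mean) @ dico.components_ + mean
    return reconstruct_from_patches_2d(recon.reshape(-1, *patch), noisy.shape)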
NASA Astrophysics Data System (ADS)
Regnier, D.; Dubray, N.; Schunck, N.; Verrière, M.
2017-09-01
Accurate knowledge of fission fragment yields is an essential ingredient of numerous applications ranging from the formation of elements in the r-process to fuel cycle optimization in nuclear energy. The need for a predictive theory applicable where no data are available, together with the variety of potential applications, is an incentive to develop a fully microscopic approach to fission dynamics. One of the most promising theoretical frameworks is the time-dependent generator coordinate method (TDGCM) applied under the Gaussian overlap approximation (GOA). However, the computational cost of this method makes it difficult to perform calculations with more than two collective degrees of freedom. Meanwhile, it is well known from both semi-phenomenological and fully microscopic approaches that at least four or five dimensions may play a role in the dynamics of fission. To overcome this limitation, we are developing the code FELIX, which aims to solve the TDGCM+GOA equation for an arbitrary number of collective variables. In this talk, we report the recent progress toward this enriched description of fission dynamics. We will briefly present the numerical methods adopted as well as the status of the latest version of FELIX. Finally, we will discuss fragment yields obtained within this approach for the low-energy fission of major actinides.
Micro-feeding and dosing of powders via a small-scale powder pump.
Besenhard, M O; Fathollahi, S; Siegmann, E; Slama, E; Faulhammer, E; Khinast, J G
2017-03-15
Robust and accurate powder micro-feeding (<100 mg/s) and micro-dosing (<5 mg) are major challenges, especially with regard to regulatory limitations applicable to pharmaceutical development and production. Since known micro-feeders that yield feed rates below 5 mg/s use gravimetric feeding principles, feed rates depend primarily on powder properties. In contrast, volumetric powder feeders do not require regular calibration because their feed rates are primarily determined by the feeder's characteristic volume replacement. In this paper, we present a volumetric micro-feeder based on a cylinder-piston system (i.e., a powder pump), which allows accurate micro-feeding at feed rates down to a few grams per hour, even for very fine powders. Our experimental studies addressed the influence of cylinder geometries, the initial conditions of the bulk powder, and the piston speeds. Additional computational studies via Discrete Element Method simulations offered a better understanding of the feeding process, its possible limitations and ways to overcome them. The powder pump is a simple yet valuable tool for accurate powder feeding at feed rates spanning several orders of magnitude. Copyright © 2016 Elsevier B.V. All rights reserved.
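Since a volumetric feeder's rate is set by geometry rather than by powder properties, a back-of-the-envelope estimate is simply mass flow = bulk density × bore cross-section × piston speed; the Python helper below illustrates this with made-up numbers, not values from the paper.

import math

def feed_rate_mg_per_s(bore_mm, piston_speed_mm_s, bulk_density_g_cm3):
    area_cm2 = math.pi * (bore_mm / 20.0) ** 2            # radius in cm, squared
    volume_cm3_s = area_cm2 * (piston_speed_mm_s / 10.0)  # piston speed in cm/s
    return bulk_density_g_cm3 * volume_cm3_s * 1000.0     # g/s -> mg/s

# e.g. a 5 mm bore moving at 0.01 mm/s with 0.5 g/cm^3 bulk powder:
# feed_rate_mg_per_s(5, 0.01, 0.5) ≈ 0.098 mg/s, i.e. roughly 0.35 g/h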
NASA Astrophysics Data System (ADS)
Mitilineos, Stelios A.; Argyreas, Nick D.; Thomopoulos, Stelios C. A.
2009-05-01
A fusion-based localization technique for location-based services in indoor environments is introduced herein, based on ultrasound time-of-arrival measurements from multiple off-the-shelf range-estimating sensors which are used in a market-available localization system. In-situ field measurement results indicated that the respective off-the-shelf system was unable to estimate position in most cases, while the underlying sensors are of low quality and yield highly inaccurate range and position estimates. An extensive analysis is performed and a model of the sensor performance characteristics is established. A low-complexity but accurate sensor fusion and localization technique is then developed, which consists of evaluating multiple sensor measurements and selecting the one that is considered most accurate based on the underlying sensor model. Optimality, in the sense of a genie selecting the optimum sensor, is subsequently evaluated and compared to the proposed technique. The experimental results indicate that the proposed fusion method exhibits near-optimal performance and, albeit being theoretically suboptimal, it largely overcomes most flaws of the underlying single-sensor system, resulting in a localization system of increased accuracy, robustness and availability.
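The selection step itself is simple once a per-sensor error model exists; the sketch below (Python, hypothetical names) picks the reading whose modeled expected error is smallest, which is the essence of the fusion rule described above.

def fuse_by_selection(readings, expected_error):
    """readings: {sensor_id: measured_range_m};
    expected_error(sensor_id, measured_range_m) -> modeled error in meters."""
    best = min(readings, key=lambda s: expected_error(s, readings[s]))
    return best, readings[best]

# usage (illustrative): sensor, range_m = fuse_by_selection({'s1': 3.2, 's2': 3.9}, model)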
Accuracy of quantum sensors measuring yield photon flux and photosynthetic photon flux
NASA Technical Reports Server (NTRS)
Barnes, C.; Tibbitts, T.; Sager, J.; Deitzer, G.; Bubenheim, D.; Koerner, G.; Bugbee, B.; Knott, W. M. (Principal Investigator)
1993-01-01
Photosynthesis is fundamentally driven by photon flux rather than energy flux, but not all absorbed photons yield equal amounts of photosynthesis. Thus, two measures of photosynthetically active radiation have emerged: photosynthetic photon flux (PPF), which values all photons from 400 to 700 nm equally, and yield photon flux (YPF), which weights photons in the range from 360 to 760 nm according to plant photosynthetic response. We selected seven common radiation sources and measured YPF and PPF from each source with a spectroradiometer. We then compared these measurements with measurements from three quantum sensors designed to measure YPF, and from six quantum sensors designed to measure PPF. There were few differences among sensors within a group (usually <5%), but YPF values from sensors were consistently lower (3% to 20%) than YPF values calculated from spectroradiometric measurements. Quantum sensor measurements of PPF also were consistently lower than PPF values calculated from spectroradiometric measurements, but the differences were <7% for all sources, except red-light-emitting diodes. The sensors were most accurate for broad-band sources and least accurate for narrow-band sources. According to spectroradiometric measurements, YPF sensors were significantly less accurate (>9% difference) than PPF sensors under metal halide, high-pressure sodium, and low-pressure sodium lamps. Both sensor types were inaccurate (>18% error) under red-light-emitting diodes. Because both YPF and PPF sensors are imperfect integrators, and because spectroradiometers can measure photosynthetically active radiation much more accurately, researchers should consider developing calibration factors from spectroradiometric data for some specific radiation sources to improve the accuracy of integrating sensors.
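The two quantities differ only in their spectral windows and weights, so both can be computed from the same spectroradiometer scan. A minimal NumPy sketch follows, assuming the arrays share a common wavelength grid and that a relative quantum-yield curve (e.g., McCree-type weights) is supplied; it is illustrative rather than the study's code.

import numpy as np

def ppf_ypf(wavelength_nm, photon_flux, quantum_yield):
    # PPF: equal weighting of photons from 400 to 700 nm
    in_ppf = (wavelength_nm >= 400) & (wavelength_nm <= 700)
    ppf = np.trapz(photon_flux[in_ppf], wavelength_nm[in_ppf])
    # YPF: photons from 360 to 760 nm weighted by relative quantum yield
    in_ypf = (wavelength_nm >= 360) & (wavelength_nm <= 760)
    ypf = np.trapz(photon_flux[in_ypf] * quantum_yield[in_ypf], wavelength_nm[in_ypf])
    return ppf, ypf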
Absolute dimensions and masses of eclipsing binaries. V. IQ Persei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lacy, C.H.; Frueh, M.L.
1985-08-01
New photometric and spectroscopic observations of the 1.7 day eclipsing binary IQ Persei (B8 + A6) have been analyzed to yield very accurate fundamental properties of the system. Reticon spectroscopic observations obtained at McDonald Observatory were used to determine accurate radial velocities of both stars in this slightly eccentric large light-ratio binary. A new set of VR light curves obtained at McDonald Observatory were analyzed by synthesis techniques, and previously published UBV light curves were reanalyzed to yield accurate photometric orbits. Orbital parameters derived from both sets of photometric observations are in excellent agreement. The absolute dimensions, masses, luminosities, and apsidal motion period (140 yr) derived from these observations agree well with the predictions of theoretical stellar evolution models. The A6 secondary is still very close to the zero-age main sequence. The B8 primary is about one-third of the way through its main-sequence evolution. 27 references.
Multi-scale Modeling of Plasticity in Tantalum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Hojun; Battaile, Corbett Chandler.; Carroll, Jay
In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum, together with validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature- and strain-rate-dependent constitutive equations. The kink-pair theory is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature- and strain-rate-dependent yield stresses of single and polycrystalline tantalum, which are compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature- and strain-rate-dependent flow behaviors are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test and compared with experiments. In order to validate the proposed tantalum CP-FEM model with experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high-resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high-resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformation at the grain scale and to engineering-scale applications. Furthermore, direct and quantitative comparisons between experimental measurements and simulation show that the proposed model accurately captures plasticity in the deformation of polycrystalline tantalum.
Tuerk, Andreas; Wiktorin, Gregor; Güler, Serhat
2017-05-01
Accuracy of transcript quantification with RNA-Seq is negatively affected by positional fragment bias. This article introduces Mix2 (read "mixquare"), a transcript quantification method which uses a mixture of probability distributions to model and thereby neutralize the effects of positional fragment bias. The parameters of Mix2 are trained by Expectation Maximization, resulting in simultaneous transcript abundance and bias estimates. We compare Mix2 to Cufflinks, RSEM, eXpress and PennSeq, state-of-the-art quantification methods implementing some form of bias correction. On four synthetic biases we show that the accuracy of Mix2 overall exceeds the accuracy of the other methods and that its bias estimates converge to the correct solution. We further evaluate Mix2 on real RNA-Seq data from the Microarray and Sequencing Quality Control (MAQC, SEQC) Consortia. On MAQC data, Mix2 achieves improved correlation to qPCR measurements with a relative increase in R2 between 4% and 50%. Mix2 also yields repeatable concentration estimates across technical replicates with a relative increase in R2 between 8% and 47% and reduced standard deviation across the full concentration range. We further observe more accurate detection of differential expression with a relative increase in true positives between 74% and 378% for 5% false positives. In addition, Mix2 reveals 5 dominant biases in MAQC data deviating from the common assumption of a uniform fragment distribution. On SEQC data, Mix2 yields higher consistency between measured and predicted concentration ratios. A relative error of 20% or less is obtained for 51% of transcripts by Mix2, 40% of transcripts by Cufflinks and RSEM and 30% by eXpress. Titration order consistency is correct for 47% of transcripts for Mix2, 41% for Cufflinks and RSEM and 34% for eXpress. We further observe improved repeatability across laboratory sites with a relative increase in R2 between 8% and 44% and reduced standard deviation.
Eberl, D.D.; Drits, V.A.; Środoń, Jan; Nüesch, R.
1996-01-01
Particle size may strongly influence the physical and chemical properties of a substance (e.g. its rheology, surface area, cation exchange capacity, solubility, etc.), and its measurement in rocks may yield geological information about ancient environments (sediment provenance, degree of metamorphism, degree of weathering, current directions, distance to shore, etc.). Therefore mineralogists, geologists, chemists, soil scientists, and others who deal with clay-size material would like to have a convenient method for measuring particle size distributions. Nano-size crystals generally are too fine to be measured by light microscopy. Laser scattering methods give only average particle sizes; therefore particle size cannot be measured in a particular crystallographic direction. Also, the particles measured by laser techniques may be composed of several different minerals, and may be agglomerations of individual crystals. Measurement by electron and atomic force microscopy is tedious, expensive, and time consuming. It is difficult to measure more than a few hundred particles per sample by these methods. This many measurements, often taking several days of intensive effort, may yield an accurate mean size for a sample, but may be too few to determine an accurate distribution of sizes. Measurement of size distributions by X-ray diffraction (XRD) overcomes these shortcomings. An X-ray scan of a sample occurs automatically, taking a few minutes to a few hours. The resulting XRD peaks average diffraction effects from billions of individual nano-size crystals. The size that is measured by XRD may be related to the size of the individual crystals of the mineral in the sample, rather than to the size of particles formed from the agglomeration of these crystals. Therefore one can determine the size of a particular mineral in a mixture of minerals, and the sizes in a particular crystallographic direction of that mineral.
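For a feel of how peak width maps to crystallite size, the classic Scherrer estimate is shown below. Note that this gives only a mean size, whereas the XRD approach discussed here recovers a full size distribution from peak shape, so the Python snippet is a simplified stand-in rather than the authors' method.

import math

def scherrer_size_nm(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, k=0.9):
    # size = K * lambda / (beta * cos(theta)), beta = peak FWHM in radians
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# e.g. a 0.8-degree-wide peak at 2-theta = 26.6 degrees (Cu K-alpha) implies a ~10 nm mean size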
Storey, Rebecca
2007-01-01
Comparison of different adult age estimation methods on the same skeletal sample with unknown ages could forward paleodemographic inference, while researchers sort out various controversies. The original aging method for the auricular surface (Lovejoy et al., 1985a) assigned an age estimation based on several separate characteristics. Researchers have found this original method hard to apply. It is usually forgotten that before assigning an age, there was a seriation, an ordering of all available individuals from youngest to oldest. Thus, age estimation reflected the place of an individual within its sample. A recent article (Buckberry and Chamberlain, 2002) proposed a revised method that scores these various characteristics into age stages, which can then be used with a Bayesian method to estimate an adult age distribution for the sample. Both methods were applied to the adult auricular surfaces of a Pre-Columbian Maya skeletal population from Copan, Honduras, and resulted in age distributions with significant numbers of older adults. However, contrary to the usual paleodemographic distribution, one Bayesian estimation based on uniform prior probabilities yielded a population with 57% of the ages at death over 65, while another based on a high mortality life table still had 12% of the individuals aged over 75 years. The seriation method yielded an age distribution more similar to that known from preindustrial historical situations, without excessive longevity of adults. Paleodemography must still wrestle with its elusive goal of accurate adult age estimation from skeletons, a necessary base for demographic study of past populations. (c) 2006 Wiley-Liss, Inc.
Anisotropic nature of radially strained metal tubes
NASA Astrophysics Data System (ADS)
Strickland, Julie N.
Metal pipes are sometimes swaged by a metal cone to enlarge them, which increases the strain in the material. The amount of strain is important because it affects the burst and collapse strength. Burst strength is the amount of internal pressure that a pipe can withstand before failure, while collapse strength is the amount of external pressure that a pipe can withstand before failure. If the burst or collapse strengths are exceeded, the pipe may fracture, causing critical failure. Such an event could cost the owners and their customers millions of dollars in cleanup, repair, and lost time, in addition to the potential environmental damage. Therefore, a reliable way of estimating the burst and collapse strength of strained pipe is desired and valuable. The sponsor currently rates strained pipes using the properties of raw steel, because those properties are easily measured (for example, yield strength). In the past, the engineers assumed that the metal would be work-hardened when swaged, so that yield strength would increase. However, swaging introduces anisotropic strain, which may decrease the yield strength. This study measured the yield strength of strained material in the transverse and axial directions and compared them to raw material to determine the amount of anisotropy. This information will be used to more accurately determine burst and collapse ratings for strained pipes. More accurate ratings mean safer products, which will minimize risk for the sponsor's customers. Since the strained metal has a higher yield strength than the raw material, using the raw yield strength to calculate burst and collapse ratings is a conservative method. The metal has even higher yield strength after strain aging, which indicates that the stresses are relieved. Even with the 12% anisotropy in the strained specimens and 9% anisotropy in the strain-aged specimens, the raw yield strengths are lower and therefore more conservative. I recommend that the sponsor continue using the raw yield strength to calculate these ratings. I set out to characterize the anisotropic nature of swaged metal. As expected, the tensile tests showed a difference between the axial and transverse tensile strength: a 12% difference in yield strength between the axial and transverse directions for strained material and 9% for strained and aged material. This means that the metal is approximately 10% stronger in the hoop (transverse) direction than in the axial direction, because the metal was work-hardened during the swaging process. Therefore, the metal is more likely to fail in axial tension than in burst or collapse. I presented the findings from the microstructure examination, standard tensile tests, and SEM data. All of these data supported the findings of the mini-tensile tests. This information will help engineers set burst and collapse ratings and allow material scientists to predict the anisotropic characteristics of swaged steel tubes.
Bernard R. Parresol; Steven C. Stedman
2004-01-01
The accuracy of forest growth and yield forecasts affects the quality of forest management decisions (Rauscher et al. 2000). Users of growth and yield models want assurance that model outputs are reasonable and mimic local/regional forest structure and composition and accurately reflect the influences of stand dynamics such as competition and disturbance. As such,...
Beste, A; Harrison, R J; Yanai, T
2006-08-21
Chemists are mainly interested in energy differences. In contrast, most quantum chemical methods yield the total energy, which is a large number compared to the difference and therefore has to be computed to a higher relative precision than would be necessary for the difference alone. Hence, it is desirable to compute energy differences directly, thereby avoiding the precision problem. Whenever it is possible to find a parameter which transforms smoothly from an initial to a final state, the energy difference can be obtained by integrating the energy derivative with respect to that parameter (cf. thermodynamic integration or adiabatic connection methods). If the dependence on the parameter is predominantly linear, accurate results can be obtained by single-point integration. In density functional theory and Hartree-Fock, we applied the formalism to ionization potentials, excitation energies, and chemical bond breaking. Example calculations for ionization potentials and excitation energies showed that accurate results could be obtained with a linear estimate. For breaking bonds, we introduce a nongeometrical parameter which gradually turns on the interaction between two fragments of a molecule. The interaction changes the potentials used to determine the orbitals as well as the constraint on the orbitals to be orthogonal.
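Numerically, the single-point idea is low-order Gauss-Legendre quadrature of dE/dλ over λ in [0, 1]; the toy Python sketch below (with a made-up, mildly nonlinear derivative) shows why one midpoint evaluation suffices when the dependence is nearly linear.

import numpy as np

def energy_difference(dE_dlambda, n_points=1):
    # Gauss-Legendre quadrature mapped to [0, 1]; n_points=1 is the midpoint estimate
    nodes, weights = np.polynomial.legendre.leggauss(n_points)
    lam = 0.5 * (nodes + 1.0)
    return 0.5 * sum(w * dE_dlambda(x) for w, x in zip(weights, lam))

toy_derivative = lambda lam: 2.0 + 0.1 * lam      # nearly constant derivative (illustrative)
print(energy_difference(toy_derivative, 1))       # 2.05 from a single point
print(energy_difference(toy_derivative, 4))       # 2.05 from the converged quadrature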
Dama, Elisa; Tillhon, Micol; Bertalot, Giovanni; de Santis, Francesca; Troglio, Flavia; Pessina, Simona; Passaro, Antonio; Pece, Salvatore; de Marinis, Filippo; Dell'Orto, Patrizia; Viale, Giuseppe; Spaggiari, Lorenzo; Di Fiore, Pier Paolo; Bianchi, Fabrizio; Barberis, Massimo; Vecchi, Manuela
2016-06-14
Accurate detection of altered anaplastic lymphoma kinase (ALK) expression is critical for the selection of lung cancer patients eligible for ALK-targeted therapies. To overcome intrinsic limitations and discrepancies of currently available companion diagnostics for ALK, we developed a simple, affordable and objective PCR-based predictive model for the quantitative measurement of any ALK fusion as well as wild-type ALK upregulation. This method, optimized for low-quantity/-quality RNA from FFPE samples, combines cDNA pre-amplification with ad hoc generated calibration curves. All the models we derived yielded concordant predictions when applied to a cohort of 51 lung tumors, and correctly identified all 17 ALK FISH-positive and 33 of the 34 ALK FISH-negative samples. The one discrepant case was confirmed as positive by IHC, thus raising the accuracy of our test to 100%. Importantly, our method was accurate when using low amounts of input RNA (10 ng), also in FFPE samples with limited tumor cellularity (5-10%) and in FFPE cytology specimens. Thus, our test is an easily implementable diagnostic tool for the rapid, efficacious and cost-effective screening of ALK status in patients with lung cancer.
Accurate ab initio Quartic Force Fields of Cyclic and Bent HC2N Isomers
NASA Technical Reports Server (NTRS)
Inostroza, Natalia; Huang, Xinchuan; Lee, Timothy J.
2012-01-01
Highly correlated ab initio quartic force fields (QFFs) are used to calculate the equilibrium structures and predict the spectroscopic parameters of three HC2N isomers. Specifically, the ground-state quasilinear triplet and the lowest cyclic and bent singlet isomers are included in the present study. Extensive treatment of correlation effects was included using the singles and doubles coupled-cluster method that includes a perturbational estimate of the effects of connected triple excitations, denoted CCSD(T). Dunning's correlation-consistent basis sets cc-pVXZ, X=3,4,5, were used, and a three-point formula for extrapolation to the one-particle basis set limit was applied. Core-correlation and scalar relativistic corrections were also included to yield highly accurate QFFs. The QFFs were used together with second-order perturbation theory (with proper treatment of Fermi resonances) and variational methods to solve the nuclear Schrödinger equation. The quasilinear nature of the triplet isomer is problematic, and it is concluded that a QFF is not adequate to describe properly all of the fundamental vibrational frequencies and spectroscopic constants (though some constants not dependent on the bending motion are well reproduced by perturbation theory). On the other hand, this procedure (a QFF together with either perturbation theory or variational methods) leads to highly accurate fundamental vibrational frequencies and spectroscopic constants for the cyclic and bent singlet isomers of HC2N. All three isomers possess significant dipole moments: 3.05 D, 3.06 D, and 1.71 D for the quasilinear triplet, the cyclic singlet, and the bent singlet isomers, respectively. It is concluded that the spectroscopic constants determined for the cyclic and bent singlet isomers are the most accurate available, and it is hoped that these will be useful in the interpretation of high-resolution astronomical observations or laboratory experiments.
A time-accurate finite volume method valid at all flow velocities
NASA Technical Reports Server (NTRS)
Kim, S.-W.
1993-01-01
A finite volume method to solve the Navier-Stokes equations at all flow velocities (e.g., incompressible, subsonic, transonic, supersonic and hypersonic flows) is presented. The numerical method is based on a finite volume method that incorporates a pressure-staggered mesh and an incremental pressure equation for the conservation of mass. A comparison of three generally accepted time-advancing schemes, i.e., Simplified Marker-and-Cell (SMAC), Pressure-Implicit-Splitting of Operators (PISO), and Iterative-Time-Advancing (ITA), is made by solving a lid-driven polar cavity flow and self-sustained oscillatory flows over circular and square cylinders. Calculated results show that the ITA is the most stable numerically and yields the most accurate results. The SMAC is the most efficient computationally and is as stable as the ITA. It is shown that the PISO is the most weakly convergent and exhibits an undesirable strong dependence on the time-step size. The degenerated numerical results obtained using the PISO are attributed to its second corrector step, which causes the numerical results to deviate further from a divergence-free velocity field. The accurate numerical results obtained using the ITA are attributed to its capability to resolve the nonlinearity of the Navier-Stokes equations. The present numerical method that incorporates the ITA is used to solve an unsteady transitional flow over an oscillating airfoil and a chemically reacting flow of hydrogen in a vitiated supersonic airstream. The turbulence fields in these flow cases are described using multiple-time-scale turbulence equations. For the unsteady transitional flow over an oscillating airfoil, the fluid flow is described using ensemble-averaged Navier-Stokes equations defined on Lagrangian-Eulerian coordinates. It is shown that the numerical method successfully predicts the large dynamic stall vortex (DSV) and the trailing edge vortex (TEV) that are periodically generated by the oscillating airfoil. The calculated streaklines compare very well with the experimentally obtained smoke picture. The calculated turbulent viscosity contours show that the transition from laminar to turbulent state and the relaminarization occur widely in space as well as in time. The ensemble-averaged velocity profiles are also in good agreement with the measured data, and the good comparison indicates that the numerical method as well as the multiple-time-scale turbulence equations successfully predict the unsteady transitional turbulence field. The chemical reactions for the hydrogen in the vitiated supersonic airstream are described using 9 chemical species and 48 reaction steps. Note that a fast chemistry cannot be used to describe the fine details (such as the instability) of chemically reacting flows, while a reduced chemical kinetics cannot be used confidently due to the uncertainty contained in the reaction mechanisms. However, the use of a detailed finite-rate chemistry may make it difficult to obtain a fully converged solution due to the coupling between the large number of flow, turbulence, and chemical equations. The numerical results obtained in the present study are in good agreement with the measured data. The good comparison is attributed to the numerical method, which can yield strongly converged results for the reacting flow, and to the use of the multiple-time-scale turbulence equations, which can accurately describe the mixing of the fuel and the oxidant.
Final Report on X-ray Yields from OMEGA II Targets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fournier, K B; May, M J; MacLaren, S A
2007-06-20
We present details about X-ray yields measured with Lawrence Livermore National Laboratory (LLNL) and Sandia National Laboratories (SNL) diagnostics in soft and moderately hard X-ray bands from laser-driven, doped-aerogel targets shot on 07/14/06 during the OMEGA II test series. Yields accurate to ±25% in the 5-15 keV band are measured with Livermore's HENWAY spectrometer. Yields in the sub-keV to 3.2 keV band are measured with LLNL's DANTE diagnostic; the DANTE yields are accurate to 10-15%. SNL ran a PCD-based diagnostic that also measured X-ray yields in the spectral region above 4 keV, and also down to the sub-keV range. The PCD and HENWAY and DANTE numbers are compared. The time histories of the moderately hard (hν > 4 keV) X-ray signals are measured with LLNL's H11 PCD, and from two SNL PCDs with comparable filtration. There is general agreement between the H11 PCD and SNL PCD measured FWHM except for two of the shorter-laser-pulse shots, which is shown not to be due to analysis techniques. The recommended X-ray waveform is that from the SNL PCD p66k10, which was recorded on a fast, high-bandwidth TDS 6804 oscilloscope. X-ray waveforms from target emission in two softer spectral bands are also shown; the X-ray emissions have increasing duration as the spectral content gets softer.
NASA Astrophysics Data System (ADS)
Lauer, Tod
1995-07-01
We request deep, near-IR (F814W) WFPC2 images of five nearby Brightest Cluster Galaxies (BCG) to calibrate the BCG Hubble diagram by the Surface Brightness Fluctuation (SBF) method. Lauer & Postman (1992) show that the BCG Hubble diagram measured out to 15,000 km s^-1 is highly linear. Calibration of the Hubble diagram zeropoint by SBF will thus yield an accurate far-field measure of H_0 based on the entire volume within 15,000 km s^-1, thus circumventing any strong biases caused by local peculiar velocity fields. This method of reaching the far field is contrasted with those using distance ratios between Virgo and Coma, or any other limited sample of clusters. HST is required as the ground-based SBF method is limited to <3,000 km s^-1. The high spatial resolution of HST allows precise measurement of the SBF signal at large distances, and allows easy recognition of globular clusters, background galaxies, and dust clouds in the BCG images that must be removed prior to SBF detection. The proposing team developed the SBF method, the first BCG Hubble diagram based on a full-sky, volume-limited BCG sample, played major roles in the calibration of WFPC and WFPC2, and are conducting observations of local galaxies that will validate the SBF zeropoint (through GTO programs). This work uses the SBF method to tie both the Cepheid and Local Group giant-branch distances generated by HST to the large scale Hubble flow, which is most accurately traced by BCGs.
Quantification of pulmonary vessel diameter in low-dose CT images
NASA Astrophysics Data System (ADS)
Rudyanto, Rina D.; Ortiz de Solórzano, Carlos; Muñoz-Barrutia, Arrate
2015-03-01
Accurate quantification of vessel diameter in low-dose Computed Tomography (CT) images is important for studying pulmonary diseases, in particular for the diagnosis of vascular diseases and the characterization of morphological vascular remodeling in Chronic Obstructive Pulmonary Disease (COPD). In this study, we objectively compare several vessel diameter estimation methods using a physical phantom. Five solid tubes of differing diameters (from 0.898 to 3.980 mm) were embedded in foam, simulating vessels in the lungs. To measure the diameters, we first extracted the vessels using either of two approaches: vessel enhancement using multi-scale Hessian matrix computation, or explicit segmentation using an intensity threshold. We implemented six methods to quantify the diameter: three estimating diameter as a function of the scale used to calculate the Hessian matrix; two calculating equivalent diameter from the cross-section area obtained by thresholding the intensity and vesselness response, respectively; and finally, estimating the diameter of the object using the Full Width at Half Maximum (FWHM). We find that the accuracy of frequently used methods estimating vessel diameter from the multi-scale vesselness filter depends on the range and the number of scales used. Moreover, these methods still yield a significant error margin on the challenging estimation of the smallest diameter (on the order of or below the size of the CT point spread function). Obviously, the performance of the thresholding-based methods depends on the value of the threshold. Finally, we observe that a simple adaptive thresholding approach can achieve a robust and accurate estimation of the smallest vessel diameters.
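As one example of the estimators compared, the FWHM of a 1D intensity profile across a vessel can be computed with linear interpolation at the half-maximum crossings; this generic Python sketch assumes a single peak away from the profile ends and is not the study's exact implementation.

import numpy as np

def fwhm(x, profile):
    base = profile.min()
    half = base + 0.5 * (profile.max() - base)
    idx = np.where(profile >= half)[0]            # samples above half maximum
    i0, i1 = idx[0], idx[-1]                      # assumes 0 < i0 and i1 < len(x) - 1
    # interpolate the left and right half-maximum crossings
    left = np.interp(half, [profile[i0 - 1], profile[i0]], [x[i0 - 1], x[i0]])
    right = np.interp(half, [profile[i1 + 1], profile[i1]], [x[i1 + 1], x[i1]])
    return right - left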
NASA Astrophysics Data System (ADS)
Bhatia, C.; Fallin, B.; Gooden, M. E.; Howell, C. R.; Kelley, J. H.; Tornow, W.; Arnold, C. W.; Bond, E. M.; Bredeweg, T. A.; Fowler, M. M.; Moody, W. A.; Rundberg, R. S.; Rusev, G.; Vieira, D. J.; Wilhelmy, J. B.; Becker, J. A.; Macri, R.; Ryan, C.; Sheets, S. A.; Stoyer, M. A.; Tonchev, A. P.
2014-09-01
A program has been initiated to measure the energy dependence of selected high-yield fission products used in the analysis of nuclear test data. We present our initial work on neutron activation using a dual-fission chamber with quasi-monoenergetic neutrons and a gamma-counting method. Quasi-monoenergetic neutrons with energies from 0.5 to 15 MeV are produced using the TUNL 10 MV FM tandem to provide high-precision and self-consistent measurements of fission product yields (FPY). The final FPY results will be coupled with theoretical analysis to provide a more fundamental understanding of the fission process. To accomplish this goal, we have developed and tested a set of dual-fission ionization chambers to provide an accurate determination of the number of fissions occurring in a thick target located in the middle plane of the chamber assembly. Details of the fission chamber and its performance are presented along with neutron beam production and characterization. Also presented are studies of the background issues associated with room-return and off-energy neutron production. We show that the off-energy neutron contribution can be significant, but correctable, while room-return neutron background levels contribute less than 1% to the fission signal.
Robust hepatic vessel segmentation using multi deep convolution network
NASA Astrophysics Data System (ADS)
Kitrungrotsakul, Titinunt; Han, Xian-Hua; Iwamoto, Yutaro; Foruzan, Amir Hossein; Lin, Lanfen; Chen, Yen-Wei
2017-03-01
Extraction of the blood vessels of an organ is a challenging task in medical image processing. It is difficult to obtain accurate vessel segmentation results even with manual labeling by a human expert. The difficulty of vessel segmentation lies in the complicated structure of blood vessels and their large variations, which make them hard to recognize. In this paper, we present a deep artificial neural network architecture to automatically segment the hepatic vessels from computed tomography (CT) images. We propose a novel deep neural network (DNN) architecture for vessel segmentation from a medical CT volume, which consists of three deep convolution neural networks extracting features from different planes of the CT data. The three networks share features at the first convolution layer but separately learn their own features in the second layer. All three networks join again at the top layer. To validate the effectiveness and efficiency of our proposed method, we conducted experiments on 12 CT volumes, with training data randomly generated from 5 CT volumes and 7 volumes used for testing. Our network yields an average Dice coefficient of 0.830, while a 3D deep convolution neural network yields around 0.7 and a multi-scale approach yields only 0.6.
Predicting human blood viscosity in silico
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fedosov, Dmitry A.; Pan, Wenxiao; Caswell, Bruce
2011-07-05
Cellular suspensions such as blood are a part of living organisms, and their rheological and flow characteristics determine and affect the majority of vital functions. The rheological and flow properties of cell suspensions are determined by the collective dynamics of cells, their structure or arrangement, cell properties and interactions. We study these relations for blood in silico using a mesoscopic particle-based method and two different models (multi-scale/low-dimensional) of red blood cells. The models yield accurate quantitative predictions of the dependence of blood viscosity on shear rate and hematocrit. We explicitly model cell aggregation interactions and demonstrate the formation of reversible rouleaux structures resulting in a tremendous increase of blood viscosity at low shear rates and a yield stress, in agreement with experiments. The non-Newtonian behavior of such cell suspensions (e.g., shear thinning, yield stress) is analyzed and related to the suspension's microstructure, deformation and dynamics of single cells. We provide the first quantitative estimates of normal stress differences and the magnitude of aggregation forces in blood. Finally, the flexibility of the cell models allows them to be employed for quantitative analysis of a much wider class of complex fluids including cell, capsule, and vesicle suspensions.
Aerodynamic loads on a Darrieus rotor blade
NASA Astrophysics Data System (ADS)
Wilson, R. E.; McKie, W. R.; Lissaman, P. B. S.; James, M.
1983-03-01
A method is presented for the free vortex analysis of a Darrieus rotor blade in nonsteady motion, which employs the circle theorem to map the moving rotor airfoil into the circle plane and models the wake generated in terms of point vortices. Nascent vortex strength and position are taken from the Kutta condition, so that the nascent vortex has the same strength as a vortex sheet of uniform strength. Pressure integration over the plate and wake vortex impulse methods yield the same numerical results. The numerical results presented for a one-bladed Darrieus rotor at a tip-speed ratio of three, and two different chord sizes, indicate that the moment on the blade can be adequately approximated by quasi-steady relationships, although accurate determination of the local velocity and circulation is still required.
NASA Astrophysics Data System (ADS)
Garrido Torres, José A.; Ramberger, Benjamin; Früchtl, Herbert A.; Schaub, Renald; Kresse, Georg
2017-11-01
The adsorption energy of benzene on various metal substrates is predicted using the random phase approximation (RPA) for the correlation energy. Agreement with available experimental data is systematically better than 10% for both coinage and reactive metals. The results are also compared with more approximate methods, including van der Waals density functional theory (DFT), as well as dispersion-corrected DFT functionals. Although dispersion-corrected DFT can yield accurate results, for instance, on coinage metals, the adsorption energies are clearly overestimated on more reactive transition metals. Furthermore, coverage dependent adsorption energies are well described by the RPA. This shows that for the description of aromatic molecules on metal surfaces further improvements in density functionals are necessary, or more involved many-body methods such as the RPA are required.
Validation of Leaf Area Index measurements based on the Wireless Sensor Network platform
NASA Astrophysics Data System (ADS)
Song, Q.; Li, X.; Liu, Q.
2017-12-01
The leaf area index (LAI) is one of the important parameters for estimating plant canopy function, and it is significant for agricultural analyses such as crop yield estimation and disease evaluation. Quick and accurate acquisition of crop LAI is therefore particularly vital. In this study, LAI measurement of corn crops was made through three kinds of methods: the leaf length and width method (LAILLW), the instrument-based indirect measurement method (LAII), and the leaf area index sensor method (LAIS). Among them, the LAI value obtained from LAILLW can be regarded as an approximate true value. The LAI-2200, a widely used LAI canopy analyzer, is used in LAII. LAIS, based on a wireless sensor network, can realize the automatic acquisition of crop images, simplifying the data collection work, while the other two methods need personnel to carry out field measurements. Through the comparison of LAIS with the other two methods, the validity and reliability of the LAIS observation system is verified. It is found that LAI trends are similar across the three methods, and that the rate of change of LAI increases with time during the first two months of corn growth, a period in which LAIS costs less manpower, energy and time. LAI derived from LAIS is more accurate than that from LAII in the early growth stage, owing to the small leaves, especially under strong light. In addition, after a growth period of about one and a half months, LAI processed from a false-color image with near-infrared information is much closer to the true value than that from a true-color picture.
TH-A-18C-02: An Electrostatic Model for Assessment of Joint Space Morphology in Cone-Beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao, Q; Thawait, G; Gang, G
Purpose: High-resolution cone-beam CT (CBCT) of the extremities presents a potentially valuable basis for image-based biomarkers of arthritis, trauma, and risk of injury. We present a new method for 3D joint space analysis that exploits the high isotropic spatial resolution of CBCT and is sensitive to small changes in disease-related morphology. Methods: The approach uses an “electrostatic” model in which joint surfaces (e.g., distal femur and proximal tibia) are labeled as charge densities between which the electric field is solved by approximation to the Laplace equation. The method yields a unique solution determined by the field lines across the “capacitor” and is hypothesized to be more sensitive than conventional (Sharp) scores and immune to degeneracies that limit simple distance-along-axis or closest-point analysis. The algorithm was validated in CBCT phantom images and applied in two clinical scenarios: osteoarthritis (OA, change in loadbearing tibiofemoral joint space); and assessment of injury risk (correlation of 3D joint space to tibial slope). Results: Joint space maps computed from the electrostatic model were accurate to within the voxel size (0.26 mm). The method highlighted subtle regions of morphological change that would likely be missed by conventional scalar metrics. Regions of subtle cartilage erosion were well quantified, and the method confidently discriminated OA and non-OA cohorts. 3D joint space maps correlated well with tibial slope and provide a new basis for principal component analysis of loadbearing injury risk. Runtime was less than 5 min (235×235×121 voxel subvolume in Matlab). Conclusion: A new method for joint space assessment was reported as a possible image-based biomarker of subtle articular change. The algorithm yields accurate quantitation of the joint in a manner that is robust against operator and patient setup variation. The method shows promising initial results in ongoing trials of CBCT in osteoarthritis, rheumatoid arthritis, and injury risk assessment. Research supported by R01 and R21 grants from the National Institutes of Health, academic-industry partnership with Carestream Health, and a grant from the US Army Natick Soldier Research, Development and Engineering Center.
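To illustrate the underlying numerics, a toy 2D version of the electrostatic formulation is sketched below: the two joint surfaces are held at potentials 1 and 0, and the Laplace equation is relaxed in between by Jacobi iteration; joint-space width would then be measured along the resulting field lines. This is a didactic Python sketch, not the authors' 3D implementation.

import numpy as np

def solve_laplace(mask_a, mask_b, n_iter=2000):
    # mask_a, mask_b: boolean grids marking the two joint surfaces
    phi = np.zeros(mask_a.shape)
    for _ in range(n_iter):
        phi[mask_a], phi[mask_b] = 1.0, 0.0       # Dirichlet boundary conditions
        phi = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                      np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    phi[mask_a], phi[mask_b] = 1.0, 0.0
    gy, gx = np.gradient(phi)                     # field direction for streamline tracing
    return phi, gx, gy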
Experimental study on the dynamic mechanical behaviors of polycarbonate
NASA Astrophysics Data System (ADS)
Zhang, Wei; Gao, Yubo; Cai, Xuanming; Ye, Nan; Huang, Wei; Hypervelocity Impact Research Center Team
2015-06-01
Polycarbonate (PC) is a widely used engineering material in the aerospace field, since it has excellent mechanical and optical properties. In the present study, both compression and tensile tests of PC were conducted at high strain rates using a split Hopkinson pressure bar. A high-speed camera and the 2D digital speckle correlation method (DIC) were used to analyze the dynamic deformation behavior of PC. Meanwhile, a plate impact experiment was carried out in a single-stage gas gun to measure the equation of state of PC, using asymmetric impact technology, manganin gauges, PVDF gauges, and electromagnetic particle velocity gauges. The results indicate that the yield stress of PC increases with strain rate. Strain softening occurred beyond the yield point, except in the tensile tests at strain rates of 1076 s-1 and 1279 s-1. The ZWT model can describe the constitutive behavior of PC accurately at different strain rates, as verified against the 2D-DIC results. Finally, the D-u Hugoniot curve of polycarbonate at high pressure was fitted by the least-squares method, and the results are closer to those of Carter and Marsh than to other previous data.
Real-Time Plasma Process Condition Sensing and Abnormal Process Detection
Yang, Ryan; Chen, Rongshun
2010-01-01
The plasma process is often used in the fabrication of semiconductor wafers. However, the lack of real-time etching control may result in unacceptable process performance, leading to significant waste and lower wafer yield. In order to maximize the product wafer yield, timely and accurate detection of process faults or abnormalities in a plasma reactor is needed. Optical emission spectroscopy (OES) is one of the most frequently used metrologies for in-situ process monitoring. Even though OES has the advantage of non-invasiveness, it provides a huge amount of information. As a result, the data analysis of OES becomes a big challenge. To accomplish real-time detection, this work employed the sigma matching technique on the time series of the OES full-spectrum intensity. First, a response model of a healthy plasma spectrum was developed. Then, we defined a matching rate as an indicator for comparing the difference between the tested wafer's response and the healthy sigma model. The experimental results showed that the proposed method can detect process faults in real time, even in plasma etching tools. PMID:22219683
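A sigma-band check of this kind is straightforward to sketch: build mean ± k·sigma envelopes from healthy-run OES time series and score a test run by the fraction of samples falling inside. The NumPy snippet below is a hedged illustration; the paper's exact matching-rate definition may differ, and all names are placeholders.

import numpy as np

def matching_rate(healthy_runs, test_run, k=3.0):
    # healthy_runs: (n_runs, n_time, n_channels); test_run: (n_time, n_channels)
    mu = healthy_runs.mean(axis=0)
    sigma = healthy_runs.std(axis=0)
    inside = np.abs(test_run - mu) <= k * sigma
    return inside.mean()                          # near 1.0 for a healthy wafer

# flag an abnormal process when the rate drops below a validated threshold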
How well does multiple OCR error correction generalize?
NASA Astrophysics Data System (ADS)
Lund, William B.; Ringger, Eric K.; Walker, Daniel D.
2013-12-01
As the digitization of historical documents, such as newspapers, becomes more common, the archive patron's need for accurate digital text from those documents increases. Building on our earlier work, the contributions of this paper are: 1. demonstrating the applicability of novel methods for correcting optical character recognition (OCR) on disparate data sets, including a new synthetic training set, 2. enhancing the correction algorithm with novel features, and 3. assessing the data requirements of the correction learning method. First, we correct errors using conditional random fields (CRF) trained on synthetic training data sets in order to demonstrate the applicability of the methodology to unrelated test sets. Second, we show the strength of lexical features from the training sets on two unrelated test sets, yielding a relative reduction in word error rate (WER) on the test sets of 6.52%. New features capture the recurrence of hypothesis tokens and yield an additional relative reduction in WER of 2.30%. Further, we show that only 2.0% of the full training corpus of over 500,000 feature cases is needed to achieve correction results comparable to those using the entire training corpus, effectively reducing both the complexity of the training process and the learned correction model.
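The metric behind the quoted relative reductions is the word error rate, a word-level edit distance; a standard dynamic-programming implementation in Python is shown below for reference (generic, not the authors' code).

def wer(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i                               # deletions
    for j in range(len(h) + 1):
        d[0][j] = j                               # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, sub)
    return d[-1][-1] / max(len(r), 1)

# wer("the quick brown fox", "the quick brawn fox box") -> 0.5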
Line-Constrained Camera Location Estimation in Multi-Image Stereomatching.
Donné, Simon; Goossens, Bart; Philips, Wilfried
2017-08-23
Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly spaced along a line or on a regular grid; we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations for each of the snapshots to be known: the disparity of an object between images is related both to the distance of the camera to the object and to the distance between the camera positions for the two images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences instead, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually, for the rectification of epipolar plane images, and quantitatively, through its effect on the resulting depth estimation. Our proposed approach is a valid alternative to sparse techniques, while still executing in a reasonable time on a graphics card due to its highly parallelizable nature.
Jones, Reese E; Mandadapu, Kranthi K
2012-04-21
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
Hansen, Andreas; Bannwarth, Christoph; Grimme, Stefan; Petrović, Predrag; Werlé, Christophe; Djukic, Jean-Pierre
2014-10-01
Reliable thermochemical measurements and theoretical predictions for reactions involving large transition metal complexes, in which long-range intramolecular London dispersion interactions contribute significantly to their stabilization, are still a challenge, particularly for reactions in solution. As an illustrative and chemically important example, two reactions are investigated in which a large dipalladium complex is quenched by bulky phosphane ligands (triphenylphosphane and tricyclohexylphosphane). Reaction enthalpies and Gibbs free energies were measured by isothermal titration calorimetry (ITC) and theoretically 'back-corrected' to yield 0 K gas-phase reaction energies (ΔE). It is shown that the Gibbs free solvation energy calculated with continuum models represents the largest source of error in theoretical thermochemistry protocols. The 'back-corrected' experimental reaction energies were used to benchmark (dispersion-corrected) density functional and wave function theory methods. In particular, we investigated whether the atom-pairwise D3 dispersion correction is also accurate for transition metal chemistry, and how accurately recently developed local coupled-cluster methods describe the important long-range electron correlation contributions. Both modern dispersion-corrected density functionals (e.g., PW6B95-D3(BJ) or B3LYP-NL) and the now-feasible DLPNO-CCSD(T) calculations agree with the 'experimental' gas-phase reference value. The remaining uncertainties of 2-3 kcal mol(-1) can be essentially attributed to the solvation models. Hence, the future for accurate theoretical thermochemistry of large transition metal reactions in solution is very promising.
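The back-correction itself is simple bookkeeping once the individual contributions are computed. A hedged arithmetic sketch; all numbers below are invented for illustration and are not the paper's values:

```python
# Strip solvation and thermal/ZPE contributions from a measured solution-phase
# reaction enthalpy to approximate the 0 K gas-phase reaction energy Delta E.
# All values are hypothetical (kcal/mol).
dH_solution   = -25.0   # reaction enthalpy measured in solution by ITC
dd_solvation  =  -6.0   # change in solvation enthalpy (continuum model)
d_thermal_zpe =   2.5   # Delta(ZPE + thermal enthalpy corrections), computed

dE_gas_0K = dH_solution - dd_solvation - d_thermal_zpe
print(f"back-corrected gas-phase Delta E(0 K) ~ {dE_gas_0K:.1f} kcal/mol")
```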
Ayyadurai, Saravanan; Flaudrops, Christophe; Raoult, Didier; Drancourt, Michel
2010-11-12
Accurate identification is necessary to discriminate harmless environmental Yersinia species from the food-borne pathogens Yersinia enterocolitica and Yersinia pseudotuberculosis and from the group A bioterrorism plague agent Yersinia pestis. In order to circumvent the limitations of current phenotypic and PCR-based identification methods, we aimed to assess the usefulness of matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) protein profiling for accurate and rapid identification of Yersinia species. As a first step, we built a database of 39 different Yersinia strains representing 12 different Yersinia species, including 13 Y. pestis isolates representative of the Antiqua, Medievalis and Orientalis biotypes. The organisms were deposited on the MALDI-TOF plate after appropriate ethanol-based inactivation, and a protein profile was obtained within 6 minutes for each of the Yersinia species. When compared with a 3,025-profile database, every Yersinia species yielded a unique protein profile and was unambiguously identified. In the second step of analysis, environmental and clinical isolates of Y. pestis (n = 2) and Y. enterocolitica (n = 11) were compared to the database and correctly identified. In particular, Y. pestis was unambiguously identified at the species level, and MALDI-TOF was able to successfully differentiate the three biotypes. These data indicate that MALDI-TOF can be used as a rapid and accurate first-line method for the identification of Yersinia isolates.
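The matching step can be pictured as nearest-neighbor search over normalized peak profiles. A hypothetical Python sketch, in which the binning width, m/z values, and the two-entry 'database' are all invented for illustration and stand in for the study's 3,025-profile database:

```python
# Identify an isolate by binning its MALDI-TOF peak list and scoring cosine
# similarity against reference profiles; report the best-matching species.
import numpy as np

def bin_spectrum(mz, intensity, lo=2000, hi=20000, width=10):
    bins = np.zeros((hi - lo) // width)
    for m, s in zip(mz, intensity):
        if lo <= m < hi:
            bins[int((m - lo) // width)] += s
    n = np.linalg.norm(bins)
    return bins / n if n else bins

# Hypothetical reference database: species -> (m/z list, intensity list)
database = {
    "Y. pestis":         ([3065, 4345, 6049, 9712], [1.0, 0.6, 0.9, 0.4]),
    "Y. enterocolitica": ([3112, 4399, 5178, 9033], [0.8, 1.0, 0.5, 0.7]),
}
ref = {sp: bin_spectrum(*peaks) for sp, peaks in database.items()}

# Unknown isolate: peaks close to the Y. pestis profile
query = bin_spectrum([3064, 4346, 6045, 9710], [0.9, 0.7, 1.0, 0.5])
scores = {sp: float(v @ query) for sp, v in ref.items()}
print(max(scores, key=scores.get), scores)
```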
Physical-geometric optics method for large size faceted particles.
Sun, Bingqiang; Yang, Ping; Kattawar, George W; Zhang, Xiaodong
2017-10-02
A new physical-geometric optics method is developed to compute the single-scattering properties of faceted particles. It incorporates a general absorption vector to accurately account for inhomogeneous wave effects, and yields analytical formulas that are both effective and computationally efficient for absorbing particles. A bundle of rays incident on a single facet can be traced as one beam. For a beam incident on multiple facets, a systematic beam-splitting technique based on computer graphics is used to split the original beam into several sub-beams so that each sub-beam is incident on only an individual facet. This beam-splitting technique significantly reduces the computational burden. The present physical-geometric optics method can be generalized to arbitrary faceted particles with either convex or concave shapes and with a homogeneous or an inhomogeneous (e.g., a particle with a core) composition. The single-scattering properties of irregular convex homogeneous and inhomogeneous hexahedra are simulated and compared with counterparts from two other methods, including a numerically rigorous method.
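The beam-splitting step can be illustrated with a toy example: assign each ray of a parallel bundle to the first triangular facet it hits, so a beam spanning several facets separates into one sub-beam per facet. A sketch using the standard Moeller-Trumbore ray/triangle test; this is our per-ray simplification, whereas the paper's computer-graphics technique splits whole beams rather than tracing individual rays:

```python
# Group a parallel ray bundle by the facet each ray hits first.
import numpy as np

def hit(origin, direction, tri, eps=1e-9):
    """Moeller-Trumbore: return distance t if the ray hits triangle tri, else None."""
    v0, v1, v2 = tri
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:                      # ray parallel to facet plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = (s @ p) * inv
    if not (0.0 <= u <= 1.0):
        return None
    q = np.cross(s, e1)
    v = (direction @ q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2 @ q) * inv
    return t if t > eps else None

# Two facets of a hypothetical particle, and a 4-ray bundle travelling +z.
facets = [np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1]], float),
          np.array([[1, 0, 2], [1, 1, 2], [0, 1, 2]], float)]
d = np.array([0.0, 0.0, 1.0])
rays = [np.array([x, y, 0.0]) for x, y in [(0.2, 0.2), (0.3, 0.5), (0.8, 0.8), (0.6, 0.9)]]

sub_beams = {}
for r in rays:
    hits = [(t, i) for i, f in enumerate(facets) if (t := hit(r, d, f)) is not None]
    if hits:
        sub_beams.setdefault(min(hits)[1], []).append(tuple(r[:2]))
print(sub_beams)   # one ray group (sub-beam) per facet
```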
Yin, Jiandong; Sun, Hongzan; Yang, Jiawen; Guo, Qiyong
2014-01-01
The arterial input function (AIF) plays a crucial role in the quantification of cerebral perfusion parameters. The traditional method for AIF detection is based on manual operation, which is time-consuming and subjective. Two automatic methods have been reported that are based on two frequently used clustering algorithms: fuzzy c-means (FCM) and K-means. However, it is still not clear which is better for AIF detection. Hence, we compared the performance of these two clustering methods using both simulated and clinical data. The results demonstrate that K-means analysis can yield more accurate and robust AIF results, although it takes longer to execute than the FCM method. We consider that this longer execution time is trivial relative to the total time required for image manipulation in a PACS setting, and is acceptable if an ideal AIF is obtained. Therefore, the K-means method is preferable to FCM in AIF detection.
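A sketch of the K-means route to automatic AIF detection; the gamma-variate curves, cluster count, and scoring rule below are our illustrative assumptions, not the authors' code:

```python
# Cluster voxel concentration-time curves and take the cluster whose mean
# curve peaks earliest and highest as the arterial one.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
t = np.linspace(0, 60, 40)                              # seconds
def gamma(t0, a, b):                                    # gamma-variate bolus shape
    s = np.clip(t - t0, 0, None)
    return s ** a * np.exp(-s / b)

arterial = gamma(5, 3.0, 1.5); arterial /= arterial.max()
tissue   = gamma(12, 3.0, 3.0); tissue  /= tissue.max() / 0.4
curves = np.vstack([arterial + 0.03 * rng.normal(size=(30, t.size)),
                    tissue   + 0.03 * rng.normal(size=(470, t.size))])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(curves)
means = np.array([curves[km.labels_ == k].mean(axis=0) for k in range(2)])
# Score clusters: high peak, early arrival -> arterial input function.
score = means.max(axis=1) - 0.01 * t[means.argmax(axis=1)]
aif = means[int(np.argmax(score))]
print("AIF peak at t =", t[aif.argmax()], "s")
```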
NASA Astrophysics Data System (ADS)
Adushkin, V. V.
A statistical procedure is described for estimating the yields of underground nuclear tests at the former Soviet Semipalatinsk test site using the peak amplitudes of short-period surface waves observed at near-regional distances (Δ < 150 km) from these explosions. This methodology is then applied to data recorded from a large sample of the Semipalatinsk explosions, including the Soviet JVE explosion of September 14, 1988, and it is demonstrated that it provides seismic estimates of explosion yield which are typically within 20% of the yields determined for these same explosions using more accurate, non-seismic techniques based on near-source observations.
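The underlying calibration is a log-log amplitude-yield regression. A hedged sketch with invented numbers, not the Semipalatinsk calibration constants: fit log10(A) = a + b*log10(Y) on explosions of known yield, then invert the fitted line to estimate yield from an observed peak surface-wave amplitude.

```python
# Magnitude-yield calibration and inversion (all numbers hypothetical).
import numpy as np

rng = np.random.default_rng(3)
Y_cal = np.array([2.0, 10.0, 25.0, 60.0, 120.0])         # known yields (kt)
logA = 1.2 + 0.85 * np.log10(Y_cal) + 0.02 * rng.normal(size=Y_cal.size)

b, a = np.polyfit(np.log10(Y_cal), logA, 1)               # slope, intercept

def yield_from_amplitude(logA_obs):
    """Invert the fitted line to get a yield estimate in kt."""
    return 10 ** ((logA_obs - a) / b)

print(f"log10(A) = {a:.2f} + {b:.2f} log10(Y)")
print("estimated yield for logA = 2.9:", round(yield_from_amplitude(2.9), 1), "kt")
```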
Computational Material Processing in Microgravity
NASA Technical Reports Server (NTRS)
2005-01-01
Working with Professor David Matthiesen at Case Western Reserve University (CWRU), we developed a computer model of the DPIMS (Diffusion Processes in Molten Semiconductors) space experiment that predicts the thermal field, flow field, and concentration profile within a molten germanium capillary under both ground-based and microgravity conditions. These models are coupled with a novel nonlinear statistical methodology for estimating the diffusion coefficient from concentration values measured after a given time, which yields a more accurate estimate than traditional methods. This code was integrated into a web-based application that has become a standard tool used by engineers in the Materials Science Department at CWRU.
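As a simple illustration of a nonlinear estimate of this kind (our sketch; the CWRU statistical methodology is more elaborate), one can fit measured concentrations at a fixed anneal time directly to a diffusion-couple profile C(x,t) = 0.5*C0*erfc(x / (2*sqrt(D*t))), estimating D without first linearizing the profile. The anneal time, noise level, and D below are hypothetical:

```python
# Nonlinear least-squares estimate of a diffusion coefficient from a
# concentration profile in a capillary (all parameters hypothetical).
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

t_anneal = 3600.0                   # s, hypothetical anneal time
C0, D_true = 1.0, 2.0e-8            # normalized conc.; cm^2/s, hypothetical

def model(x, D):
    return 0.5 * C0 * erfc(x / (2.0 * np.sqrt(D * t_anneal)))

rng = np.random.default_rng(4)
x = np.linspace(0, 0.1, 25)                         # cm along the capillary
C_meas = model(x, D_true) + 0.01 * rng.normal(size=x.size)

(D_fit,), cov = curve_fit(model, x, C_meas, p0=[1e-8])
print(f"D = {D_fit:.2e} +/- {np.sqrt(cov[0, 0]):.2e} cm^2/s")
```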
Spectral interpolation - Zero fill or convolution [image processing]
NASA Technical Reports Server (NTRS)
Forman, M. L.
1977-01-01
Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values that are accurate enough for plotting purposes and lie within the limits of calibration accuracy. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
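A quick numeric comparison of the two interpolation routes (our sketch): zero-filling the FFT versus direct convolution. The short windowed-sinc kernel below is a stand-in for the paper's repetitive-convolution scheme; both upsample a band-limited signal by 4x, with zero fill exact for band-limited data and the finite kernel trading a little accuracy for fewer operations and streaming-friendly memory use.

```python
# Zero-fill FFT interpolation vs. convolution-based interpolation, 4x upsampling.
import numpy as np

n, up = 64, 4
t = np.arange(n)
x = np.sin(2 * np.pi * 3 * t / n) + 0.5 * np.cos(2 * np.pi * 7 * t / n)

# (1) Zero fill: pad the spectrum symmetrically, then inverse transform.
X = np.fft.fft(x)
Xp = np.zeros(n * up, complex)
Xp[: n // 2] = X[: n // 2]
Xp[-n // 2 :] = X[-n // 2 :]
x_zf = np.fft.ifft(Xp).real * up

# (2) Convolution: insert zeros, convolve with a windowed sinc kernel.
x_up = np.zeros(n * up)
x_up[::up] = x
k = np.arange(-4 * up, 4 * up + 1)
kernel = np.sinc(k / up) * np.hamming(k.size)
x_conv = np.convolve(x_up, kernel, mode="same")

print("max |zero-fill - convolution| =", np.abs(x_zf - x_conv).max())
```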