2017-09-01
Strategic Environmental Research and Development Program, ERDC/CHL TR-17-15: Develop Accurate Methods for Characterizing and... current environments. This research will provide more accurate methods for assessing contaminated sediment stability for many DoD and Environmental... [front-matter unit-conversion table omitted] Executive Summary. Objective: The proposed research goal is to develop laboratory methods
A photogrammetric technique for generation of an accurate multispectral optical flow dataset
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2017-06-01
The presence of an accurate dataset is the key requirement for successful development of an optical flow estimation algorithm. A large number of freely available optical flow datasets were developed in recent years and gave rise to many powerful algorithms. However, most of the datasets include only images captured in the visible spectrum. This paper is focused on the creation of a multispectral optical flow dataset with accurate ground truth. The generation of accurate ground truth optical flow is a rather complex problem, as no device for error-free optical flow measurement has been developed to date. Existing methods for ground truth optical flow estimation are based on hidden textures, 3D modelling or laser scanning. Such techniques either work only with synthetic optical flow or provide a sparse ground truth. In this paper a new photogrammetric method for generation of accurate ground truth optical flow is proposed. The method combines the accuracy and density of synthetic optical flow datasets with the flexibility of laser-scanning-based techniques. A multispectral dataset including various image sequences was generated using the developed method. The dataset is freely available on the accompanying web site.
Parturition prediction and timing of canine pregnancy
Kim, YeunHee; Travis, Alexander J.; Meyers-Wallen, Vicki N.
2007-01-01
An accurate method of predicting the date of parturition in the bitch is clinically useful to minimize or prevent reproductive losses by timely intervention. Similarly, an accurate method of timing canine ovulation and gestation is critical for development of assisted reproductive technologies, e.g. estrous synchronization and embryo transfer. This review discusses present methods for accurately timing canine gestational age and outlines their use in clinical management of high-risk pregnancies and embryo transfer research. PMID:17904630
A Fast and Accurate Method of Radiation Hydrodynamics Calculation in Spherical Symmetry
NASA Astrophysics Data System (ADS)
Stamer, Torsten; Inutsuka, Shu-ichiro
2018-06-01
We develop a new numerical scheme for solving the radiative transfer equation in a spherically symmetric system. This scheme does not rely on any kind of diffusion approximation, and it is accurate for optically thin, thick, and intermediate systems. In the limit of a homogeneously distributed extinction coefficient, our method is very accurate and exceptionally fast. We combine this fast method with a slower but more generally applicable method to describe realistic problems. We perform various test calculations, including a simplified protostellar collapse simulation. We also discuss possible future improvements.
NASA Astrophysics Data System (ADS)
Zheng, Chang-Jun; Gao, Hai-Feng; Du, Lei; Chen, Hai-Bo; Zhang, Chuanzeng
2016-01-01
An accurate numerical solver is developed in this paper for eigenproblems governed by the Helmholtz equation and formulated through the boundary element method. A contour integral method is used to convert the nonlinear eigenproblem into an ordinary eigenproblem, so that eigenvalues can be extracted accurately by solving a set of standard boundary element systems of equations. In order to accelerate the solution procedure, the parameters affecting the accuracy and efficiency of the method are studied and two contour paths are compared. Moreover, a wideband fast multipole method is implemented with a block IDR (s) solver to reduce the overall solution cost of the boundary element systems of equations with multiple right-hand sides. The Burton-Miller formulation is employed to identify the fictitious eigenfrequencies of the interior acoustic problems with multiply connected domains. The actual effect of the Burton-Miller formulation on tackling the fictitious eigenfrequency problem is investigated and the optimal choice of the coupling parameter as α = i / k is confirmed through exterior sphere examples. Furthermore, the numerical eigenvalues obtained by the developed method are compared with the results obtained by the finite element method to show the accuracy and efficiency of the developed method.
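As a concrete illustration of the contour-integral conversion described above, the sketch below implements a generic Beyn-type contour integral eigensolver in Python. It is not the paper's boundary-element solver; the probe width m, the quadrature count, and the toy test problem are illustrative assumptions.

```python
import numpy as np

def contour_eigs(T, center, radius, m=2, n_quad=64, tol=1e-8):
    """Beyn-type contour integral method: eigenvalues of the nonlinear
    problem T(z) v = 0 lying inside a circular contour."""
    n = T(center).shape[0]
    rng = np.random.default_rng(0)
    V = rng.standard_normal((n, m))              # random probing matrix
    A0 = np.zeros((n, m), dtype=complex)
    A1 = np.zeros((n, m), dtype=complex)
    for j in range(n_quad):                      # trapezoidal rule on the circle
        z = center + radius * np.exp(2j * np.pi * j / n_quad)
        w = radius * np.exp(2j * np.pi * j / n_quad) / n_quad  # dz / (2*pi*i)
        S = np.linalg.solve(T(z), V)
        A0 += S * w
        A1 += z * S * w
    U, s, Wh = np.linalg.svd(A0, full_matrices=False)
    r = int(np.sum(s > tol * s[0]))              # numerical rank = #eigenvalues
    B = (U[:, :r].conj().T @ A1 @ Wh[:r].conj().T) / s[:r]
    return np.linalg.eigvals(B)                  # ordinary r x r eigenproblem

# Toy test: T(z) = z^2 I - diag(1, 4) has eigenvalues +-1 and +-2;
# the contour below encloses only z = 1 and z = 2.
T = lambda z: z**2 * np.eye(2) - np.diag([1.0, 4.0])
print(contour_eigs(T, center=1.5, radius=1.0))
```

The quadrature turns the nonlinear problem T(z)v = 0 into a small ordinary eigenproblem, which is exactly the conversion the abstract describes; in the paper, each solve of T(z) is a boundary element system accelerated by the wideband fast multipole method.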
NASA Astrophysics Data System (ADS)
Ji, Yang; Chen, Hong; Tang, Hongwu
2017-06-01
A highly accurate wide-angle scheme, based on the generalized multistep scheme in the propagation direction, is developed for the finite difference beam propagation method (FD-BPM). Compared with the previously presented method, simulations show that our method yields a more accurate solution and allows a much larger step size.
Serag, Ahmed; Blesa, Manuel; Moore, Emma J; Pataky, Rozalia; Sparrow, Sarah A; Wilkinson, A G; Macnaught, Gillian; Semple, Scott I; Boardman, James P
2016-03-24
Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases 'uniformly' distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.
Golkari, A; Sabokseir, A; Blane, D; Sheiham, A; Watt, RG
2017-01-01
Statement of Problem: Early childhood is a crucial period of life as it affects one’s future health. However, precise data on adverse events during this period is usually hard to access or collect, especially in developing countries. Objectives: This paper first reviews the existing methods for retrospective data collection in health and social sciences, and then introduces a new method/tool for obtaining more accurate general and oral health related information from early childhood retrospectively. Materials and Methods: The Early Childhood Events Life-Grid (ECEL) was developed to collect information on the type and time of health-related adverse events during the early years of life, by questioning the parents. The validity of ECEL and the accuracy of information obtained by this method were assessed in a pilot study and in a main study of 30 parents of 8 to 11 year old children from Shiraz (Iran). Responses obtained from parents using the final ECEL were compared with the recorded health insurance documents. Results: There was an almost perfect agreement between the health insurance and ECEL data sets (Kappa value=0.95 and p < 0.001). Interviewees remembered the important events more accurately (100% exact timing match in case of hospitalization). Conclusions: The Early Childhood Events Life-Grid method proved to be highly accurate when compared with recorded medical documents. PMID:28959773
Aircraft Dynamic Modeling in Turbulence
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Cunningham, Kevin
2012-01-01
A method for accurately identifying aircraft dynamic models in turbulence was developed and demonstrated. The method uses orthogonal optimized multisine excitation inputs and an analytic method for enhancing signal-to-noise ratio for dynamic modeling in turbulence. A turbulence metric was developed to accurately characterize the turbulence level using flight measurements. The modeling technique was demonstrated in simulation, then applied to a subscale twin-engine jet transport aircraft in flight. Comparisons of modeling results obtained in turbulent air to results obtained in smooth air were used to demonstrate the effectiveness of the approach.
NASA Astrophysics Data System (ADS)
Han, Hyung-Seop; Kim, Hee-Kyoung; Kim, Yu-Chan; Seok, Hyun-Kwang; Kim, Young-Yul
2015-11-01
The unique biodegradable property of magnesium has spawned countless studies aiming to develop ideal biodegradable orthopedic implant materials in the last decade. However, due to the rapid pH change and the extensive amount of hydrogen gas generated during biocorrosion, it is extremely difficult to determine the accurate cytotoxicity of newly developed magnesium alloys using existing methods. Herein, we report a new method to accurately determine the cytotoxicity of magnesium alloys with varying corrosion rates while taking the in-vivo condition into consideration. For the conventional method, the quantities of each metal ion in the extracts were determined using ICP-MS, and the results showed that the cytotoxicity due to the pH change caused by corrosion affected the cell viability rather than the intrinsic cytotoxicity of the magnesium alloy. In the physiological environment, pH is regulated and adjusted within the normal range (˜7.4) by homeostasis. Two new methods using pH-buffered extracts were proposed and performed to show that the environmental buffering of pH, dilution of the extract, and regulation of the eluate surface area must be taken into consideration for accurate cytotoxicity measurement of biodegradable magnesium alloys.
Detection of malondialdehyde in processed meat products without interference from the ingredients.
Jung, Samooel; Nam, Ki Chang; Jo, Cheorun
2016-10-15
Our aim was to develop a method for accurate quantification of malondialdehyde (MDA) in meat products. MDA content of uncured ground pork (Control); ground pork cured with sodium nitrite (Nitrite); and ground pork cured with sodium nitrite, sodium chloride, sodium pyrophosphate, maltodextrin, and a sausage seasoning (Mix) was measured by the 2-thiobarbituric acid (TBA) assay with MDA extraction by trichloroacetic acid (method A) and two high-performance liquid chromatography (HPLC) methods: i) HPLC separation of the MDA-dinitrophenyl hydrazine adduct (method B) and ii) HPLC separation of MDA (method C) after MDA extraction with acetonitrile. Methods A and B could not quantify MDA accurately in groups Nitrite and Mix. Nevertheless, MDA in groups Control, Nitrite, and Mix was accurately quantified by method C with good recovery. Therefore, direct MDA quantification by HPLC after MDA extraction with acetonitrile (method C) is useful for accurate measurement of MDA content in processed meat products. Copyright © 2016 Elsevier Ltd. All rights reserved.
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
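To make the DEB idea concrete, here is a one-degree-of-freedom illustration (an assumption for exposition, not an example from the report): for an axial rod with stiffness K(A) = EA/L under load F, the displacement sensitivity equation can be read as a differential equation in the design variable A and integrated in closed form.

```latex
\frac{du}{dA} = -K^{-1}\frac{dK}{dA}\,u = -\frac{u}{A}
\quad\Longrightarrow\quad
u(A) = u_0\,\frac{A_0}{A},
\qquad\text{vs. Taylor: } u(A) \approx u_0\Bigl(1 - \frac{A - A_0}{A_0}\Bigr).
```

In this toy case the closed-form DEB approximation is exact, while the linear Taylor expansion degrades quickly for large area changes, mirroring the comparison reported above.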
An automated method of tuning an attitude estimator
NASA Technical Reports Server (NTRS)
Mason, Paul A. C.; Mook, D. Joseph
1995-01-01
Attitude determination is a major element of the operation and maintenance of a spacecraft. There are several existing methods of determining the attitude of a spacecraft. One of the most commonly used methods utilizes the Kalman filter to estimate the attitude of the spacecraft. Given an accurate model of a system and adequate observations, a Kalman filter can produce accurate estimates of the attitude. If the system model, filter parameters, or observations are inaccurate, the attitude estimates may be degraded. Therefore, it is advantageous to develop a method of automatically tuning the Kalman filter to produce the accurate estimates. In this paper, a three-axis attitude determination Kalman filter, which uses only magnetometer measurements, is developed and tested using real data. The appropriate filter parameters are found via the Process Noise Covariance Estimator (PNCE). The PNCE provides an optimal criterion for determining the best filter parameters.
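For reference, one predict/update cycle of a generic discrete Kalman filter is sketched below; the process-noise covariance Q is precisely the quantity that a tuning scheme such as the PNCE must select well. The matrices and shapes here are generic assumptions, not the paper's magnetometer-only attitude filter.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a discrete Kalman filter. A poorly
    chosen Q or R degrades the estimates, which is what motivates
    automated tuning such as the PNCE described above."""
    x = F @ x                              # state prediction
    P = F @ P @ F.T + Q                    # covariance prediction
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)                # measurement update
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```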
Method for Accurately Calibrating a Spectrometer Using Broadband Light
NASA Technical Reports Server (NTRS)
Simmons, Stephen; Youngquist, Robert
2011-01-01
A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
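A simple two-beam interference model consistent with this description is the following (γ is a fringe-visibility factor; the exact expression used in the work may differ):

```latex
I(\lambda) \;=\; \tfrac{1}{2}\, I_0(\lambda)\!\left[\,1 + \gamma\cos\!\Bigl(\frac{2\pi d}{\lambda}\Bigr)\right],
```

where d is the fixed path imbalance of the Michelson interferometer and I_0(λ) the source spectrum. Fringe extrema occur where d is an integer or half-integer multiple of λ, so comparing the measured extrema positions against this predicted pattern exposes small errors in each pixel's wavelength assignment.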
NASA Astrophysics Data System (ADS)
Liu, Chun-Ho; Leung, Dennis Y. C.
2006-02-01
This study employed a direct numerical simulation (DNS) technique to contrast the plume behaviours and mixing of passive scalar emitted from line sources (aligned with the spanwise direction) in neutrally and unstably stratified open-channel flows. The DNS model was developed using the Galerkin finite element method (FEM) employing trilinear brick elements with equal-order interpolating polynomials that solved the momentum and continuity equations, together with conservation of energy and mass equations in incompressible flow. The second-order accurate fractional-step method was used to handle the implicit velocity-pressure coupling in incompressible flow. It also segregated the solution to the advection and diffusion terms, which were then integrated in time, respectively, by the explicit third-order accurate Runge-Kutta method and the implicit second-order accurate Crank-Nicolson method. The buoyancy term under unstable stratification was integrated in time explicitly by the first-order accurate Euler method. The DNS FEM model calculated the scalar-plume development and the mean plume path. In particular, it calculated the plume meandering in the wall-normal direction under unstable stratification that agreed well with the laboratory and field measurements, as well as previous modelling results available in literature.
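Schematically, each time step described above splits as follows (a simplified sketch assembled from the abstract; ν denotes the diffusivity, A(u) the discretized advection operator, and b(θ) the buoyancy forcing, with the actual FEM formulation more involved):

```latex
\underbrace{\frac{u^{*}-u^{n}}{\Delta t} = -\mathcal{A}(u^{n}) + b(\theta^{n})}_{\text{advection: explicit RK3;\ buoyancy: explicit Euler}},
\qquad
\underbrace{\frac{u^{**}-u^{*}}{\Delta t} = \frac{\nu}{2}\,\nabla^{2}\!\left(u^{**}+u^{*}\right)}_{\text{diffusion: Crank--Nicolson}},
\qquad
\nabla^{2}p = \frac{\nabla\!\cdot u^{**}}{\Delta t},\quad
u^{n+1} = u^{**} - \Delta t\,\nabla p .
```

The final projection enforces incompressibility, which is the implicit velocity-pressure coupling that the second-order fractional-step method handles.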
Studies of HZE particle interactions and transport for space radiation protection purposes
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.; Wilson, John W.; Schimmerling, Walter; Wong, Mervyn
1987-01-01
The main emphasis is on developing general methods for accurately predicting high-energy heavy ion (HZE) particle interactions and transport for use by researchers in mission planning studies, in evaluating astronaut self-shielding factors, and in spacecraft shield design and optimization studies. The two research tasks are: (1) to develop computationally fast and accurate solutions to the Boltzmann (transport) equation; and (2) to develop accurate HZE interaction models, from fundamental physical considerations, for use as inputs into these transport codes. Accurate solutions to the HZE transport problem have been formulated through a combination of analytical and numerical techniques. In addition, theoretical models for the input interaction parameters are under development: stopping powers, nuclear absorption cross sections, and fragmentation parameters.
A picture's worth a thousand words: a food-selection observational method.
Carins, Julia E; Rundle-Thiele, Sharyn R; Parkinson, Joy E
2016-05-04
Issue addressed: Methods are needed to accurately measure and describe behaviour so that social marketers and other behaviour change researchers can gain consumer insights before designing behaviour change strategies and so, in time, they can measure the impact of strategies or interventions when implemented. This paper describes a photographic method developed to meet these needs. Methods: Direct observation and photographic methods were developed and used to capture food-selection behaviour and examine those selections according to their healthfulness. Four meals (two lunches and two dinners) were observed at a workplace buffet-style cafeteria over a 1-week period. The healthfulness of individual meals was assessed using a classification scheme developed for the present study and based on the Australian Dietary Guidelines. Results: Approximately 27% of meals (n = 168) were photographed. Agreement was high between raters classifying dishes using the scheme, as well as between researchers when coding photographs. The subset of photographs was representative of patterns observed in the entire dining room. Diners chose main dishes in line with the proportions presented, but in opposition to the proportions presented for side dishes. Conclusions: The present study developed a rigorous observational method to investigate food choice behaviour. The comprehensive food classification scheme produced consistent classifications of foods. The photographic data collection method was found to be robust and accurate. Combining the two observation methods allows researchers and/or practitioners to accurately measure and interpret food selections. Consumer insights gained suggest that, in this setting, increasing the availability of green (healthful) offerings for main dishes would assist in improving healthfulness, whereas other strategies (e.g. promotion) may be needed for side dishes. So what?: Visual observation methods that accurately measure and interpret food-selection behaviour provide both insight for those developing healthy eating interventions and a means to evaluate the effect of implemented interventions on food selection.
Automated seed localization from CT datasets of the prostate.
Brinkmann, D H; Kline, R W
1998-09-01
With the increasing utilization of permanent brachytherapy implants for treating carcinoma of the prostate, the importance of accurate post-treatment dose calculation also increases for assessing patient outcome and planning future treatments. An automatic method for seed localization of permanent brachytherapy implants, using CT datasets of the prostate, has been developed and tested on a phantom using an actual patient planned seed distribution. This method was also compared to results with the three-film technique for three patient datasets. The automatic method is as accurate or more accurate than the three-film technique for 1 mm, 3 mm, and 5 mm contiguous CT slices, and eliminates the inter- and intra-observer variability of the manual methods. The automated method improves the localization of brachytherapy seeds while reducing the time required for the user to input information, and is demonstrated to be less operator dependent, less time consuming, and potentially more accurate than the three-film technique.
Development and evaluation of the photoload sampling technique
Robert E. Keane; Laura J. Dickinson
2007-01-01
Wildland fire managers need better estimates of fuel loading so they can accurately predict potential fire behavior and effects of alternative fuel and ecosystem restoration treatments. This report presents the development and evaluation of a new fuel sampling method, called the photoload sampling technique, to quickly and accurately estimate loadings for six common...
Drawert, Brian; Lawson, Michael J; Petzold, Linda; Khammash, Mustafa
2010-02-21
We have developed a computational framework for accurate and efficient simulation of stochastic spatially inhomogeneous biochemical systems. The new computational method employs a fractional step hybrid strategy. A novel formulation of the finite state projection (FSP) method, called the diffusive FSP method, is introduced for the efficient and accurate simulation of diffusive transport. Reactions are handled by the stochastic simulation algorithm.
Simple Test Functions in Meshless Local Petrov-Galerkin Methods
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.
2016-01-01
Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions but that use a simple linear test function were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. These two methods were tested on various patch test problems. Both methods passed the patch tests successfully. Then the methods were applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing efforts as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function method produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive as the method is simple, accurate, and robust.
NASA Astrophysics Data System (ADS)
Jung, C. C.; Stumpe, J.
2005-02-01
A new method of immersion transmission ellipsometry (ITE) [1] has been developed. It allows the highly accurate determination of the absolute three-dimensional (3D) refractive indices of anisotropic thin films. The method is combined with conventional ellipsometry in transmission and reflection, and the thickness determination of anisotropic films solely by optical methods also becomes more accurate. The method is applied to the determination of the 3D refractive indices of thin spin-coated films of an azobenzene-containing liquid-crystalline copolymer. The development of the anisotropy in these films by photo-orientation and subsequent annealing is demonstrated. Depending on the annealing temperature, oblate or prolate orders are generated.
Trunk density profile estimates from dual X-ray absorptiometry.
Wicke, Jason; Dumas, Geneviève A; Costigan, Patrick A
2008-01-01
Accurate body segment parameters are necessary to estimate joint loads when using biomechanical models. Geometric methods can provide individualized data for these models but the accuracy of the geometric methods depends on accurate segment density estimates. The trunk, which is important in many biomechanical models, has the largest variability in density along its length. Therefore, the objectives of this study were to: (1) develop a new method for modeling trunk density profiles based on dual X-ray absorptiometry (DXA) and (2) develop a trunk density function for college-aged females and males that can be used in geometric methods. To this end, the density profiles of 25 females and 24 males were determined by combining the measurements from a photogrammetric method and DXA readings. A discrete Fourier transformation was then used to develop the density functions for each sex. The individual density and average density profiles compare well with the literature. There were distinct differences between the profiles of two participants (one female and one male) and the average for their sex. It is believed that the variations in these two participants' density profiles were a result of the amount and distribution of fat they possessed. Further studies are needed to support this possibility. The new density functions eliminate the uniform density assumption associated with some geometric models thus providing more accurate trunk segment parameter estimates. In turn, more accurate moments and forces can be estimated for the kinetic analyses of certain human movements.
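The Fourier construction of a density function can be sketched as follows (a minimal least-squares variant; the harmonic count and the normalized-height parameterization are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def fit_density_series(z, rho, n_harmonics=4):
    """Fit a truncated Fourier series to a trunk density profile rho
    sampled at normalized heights z in [0, 1]; returns coefficients."""
    cols = [np.ones_like(z)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(2 * np.pi * k * z), np.sin(2 * np.pi * k * z)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, rho, rcond=None)
    return coef

def eval_density_series(coef, z):
    """Evaluate the fitted series at normalized heights z."""
    rho = coef[0] * np.ones_like(z)
    for k in range(1, (len(coef) - 1) // 2 + 1):
        rho += coef[2*k - 1] * np.cos(2 * np.pi * k * z)
        rho += coef[2*k]     * np.sin(2 * np.pi * k * z)
    return rho
```

A sex-specific density function of this form replaces the uniform-density assumption when computing segment mass, center of mass, and moments of inertia in geometric models.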
Macintyre, Lisa
2011-11-01
Accurate measurement of the pressure delivered by medical compression products is highly desirable both in monitoring treatment and in developing new pressure inducing garments or products. There are several complications in measuring pressure at the garment/body interface and at present no ideal pressure measurement tool exists for this purpose. This paper summarises a thorough evaluation of the accuracy and reproducibility of measurements taken following both of Tekscan Inc.'s recommended calibration procedures for I-scan sensors; and presents an improved method for calibrating and using I-scan pressure sensors. The proposed calibration method enables accurate (±2.1 mmHg) measurement of pressures delivered by pressure garments to body parts with a circumference ≥30 cm. This method is too cumbersome for routine clinical use but is very useful, accurate and reproducible for product development or clinical evaluation purposes. Copyright © 2011 Elsevier Ltd and ISBI. All rights reserved.
Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Goodrich, John W.; Dyson, Rodger W.
1999-01-01
The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that have resulted from this work. A review of computational aeroacoustics has recently been given by Lele.
NASA Technical Reports Server (NTRS)
Whalen, Robert T.; Napel, Sandy; Yan, Chye H.
1996-01-01
Progress in development of the methods required to study bone remodeling as a function of time is reported. The following topics are presented: 'A New Methodology for Registration Accuracy Evaluation', 'Registration of Serial Skeletal Images for Accurately Measuring Changes in Bone Density', and 'Precise and Accurate Gold Standard for Multimodality and Serial Registration Method Evaluations.'
NASA Technical Reports Server (NTRS)
Sanger, Eugen
1932-01-01
In the present report the computation is actually carried through for the case of parallel spars of equal resistance in bending without direct loading, including plotting of the influence lines; for other cases the method of calculation is explained. The development of large size airplanes can be speeded up by accurate methods of calculation such as this.
Sagiyama, Koki; Rudraraju, Shiva; Garikipati, Krishna
2016-09-13
Here, we consider solid state phase transformations that are caused by free energy densities with domains of non-convexity in strain-composition space; we refer to the non-convex domains as mechano-chemical spinodals. The non-convexity with respect to composition and strain causes segregation into phases with different crystal structures. We work on an existing model that couples the classical Cahn-Hilliard model with Toupin’s theory of gradient elasticity at finite strains. Both systems are represented by fourth-order, nonlinear, partial differential equations. The goal of this work is to develop unconditionally stable, second-order accurate time-integration schemes, motivated by the need to carry out large scalemore » computations of dynamically evolving microstructures in three dimensions. We also introduce reduced formulations naturally derived from these proposed schemes for faster computations that are still second-order accurate. Although our method is developed and analyzed here for a specific class of mechano-chemical problems, one can readily apply the same method to develop unconditionally stable, second-order accurate schemes for any problems for which free energy density functions are multivariate polynomials of solution components and component gradients. Apart from an analysis and construction of methods, we present a suite of numerical results that demonstrate the schemes in action.« less
Real-time determination of sarcomere length of a single cardiomyocyte during contraction
Kalda, Mari; Vendelin, Marko
2013-01-01
Sarcomere length of a cardiomyocyte is an important control parameter for physiology studies on a single cell level; for instance, its accurate determination in real time is essential for performing single cardiomyocyte contraction experiments. The aim of this work is to develop an efficient and accurate method for estimating the mean sarcomere length of a contracting cardiomyocyte using microscopy images as an input. The novelty of the developed method lies in 1) using an unbiased measure of similarity to eliminate systematic errors of conventional autocorrelation function (ACF)-based methods when applied to a region of interest of an image, 2) using a semianalytical, seminumerical approach for evaluating the similarity measure to take into account the spatial dependence of neighboring image pixels, and 3) using a detrend algorithm to extract the sarcomere striation pattern content from the microscopy images. The developed sarcomere length estimation procedure has superior computational efficiency and estimation accuracy compared with the conventional ACF and spectral analysis-based methods using fast Fourier transform. As shown by analyzing synthetic images with known periodicity, the estimates obtained by the developed method are more accurate at the subpixel level than ones obtained using ACF analysis. When applied in practice on rat cardiomyocytes, our method was found to be robust to the choice of the region of interest, which may 1) include projections of carbon fibers and the nucleus, 2) have an uneven background, and 3) be slightly disoriented with respect to the average direction of the sarcomere striation pattern. The developed method is implemented in open-source software. PMID:23255581
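For comparison, the FFT-based spectral estimate that the authors benchmark against can be sketched in a few lines (the subpixel refinement here is a simple parabolic peak interpolation; the profile extraction and detrending in the paper are more sophisticated):

```python
import numpy as np

def striation_period_fft(profile, pixel_size_um):
    """Baseline spectral estimate of the mean striation period from a
    1-D intensity profile: dominant FFT peak with parabolic (subpixel)
    interpolation. The ACF-based method above improves on this."""
    x = np.asarray(profile, dtype=float)
    x -= x.mean()                            # crude detrend (remove DC)
    spec = np.abs(np.fft.rfft(x))
    k = int(np.argmax(spec[1:])) + 1         # skip the zero-frequency bin
    k = min(k, len(spec) - 2)                # assume an interior peak
    a, b, c = spec[k - 1], spec[k], spec[k + 1]
    dk = 0.5 * (a - c) / (a - 2 * b + c)     # parabolic peak refinement
    freq = (k + dk) / len(x)                 # cycles per pixel
    return pixel_size_um / freq              # mean period in micrometres
```

Applied to a line profile drawn across the striation pattern, this returns the mean sarcomere length; the paper's unbiased-similarity approach reduces the systematic subpixel errors such spectral and ACF estimates exhibit.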
METHOD 544. DETERMINATION OF MICROCYSTINS AND ...
Method 544 is an accurate and precise analytical method to determine six microcystins (including MC-LR) and nodularin in drinking water using solid phase extraction and liquid chromatography tandem mass spectrometry (SPE-LC/MS/MS). The advantage of this SPE-LC/MS/MS method is its sensitivity and its ability to speciate the microcystins. This method development task establishes sample preservation techniques, sample concentration and analytical procedures, aqueous and extract holding time criteria, and quality control procedures. Draft Method 544 underwent a multi-laboratory verification to ensure other laboratories can implement the method and achieve the quality control measures specified in the method. It is anticipated that Method 544 may be used in UCMR 4 to collect nationwide occurrence data for selected microcystins in drinking water. The purpose of this research project is to develop an accurate and precise analytical method to concentrate and determine selected MCs and nodularin in drinking water.
An Accurate and Stable FFT-based Method for Pricing Options under Exp-Lévy Processes
NASA Astrophysics Data System (ADS)
Ding, Deng; Chong U, Sio
2010-05-01
An accurate and stable method for pricing European options in exp-Lévy models is presented. The main idea of this new method is to combine the quadrature technique with the Carr-Madan Fast Fourier Transform method. The theoretical analysis shows that the overall complexity of this new method is still O(N log N) with N grid points, as for the fast Fourier transform methods. Numerical experiments for different exp-Lévy processes also show that the numerical algorithm proposed by this new method is accurate and stable for small strike prices K, thereby extending and improving the Carr-Madan method.
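The Carr-Madan FFT baseline that this work builds upon can be sketched as follows, here instantiated with the Black-Scholes (Gaussian exp-Lévy) characteristic function; the damping parameter α and the grid sizes are conventional illustrative choices, not values from the paper.

```python
import numpy as np

def carr_madan_call(S0, K, r, T, sigma, alpha=1.5, N=2**12, eta=0.25):
    """European call price via the Carr-Madan FFT method with a
    Black-Scholes characteristic function (a Gaussian exp-Levy model)."""
    # Characteristic function of log S_T under Black-Scholes
    phi = lambda u: np.exp(1j * u * (np.log(S0) + (r - 0.5 * sigma**2) * T)
                           - 0.5 * sigma**2 * u**2 * T)
    v = eta * np.arange(N)                          # integration grid
    psi = np.exp(-r * T) * phi(v - (alpha + 1) * 1j) \
          / (alpha**2 + alpha - v**2 + 1j * (2 * alpha + 1) * v)
    lam = 2 * np.pi / (N * eta)                     # log-strike spacing
    b = 0.5 * N * lam
    w = eta * (3 + (-1.0)**(np.arange(N) + 1)) / 3  # Simpson weights 1,4,2,4,...
    w[0] = eta / 3
    k = -b + lam * np.arange(N)                     # log-strike grid
    calls = np.exp(-alpha * k) / np.pi * np.real(
        np.fft.fft(np.exp(1j * b * v) * psi * w))
    return float(np.interp(np.log(K), k, calls))

# Sanity check against the known Black-Scholes value (~10.45):
print(carr_madan_call(S0=100.0, K=100.0, r=0.05, T=1.0, sigma=0.2))
```

The loss of accuracy of this baseline for small strikes K, where the damped integrand oscillates strongly, is exactly what the quadrature-hybrid method above is designed to cure.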
NASA Technical Reports Server (NTRS)
Ray, Ronald J.
1994-01-01
New flight test maneuvers and analysis techniques for evaluating the dynamic response of in-flight thrust models during throttle transients have been developed and validated. The approach is based on the aircraft and engine performance relationship between thrust and drag. Two flight test maneuvers, a throttle step and a throttle frequency sweep, were developed and used in the study. Graphical analysis techniques, including a frequency domain analysis method, were also developed and evaluated. They provide quantitative and qualitative results. Four thrust calculation methods were used to demonstrate and validate the test technique. Flight test applications on two high-performance aircraft confirmed the test methods as valid and accurate. These maneuvers and analysis techniques were easy to implement and use. Flight test results indicate the analysis techniques can identify the combined effects of model error and instrumentation response limitations on the calculated thrust value. The methods developed in this report provide an accurate approach for evaluating, validating, or comparing thrust calculation methods for dynamic flight applications.
Shao, Zhecheng; Wyatt, Mark F; Stein, Bridget K; Brenton, A Gareth
2010-10-30
A method for the accurate mass measurement of negative radical ions by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOFMS) is described. This is an extension to our previously described method for the accurate mass measurement of positive radical ions (Griffiths NW, Wyatt MF, Kean SD, Graham AE, Stein BK, Brenton AG. Rapid Commun. Mass Spectrom. 2010; 24: 1629). The porphyrin standard reference materials (SRMs) developed for positive mode measurements cannot be observed in negative ion mode, so fullerene and fluorinated porphyrin compounds were identified as effective SRMs. The method is of immediate practical use for the accurate mass measurement of functionalised fullerenes, for which negative ion MALDI-TOFMS is the principal mass spectrometry characterisation technique. This was demonstrated by the accurate mass measurement of six functionalised C(60) compounds. Copyright © 2010 John Wiley & Sons, Ltd.
An accurate computational method for the diffusion regime verification
NASA Astrophysics Data System (ADS)
Zhokh, Alexey A.; Strizhak, Peter E.
2018-04-01
The diffusion regime (sub-diffusive, standard, or super-diffusive) is defined by the order of the derivative in the corresponding transport equation. We develop an accurate computational method for the direct estimation of the diffusion regime. The method is based on estimating the derivative order using the asymptotic analytic solutions of the diffusion equation with integer-order and time-fractional derivatives. The robustness and low computational cost of the proposed method are verified using experimental kinetics of methane and methyl alcohol transport through a catalyst pellet.
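The standard scaling relation behind such derivative-order estimation is worth stating (in generic form; the paper's asymptotic solutions are for the specific transport setup studied):

```latex
\frac{\partial^{\alpha} c}{\partial t^{\alpha}} \;=\; D_{\alpha}\,\frac{\partial^{2} c}{\partial x^{2}},
\qquad
\langle x^{2}(t)\rangle \;\propto\; t^{\alpha},
```

so the slope of log⟨x²⟩ against log t estimates the derivative order α directly: α < 1 indicates a sub-diffusive regime, α = 1 standard diffusion, and α > 1 super-diffusion.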
Huang, Lei
2015-01-01
To solve the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of the observation noise are used to obtain the estimated mean and variance of the observation noise. Using robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy. Thus, the required sample size is reduced. It can be applied to modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required. PMID:26437409
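The parameters-as-state idea can be illustrated with a simplified AR-only sketch (the full method handles the MA terms and robust, time-varying noise estimation; the noise levels q and r below are illustrative assumptions):

```python
import numpy as np

def ar_params_kalman(y, order=2, q=1e-6, r=1.0):
    """Estimate AR model parameters recursively with a Kalman filter,
    treating the parameters themselves as a random-walk state vector."""
    theta = np.zeros(order)                # parameter estimate (the state)
    P = np.eye(order) * 1e2                # parameter covariance
    for t in range(order, len(y)):
        h = y[t - order:t][::-1]           # regressor: most recent outputs
        P = P + q * np.eye(order)          # random-walk prediction step
        s = h @ P @ h + r                  # innovation variance
        k = P @ h / s                      # gain
        theta = theta + k * (y[t] - h @ theta)
        P = P - np.outer(k, h) @ P
    return theta

# Usage: recover a known AR(2) process y_t = 1.5 y_{t-1} - 0.7 y_{t-2} + e_t
rng = np.random.default_rng(1)
y = np.zeros(2000)
for t in range(2, len(y)):
    y[t] = 1.5 * y[t-1] - 0.7 * y[t-2] + rng.standard_normal()
print(ar_params_kalman(y))                 # approximately [1.5, -0.7]
```

Because the filter refines the parameter estimate at every sample, convergence is recursive rather than batch, which is what reduces the required sample size.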
Procedure for the systematic orientation of digitised cranial models. Design and validation.
Bailo, M; Baena, S; Marín, J J; Arredondo, J M; Auría, J M; Sánchez, B; Tardío, E; Falcón, L
2015-12-01
Comparison of bony pieces requires that they are oriented systematically to ensure that homologous regions are compared. Few orientation methods are highly accurate; this is particularly true for methods applied to three-dimensional models obtained by surface scanning, a technique whose special features make it a powerful tool in forensic contexts. The aim of this study was to develop and evaluate a systematic, assisted orientation method for aligning three-dimensional cranial models relative to the Frankfurt Plane, which would produce accurate orientations independent of operator and anthropological expertise. The study sample comprised four crania of known age and sex. All the crania were scanned and reconstructed using an Eva Artec™ portable 3D surface scanner and subsequently, the positions of certain characteristic landmarks were determined by three different operators using the Rhinoceros 3D surface modelling software. Intra-observer analysis showed a tendency for orientation to be more accurate when using the assisted method than when using conventional manual orientation. Inter-observer analysis showed that experienced evaluators achieved results at least as accurate, if not more accurate, with the assisted method as with manual orientation, while inexperienced evaluators achieved more accurate orientations using the assisted method. The method tested is an innovative system capable of providing very precise, systematic and automated spatial orientations of virtual cranial models relative to standardised anatomical planes, independent of the operator and operator experience. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Progress Toward Accurate Measurements of Power Consumptions of DBD Plasma Actuators
NASA Technical Reports Server (NTRS)
Ashpis, David E.; Laun, Matthew C.; Griebeler, Elmer L.
2012-01-01
The accurate measurement of power consumption by Dielectric Barrier Discharge (DBD) plasma actuators is a challenge due to the characteristics of the actuator current signal. Micro-discharges generate high-amplitude, high-frequency current spike transients superimposed on a low-amplitude, low-frequency current. We have used a high-speed digital oscilloscope to measure the actuator power consumption using the Shunt Resistor method and the Monitor Capacitor method. The measurements were performed simultaneously and compared to each other in a time-accurate manner. It was found that low signal-to-noise ratios of the oscilloscopes used, in combination with the high dynamic range of the current spikes, make the Shunt Resistor method inaccurate. An innovative, nonlinear signal compression circuit was applied to the actuator current signal and yielded excellent agreement between the two methods. The paper describes the issues and challenges associated with performing accurate power measurements. It provides insights into the two methods including new insight into the Lissajous curve of the Monitor Capacitor method. Extension to a broad range of parameters and further development of the compression hardware will be performed in future work.
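The relation underlying the Monitor Capacitor (Lissajous) method is the standard one below, stated generically; the paper's analysis of the Lissajous curve goes further than this.

```latex
P \;=\; f \oint V_a \, dQ \;=\; f\, C_m \oint V_a \, dV_c ,
```

The energy consumed per cycle equals the area enclosed by the charge-voltage (Lissajous) figure, with the charge Q = C_m V_c read from a monitor capacitor C_m in series with the actuator and V_a the applied voltage; multiplying by the drive frequency f gives the average power. The Shunt Resistor alternative instead averages the instantaneous product of applied voltage and measured current, which is what the high-dynamic-range current spikes make difficult.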
Hu, Hao; Yang, Weitao
2013-01-01
Determining the free energies and mechanisms of chemical reactions in solution and enzymes is a major challenge. For such complex reaction processes, combined quantum mechanics/molecular mechanics (QM/MM) method is the most effective simulation method to provide an accurate and efficient theoretical description of the molecular system. The computational costs of ab initio QM methods, however, have limited the application of ab initio QM/MM methods. Recent advances in ab initio QM/MM methods allowed the accurate simulation of the free energies for reactions in solution and in enzymes and thus paved the way for broader application of the ab initio QM/MM methods. We review here the theoretical developments and applications of the ab initio QM/MM methods, focusing on the determination of reaction path and the free energies of the reaction processes in solution and enzymes. PMID:24146439
NASA Astrophysics Data System (ADS)
Sizov, Gennadi Y.
In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters such as, winding flux linkages and voltages, average, cogging and ripple torques, stator core flux densities, core losses, efficiencies and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable, when compared to current and prevalent state-of-the-art methods. These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for using less accurate analytical and lumped parameter equivalent circuit models for electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.
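The optimization loop described above can be sketched with a minimal rand/1/bin differential evolution routine. In the actual design workflow the objective function would wrap the finite-element machine model and return a scalarized performance measure; the quadratic stand-in below, and all parameter values, are illustrative assumptions.

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, gens=100, F=0.8, CR=0.9, seed=0):
    """Minimal rand/1/bin differential evolution minimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = lo + rng.random((pop, len(lo))) * (hi - lo)   # initial population
    cost = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # Mutate: combine three distinct other members
            a, b, c = X[rng.choice([j for j in range(pop) if j != i],
                                   3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # Binomial crossover with at least one mutant component
            cross = rng.random(len(lo)) < CR
            cross[rng.integers(len(lo))] = True
            trial = np.where(cross, mutant, X[i])
            # Greedy selection
            if (tc := f(trial)) < cost[i]:
                X[i], cost[i] = trial, tc
    return X[np.argmin(cost)], cost.min()

# Stand-in objective (e.g., a torque-ripple proxy); the real objective
# would invoke the finite-element solver for each candidate design.
best, val = differential_evolution(lambda x: np.sum((x - 0.3)**2),
                                   bounds=[(0, 1)] * 4)
print(best, val)
```

Because every candidate evaluation is an independent finite-element analysis, the population loop parallelizes naturally, which is part of why the rapid FE modeling method matters for overall cost.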
Musmade, Kranti P.; Trilok, M.; Dengale, Swapnil J.; Bhat, Krishnamurthy; Reddy, M. S.; Musmade, Prashant B.; Udupa, N.
2014-01-01
A simple, precise, accurate, rapid, and sensitive reverse phase high performance liquid chromatography (RP-HPLC) method with UV detection has been developed and validated for quantification of naringin (NAR) in a novel pharmaceutical formulation. NAR is a polyphenolic flavonoid present in most citrus plants and has a variety of pharmacological activities. Method optimization was carried out by considering various parameters, such as the effect of pH and of the column. The analyte was separated by employing a C18 (250.0 × 4.6 mm, 5 μm) column at ambient temperature in isocratic conditions using phosphate buffer pH 3.5: acetonitrile (75 : 25% v/v) as mobile phase pumped at a flow rate of 1.0 mL/min. UV detection was carried out at 282 nm. The developed method was validated according to ICH guidelines Q2(R1). The method was found to be precise and accurate on statistical evaluation, with a linearity range of 0.1 to 20.0 μg/mL for NAR. The intra- and interday precision studies showed good reproducibility with coefficients of variation (CV) less than 1.0%. The mean recovery of NAR was found to be 99.33 ± 0.16%. The proposed method was found to be highly accurate, sensitive, and robust. The proposed liquid chromatographic method was successfully employed for the routine analysis of said compound in the developed novel nanopharmaceuticals. The presence of excipients did not show any interference on the determination of NAR, indicating method specificity. PMID:26556205
What can formal methods offer to digital flight control systems design
NASA Technical Reports Server (NTRS)
Good, Donald I.
1990-01-01
Formal methods research is beginning to produce methods that will enable mathematical modeling of the physical behavior of digital hardware and software systems. The development of these methods directly supports the NASA mission of increasing the scope and effectiveness of flight system modeling capabilities. The conventional, continuous mathematics that is used extensively in modeling flight systems is not adequate for accurate modeling of digital systems. Therefore, the current practice of digital flight control system design has not had the benefits of extensive mathematical modeling, which are common in other parts of flight system engineering. Formal methods research shows that by using discrete mathematics, very accurate modeling of digital systems is possible. These discrete modeling methods will bring the traditional benefits of modeling to digital hardware and software design. Sound reasoning about accurate mathematical models of flight control systems can be an important part of reducing the risk of unsafe flight control.
Rapid glucosinolate detection and identification using accurate mass MS-MS
USDA-ARS?s Scientific Manuscript database
Currently, there is a demand for accurate evaluation of brassica plant species for their glucosinolate content. An optimized method has been developed for detecting and identifying glucosinolates in plant extracts using MS-MS fragmentation with ion trap collision induced dissociation (CID) and higher...
1H NMR quantification in very dilute toxin solutions: application to anatoxin-a analysis.
Dagnino, Denise; Schripsema, Jan
2005-08-01
A complete procedure is described for the extraction, detection and quantification of anatoxin-a in biological samples. Anatoxin-a is extracted from biomass by a routine acid base extraction. The extract is analysed by GC-MS, without the need of derivatization, with a detection limit of 0.5 ng. A method was developed for the accurate quantification of anatoxin-a in the standard solution to be used for the calibration of the GC analysis. 1H NMR allowed the accurate quantification of microgram quantities of anatoxin-a. The accurate quantification of compounds in standard solutions is rarely discussed, but for compounds like anatoxin-a (toxins with prices in the range of a million dollar a gram), of which generally only milligram quantities or less are available, this factor in the quantitative analysis is certainly not trivial. The method that was developed can easily be adapted for the accurate quantification of other toxins in very dilute solutions.
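The quantification step rests on the standard qHNMR proportionality between signal integrals and molar amounts (stated generically here; the paper's exact internal-standard protocol may differ):

```latex
\frac{m_a}{m_s} \;=\; \frac{I_a}{I_s}\cdot\frac{N_s}{N_a}\cdot\frac{M_a}{M_s},
```

where I is the integral of a resolved signal, N the number of protons giving rise to it, M the molar mass, and the subscripts denote the analyte (anatoxin-a) and the internal standard. Because the proton response is compound-independent, microgram amounts can be assayed without a compound-specific response factor, which is what makes accurate calibration of scarce, expensive toxin standards feasible.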
Zhao, Xin; Liu, Jun; Yao, Yong-Xin; ...
2018-01-23
Developing accurate and computationally efficient methods to calculate the electronic structure and total energy of correlated-electron materials has been a very challenging task in condensed matter physics and materials science. Recently, we have developed a correlation matrix renormalization (CMR) method which does not assume any empirical Coulomb interaction U parameters and does not have double counting problems in the ground-state total energy calculation. The CMR method has been demonstrated to be accurate in describing both the bonding and bond breaking behaviors of molecules. In this study, we extend the CMR method to the treatment of electron correlations in periodic solid systems. By using a linear hydrogen chain as a benchmark system, we show that the results from the CMR method compare very well with those obtained recently by accurate quantum Monte Carlo (QMC) calculations. We also study the equation of states of three-dimensional crystalline phases of atomic hydrogen. We show that the results from the CMR method agree much better with the available QMC data in comparison with those from density functional theory and Hartree-Fock calculations.
EXPOSURE ASSESSMENT METHODS DEVELOPMENT PILOTS FOR THE NATIONAL CHILDREN'S STUDY
Accurate exposure classification tools are needed to link exposure with health effects. EPA began methods development pilot studies in 2000 to address general questions about exposures and outcome measures. Selected pilot studies are highlighted in this poster. The “Literature Re...
On the Development of Parameterized Linear Analytical Longitudinal Airship Models
NASA Technical Reports Server (NTRS)
Kulczycki, Eric A.; Johnson, Joseph R.; Bayard, David S.; Elfes, Alberto; Quadrelli, Marco B.
2008-01-01
In order to explore Titan, a moon of Saturn, airships must be able to traverse the atmosphere autonomously. To achieve this, an accurate model and accurate control of the vehicle must be developed so that it is understood how the airship will react to specific sets of control inputs. This paper explains how longitudinal aircraft stability derivatives can be used with airship parameters to create a linear model of the airship solely by combining geometric and aerodynamic airship data. This method does not require system identification of the vehicle. All of the required data can be derived from computational fluid dynamics and wind tunnel testing. This alternate method of developing dynamic airship models will reduce time and cost. Results are compared to other stable airship dynamic models to validate the methods. Future work will address a lateral airship model using the same methods.
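In standard form, the resulting longitudinal model is linear and parameterized directly by the stability derivatives. The sketch below is the conventional aircraft-style form; airship-specific added-mass and buoyancy terms, which the paper's parameterization accounts for, would modify the entries.

```latex
\dot{x} = A\,x + B\,u,\qquad
x = \begin{bmatrix} \Delta u & \Delta w & q & \theta \end{bmatrix}^{\!T},\qquad
A = \begin{bmatrix}
X_u & X_w & X_q & -g\cos\theta_0\\
Z_u & Z_w & Z_q + u_0 & -g\sin\theta_0\\
M_u & M_w & M_q & 0\\
0 & 0 & 1 & 0
\end{bmatrix},
```

with each stability derivative (X_u, Z_w, M_q, ...) assembled from the geometric and aerodynamic data obtained by CFD and wind tunnel testing, rather than identified from flight data.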
Trinder, P.; Harper, F. E.
1962-01-01
A colorimetric technique for the determination of carboxyhaemoglobin in blood is described. Carbon monoxide released from blood in a standard Conway unit reacts with palladous chloride/arsenomolybdate solution to produce a blue colour. Using 0·5 to 2 ml. of blood, the method will estimate carboxyhaemoglobin accurately at levels from 0·1% to 100% of total haemoglobin and in the presence of other abnormal pigments. A number of methods are available for the determination of carboxyhaemoglobin; none is accurate below a concentration of 1·5 g. carboxyhaemoglobin per 100 ml. but for most clinical purposes this is not important. For forensic purposes and occasionally in clinical use, an accurate determination of carboxyhaemoglobin below 750 mg. per 100 ml. may be required and no really satisfactory method is at present available. Some time ago when it was important to know whether a person who was found dead in a burning house had died before or after the fire had started, we became interested in developing a method which would determine accurately carboxyhaemoglobin at levels of 750 mg. per 100 ml. PMID:13922505
NASA Astrophysics Data System (ADS)
Hester, David Barry
The objective of this research was to develop methods for urban land cover analysis using QuickBird high spatial resolution satellite imagery. Such imagery has emerged as a rich commercially available remote sensing data source and has enjoyed high-profile broadcast news media and Internet applications, but methods of quantitative analysis have not been thoroughly explored. The research described here consists of three studies focused on the use of pan-sharpened 61-cm spatial resolution QuickBird imagery, the spatial resolution of which is the highest of any commercial satellite. In the first study, a per-pixel land cover classification method is developed for use with this imagery. This method utilizes a per-pixel classification approach to generate an accurate six-category high spatial resolution land cover map of a developing suburban area. The primary objective of the second study was to develop an accurate land cover change detection method for use with QuickBird land cover products. This work presents an efficient fuzzy framework for transforming map uncertainty into accurate and meaningful high spatial resolution land cover change analysis. The third study described here is an urban planning application of the high spatial resolution QuickBird-based land cover product developed in the first study. This work both meaningfully connects this exciting new data source to urban watershed management and makes an important empirical contribution to the study of suburban watersheds. Its analysis of residential roads and driveways as well as retail parking lots sheds valuable light on the impact of transportation-related land use on the suburban landscape. Broadly, these studies provide new methods for using state-of-the-art remote sensing data to inform land cover analysis and urban planning. These methods are widely adaptable and produce land cover products that are both meaningful and accurate. As additional high spatial resolution satellites are launched and the cost of high resolution imagery continues to decline, this research makes an important contribution to this exciting era in the science of remote sensing.
Development of a practical costing method for hospitals.
Cao, Pengyu; Toyabe, Shin-Ichi; Akazawa, Kouhei
2006-03-01
To realize effective cost control, a practical and accurate cost accounting system is indispensable in hospitals. In traditional cost accounting systems, volume-based costing (VBC) is the most popular cost accounting method. In this method, the indirect costs are allocated to each cost object (services or units of a hospital) using a single indicator named a cost driver (e.g., labor hours, revenues or the number of patients). However, this method often yields rough and inaccurate results. The activity-based costing (ABC) method introduced in the mid-1990s can provide more accurate results. With the ABC method, all events or transactions that cause costs are recognized as "activities", and a specific cost driver is prepared for each activity. Finally, the costs of activities are allocated to cost objects by the corresponding cost driver. However, it is much more complex and costly than other traditional cost accounting methods because the data collection for cost drivers is not always easy. In this study, we developed a simplified ABC (S-ABC) costing method to reduce the workload of ABC costing by reducing the number of cost drivers used in the ABC method. Using the S-ABC method, we estimated the cost of laboratory tests and obtained results similar in accuracy to those of the ABC method (the largest difference was 2.64%). Simultaneously, this new method reduces the seven cost drivers used in the ABC method to four. Moreover, we performed an evaluation using other sample data from the physiological laboratory department to verify the effectiveness of this new method. In conclusion, the S-ABC method provides two advantages in comparison to the VBC and ABC methods: (1) it can obtain accurate results, and (2) it is simpler to perform. Once we reduce the number of cost drivers by applying the proposed S-ABC method to the data for the ABC method, we can easily perform the cost accounting using few cost drivers after the second round of costing.
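As a toy illustration of driver-based allocation with a reduced driver set (in the spirit of S-ABC, though not the authors' implementation), consider the following sketch; all activities, tests, and figures are hypothetical.

```python
# Toy ABC-style allocation with a reduced set of cost drivers;
# every activity, test type, and number here is hypothetical.
activity_costs = {"reception": 80_000.0, "analysis": 240_000.0,
                  "reporting": 40_000.0, "maintenance": 60_000.0}

# Driver units consumed by each cost object (laboratory test type).
driver_units = {
    "reception":   {"glucose": 5_000, "lipid_panel": 3_000},
    "analysis":    {"glucose": 2_000, "lipid_panel": 6_000},
    "reporting":   {"glucose": 5_000, "lipid_panel": 3_000},
    "maintenance": {"glucose": 1_000, "lipid_panel": 1_000},
}

test_cost = {}
for activity, cost in activity_costs.items():
    units = driver_units[activity]
    rate = cost / sum(units.values())          # cost per driver unit
    for test, n in units.items():
        test_cost[test] = test_cost.get(test, 0.0) + rate * n

print(test_cost)   # total allocated cost per test type
```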
Conjugate-gradient optimization method for orbital-free density functional calculations.
Jiang, Hong; Yang, Weitao
2004-08-01
Orbital-free density functional theory as an extension of traditional Thomas-Fermi theory has attracted a lot of interest in the past decade because of developments in both more accurate kinetic energy functionals and highly efficient numerical methodology. In this paper, we developed a conjugate-gradient method for the numerical solution of the spin-dependent extended Thomas-Fermi equation by incorporating techniques previously used in Kohn-Sham calculations. The key ingredients of the method are an approximate line-search scheme and a collective treatment of the two spin densities in the case of the spin-dependent extended Thomas-Fermi problem. Test calculations for a quartic two-dimensional quantum dot system and a three-dimensional sodium cluster Na216 with a local pseudopotential demonstrate that the method is accurate and efficient. (c) 2004 American Institute of Physics.
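For readers who want the skeleton of such a solver, here is a generic nonlinear conjugate-gradient minimizer with a simple backtracking line search; it stands in for, and is much simpler than, the approximate line-search scheme of the paper.

```python
import numpy as np

def conjugate_gradient_minimize(f, grad, x0, iters=200, tol=1e-8):
    """Nonlinear CG (Fletcher-Reeves) with backtracking line search --
    a generic stand-in for the paper's approximate line-search scheme."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(iters):
        # Backtracking (Armijo) line search along the descent direction d.
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * g.dot(d):
            t *= 0.5
            if t < 1e-12:
                return x
        x = x + t * d
        g_new = grad(x)
        if np.linalg.norm(g_new) < tol:
            break
        beta = g_new.dot(g_new) / g.dot(g)     # Fletcher-Reeves update
        d = -g_new + beta * d
        g = g_new
    return x

# Quadratic test energy: f(x) = 0.5 x^T Q x - b^T x.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
xmin = conjugate_gradient_minimize(lambda x: 0.5 * x @ Q @ x - b @ x,
                                   lambda x: Q @ x - b, np.zeros(2))
print(xmin, np.linalg.solve(Q, b))   # the two should agree
```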
NASA Astrophysics Data System (ADS)
Gorewoda, Tadeusz; Mzyk, Zofia; Anyszkiewicz, Jacek; Charasińska, Jadwiga
2015-04-01
The purpose of this study was to develop an accurate method for the determination of bromine in polymer materials using X-ray fluorescence spectrometry when the thickness of the sample is less than the bromine critical thickness (tc) value. This is particularly important for analyzing compliance with the Restriction of Hazardous Substances Directive. Mathematically and experimentally estimated tc values in polyethylene and cellulose matrixes were up to several millimeters. Four methods were developed to obtain an accurate result. These methods include the addition of an element with a high mass absorption coefficient, the measurement of the total bromine contained in a defined volume of the sample, the exploitation of tube-Rayleigh line intensities and using the Br-Lβ line.
NASA Technical Reports Server (NTRS)
Mei, Ren-Wei; Shyy, Wei; Yu, Da-Zhi; Luo, Li-Shi; Rudy, David (Technical Monitor)
2001-01-01
The lattice Boltzmann equation (LBE) is a kinetic formulation which offers an alternative computational method capable of solving fluid dynamics for various systems. Major advantages of the method stem from the fact that the solution for the particle distribution functions is explicit, easy to implement, and the algorithm is natural to parallelize. In this final report, we summarize the work accomplished in the past three years. Since most of this work has been published, the technical details can be found in the literature; a brief summary is provided in this report. In this project, a second-order accurate treatment of boundary conditions in the LBE method is developed for a curved boundary and tested successfully in various 2-D and 3-D configurations. To evaluate the aerodynamic force on a body in the context of the LBE method, several force evaluation schemes have been investigated. A simple momentum exchange method is shown to give reliable and accurate values for the force on a body in both 2-D and 3-D cases. Various 3-D LBE models have been assessed in terms of efficiency, accuracy, and robustness. In general, accurate 3-D results can be obtained using LBE methods. The 3-D 19-bit model is found to be the best one among the 15-bit, 19-bit, and 27-bit LBE models. To achieve the desired grid resolution and to accommodate the far-field boundary conditions in aerodynamics computations, a multi-block LBE method is developed by dividing the flow field into various blocks, each having constant lattice spacing. Substantial contributions to the LBE method are also made through the development of a new, generalized lattice Boltzmann equation constructed in moment space to improve computational stability; detailed theoretical analysis of the stability, dispersion, and dissipation characteristics of the LBE method; and computational studies of high Reynolds number flows with singular gradients. Finally, a finite difference-based lattice Boltzmann method is developed for inviscid compressible flows.
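The momentum exchange idea can be stated compactly: each lattice link cut by the wall contributes the momentum carried toward the wall plus the momentum bounced back. A minimal D2Q9 sketch under simple bounce-back on a stationary wall (an illustration, not the report's code):

```python
import numpy as np

# D2Q9 lattice velocities; directions i and opp[i] are opposite pairs.
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]

def momentum_exchange_force(f_post, fluid_nodes, is_solid):
    """Sum momentum exchanged across all fluid->solid links.
    f_post[i, x, y] holds post-collision distributions; with simple
    bounce-back the returning population equals the outgoing one, so
    each cut link contributes 2 * f_i * c_i (lattice units)."""
    F = np.zeros(2)
    for (x, y) in fluid_nodes:
        for i in range(1, 9):
            xn, yn = x + c[i, 0], y + c[i, 1]
            if is_solid[xn, yn]:
                F += 2.0 * f_post[i, x, y] * c[i]
    return F

# Tiny demo: one fluid node with a solid wall occupying the row above it.
is_solid = np.zeros((3, 3), bool); is_solid[2, :] = True
f_post = np.full((9, 3, 3), 0.1)
print(momentum_exchange_force(f_post, [(1, 1)], is_solid))   # [0.6, 0.0]
```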
A Legendre tau-spectral method for solving time-fractional heat equation with nonlocal conditions.
Bhrawy, A H; Alghamdi, M A
2014-01-01
We develop the tau-spectral method to solve the time-fractional heat equation (T-FHE) with a nonlocal condition. In order to achieve a highly accurate solution of this problem, the operational matrix of fractional integration (described in the Riemann-Liouville sense) for shifted Legendre polynomials is investigated in conjunction with the tau-spectral scheme, and the Legendre operational polynomials are used as the basis functions. The main advantage of the presented scheme is that it converts the T-FHE with a nonlocal condition to a system of algebraic equations, which simplifies the problem. For demonstrating the validity and applicability of the developed spectral scheme, two numerical examples are presented. Logarithmic graphs of the maximum absolute errors are presented to demonstrate the exponential convergence of the proposed method. Comparison between our spectral method and other methods shows that our method is more accurate than those used to solve similar problems.
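For reference, the Riemann-Liouville fractional integral underlying the operational matrix is the standard definition; with Φ(t) the vector of shifted Legendre polynomials, the operational matrix P^(ν) acts as shown:

```latex
(I^{\nu}f)(t) \;=\; \frac{1}{\Gamma(\nu)}\int_{0}^{t}(t-s)^{\nu-1}f(s)\,ds,
\qquad \nu > 0,
\qquad\text{with}\qquad
I^{\nu}\,\Phi(t) \;\approx\; \mathbf{P}^{(\nu)}\,\Phi(t).
```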
Proximate Composition Analysis.
2016-01-01
The proximate composition of foods includes moisture, ash, lipid, protein and carbohydrate contents. These food components may be of interest in the food industry for product development, quality control (QC) or regulatory purposes. Analyses used may be rapid methods for QC or more accurate but time-consuming official methods. Sample collection and preparation must be considered carefully to ensure analysis of a homogeneous and representative sample, and to obtain accurate results. Estimation methods of moisture content, ash value, crude lipid, total carbohydrates, starch, total free amino acids and total proteins are put together in a lucid manner.
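A worked example of the arithmetic, using the common carbohydrate-by-difference convention; all bench values are hypothetical:

```python
# Proximate composition from (hypothetical) bench measurements.
wet_wt, dry_wt = 5.000, 1.250           # g, before/after oven drying
ash_wt = 0.060                           # g, after muffle-furnace ashing
crude_protein_pct = 18.0                 # e.g., Kjeldahl N x 6.25
crude_lipid_pct = 4.5                    # e.g., Soxhlet extraction

moisture_pct = 100.0 * (wet_wt - dry_wt) / wet_wt
ash_pct = 100.0 * ash_wt / wet_wt
# Total carbohydrate estimated by difference.
carb_pct = 100.0 - (moisture_pct + ash_pct + crude_protein_pct + crude_lipid_pct)
print(f"moisture {moisture_pct:.1f}%, ash {ash_pct:.1f}%, carbohydrate {carb_pct:.1f}%")
```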
Computation of turbulent boundary layers on curved surfaces, 1 June 1975 - 31 January 1976
NASA Technical Reports Server (NTRS)
Wilcox, D. C.; Chambers, T. L.
1976-01-01
An accurate method was developed for predicting effects of streamline curvature and coordinate system rotation on turbulent boundary layers. A new two-equation model of turbulence was developed which serves as the basis of the study. In developing the new model, physical reasoning is combined with singular perturbation methods to develop a rational, physically-based set of equations which are, on the one hand, as accurate as mixing-length theory for equilibrium boundary layers and, on the other hand, suitable for computing effects of curvature and rotation. The equations are solved numerically for several boundary layer flows over plane and curved surfaces. For incompressible boundary layers, results of the computations are generally within 10% of corresponding experimental data. Somewhat larger discrepancies are noted for compressible applications.
Multi-Optimisation Consensus Clustering
NASA Astrophysics Data System (ADS)
Li, Jian; Swift, Stephen; Liu, Xiaohui
Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
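A generic co-association (agreement-matrix) consensus clustering sketch conveys the flavor of such methods; it is not the MOCC algorithm itself, and the agreement-separation optimisation is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)
n, runs = len(X), 20

# Co-association matrix: fraction of ensemble runs agreeing on each pair.
agree = np.zeros((n, n))
for seed in range(runs):
    labels = KMeans(n_clusters=3, n_init=1, random_state=seed).fit_predict(X)
    agree += (labels[:, None] == labels[None, :])
agree /= runs

# Cluster the consensus: treat (1 - agreement) as a distance.
dist = squareform(1.0 - agree, checks=False)
final = fcluster(linkage(dist, method="average"), t=3, criterion="maxclust")
print(final[:10])
```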
Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions
Chen, Shengyong; Xiao, Gang; Li, Xiaoli
2014-01-01
This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on human head and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple views' calibration process is implemented to obtain the transformations of multiple views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on them, accurate and robust calibration results can be achieved. We evaluate the proposed method by corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method can achieve good performance, which can be further applied to EEG source localization applications on human brain. PMID:24803954
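One standard building block for chaining multiple views into a common frame is the least-squares rigid alignment of corresponding points (Kabsch/SVD); a minimal sketch, not the paper's full calibration pipeline:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping point set P
    onto Q (Kabsch/SVD), the usual core of multi-view calibration."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, Qc - R @ Pc

# Sanity check with a known rotation and translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(10, 3))
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
Q = P @ Rz.T + np.array([0.1, 0.2, 0.3])
R, t = rigid_transform(P, Q)
print(np.allclose(R, Rz), np.allclose(t, [0.1, 0.2, 0.3]))
```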
Step-height measurement with a low coherence interferometer using continuous wavelet transform
NASA Astrophysics Data System (ADS)
Jian, Zhang; Suzuki, Takamasa; Choi, Samuel; Sasaki, Osami
2013-12-01
With the development of electronic technology in recent years, electronic components have become increasingly miniaturized, and a more accurate measurement method has become indispensable. For current nanometer-level measurements, the Michelson interferometer with a laser diode is widely used; it can measure an object accurately without touching it. However, it cannot measure step heights larger than half a wavelength. In this study, we improve the conventional Michelson interferometer by using a superluminescent diode and the continuous wavelet transform, which detects the time at which the amplitude of the interference signal is maximized. With this time, we can accurately measure the surface position of the object. The method was used in this experiment to measure a step height of 20 microns.
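A minimal numerical sketch of the idea, locating the envelope peak of a simulated low-coherence interferogram with a single matched complex Morlet wavelet (all parameters illustrative, not the experimental values):

```python
import numpy as np

# Simulated low-coherence interferogram: Gaussian envelope times fringes.
fs, n = 1.0e6, 4000                        # sample rate (Hz), samples
t = np.arange(n) / fs
t0, f0, tau = 1.5e-3, 50e3, 0.2e-3         # envelope centre, fringe freq, width
signal = np.exp(-((t - t0) / tau) ** 2) * np.cos(2 * np.pi * f0 * (t - t0))

# Single-scale complex Morlet wavelet matched to the fringe frequency.
tw = np.arange(-512, 513) / fs
wavelet = np.exp(2j * np.pi * f0 * tw) * np.exp(-(tw / tau) ** 2)
response = np.abs(np.convolve(signal, wavelet, mode="same"))

# The response magnitude peaks where the interference amplitude is maximal.
peak = t[np.argmax(response)]
print(f"envelope peak at {peak*1e3:.3f} ms (true {t0*1e3:.3f} ms)")
```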
Recent Developments in the Assessment and Treatment of Pediatric Obsessive-Compulsive Disorder
ERIC Educational Resources Information Center
Berman, Noah C.; Abramowitz, Jonathan S.
2010-01-01
Although tremendous strides have recently been made in the development of assessment and treatment methods for pediatric obsessive-compulsive disorder (OCD), more accurate methods for diagnosis, more effective treatments, and more refined instruments for monitoring progress during therapy are still needed. The present commentary highlights the…
A Mixed Finite Volume Element Method for Flow Calculations in Porous Media
NASA Technical Reports Server (NTRS)
Jones, Jim E.
1996-01-01
A key ingredient in the simulation of flow in porous media is the accurate determination of the velocities that drive the flow. The large scale irregularities of the geology, such as faults, fractures, and layers suggest the use of irregular grids in the simulation. Work has been done in applying the finite volume element (FVE) methodology as developed by McCormick in conjunction with mixed methods which were developed by Raviart and Thomas. The resulting mixed finite volume element discretization scheme has the potential to generate more accurate solutions than standard approaches. The focus of this paper is on a multilevel algorithm for solving the discrete mixed FVE equations. The algorithm uses a standard cell centered finite difference scheme as the 'coarse' level and the more accurate mixed FVE scheme as the 'fine' level. The algorithm appears to have potential as a fast solver for large size simulations of flow in porous media.
NASA Astrophysics Data System (ADS)
Afan, Haitham Abdulmohsin; El-shafie, Ahmed; Mohtar, Wan Hanna Melini Wan; Yaseen, Zaher Mundher
2016-10-01
An accurate model for sediment prediction is a priority for all hydrological researchers. Many conventional methods have shown an inability to achieve an accurate prediction of suspended sediment. These methods are unable to capture the behaviour of sediment transport in rivers due to the complexity, noise, non-stationarity, and dynamism of the sediment pattern. In the past two decades, Artificial Intelligence (AI) and computational approaches have become a remarkable tool for developing accurate models. These approaches are considered a powerful tool for solving any non-linear model, as they can deal easily with a large number of data and sophisticated models. This paper is a review of all AI approaches that have been applied in sediment modelling. The current research focuses on the development of AI applications in sediment transport. In addition, the review identifies major challenges and opportunities for prospective research. Throughout the literature, complementary models have proved superior to classical modelling approaches.
NASA Astrophysics Data System (ADS)
Dizaji, Farzad; Marshall, Jeffrey; Grant, John; Jin, Xing
2017-11-01
Accounting for the effect of subgrid-scale turbulence on interacting particles remains a challenge when using Reynolds-Averaged Navier Stokes (RANS) or Large Eddy Simulation (LES) approaches for simulation of turbulent particulate flows. The standard stochastic Lagrangian method for introducing turbulence into particulate flow computations is not effective when the particles interact via collisions, contact electrification, etc., since this method is not intended to accurately model relative motion between particles. We have recently developed the stochastic vortex structure (SVS) method and demonstrated its use for accurate simulation of particle collision in homogeneous turbulence; the current work presents an extension of the SVS method to turbulent shear flows. The SVS method simulates subgrid-scale turbulence using a set of randomly-positioned, finite-length vortices to generate a synthetic fluctuating velocity field. It has been shown to accurately reproduce the turbulence inertial-range spectrum and the probability density functions for the velocity and acceleration fields. In order to extend SVS to turbulent shear flows, a new inversion method has been developed to orient the vortices in order to generate a specified Reynolds stress field. The extended SVS method is validated in the present study with comparison to direct numerical simulations for a planar turbulent jet flow. This research was supported by the U.S. National Science Foundation under Grant CBET-1332472.
Airframe Icing Research Gaps: NASA Perspective
NASA Technical Reports Server (NTRS)
Potapczuk, Mark
2009-01-01
Current Airframe Icing Technology Gaps: Development of a full 3D ice accretion simulation model. Development of an improved simulation model for SLD conditions. CFD modeling of stall behavior for ice-contaminated wings/tails. Computational methods for simulation of stability and control parameters. Analysis of thermal ice protection system performance. Quantification of 3D ice shape geometric characteristics. Development of accurate ground-based simulation of SLD conditions. Development of scaling methods for SLD conditions. Development of advanced diagnostic techniques for assessment of tunnel cloud conditions. Identification of critical ice shapes for aerodynamic performance degradation. Aerodynamic scaling issues associated with testing scale model ice shape geometries. Development of altitude scaling methods for thermal ice protection systems. Development of accurate parameter identification methods. Measurement of stability and control parameters for an ice-contaminated swept wing aircraft. Creation of control law modifications to prevent loss of control during icing encounters. Data needs include: 3D ice shape geometries; collection efficiency data for ice shape geometries; SLD ice shape data, in-flight and ground-based, for simulation verification; aerodynamic performance data for 3D geometries and various icing conditions; stability and control parameter data for iced aircraft configurations; and thermal ice protection system data for simulation validation.
Bai, Yuqiang; Nichols, Jason J
2017-05-01
The thickness of tear film has been investigated under both invasive and non-invasive methods. While invasive methods are largely historical, more recent noninvasive methods are generally based on optical approaches that provide accurate, precise, and rapid measures. Optical microscopy, interferometry, and optical coherence tomography (OCT) have been developed to characterize the thickness of tear film or certain aspects of the tear film (e.g., the lipid layer). This review provides an in-depth overview on contemporary optical techniques used in studying the tear film, including both advantages and limitations of these approaches. It is anticipated that further developments of high-resolution OCT and other interferometric methods will enable a more accurate and precise measurement of the thickness of the tear film and its related dynamic properties. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Decker, A. J.; Stricker, J.
1985-01-01
Electronic heterodyne moire deflectometry and electronic heterodyne holographic interferometry are compared as methods for the accurate measurement of refractive index and density change distributions of phase objects. Experimental results are presented to show that the two methods have comparable accuracy for measuring the first derivative of the interferometric fringe shift. The phase object for the measurements is a large crystal of KD*P, whose refractive index distribution can be changed accurately and repeatably for the comparison. Although the refractive index change causes only about one interferometric fringe shift over the entire crystal, the derivative shows considerable detail for the comparison. As electronic phase measurement methods, both methods are very accurate and are intrinsically compatible with computer controlled readout and data processing. Heterodyne moire is relatively inexpensive and has high variable sensitivity. Heterodyne holographic interferometry is better developed, and can be used with poor quality optical access to the experiment.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming.
Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit for the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allows for the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
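The LP step can be sketched directly with scipy: find convex (barycentric) weights over library states that reconstruct a query state with minimal L1 error, using slack variables for the explicit approximation errors. A minimal sketch under those assumptions:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
d, m = 5, 40                        # state dimension, library size
D = rng.normal(size=(d, m))         # columns: observed delay vectors
x = D @ np.full(m, 1.0 / m)         # a query inside their convex hull

# Variables z = [w (m), e (d)]; minimise sum(e)
# s.t.  D w - x <= e,  -(D w - x) <= e,  sum(w) = 1,  w >= 0,  e >= 0.
c = np.concatenate([np.zeros(m), np.ones(d)])
A_ub = np.block([[D, -np.eye(d)], [-D, -np.eye(d)]])
b_ub = np.concatenate([x, -x])
A_eq = np.concatenate([np.ones(m), np.zeros(d)])[None, :]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (m + d))
w = res.x[:m]
print(res.status, np.linalg.norm(D @ w - x))   # 0, ~0

# Free-running prediction would apply the same weights w to the
# one-step-ahead successors of the library states.
```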
Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer
2017-04-01
Recently, Artificial Intelligence (AI) has been used widely in the medicine and health care sector. In machine learning, classification or prediction is a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions for the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most widely used machine learning methods are explained, and the confusion between a statistical approach and machine learning is clarified. A review of related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.
Roumeliotis, Grayson; Willing, Ryan; Neuert, Mark; Ahluwalia, Romy; Jenkyn, Thomas; Yazdani, Arjang
2015-09-01
The accurate assessment of symmetry in the craniofacial skeleton is important for cosmetic and reconstructive craniofacial surgery. Although there have been several published attempts to develop an accurate system for determining the correct plane of symmetry, all are inaccurate and time consuming. Here, the authors applied a novel semi-automatic method for the calculation of craniofacial symmetry, based on principal component analysis and iterative corrective point computation, to a large sample of normal adult male facial computerized tomography scans obtained clinically (n = 32). The authors hypothesized that this method would generate planes of symmetry that would result in less error, when one side of the face was compared to the other, than a plane defined by cephalometric landmarks. When a three-dimensional model of one side of the face was reflected across the semi-automatic plane of symmetry, there was less error than when it was reflected across the cephalometric plane. The semi-automatic plane was also more accurate when the locations of bilateral cephalometric landmarks (eg, frontozygomatic sutures) were compared across the face. The authors conclude that this method allows for accurate and fast measurements of craniofacial symmetry. This has important implications for studying the development of the facial skeleton, and clinical applications in reconstruction.
A Multimodal Deep Log-Based User Experience (UX) Platform for UX Evaluation.
Hussain, Jamil; Khan, Wajahat Ali; Hur, Taeho; Bilal, Hafiz Syed Muhammad; Bang, Jaehun; Hassan, Anees Ul; Afzal, Muhammad; Lee, Sungyoung
2018-05-18
The user experience (UX) is an emerging field in user research and design, and the development of UX evaluation methods presents a challenge for both researchers and practitioners. Different UX evaluation methods have been developed to extract accurate UX data. Among UX evaluation methods, the mixed-method approach of triangulation has gained importance. It provides more accurate and precise information about the user while interacting with the product. However, this approach requires skilled UX researchers and developers to integrate multiple devices, synchronize them, analyze the data, and ultimately produce an informed decision. In this paper, a method and system for measuring the overall UX over time using a triangulation method are proposed. The proposed platform incorporates observational and physiological measurements in addition to traditional ones. The platform reduces the subjective bias and validates the user's perceptions, which are measured by different sensors through objectification of the subjective nature of the user in the UX assessment. The platform additionally offers plug-and-play support for different devices and powerful analytics for obtaining insight on the UX in terms of multiple participants.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardisty, M.; Gordon, L.; Agarwal, P.
2007-08-15
Quantitative assessment of metastatic disease in bone is often considered immeasurable and, as such, patients with skeletal metastases are often excluded from clinical trials. In order to effectively quantify the impact of metastatic tumor involvement in the spine, accurate segmentation of the vertebra is required. Manual segmentation can be accurate but involves extensive and time-consuming user interaction. Potential solutions to automating segmentation of metastatically involved vertebrae are demons deformable image registration and level set methods. The purpose of this study was to develop a semiautomated method to accurately segment tumor-bearing vertebrae using the aforementioned techniques. By maintaining the morphology of an atlas, the demons-level set composite algorithm was able to accurately differentiate between trans-cortical tumors and surrounding soft tissue of identical intensity. The algorithm successfully segmented both the vertebral body and trabecular centrum of tumor-involved and healthy vertebrae. This work validates our approach as equivalent in accuracy to an experienced user.
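The demons step of such a pipeline is available off the shelf; a minimal SimpleITK sketch, assuming pre-registered, applicator-free volumes (file names are placeholders, and the atlas/level-set machinery of the paper is not shown):

```python
import SimpleITK as sitk

# Placeholders: two CT volumes to be registered by the demons algorithm.
fixed = sitk.ReadImage("fixed_ct.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("moving_ct.nii.gz", sitk.sitkFloat32)

demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(100)
demons.SetStandardDeviations(1.5)        # Gaussian smoothing of the field
displacement = demons.Execute(fixed, moving)

# Warp the moving image (or any mask) with the resulting field.
transform = sitk.DisplacementFieldTransform(
    sitk.Cast(displacement, sitk.sitkVectorFloat64))
warped = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(warped, "moving_warped.nii.gz")
```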
Traditional and Modern Cell Culture in Virus Diagnosis.
Hematian, Ali; Sadeghifard, Nourkhoda; Mohebi, Reza; Taherikalani, Morovat; Nasrolahi, Abbas; Amraei, Mansour; Ghafourian, Sobhan
2016-04-01
Cell cultures are developed from tissue samples and then disaggregated by mechanical, chemical, and enzymatic methods to extract cells suitable for isolation of viruses. With the recent advances in technology, cell culture is considered a gold standard for virus isolation. This paper reviews the evolution of cell culture methods and demonstrates why cell culture is a preferred method for identification of viruses. In addition, the advantages and disadvantages of both traditional and modern cell culture methods for diagnosis of each type of virus are discussed. Detection of viruses by the novel cell culture methods is considered more accurate and sensitive. However, there is a need to include some more accurate methods such as molecular methods in cell culture for precise identification of viruses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhen, X; Chen, H; Zhou, L
2014-06-15
Purpose: To propose and validate a novel and accurate deformable image registration (DIR) scheme to facilitate dose accumulation among treatment fractions of high-dose-rate (HDR) gynecological brachytherapy. Method: We have developed a method to adapt DIR algorithms to gynecologic anatomies with HDR applicators by incorporating a segmentation step and a point-matching step into an existing DIR framework. In the segmentation step, the random walks algorithm is used to accurately segment and remove the applicator region (AR) in the HDR CT image. A semi-automatic seed point generation approach is developed to obtain the incremented foreground and background point sets to feed the random walks algorithm. In the subsequent point-matching step, a feature-based thin-plate spline-robust point matching (TPS-RPM) algorithm is employed for AR surface point matching. With the resulting mapping, a DVF characteristic of the deformation between the two AR surfaces is generated by B-spline approximation, which serves as the initial DVF for the following Demons DIR between the two AR-free HDR CT images. Finally, the calculated DVF via Demons combined with the initial one serves as the final DVF to map doses between HDR fractions. Results: The segmentation and registration accuracy are quantitatively assessed on nine clinical HDR cases from three gynecological cancer patients. The quantitative results as well as visual inspection of the DIR indicate that our proposed method can suppress the interference of the applicator with the DIR algorithm and accurately register HDR CT images as well as deform and add interfractional HDR doses. Conclusions: We have developed a novel and robust DIR scheme that can perform registration between HDR gynecological CT images and yield accurate registration results. This new DIR scheme has potential for accurate interfractional HDR dose accumulation. This work is supported in part by the National Natural Science Foundation of China (no. 30970866 and no. 81301940).
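The random-walker segmentation step is likewise available off the shelf; a minimal synthetic example with skimage (seed placement here is illustrative, not the paper's semi-automatic seed generation):

```python
import numpy as np
from skimage.segmentation import random_walker

# Synthetic image: a bright square (stand-in for the applicator) plus noise.
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
img += 0.3 * np.random.default_rng(0).normal(size=img.shape)

labels = np.zeros(img.shape, dtype=int)    # 0 = unlabeled
labels[30, 30] = 1                          # foreground seed
labels[5, 5] = 2                            # background seed
seg = random_walker(img, labels, beta=130)
print((seg == 1).sum())                     # pixels assigned to the region
```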
Orbital dependent functionals: An atom projector augmented wave method implementation
NASA Astrophysics Data System (ADS)
Xu, Xiao
This thesis explores the formulation and numerical implementation of orbital-dependent exchange-correlation functionals within electronic structure calculations. These orbital-dependent exchange-correlation functionals have recently received renewed attention as a means to improve the physical representation of electron interactions within electronic structure calculations. In particular, electron self-interaction terms can be avoided. In this thesis, an orbital-dependent functional is considered in the context of Hartree-Fock (HF) theory as well as the Optimized Effective Potential (OEP) method and the approximate OEP method developed by Krieger, Li, and Iafrate, known as the KLI approximation. In this thesis, the Fock exchange term is used as a simple well-defined example of an orbital-dependent functional. The Projector Augmented Wave (PAW) method developed by P. E. Blochl has proven to be accurate and efficient for electronic structure calculations with local and semi-local functionals because of its accurate evaluation of interaction integrals by controlling multipole moments. We have extended the PAW method to treat orbital-dependent functionals in Hartree-Fock theory and the Optimized Effective Potential method, particularly in the KLI approximation. In the course of this study we develop a frozen-core orbital approximation that accurately treats the core electron contributions for the above three methods. The main part of the thesis focuses on the treatment of spherical atoms. We have investigated the behavior of PAW-Hartree-Fock and PAW-KLI basis, projector, and pseudopotential functions for several elements throughout the periodic table. We have also extended the formalism to the treatment of solids in a plane wave basis and implemented the PWPAW-KLI code, which will appear in future publications.
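For reference, the orbital-dependent functional used throughout, the Fock exchange term, reads (Hartree atomic units, spin labels suppressed):

```latex
E_x^{\mathrm{HF}} \;=\; -\frac{1}{2}\sum_{i,j}^{\mathrm{occ}}
\iint \frac{\psi_i^{*}(\mathbf{r})\,\psi_j^{*}(\mathbf{r}')\,
\psi_j(\mathbf{r})\,\psi_i(\mathbf{r}')}{\lvert \mathbf{r}-\mathbf{r}' \rvert}
\, d\mathbf{r}\, d\mathbf{r}'
```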
Kuroda, Yukihiro; Saito, Madoka
2010-03-01
An in vitro method to predict the phospholipidosis-inducing potential of cationic amphiphilic drugs (CADs) was developed using biochemical and physicochemical assays. The following parameters were applied to principal component analysis, along with the physicochemical parameters pK(a) and clogP: dissociation constant of CADs from phospholipid, inhibition of enzymatic phospholipid degradation, and metabolic stability of CADs. In the score plot, phospholipidosis-inducing drugs (amiodarone, propranolol, imipramine, chloroquine) were plotted locally, forming the subspace for positive CADs, while non-inducing drugs (chlorpromazine, chloramphenicol, disopyramide, lidocaine) fell scattered outside the subspace, allowing a clear discrimination between the two classes of CADs. CADs that often produce false results by conventional physicochemical or cell-based assay methods were accurately classified by our method. Basic and lipophilic disopyramide could be accurately predicted as a nonphospholipidogenic drug. Moreover, chlorpromazine, which is often falsely predicted as a phospholipidosis-inducing drug by in vitro methods, could be accurately classified. Because this method uses the pharmacokinetic parameters pK(a), clogP, and metabolic stability, which are usually obtained in the early stages of drug development, it newly requires only two parameters: binding to phospholipid and inhibition of the lipid degradation enzyme. Therefore, this method provides a cost-effective approach to predicting the phospholipidosis-inducing potential of a drug. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
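The score-plot discrimination amounts to projecting each drug's parameter vector onto the first principal components; a sketch with hypothetical values, not the measured assay data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows: drugs; columns: the five parameters named above (hypothetical values).
params = np.array([
    [8.7, 7.6, 0.9, 0.8, 0.2],    # inducer-like profile
    [9.5, 3.1, 0.8, 0.7, 0.4],    # inducer-like profile
    [8.0, 1.5, 0.2, 0.1, 0.9],    # non-inducer-like profile
    [10.2, 2.6, 0.3, 0.2, 0.8],   # non-inducer-like profile
])
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(params))
print(scores)   # inducers and non-inducers separate along the components
```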
Shape design sensitivity analysis using domain information
NASA Technical Reports Server (NTRS)
Seong, Hwal-Gyeong; Choi, Kyung K.
1985-01-01
A numerical method for obtaining accurate shape design sensitivity information for built-up structures is developed and demonstrated through analysis of examples. The basic character of the finite element method, which gives more accurate domain information than boundary information, is utilized for shape design sensitivity improvement. A domain approach for shape design sensitivity analysis of built-up structures is derived using the material derivative idea of structural mechanics and the adjoint variable method of design sensitivity analysis. Velocity elements and B-spline curves are introduced to alleviate difficulties in generating domain velocity fields. The regularity requirements of the design velocity field are studied.
Two-dimensional radiative transfer. I - Planar geometry. [in stellar atmospheres
NASA Technical Reports Server (NTRS)
Mihalas, D.; Auer, L. H.; Mihalas, B. R.
1978-01-01
Differential-equation methods for solving the transfer equation in two-dimensional planar geometries are developed. One method, which uses a Hermitian integration formula on ray segments through grid points, proves to be extremely well suited to velocity-dependent problems. An efficient elimination scheme is developed for which the computing time scales linearly with the number of angles and frequencies; problems with large velocity amplitudes can thus be treated accurately. A very accurate and efficient method for performing a formal solution is also presented. A discussion is given of several examples of periodic media and free-standing slabs, both in static cases and with velocity fields. For the free-standing slabs, two-dimensional transport effects are significant near boundaries, but no important effects were found in any of the periodic cases studied.
Development of accurate potentials to explore the structure of water on 2D materials
NASA Astrophysics Data System (ADS)
Bejagam, Karteek; Singh, Samrendra; Deshmukh, Sanket; Deshmukh Group Team; Samrendra Group Collaboration
Water plays an important role in many biological and non-biological processes. Thus, the structure of water at various interfaces and under confinement has always been a topic of immense interest. 2-D materials have shown great potential in surface coating applications and nanofluidic devices. However, an exact atomic-level understanding of the wettability of single layers of these 2-D materials is still lacking, mainly due to the lack of experimental techniques and computational methodologies, including accurate force-field potentials and algorithms to measure the contact angle of water. In the present study, we have developed a new algorithm to measure the accurate contact angle between water and 2-D materials. The algorithm is based on fitting the best sphere to the shape of the droplet. This novel spherical fitting method accounts for every individual molecule of the droplet, rather than those at the surface only. We employ this method of contact angle measurement to develop accurate non-bonded potentials between water and 2-D materials, including graphene and boron nitride (BN), to reproduce the experimentally observed contact angle of water on these 2-D materials. Different water models such as SPC, SPC/Fw, and TIP3P were used to study the structure of water at the interfaces.
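A linear least-squares sphere fit plus spherical-cap geometry gives the contact angle directly; a sketch fitting synthetic surface points (the paper's method incorporates every molecule of the droplet, which is not reproduced here):

```python
import numpy as np

def contact_angle_from_sphere_fit(pts):
    """Fit a sphere to droplet points (least squares) and return the
    contact angle with the substrate plane z = 0.
    Linear system: |p|^2 = 2 p.c + (R^2 - |c|^2)."""
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    (cx, cy, cz, k), *_ = np.linalg.lstsq(A, b, rcond=None)
    R = np.sqrt(k + cx**2 + cy**2 + cz**2)
    return np.degrees(np.arccos(-cz / R))   # cap geometry: cos(theta) = -z_c/R

# Synthetic droplet surface: spherical cap, 95-degree contact angle.
rng = np.random.default_rng(0)
R_true, theta = 20.0, np.radians(95.0)
zc = -R_true * np.cos(theta)                # sphere-centre height
phi = rng.uniform(0, 2 * np.pi, 4000)
cost = rng.uniform(-1, 1, 4000)
p = R_true * np.column_stack([np.sqrt(1 - cost**2) * np.cos(phi),
                              np.sqrt(1 - cost**2) * np.sin(phi), cost])
p[:, 2] += zc
pts = p[p[:, 2] > 0]                        # keep the cap above the surface
print(contact_angle_from_sphere_fit(pts))   # ~95 degrees
```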
NASA Technical Reports Server (NTRS)
Kory, Carol L.
1999-01-01
The phenomenal growth of commercial communications has created a great demand for traveling-wave tube (TWT) amplifiers. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. For the first time, an accurate three-dimensional helical model was developed that allows accurate prediction of TWT cold-test characteristics including operating frequency, interaction impedance, and attenuation. This computational model, which was developed at the NASA Lewis Research Center, allows TWT designers to obtain a more accurate value of interaction impedance than is possible using experimental methods. Obtaining helical slow-wave circuit interaction impedance is an important part of the design process for a TWT because it is related to the gain and efficiency of the tube. This impedance cannot be measured directly; thus, conventional methods involve perturbing a helical circuit with a cylindrical dielectric rod placed on the central axis of the circuit and obtaining the difference in resonant frequency between the perturbed and unperturbed circuits. A mathematical relationship has been derived between this frequency difference and the interaction impedance (ref. 1). However, because of the complex configuration of the helical circuit, deriving this relationship involves several approximations. In addition, this experimental procedure is time-consuming and expensive, but until recently it was widely accepted as the most accurate means of determining interaction impedance. The advent of an accurate three-dimensional helical circuit model (ref. 2) made it possible for Lewis researchers to fully investigate standard approximations made in deriving the relationship between measured perturbation data and interaction impedance. The most prominent approximations made in the analysis were addressed and fully investigated for their accuracy by using the three-dimensional electromagnetic simulation code MAFIA (Solution of Maxwell's Equations by the Finite Integration Algorithm) (refs. 3 and 4). We found that several approximations introduced significant error (ref. 5).
Link, Daphna; Braginsky, Michael B; Joskowicz, Leo; Ben Sira, Liat; Harel, Shaul; Many, Ariel; Tarrasch, Ricardo; Malinger, Gustavo; Artzi, Moran; Kapoor, Cassandra; Miller, Elka; Ben Bashat, Dafna
2018-01-01
Accurate fetal brain volume estimation is of paramount importance in evaluating fetal development. The aim of this study was to develop an automatic method for fetal brain segmentation from magnetic resonance imaging (MRI) data, and to create for the first time a normal volumetric growth chart based on a large cohort. A semi-automatic segmentation method based on Seeded Region Growing algorithm was developed and applied to MRI data of 199 typically developed fetuses between 18 and 37 weeks' gestation. The accuracy of the algorithm was tested against a sub-cohort of ground truth manual segmentations. A quadratic regression analysis was used to create normal growth charts. The sensitivity of the method to identify developmental disorders was demonstrated on 9 fetuses with intrauterine growth restriction (IUGR). The developed method showed high correlation with manual segmentation (r2 = 0.9183, p < 0.001) as well as mean volume and volume overlap differences of 4.77 and 18.13%, respectively. New reference data on 199 normal fetuses were created, and all 9 IUGR fetuses were at or below the third percentile of the normal growth chart. The proposed method is fast, accurate, reproducible, user independent, applicable with retrospective data, and is suggested for use in routine clinical practice. © 2017 S. Karger AG, Basel.
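A minimal seeded-region-growing kernel makes the core idea concrete (a toy version, not the authors' implementation or parameters):

```python
import numpy as np
from collections import deque

def seeded_region_growing(vol, seed, tol):
    """Flood-fill style region growing: accept 6-connected neighbours
    whose intensity stays within `tol` of the running region mean."""
    grown = np.zeros(vol.shape, bool)
    grown[seed] = True
    total, count = float(vol[seed]), 1
    q = deque([seed])
    nbrs = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while q:
        x, y, z = q.popleft()
        for dx, dy, dz in nbrs:
            p = (x+dx, y+dy, z+dz)
            if all(0 <= pi < si for pi, si in zip(p, vol.shape)) and not grown[p]:
                if abs(vol[p] - total / count) <= tol:
                    grown[p] = True
                    total += float(vol[p]); count += 1
                    q.append(p)
    return grown

# Synthetic test: a bright cube in a dark volume.
vol = np.zeros((40, 40, 40)); vol[10:30, 10:30, 10:30] = 100.0
mask = seeded_region_growing(vol, (20, 20, 20), tol=50.0)
print(mask.sum())   # 20**3 = 8000 voxels
```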
Gallo-Oller, Gabriel; Ordoñez, Raquel; Dotor, Javier
2018-06-01
Since its first description, Western blot has been widely used in molecular labs. It constitutes a multistep method that allows the detection and/or quantification of proteins from simple to complex protein mixtures. Quantification is a critical step in Western blot analysis for obtaining accurate and reproducible results. Given the technical knowledge required for densitometry analysis, together with resource availability, standard office scanners are often used for image acquisition of developed Western blot films. Furthermore, the use of semi-quantitative software such as ImageJ (Java-based image-processing and analysis software) is clearly increasing in different scientific fields. In this work, we describe the use of an office scanner coupled with the ImageJ software, together with a new image background subtraction method, for accurate Western blot quantification. The proposed method represents an affordable, accurate and reproducible approach that could be used where resource availability is limited. Copyright © 2018 Elsevier B.V. All rights reserved.
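One simple background-subtraction scheme for band densitometry can be sketched as follows; it illustrates the general idea rather than the specific method proposed in the paper:

```python
import numpy as np

def band_volume(scan, band_box, pad=10):
    """Integrated band intensity minus a local background estimate.
    `scan` is an inverted grayscale image (bands bright); `band_box`
    is (row0, row1, col0, col1); background is the median of a frame
    `pad` pixels around the box (one simple scheme among many)."""
    r0, r1, c0, c1 = band_box
    band = scan[r0:r1, c0:c1]
    frame = scan[r0-pad:r1+pad, c0-pad:c1+pad].copy()
    frame[pad:pad+(r1-r0), pad:pad+(c1-c0)] = np.nan   # exclude the band
    return float((band - np.nanmedian(frame)).sum())

# Relative expression: normalize a target band to its loading control.
rng = np.random.default_rng(0)
scan = rng.normal(20, 2, (200, 300))
scan[50:70, 40:90] += 100       # target band
scan[150:170, 40:90] += 80      # loading-control band
ratio = band_volume(scan, (50, 70, 40, 90)) / band_volume(scan, (150, 170, 40, 90))
print(round(ratio, 2))          # ~1.25
```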
Dykes, Patricia C; Wantland, Dean; Whittenburg, Luann; Lipsitz, Stuart; Saba, Virginia K
2013-01-01
While nursing activities represent a significant proportion of inpatient care, there are no reliable methods for determining nursing costs based on the actual services provided by the nursing staff. Capture of data to support accurate measurement and reporting on the cost of nursing services is fundamental to effective resource utilization. Adopting standard terminologies that support tracking both the quality and the cost of care could reduce the data entry burden on direct care providers. This pilot study evaluated the feasibility of using a standardized nursing terminology, the Clinical Care Classification System (CCC), for developing a reliable costing method for nursing services. Two different approaches are explored: the Relative Value Unit (RVU) method and the simple cost-to-time method. We found that the simple cost-to-time method was more accurate and more transparent in its derivation than the RVU method and may support a more consistent and reliable approach for costing nursing services.
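The cost-to-time arithmetic is simple enough to sketch directly; the intervention codes and all dollar figures below are hypothetical.

```python
# Cost-to-time costing sketch: nursing cost as documented minutes per
# coded intervention times a unit cost per minute. The CCC-style codes
# and all figures are hypothetical placeholders.
cost_per_minute = 1.10   # fully loaded nursing cost, $/min (assumed)

documented_care = [      # (intervention code, documented minutes)
    ("A01.1 assessment", 15),
    ("N58.2 wound care", 25),
    ("G14.0 medication administration", 10),
]

total = sum(minutes for _, minutes in documented_care) * cost_per_minute
for code, minutes in documented_care:
    print(f"{code}: ${minutes * cost_per_minute:.2f}")
print(f"episode nursing cost: ${total:.2f}")
```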
LDRD final report : leveraging multi-way linkages on heterogeneous data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunlavy, Daniel M.; Kolda, Tamara Gibson
2010-09-01
This report is a summary of the accomplishments of the 'Leveraging Multi-way Linkages on Heterogeneous Data' project, which ran from FY08 through FY10. The goal was to investigate scalable and robust methods for multi-way data analysis. We developed a new optimization-based method called CPOPT for fitting a particular type of tensor factorization to data; CPOPT was compared against existing methods and found to be more accurate than any faster method and faster than any equally accurate method. We extended this method to computing tensor factorizations for problems with incomplete data; our results show that scientifically meaningful factorizations can be recovered even with large amounts of missing data (50% or more). The project has involved 5 members of the technical staff, 2 postdocs, and 1 summer intern. It has resulted in a total of 13 publications, 2 software releases, and over 30 presentations. Several follow-on projects have already begun, with more potential projects in development.
Solar Power Tower Integrated Layout and Optimization Tool
...methods to reduce the overall computational burden while generating accurate and precise results. These methods have been developed as part of the U.S. Department of Energy (DOE) SunShot Initiative research.
Magnetic Moment Quantifications of Small Spherical Objects in MRI
Cheng, Yu-Chung N.; Hsieh, Ching-Yi; Tackett, Ronald; Kokeny, Paul; Regmi, Rajesh Kumar; Lawes, Gavin
2014-01-01
Purpose The purpose of this work is to develop a method for accurately quantifying effective magnetic moments of spherical-like small objects from magnetic resonance imaging (MRI). A standard 3D gradient echo sequence with only one echo time is intended for our approach to measure the effective magnetic moment of a given object of interest. Methods Our method sums over complex MR signals around the object and equates those sums to equations derived from the magnetostatic theory. With those equations, our method is able to determine the center of the object with subpixel precision. By rewriting those equations, the effective magnetic moment of the object becomes the only unknown to be solved. Each quantified effective magnetic moment has an uncertainty that is derived from the error propagation method. If the volume of the object can be measured from spin echo images, the susceptibility difference between the object and its surrounding can be further quantified from the effective magnetic moment. Numerical simulations, a variety of glass beads in phantom studies with different MR imaging parameters from a 1.5 T machine, and measurements from a SQUID (superconducting quantum interference device) based magnetometer have been conducted to test the robustness of our method. Results Quantified effective magnetic moments and susceptibility differences from different imaging parameters and methods all agree with each other within two standard deviations of estimated uncertainties. Conclusion An MRI method is developed to accurately quantify the effective magnetic moment of a given small object of interest. Most results are accurate within 10% of true values and roughly half of the total results are accurate within 5% of true values using very reasonable imaging parameters. Our method is minimally affected by the partial volume, dephasing, and phase aliasing effects. Our next goal is to apply this method to in vivo studies. PMID:25490517
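For reference, the magnetostatic relation underpinning such methods is the dipole-like field shift outside a small magnetized sphere, which sets the phase accumulated by echo time TE (up to sign convention):

```latex
\Delta B(r,\theta) \;=\; \frac{\mu_0\, p}{4\pi}\,
\frac{3\cos^{2}\theta - 1}{r^{3}},
\qquad
\varphi(\mathbf{r}) \;=\; \gamma\,\Delta B(\mathbf{r})\, T_E
```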
A Micromechanics-Based Method for Multiscale Fatigue Prediction
NASA Astrophysics Data System (ADS)
Moore, John Allan
An estimated 80% of all structural failures are due to mechanical fatigue, often resulting in catastrophic, dangerous and costly failure events. However, an accurate model to predict fatigue remains an elusive goal. One of the major challenges is that fatigue is intrinsically a multiscale process, which is dependent on a structure's geometric design as well as its material's microscale morphology. The following work begins with a microscale study of fatigue nucleation around non- metallic inclusions. Based on this analysis, a novel multiscale method for fatigue predictions is developed. This method simulates macroscale geometries explicitly while concurrently calculating the simplified response of microscale inclusions. Thus, providing adequate detail on multiple scales for accurate fatigue life predictions. The methods herein provide insight into the multiscale nature of fatigue, while also developing a tool to aid in geometric design and material optimization for fatigue critical devices such as biomedical stents and artificial heart valves.
Parametric model of human body shape and ligaments for patient-specific epidural simulation.
Vaughan, Neil; Dubey, Venketesh N; Wee, Michael Y K; Isaacs, Richard
2014-10-01
This work builds upon the concept of matching a person's weight, height and age to their overall body shape to create an adjustable three-dimensional model. A versatile and accurate predictor of body size, shape and ligament thickness is required to improve simulation for medical procedures. A model which is adjustable for any size, shape, body mass, age or height would provide the ability to simulate procedures on patients of various body compositions. Three methods are provided for estimating body circumferences and ligament thicknesses for each patient. The first method uses empirical relations from body shape and size. The second method is to load a dataset from a magnetic resonance imaging (MRI) scan or ultrasound scan containing accurate ligament measurements. The third method is a developed artificial neural network (ANN) which uses the MRI dataset as a training set and improves accuracy using error back-propagation, learning to increase accuracy as more patient data are added. The ANN is trained and tested with clinical data from 23,088 patients. The ANN can predict subscapular skinfold thickness within 3.54 mm, waist circumference within 3.92 cm, thigh circumference within 2.00 cm, arm circumference within 1.21 cm, calf circumference within 1.40 cm, and triceps skinfold thickness within 3.43 mm. An alternative regression analysis method gave overall slightly less accurate predictions: subscapular skinfold thickness within 3.75 mm, waist circumference within 3.84 cm, thigh circumference within 2.16 cm, arm circumference within 1.34 cm, calf circumference within 1.46 cm, and triceps skinfold thickness within 3.89 mm. These calculations are used to display a 3D graphics model of the patient's body shape using OpenGL, adjusted by 3D mesh deformations. A patient-specific epidural simulator is presented using the developed body shape model, able to simulate needle insertion procedures on a 3D model of any patient size and shape. The developed ANN gave the most accurate results for body shape, size and ligament thickness. The resulting simulator offers the experience of simulating needle insertions accurately whilst allowing for variation in patient body mass, height or age. Copyright © 2014 Elsevier B.V. All rights reserved.
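A compact sketch of the ANN idea, predicting two circumferences from (weight, height, age) with a small multilayer perceptron trained by back-propagation; the data here are synthetic, not the clinical cohort:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(45, 120, 2000),     # weight (kg)
                     rng.uniform(1.5, 2.0, 2000),    # height (m)
                     rng.uniform(18, 80, 2000)])     # age (years)
# Toy targets: waist and thigh circumference (cm) with noise.
y = np.column_stack([0.9 * X[:, 0] / X[:, 1] ** 2 + 35 + rng.normal(0, 2, 2000),
                     0.35 * X[:, 0] + 22 + rng.normal(0, 1.5, 2000)])

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 16),
                                 max_iter=2000, random_state=0)).fit(Xtr, ytr)
err = np.abs(net.predict(Xte) - yte).mean(axis=0)
print(f"mean abs error: waist {err[0]:.2f} cm, thigh {err[1]:.2f} cm")
```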
Tatinati, Sivanagaraja; Nazarpour, Kianoush; Tech Ang, Wei; Veluvolu, Kalyana C
2016-08-01
Successful treatment of tumors with motion-adaptive radiotherapy requires accurate prediction of respiratory motion, ideally with a prediction horizon larger than the latency of the radiotherapy system. Accurate prediction of respiratory motion is, however, a non-trivial task due to the presence of irregularities and intra-trace variabilities, such as baseline drift and temporal changes in the fundamental frequency pattern. In this paper, to enhance the accuracy of respiratory motion prediction, we propose a stacked regression ensemble framework that integrates heterogeneous respiratory motion prediction algorithms. We further address two crucial issues for developing a successful ensemble framework: (1) selection of appropriate prediction methods to ensemble (level-0 methods) among the best existing prediction methods; and (2) finding a suitable generalization approach that can successfully exploit the relative advantages of the chosen level-0 methods. The efficacy of the developed ensemble framework is assessed with real respiratory motion traces acquired from 31 patients undergoing treatment. Results show that the developed ensemble framework improves the prediction performance significantly compared to the best existing methods. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
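A stacked ensemble of heterogeneous level-0 regressors with a level-1 generalizer can be sketched with scikit-learn; the autoregressive embedding, base models, and trace below are illustrative choices, not the paper's configuration:

```python
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR

# Synthetic breathing-like trace with drift and noise.
rng = np.random.default_rng(0)
t = np.arange(3000) * 0.05
trace = np.sin(2 * np.pi * 0.25 * t) + 0.003 * t + 0.05 * rng.normal(size=t.size)

lag, horizon = 20, 8          # predict `horizon` steps ahead from `lag` samples
X = np.array([trace[i - lag:i] for i in range(lag, len(trace) - horizon)])
y = trace[lag + horizon:]

split = int(0.8 * len(X))
stack = StackingRegressor(
    estimators=[("svr", SVR(C=10.0)),
                ("rf", RandomForestRegressor(n_estimators=100, random_state=0))],
    final_estimator=Ridge())   # level-1 generalizer
stack.fit(X[:split], y[:split])
rmse = np.sqrt(np.mean((stack.predict(X[split:]) - y[split:]) ** 2))
print(f"test RMSE: {rmse:.4f}")
```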
A Practical, Robust and Fast Method for Location Localization in Range-Based Systems.
Huang, Shiping; Wu, Zhifeng; Misra, Anil
2017-12-11
Location localization technology is used in a number of industrial and civil applications. Real-time localization accuracy is highly dependent on the quality of the distance measurements and the efficiency of solving the localization equations. In this paper, we provide a novel approach to solve the nonlinear localization equations efficiently and simultaneously eliminate bad measurement data in range-based systems. A geometric intersection model was developed to narrow the target search area, within which Newton's Method and the Direct Search Method are used to search for the unknown position. Not only does the geometric intersection model offer a small bounded search domain for Newton's Method and the Direct Search Method, but it can also self-correct bad measurement data. The Direct Search Method is useful for coarse localization or a small target search domain, while Newton's Method can be used for accurate localization. For accurate localization, the proposed Modified Newton's Method (MNM) addresses the challenges of avoiding local extrema, singularities, and the choice of initial value. The applicability and robustness of the developed method have been demonstrated by experiments with an indoor system.
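The abstract does not give the update equations, so the sketch below is only a rough illustration of the accurate-localization step: it solves the nonlinear range equations with a Gauss-Newton (Newton-type) iteration started from a coarse initial guess such as one produced by a geometric intersection model. The anchor layout, noise level, and function names are invented for illustration.

```python
import numpy as np

def newton_localize(anchors, ranges, x0, iters=20, tol=1e-9):
    """Gauss-Newton refinement of a 2D position from range measurements.

    anchors : (n, 2) array of known beacon positions
    ranges  : (n,) measured distances to each beacon
    x0      : initial guess, e.g. from a coarse geometric intersection
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - anchors                 # (n, 2) vectors anchor -> guess
        d = np.linalg.norm(diff, axis=1)   # predicted distances
        r = d - ranges                     # range residuals
        J = diff / d[:, None]              # Jacobian of d with respect to x
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        x -= step
        if np.linalg.norm(step) < tol:
            break
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
rng = np.random.default_rng(1)
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0.0, 0.05, 4)
print(newton_localize(anchors, ranges, x0=[5.0, 5.0]))   # close to (3, 4)
```

With more beacons than unknowns the least-squares step also averages out mild measurement noise, which is consistent with the paper's emphasis on bad-data rejection before the accurate solve.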
S66: A Well-balanced Database of Benchmark Interaction Energies Relevant to Biomolecular Structures
2011-01-01
With numerous new quantum chemistry methods being developed in recent years and the promise of even more new methods to be developed in the near future, it is clearly critical that highly accurate, well-balanced, reference data for many different atomic and molecular properties be available for the parametrization and validation of these methods. One area of research that is of particular importance in many areas of chemistry, biology, and material science is the study of noncovalent interactions. Because these interactions are often strongly influenced by correlation effects, it is necessary to use computationally expensive high-order wave function methods to describe them accurately. Here, we present a large new database of interaction energies calculated using an accurate CCSD(T)/CBS scheme. Data are presented for 66 molecular complexes, at their reference equilibrium geometries and at 8 points systematically exploring their dissociation curves; in total, the database contains 594 points: 66 at equilibrium geometries, and 528 in dissociation curves. The data set is designed to cover the most common types of noncovalent interactions in biomolecules, while keeping a balanced representation of dispersion and electrostatic contributions. The data set is therefore well suited for testing and development of methods applicable to bioorganic systems. In addition to the benchmark CCSD(T) results, we also provide decompositions of the interaction energies by means of DFT-SAPT calculations. The data set was used to test several correlated QM methods, including those parametrized specifically for noncovalent interactions. Among these, the SCS-MI-CCSD method outperforms all other tested methods, with a root-mean-square error of 0.08 kcal/mol for the S66 data set. PMID:21836824
Desingu, P A; Singh, S D; Dhama, K; Kumar, O R Vinodh; Singh, R; Singh, R K
2015-02-01
A rapid and accurate method for detection and differentiation of virulent and avirulent Newcastle disease virus (NDV) pathotypes was developed. NDV detection was carried out for different domestic avian field isolates and pigeon paramyxovirus-1 (25 field isolates and 9 vaccine strains) using an APMV-1 "fusion" (F) gene Class II-specific reverse transcriptase PCR (RT-PCR) with external primers A and B (535 bp) and internal primers C and D (238 bp). The internal degenerate reverse primer D is specific for the F gene cleavage position of virulent strains of NDV. The nested RT-PCR products of avirulent strains showed two bands (535 bp and 424 bp) while virulent strains showed four bands (535 bp, 424 bp, 349 bp and 238 bp) on agarose gel electrophoresis. This is the first report regarding the development and use of a degenerate primer based nested RT-PCR for accurate detection and differentiation of NDV pathotypes by demonstrating multiple PCR band patterns. Being a rapid, simple, and economical test, the developed method could serve as a valuable alternative diagnostic tool for characterizing NDV isolates and carrying out molecular epidemiological surveillance studies for this important pathogen of poultry. Copyright © 2014 Elsevier B.V. All rights reserved.
Analytical study to define a helicopter stability derivative extraction method, volume 1
NASA Technical Reports Server (NTRS)
Molusis, J. A.
1973-01-01
A method is developed for extracting six-degree-of-freedom stability and control derivatives from helicopter flight data. Different combinations of filtering and derivative estimation are investigated and used with a Bayesian approach for derivative identification. The combination of filtering and estimation found to yield the most accurate time response match to flight test data is determined and applied to CH-53A and CH-54B flight data. The method found to be most accurate consists of (1) filtering flight test data with a digital filter, followed by an extended Kalman filter, (2) identifying a derivative estimate with a least-squares estimator, and (3) obtaining derivatives with the Bayesian derivative extraction method.
Magnetic moment quantifications of small spherical objects in MRI.
Cheng, Yu-Chung N; Hsieh, Ching-Yi; Tackett, Ronald; Kokeny, Paul; Regmi, Rajesh Kumar; Lawes, Gavin
2015-07-01
The purpose of this work is to develop a method for accurately quantifying effective magnetic moments of spherical-like small objects from magnetic resonance imaging (MRI). A standard 3D gradient echo sequence with only one echo time is used in our approach to measure the effective magnetic moment of a given object of interest. Our method sums over complex MR signals around the object and equates those sums to equations derived from magnetostatic theory. With those equations, our method is able to determine the center of the object with subpixel precision. By rewriting those equations, the effective magnetic moment of the object becomes the only unknown to be solved. Each quantified effective magnetic moment has an uncertainty that is derived from the error propagation method. If the volume of the object can be measured from spin echo images, the susceptibility difference between the object and its surroundings can be further quantified from the effective magnetic moment. Numerical simulations, a variety of glass beads in phantom studies with different MR imaging parameters from a 1.5T machine, and measurements from a SQUID (superconducting quantum interference device) based magnetometer have been conducted to test the robustness of our method. Quantified effective magnetic moments and susceptibility differences from different imaging parameters and methods all agree with each other within two standard deviations of estimated uncertainties. An MRI method is developed to accurately quantify the effective magnetic moment of a given small object of interest. Most results are accurate within 10% of true values, and roughly half of the total results are accurate within 5% of true values using very reasonable imaging parameters. Our method is minimally affected by partial volume, dephasing, and phase aliasing effects. Our next goal is to apply this method to in vivo studies. Copyright © 2015 Elsevier Inc. All rights reserved.
Simplified methods of predicting aircraft rolling moments due to vortex encounters
DOT National Transportation Integrated Search
1977-05-01
Computational methods suitable for fast and accurate prediction of rolling moments on aircraft encountering wake vortices are presented. Appropriate modifications to strip theory are developed which account for the effects of finite wingspan. It is...
NASA Technical Reports Server (NTRS)
1985-01-01
An accurate method of surveying the soil was developed by NASA and the Department of Agriculture. The method involves using ground penetrating radar to produce subsurface graphs. By examining printouts from the system's recorder, scientists can determine whether a site is appropriate for building, etc.
Rapid Methods for the Detection of General Fecal Indicators
Specified that EPA should develop: (1) appropriate and effective indicators for improving the timely detection of pathogens in coastal waters; and (2) appropriate, accurate, expeditious and cost-effective methods for the timely detection of pathogens in coastal waters.
Van Duren, B H; Pandit, H; Beard, D J; Murray, D W; Gill, H S
2009-04-01
The recent development in Oxford lateral unicompartmental knee arthroplasty (UKA) design requires a valid method of assessing its kinematics, in particular the use of single-plane fluoroscopy to reconstruct the 3D kinematics of the implanted knee. The method has been used previously to investigate the kinematics of UKA, but mostly it has been used in conjunction with total knee arthroplasty (TKA). However, no accuracy assessment of the method when used for UKA has previously been reported. In this study we performed computer simulation tests to investigate the effect that the different geometry of the unicompartmental implant has on the accuracy of the method in comparison to total knee implants. A phantom was built to perform in vitro tests to determine the accuracy of the method for UKA. The computer simulations suggested that the use of the method for UKA would prove less accurate than for TKA. The rotational degrees of freedom for the femur showed the greatest disparity between the UKA and TKA. The phantom tests showed that the in-plane translations were accurate to <0.5 mm RMS, while the out-of-plane translations were less accurate at 4.1 mm RMS. The rotational accuracies were between 0.6 degrees and 2.3 degrees, which are less accurate than those reported in the literature for TKA; however, the method is sufficient for studying overall knee kinematics.
Application of the Spectral Element Method to Interior Noise Problems
NASA Technical Reports Server (NTRS)
Doyle, James F.
1998-01-01
The primary effort of this research project was focused on the development of analytical methods for the accurate prediction of structural acoustic noise and response. Of particular interest was the development of curved frame and shell spectral elements for the efficient computation of structural response and of schemes to match this to the surrounding fluid.
A Temperature-Monitoring Vaginal Ring for Measuring Adherence
Boyd, Peter; Desjardins, Delphine; Kumar, Sandeep; Fetherston, Susan M.; Le-Grand, Roger; Dereuddre-Bosquet, Nathalie; Helgadóttir, Berglind; Bjarnason, Ásgeir; Narasimhan, Manjula; Malcolm, R. Karl
2015-01-01
Background: Product adherence is a pivotal issue in the development of effective vaginal microbicides to reduce sexual transmission of HIV. To date, the six Phase III studies of vaginal gel products have relied primarily on self-reporting of adherence. Accurate and reliable methods for monitoring user adherence to microbicide-releasing vaginal rings have yet to be established. Methods: A silicone elastomer vaginal ring prototype containing an embedded, miniature temperature logger has been developed and tested in vitro and in cynomolgus macaques for its potential to continuously monitor environmental temperature and accurately determine episodes of ring insertion and removal. Results: In vitro studies demonstrated that DST nano-T temperature loggers encapsulated in medical grade silicone elastomer were able to accurately and continuously measure environmental temperature. The devices responded quickly to temperature changes despite being embedded in different thicknesses of silicone elastomer. Prototype vaginal rings measured higher temperatures compared with a subcutaneously implanted device, showed high sensitivity to diurnal fluctuations in vaginal temperature, and accurately detected periods of ring removal when tested in macaques. Conclusions: Vaginal rings containing embedded temperature loggers may be useful in the assessment of product adherence in late-stage clinical trials. PMID:25965956
High-throughput quantification of hydroxyproline for determination of collagen.
Hofman, Kathleen; Hall, Bronwyn; Cleaver, Helen; Marshall, Susan
2011-10-15
An accurate and high-throughput assay for collagen is essential for collagen research and development of collagen products. Hydroxyproline is routinely assayed to provide a measurement for collagen quantification. The time required for sample preparation using acid hydrolysis and neutralization prior to assay is what limits the current method for determining hydroxyproline. This work describes the conditions of alkali hydrolysis that, when combined with the colorimetric assay defined by Woessner, provide a high-throughput, accurate method for the measurement of hydroxyproline. Copyright © 2011 Elsevier Inc. All rights reserved.
Detection of blur artifacts in histopathological whole-slide images of endomyocardial biopsies.
Hang Wu; Phan, John H; Bhatia, Ajay K; Cundiff, Caitlin A; Shehata, Bahig M; Wang, May D
2015-01-01
Histopathological whole-slide images (WSIs) have emerged as an objective and quantitative means for image-based disease diagnosis. However, WSIs may contain acquisition artifacts that affect downstream image feature extraction and quantitative disease diagnosis. We develop a method for detecting blur artifacts in WSIs using distributions of local blur metrics. As features, these distributions enable accurate classification of WSI regions as sharp or blurry. We evaluate our method using over 1000 portions of an endomyocardial biopsy (EMB) WSI. Results indicate that local blur metrics accurately detect blurry image regions.
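The paper's specific blur metrics are not reproduced in the abstract; as a minimal stand-in, the sketch below uses the variance of the Laplacian as a local blur score, tiles a grayscale WSI region, and thresholds each tile. The tile size and threshold are illustrative choices, not the authors' values, and a classifier trained on the score distributions would replace the simple threshold.

```python
import numpy as np
from scipy.ndimage import laplace

def blur_map(image, tile=64):
    """Variance-of-Laplacian blur score on non-overlapping tiles."""
    lap = laplace(image.astype(float))          # second-derivative response
    h, w = image.shape
    scores = {}
    for i in range(0, h - tile + 1, tile):
        for j in range(0, w - tile + 1, tile):
            # sharp regions have strong edges, hence high Laplacian variance
            scores[(i, j)] = lap[i:i + tile, j:j + tile].var()
    return scores

def classify(scores, threshold):
    """Label each tile sharp or blurry by thresholding its blur score."""
    return {k: ('sharp' if v >= threshold else 'blurry')
            for k, v in scores.items()}

img = np.random.rand(256, 256)                  # stand-in for a WSI region
scores = blur_map(img)
labels = classify(scores, threshold=np.median(list(scores.values())))
```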
Matsuzaki, Ryosuke; Tachikawa, Takeshi; Ishizuka, Junya
2018-03-01
Accurate simulations of carbon fiber-reinforced plastic (CFRP) molding are vital for the development of high-quality products. However, such simulations are challenging and previous attempts to improve the accuracy of simulations by incorporating the data acquired from mold monitoring have not been completely successful. Therefore, in the present study, we developed a method to accurately predict various CFRP thermoset molding characteristics based on data assimilation, a process that combines theoretical and experimental values. The degree of cure as well as temperature and thermal conductivity distributions during the molding process were estimated using both temperature data and numerical simulations. An initial numerical experiment demonstrated that the internal mold state could be determined solely from the surface temperature values. A subsequent numerical experiment to validate this method showed that estimations based on surface temperatures were highly accurate in the case of degree of cure and internal temperature, although predictions of thermal conductivity were more difficult.
Quantitative comparison of in situ soil CO2 flux measurement methods
Jennifer D. Knoepp; James M. Vose
2002-01-01
Development of reliable regional or global carbon budgets requires accurate measurement of soil CO2 flux. We conducted laboratory and field studies to determine the accuracy and comparability of methods commonly used to measure in situ soil CO2 fluxes. Methods compared included CO2...
Grid Standards and Codes | Grid Modernization | NREL
[Fragmentary web-page snippet] ... simulations that take advantage of advanced concepts such as hardware-in-the-loop testing ... the goal of this project is to develop streamlined and accurate methods for New York utilities to determine ...
NASA Astrophysics Data System (ADS)
Plakhov, Iu. V.; Mytsenko, A. V.; Shel'Pov, V. A.
A numerical integration method is developed that is more accurate than Everhart's (1974) implicit single-sequence approach for integrating orbits. This method can be used to solve problems of space geodesy based on the use of highly precise laser observations.
Development of a method for measuring femoral torsion using real-time ultrasound.
Hafiz, Eliza; Hiller, Claire E; Nicholson, Leslie L; Nightingale, E Jean; Clarke, Jillian L; Grimaldi, Alison; Eisenhuth, John P; Refshauge, Kathryn M
2014-07-01
Excessive femoral torsion has been associated with various musculoskeletal and neurological problems. To explore this relationship, it is essential to be able to measure femoral torsion accurately in the clinic. Computerized tomography (CT) and magnetic resonance imaging (MRI) are thought to provide the most accurate measurements, but CT involves significant radiation exposure and MRI is expensive. The aim of this study was to design a method for measuring femoral torsion in the clinic and to determine the reliability of this method. Details of the design process, including construction of a jig, the protocol developed, and the reliability of the method, are presented. The protocol used ultrasound to image a ridge on the greater trochanter, with a customized jig placed on the femoral condyles as reference points. An inclinometer attached to the customized jig allowed quantification of the degree of femoral torsion. Measurements taken with this protocol had excellent intra- and inter-rater reliability (ICC2,1 = 0.98 and 0.97, respectively) and permitted measurement of femoral torsion with a high degree of accuracy. This method is applicable to the research setting and, with minor adjustments, will be applicable to the clinical setting.
Singh, C L; Singh, A; Kumar, S; Kumar, M; Sharma, P K; Majumdar, D K
2015-01-01
In the present study a simple, accurate, precise, economical and specific UV-spectrophotometric method for estimation of besifloxacin in bulk and in different pharmaceutical formulations has been developed. The drug shows a λmax of 289 nm in distilled water, simulated tears and phosphate buffer saline. The linearity range of the developed methods was 3-30 μg/ml of drug, with correlation coefficients (r(2)) of 0.9992, 0.9989 and 0.9984 in distilled water, simulated tears and phosphate buffer saline, respectively. Reproducibility, expressed as %RSD, was found to be less than 2%. The limit of detection in the different media was found to be 0.62, 0.72 and 0.88 μg/ml, respectively. The limit of quantification was found to be 1.88, 2.10 and 2.60 μg/ml, respectively. The proposed method was validated statistically according to International Conference on Harmonization guidelines with respect to specificity, linearity, range, accuracy, precision and robustness, and was found to be accurate and highly specific for the estimation of besifloxacin in different pharmaceutical formulations.
A novel method for the accurate evaluation of Poisson's ratio of soft polymer materials.
Lee, Jae-Hoon; Lee, Sang-Soo; Chang, Jun-Dong; Thompson, Mark S; Kang, Dong-Joong; Park, Sungchan; Park, Seonghun
2013-01-01
A new method with a simple algorithm was developed to accurately measure Poisson's ratio of soft materials such as polyvinyl alcohol hydrogel (PVA-H) with a custom experimental apparatus consisting of a tension device, a micro X-Y stage, an optical microscope, and a charge-coupled device camera. In the proposed method, the initial positions of the four vertices of an arbitrarily selected quadrilateral on the sample surface were first measured to generate a 2D 1st-order 4-node quadrilateral element for finite element numerical analysis. Next, minimum and maximum principal strains were calculated from differences between the initial and deformed shapes of the quadrilateral under tension. Finally, Poisson's ratio of PVA-H was determined by the ratio of minimum principal strain to maximum principal strain. This novel method has the advantage of accurately evaluating Poisson's ratio despite misalignment between specimens and experimental devices. In this study, Poisson's ratio of PVA-H was 0.44 ± 0.025 (n = 6) for 2.6-47.0% elongations, with a tendency to decrease with increasing elongation. The current method for evaluating Poisson's ratio with a simple measurement system can be incorporated into a real-time automated vision-tracking system to accurately evaluate the material properties of various soft materials.
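A minimal sketch of the strain computation described above, assuming a least-squares affine fit to the four tracked vertex positions in place of the paper's finite element shape functions; the example geometry and stretch values are invented to reproduce a ratio near the reported 0.44.

```python
import numpy as np

def poissons_ratio(X, x):
    """Estimate Poisson's ratio from 4 vertex positions of a quadrilateral.

    X : (4, 2) vertex coordinates before stretching
    x : (4, 2) the same vertices after stretching
    """
    Xc, xc = X - X.mean(axis=0), x - x.mean(axis=0)
    # Least-squares affine deformation gradient F, with xc ≈ Xc @ F.T
    A, *_ = np.linalg.lstsq(Xc, xc, rcond=None)
    F = A.T
    eps = 0.5 * (F + F.T) - np.eye(2)            # small-strain tensor
    e_min, e_max = np.linalg.eigvalsh(eps)       # principal strains, ascending
    # lateral contraction over axial extension, insensitive to in-plane rotation
    return -e_min / e_max

X = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
x = X @ np.diag([1.10, 0.956])      # 10% axial stretch, 4.4% lateral contraction
print(poissons_ratio(X, x))         # ≈ 0.44
```

Because the ratio is taken between principal strains rather than strains along fixed axes, a slight misalignment between the specimen and the camera axes does not bias the estimate, which is the advantage the abstract highlights.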
Accurate image-charge method by the use of the residue theorem for core-shell dielectric sphere
NASA Astrophysics Data System (ADS)
Fu, Jing; Xu, Zhenli
2018-02-01
An accurate image-charge method (ICM) is developed for ionic interactions outside a core-shell structured dielectric sphere. Core-shell particles have wide applications for which the theoretical investigation requires efficient methods for the Green's function used to calculate pairwise interactions of ions. The ICM is based on an inverse Mellin transform from the coefficients of spherical harmonic series of the Green's function such that the polarization charge due to dielectric boundaries is represented by a series of image point charges and an image line charge. The residue theorem is used to accurately calculate the density of the line charge. Numerical results show that the ICM is promising in fast evaluation of the Green's function, and thus it is useful for theoretical investigations of core-shell particles. This routine can also be applicable for solving other problems with spherical dielectric interfaces such as multilayered media and Debye-Hückel equations.
Improving the accuracy of burn-surface estimation.
Nichter, L S; Williams, J; Bryant, C A; Edlich, R F
1985-09-01
A user-friendly computer-assisted method of calculating total body surface area burned (TBSAB) has been developed. This method is more accurate, faster, and subject to less error than conventional methods. For comparison, the ability of 30 physicians to estimate TBSAB was tested. Parameters studied included the effect of prior burn care experience, the influence of burn size, the ability to accurately sketch the size of burns on standard burn charts, and the ability to estimate percent TBSAB from the sketches. Despite the ability of physicians at all levels of training to accurately sketch TBSAB, significant burn size overestimation (p < 0.01) and large interrater variability of potential consequence were noted. Direct benefits of a computerized system are many. These include the need for minimal user experience and the ability for wound-trend analysis, permanent record storage, calculation of fluid and caloric requirements, hemodynamic parameters, and the ability to compare meaningfully the different treatment protocols.
Scovazzi, Guglielmo; Carnes, Brian; Zeng, Xianyi; ...
2015-11-12
Here, we propose a new approach for the stabilization of linear tetrahedral finite elements in the case of nearly incompressible transient solid dynamics computations. Our method is based on a mixed formulation, in which the momentum equation is complemented by a rate equation for the evolution of the pressure field, approximated with piece-wise linear, continuous finite element functions. The pressure equation is stabilized to prevent spurious pressure oscillations in computations. Incidentally, it is also shown that many stabilized methods previously developed for the static case do not generalize easily to transient dynamics. Extensive tests in the context of linear and nonlinear elasticity are used to corroborate the claim that the proposed method is robust, stable, and accurate.
Introducing GAMER: A Fast and Accurate Method for Ray-tracing Galaxies Using Procedural Noise
NASA Astrophysics Data System (ADS)
Groeneboom, N. E.; Dahle, H.
2014-03-01
We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low- resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.
On simulating flow with multiple time scales using a method of averages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margolin, L.G.
1997-12-31
The author presents a new computational method based on averaging to efficiently simulate certain systems with multiple time scales. He first develops the method in a simple one-dimensional setting and employs linear stability analysis to demonstrate numerical stability. He then extends the method to multidimensional fluid flow. His method of averages does not depend on explicit splitting of the equations nor on modal decomposition. Rather, he combines low order and high order algorithms in a generalized predictor-corrector framework. He illustrates the methodology in the context of a shallow fluid approximation to an ocean basin circulation. He finds that his new method reproduces the accuracy of a fully explicit second-order accurate scheme, while costing less than a first-order accurate scheme.
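Margolin's averaging scheme itself is not given in the abstract; the sketch below only illustrates the generic predictor-corrector idea of combining a low-order and a high-order algorithm, using Heun's method (forward Euler predictor, trapezoidal corrector) on a scalar ODE with a fast forcing term as a stand-in problem.

```python
import numpy as np

def step(f, t, y, dt):
    """One predictor-corrector step: a cheap first-order predictor
    supplies the state that the second-order corrector needs."""
    k1 = f(t, y)
    y_pred = y + dt * k1                  # low-order predictor (forward Euler)
    k2 = f(t + dt, y_pred)
    return y + 0.5 * dt * (k1 + k2)       # high-order corrector (trapezoid)

# slow decay driven by a fast oscillation: y' = -y + cos(40 t)
f = lambda t, y: -y + np.cos(40.0 * t)
t, y, dt = 0.0, 1.0, 0.01
for _ in range(1000):
    y = step(f, t, y, dt)
    t += dt
print(t, y)
```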
Barkat, K; Ahmad, M; Minhas, M U; Malik, M Z; Sohail, M
2014-07-01
The objective of the study was to develop an accurate and reproducible HPLC method for determination of piracetam in human plasma and to evaluate the pharmacokinetic parameters of 800 mg piracetam. A simple, rapid, accurate, precise and sensitive high pressure liquid chromatography method has been developed and subsequently validated for determination of piracetam. This study represents the results of a randomized, single-dose, single-period study in 18 healthy male volunteers to assess the pharmacokinetic parameters of 800 mg piracetam tablets. Various pharmacokinetic parameters were determined from plasma for piracetam and found to be in good agreement with previously reported values. The data were analyzed using Kinetica® version 4.4 according to a non-compartmental model of pharmacokinetic analysis, and after comparison with previous studies, no significant differences were found in the present study of the tested product. The major pharmacokinetic parameters for piracetam were as follows: t1/2 was (4.40 ± 0.179) h; Tmax was (2.33 ± 0.105) h; Cmax was (14.53 ± 0.282) µg/mL; AUC(0-∞) was (59.19 ± 4.402) µg·h/mL; AUMC(0-∞) was (367.23 ± 38.96) µg·h²/mL; Ke was (0.16 ± 0.006) h⁻¹; MRT was (5.80 ± 0.227) h; Vd was (96.36 ± 8.917) L. A rapid, accurate and precise high pressure liquid chromatography method was developed and validated before the study. It is concluded that this method is very useful for the analysis of pharmacokinetic parameters in human plasma; having assured the safety and efficacy of piracetam, it can be effectively used in medical practice. © Georg Thieme Verlag KG Stuttgart · New York.
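As an illustration of the non-compartmental analysis named above, the following sketch computes Ke, t1/2, Cmax, Tmax, AUC, AUMC, and MRT from one concentration-time profile using the standard trapezoidal and tail-extrapolation formulas; the sample profile is invented and does not reproduce the study's measurements.

```python
import numpy as np

def nca(t, c, n_terminal=3):
    """Non-compartmental PK parameters from one concentration-time profile.

    t : sampling times (h); c : plasma concentrations (µg/mL)
    n_terminal : number of terminal points for the log-linear fit of Ke
    """
    trapz = lambda y: np.sum(np.diff(t) * (y[1:] + y[:-1]) / 2.0)
    cmax, tmax = c.max(), t[np.argmax(c)]
    # terminal elimination rate constant from a log-linear fit (h^-1)
    ke = -np.polyfit(t[-n_terminal:], np.log(c[-n_terminal:]), 1)[0]
    t_half = np.log(2.0) / ke
    auc_inf = trapz(c) + c[-1] / ke                 # tail extrapolation
    aumc_inf = trapz(c * t) + c[-1] * t[-1] / ke + c[-1] / ke**2
    return dict(Cmax=cmax, Tmax=tmax, Ke=ke, t_half=t_half,
                AUC_inf=auc_inf, AUMC_inf=aumc_inf, MRT=aumc_inf / auc_inf)

t = np.array([0.5, 1, 2, 3, 4, 6, 8, 12], float)    # invented profile
c = np.array([6.1, 10.2, 14.5, 13.0, 10.8, 7.6, 5.3, 2.8], float)
print(nca(t, c))
```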
NASA Astrophysics Data System (ADS)
Herrington, Jason S.; Hays, Michael D.
2012-08-01
There is high demand for accurate and reliable airborne carbonyl measurement methods due to the human and environmental health impacts of carbonyls and their effects on atmospheric chemistry. Standardized 2,4-dinitrophenylhydrazine (DNPH)-based sampling methods are frequently applied for measuring gaseous carbonyls in the atmospheric environment. However, there are multiple shortcomings associated with these methods that detract from an accurate understanding of carbonyl-related exposure, health effects, and atmospheric chemistry. The purpose of this brief technical communication is to highlight these method challenges and their influence on national ambient monitoring networks, and to provide a logical path forward for accurate carbonyl measurement. This manuscript focuses on three specific carbonyl compounds of high toxicological interest: formaldehyde, acetaldehyde, and acrolein. Further method testing and development, the revision of standardized methods, and the plausibility of introducing novel technology for these carbonyls are considered elements of the path forward. The consolidation of this information is important because it seems clear that carbonyl data produced utilizing DNPH-based methods are being reported without acknowledgment of the method shortcomings or how to best address them.
Trimming Line Design using New Development Method and One Step FEM
NASA Astrophysics Data System (ADS)
Chung, Wan-Jin; Park, Choon-Dal; Yang, Dong-yol
2005-08-01
In most automobile panel manufacturing, trimming is generally performed prior to flanging. Finding a feasible trimming line is crucial for obtaining an accurate edge profile after flanging. The section-based method develops the blank along section planes and finds the trimming line by generating a loop of end points; it suffers from inaccurate results in regions with out-of-section motion. On the other hand, the simulation-based method can produce a more accurate trimming line through an iterative strategy. However, due to time limitations and the lack of information at the initial die design stage, it is still not widely accepted in the industry. In this study, a new fast method to find a feasible trimming line is proposed. One step FEM is used to analyze the flanging process, because the desired final shape after flanging can be defined and most strain paths in flanging are simple. When one step FEM is used, the main obstacle is the generation of the initial guess. A robust initial-guess generation method is developed to handle badly shaped meshes, widely varying mesh sizes, and undercut parts. The new method develops the 3D triangular mesh in a propagational way from the final mesh onto the drawing tool surface. In order to remedy mesh distortion during development, an energy minimization technique is utilized. The trimming line is extracted from the outer boundary after the one step FEM simulation. This method shows many benefits, since the trimming line can be obtained in the early design stage. The developed method is successfully applied to complex industrial applications such as flanging of fender and door outer panels.
Uechi, Ken; Asakura, Keiko; Ri, Yui; Masayasu, Shizuko; Sasaki, Satoshi
2016-02-01
Several estimation methods for 24-h sodium excretion using spot urine samples have been reported, but accurate estimation at the individual level remains difficult. We aimed to clarify the most accurate method of estimating 24-h sodium excretion with different numbers of available spot urine samples. A total of 370 participants from throughout Japan collected multiple 24-h urine and spot urine samples independently. Participants were allocated randomly into a development and a validation dataset. Two estimation methods were established in the development dataset using the two 24-h sodium excretion samples as reference: the 'simple mean method' estimated by multiplying the sodium-creatinine ratio by predicted 24-h creatinine excretion, whereas the 'regression method' employed linear regression analysis. The accuracy of the two methods was examined by comparing the estimated means and concordance correlation coefficients (CCC) in the validation dataset. Mean sodium excretion by the simple mean method with three spot urine samples was closest to that by 24-h collection (difference: -1.62 mmol/day). CCC with the simple mean method increased with an increased number of spot urine samples at 0.20, 0.31, and 0.42 using one, two, and three samples, respectively. This method with three spot urine samples yielded higher CCC than the regression method (0.40). When only one spot urine sample was available for each study participant, CCC was higher with the regression method (0.36). The simple mean method with three spot urine samples yielded the most accurate estimates of sodium excretion. When only one spot urine sample was available, the regression method was preferable.
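A minimal sketch of the 'simple mean method' as described: average the spot sodium-creatinine ratios and scale by the predicted 24-h creatinine excretion. The abstract does not give the creatinine prediction equation, so it enters here as a plain input, and all numbers are invented.

```python
def estimate_24h_sodium(spot_na, spot_cr, predicted_24h_cr):
    """'Simple mean' estimate of 24-h sodium excretion (mmol/day).

    spot_na          : spot urine sodium concentrations (mmol/L), 1-3 samples
    spot_cr          : matching spot urine creatinine concentrations (mmol/L)
    predicted_24h_cr : predicted 24-h creatinine excretion (mmol/day),
                       e.g. from an anthropometric equation (not shown here)
    """
    ratios = [na / cr for na, cr in zip(spot_na, spot_cr)]
    mean_ratio = sum(ratios) / len(ratios)
    return mean_ratio * predicted_24h_cr

# three spot samples from one hypothetical participant
print(estimate_24h_sodium([120.0, 95.0, 110.0], [8.0, 6.5, 7.2], 10.5))
```

Averaging the ratio over three samples is what damps the within-day variability of a single spot sample, consistent with the reported rise in CCC from one to three samples.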
An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.
Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N V
2013-01-01
The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.
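The exact hybridization in the paper is not specified by the abstract; the sketch below pairs the standard firefly attraction move with a DE/rand/1-style mutation as the evolutionary neighbourhood search, applied to an invented exponential-decay parameter-fitting problem. Population size, coefficients, and the objective are illustrative.

```python
import numpy as np

def hybrid_firefly_de(obj, bounds, n=20, iters=200, beta0=1.0, gamma=1.0,
                      alpha=0.2, F=0.5, seed=0):
    """Minimal firefly / differential-evolution hybrid for minimisation."""
    rng = np.random.default_rng(seed)
    low, high = map(np.asarray, bounds)
    X = rng.uniform(low, high, size=(n, len(low)))
    fit = np.array([obj(x) for x in X])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:        # move firefly i toward brighter j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] = X[i] + beta * (X[j] - X[i]) \
                           + alpha * rng.uniform(-0.5, 0.5, X.shape[1])
            X[i] = np.clip(X[i], low, high)
            fit[i] = obj(X[i])
            # DE/rand/1-style mutation as the evolutionary neighbourhood search
            a, b, c = X[rng.choice(n, 3, replace=False)]
            trial = np.clip(a + F * (b - c), low, high)
            f_trial = obj(trial)
            if f_trial < fit[i]:
                X[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return X[best], fit[best]

# toy model-fitting objective: recover (p0, p1) in y = p0 * exp(-p1 * t)
t = np.linspace(0, 5, 30)
y = 2.0 * np.exp(-0.7 * t)
obj = lambda p: np.sum((p[0] * np.exp(-p[1] * t) - y) ** 2)
print(hybrid_firefly_de(obj, ([0.0, 0.0], [5.0, 5.0])))
```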
Ma, Ya-Jun; Lu, Xing; Carl, Michael; Zhu, Yanchun; Szeverenyi, Nikolaus M; Bydder, Graeme M; Chang, Eric Y; Du, Jiang
2018-08-01
To develop an accurate T1 measurement method for short T2 tissues using a combination of a 3-dimensional ultrashort echo time cones actual flip angle imaging technique and a variable repetition time technique (3D UTE-Cones AFI-VTR) on a clinical 3T scanner. First, the longitudinal magnetization mapping function of the excitation pulse was obtained with the 3D UTE-Cones AFI method, which provided information about excitation efficiency and B1 inhomogeneity. Then, the derived mapping function was substituted into the VTR fitting to generate accurate T1 maps. Numerical simulation and phantom studies were carried out to compare the AFI-VTR method with a B1-uncorrected VTR method, a B1-uncorrected variable flip angle (VFA) method, and a B1-corrected VFA method. Finally, the 3D UTE-Cones AFI-VTR method was applied to bovine bone samples (N = 6) and healthy volunteers (N = 3) to quantify the T1 of cortical bone. Numerical simulation and phantom studies showed that the 3D UTE-Cones AFI-VTR technique provides more accurate measurement of the T1 of short T2 tissues than the B1-uncorrected VTR and VFA methods or the B1-corrected VFA method. The proposed 3D UTE-Cones AFI-VTR method showed a mean T1 of 240 ± 25 ms for bovine cortical bone and 218 ± 10 ms for the tibial midshaft of human volunteers at 3 T. The 3D UTE-Cones AFI-VTR method can provide accurate T1 measurements of short T2 tissues such as cortical bone. Magn Reson Med 80:598-608, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
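As a simplified illustration of the VTR fitting step, the sketch below assumes an idealized saturation-recovery signal model S(TR) = S0·(1 − exp(−TR/T1)) in place of the full AFI-corrected mapping function that the paper derives; in the actual method the AFI-derived excitation efficiency would enter this model. The TR values and noise level are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def vtr_signal(tr, s0, t1):
    """Idealized variable-TR signal; the AFI-derived excitation-efficiency
    mapping function would multiply this model in the actual method."""
    return s0 * (1.0 - np.exp(-tr / t1))

tr = np.array([5, 10, 25, 50, 100, 200, 400], float)     # TRs in ms
rng = np.random.default_rng(0)
signal = vtr_signal(tr, 100.0, 240.0) + rng.normal(0, 0.5, tr.size)

(s0_fit, t1_fit), _ = curve_fit(vtr_signal, tr, signal, p0=(80.0, 100.0))
print("fitted T1 = %.1f ms" % t1_fit)                    # close to 240 ms
```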
Domanski, Dominik; Murphy, Leigh C.; Borchers, Christoph H.
2010-01-01
We have developed a phosphatase-based phosphopeptide quantitation (PPQ) method for determining phosphorylation stoichiometry in complex biological samples. This PPQ method is based on enzymatic dephosphorylation, combined with specific and accurate peptide identification and quantification by multiple reaction monitoring (MRM) detection with stable-isotope-labeled standard peptides. In contrast with the classical MRM methods for the quantitation of phosphorylation stoichiometry, the PPQ-MRM method needs only one non-phosphorylated SIS (stable isotope-coded standard) and two analyses (one for the untreated and one for the phosphatase-treated sample), from which the expression and modification levels can accurately be determined. From these analyses, the % phosphorylation can be determined. In this manuscript, we compare the PPQ-MRM method with an MRM method without phosphatase, and demonstrate the application of these methods to the detection and quantitation of phosphorylation of the classic phosphorylated breast cancer biomarkers (ERα and HER2), and for phosphorylated RAF and ERK1, which also contain phosphorylation sites with important biological implications. Using synthetic peptides spiked into a complex protein digest, we were able to use our PPQ-MRM method to accurately determine the total phosphorylation stoichiometry on specific peptides, as well as the absolute amount of the peptide and phosphopeptide present. Analyses of samples containing ERα protein revealed that the PPQ-MRM is capable of determining phosphorylation stoichiometry in proteins from cell lines, and is in good agreement with determinations obtained using the direct MRM approach in terms of phosphorylation and total protein amount. PMID:20524616
NASA Technical Reports Server (NTRS)
Shertzer, Janine; Temkin, Aaron
2004-01-01
The development of a practical method of accurately calculating the full scattering amplitude, without making a partial wave decomposition, is continued. The method is developed in the context of electron-hydrogen scattering, where exchange is dealt with by considering e-H scattering in the static exchange approximation. The Schroedinger equation in this approximation can be simplified to a set of coupled integro-differential equations. The equations are solved numerically for the full scattering wave function. The scattering amplitude can most accurately be calculated from an integral expression for the amplitude; that integral can be formally simplified and then evaluated using the numerically determined wave function. The results are essentially identical to converged partial wave results.
Methods to achieve accurate projection of regional and global raster databases
Usery, E. Lynn; Seong, Jeong Chang; Steinwand, Dan
2002-01-01
Modeling regional and global activities of climatic and human-induced change requires accurate geographic data from which we can develop mathematical and statistical tabulations of attributes and properties of the environment. Many of these models depend on data formatted as raster cells or matrices of pixel values. Recently, it has been demonstrated that regional and global raster datasets are subject to significant error from mathematical projection and that these errors are of such magnitude that model results may be jeopardized (Steinwand, et al., 1995; Yang, et al., 1996; Usery and Seong, 2001; Seong and Usery, 2001). There is a need to develop methods of projection that maintain the accuracy of these datasets to support regional and global analyses and modeling
Zheng, Dandan; Todor, Dorin A
2011-01-01
In real-time trans-rectal ultrasound (TRUS)-based high-dose-rate prostate brachytherapy, the accurate identification of needle-tip position is critical for treatment planning and delivery. Currently, needle-tip identification on ultrasound images can be subject to large uncertainty and errors because of ultrasound image quality and imaging artifacts. To address this problem, we developed a method based on physical measurements with simple and practical implementation to improve the accuracy and robustness of needle-tip identification. Our method uses measurements of the residual needle length and an off-line pre-established coordinate transformation factor to calculate the needle-tip position on the TRUS images. The transformation factor was established through a one-time systematic set of measurements of the probe and template holder positions, applicable to all patients. To compare the accuracy and robustness of the proposed method and the conventional method (ultrasound detection), based on the gold-standard X-ray fluoroscopy, extensive measurements were conducted in water and gel phantoms. In the water phantom, our method showed an average tip-detection accuracy of 0.7 mm compared with 1.6 mm for the conventional method. In the gel phantom (more realistic and tissue-like), our method maintained its level of accuracy, while the uncertainty of the conventional method was 3.4 mm on average, with maximum values of over 10 mm because of imaging artifacts. A novel method based on simple physical measurements was developed to accurately detect the needle-tip position for TRUS-based high-dose-rate prostate brachytherapy. The method demonstrated much improved accuracy and robustness over the conventional method. Copyright © 2011 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
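The abstract describes the measurement principle but not its equations; the sketch below is one plausible reading: the insertion depth follows from the total minus residual needle length, and a pre-established (calibrated once, off-line) transform maps template coordinates to TRUS image coordinates. The homogeneous-transform representation and all numbers are assumptions for illustration.

```python
import numpy as np

def needle_tip_on_trus(template_hole_xy, residual_len, total_len, T):
    """Needle-tip position on TRUS images from physical measurements.

    template_hole_xy : (x, y) of the template hole the needle passes through
    residual_len     : needle length left outside the template (measured)
    total_len        : full needle length
    T                : pre-established 4x4 transform from template
                       coordinates to TRUS image coordinates
    """
    insertion_depth = total_len - residual_len        # length inside tissue
    p_template = np.array([*template_hole_xy, insertion_depth, 1.0])
    p_image = T @ p_template                          # homogeneous transform
    return p_image[:3]

# hypothetical calibration: axes aligned, 2 mm template-to-image z offset
T = np.eye(4)
T[2, 3] = 2.0
print(needle_tip_on_trus((30.0, 45.0), residual_len=180.0,
                         total_len=240.0, T=T))
```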
Accurate prediction of bond dissociation energies of large n-alkanes using ONIOM-CCSD(T)/CBS methods
NASA Astrophysics Data System (ADS)
Wu, Junjun; Ning, Hongbo; Ma, Liuhao; Ren, Wei
2018-05-01
Accurate determination of the bond dissociation energies (BDEs) of large alkanes is desirable but practically impossible due to the expensive cost of high-level ab initio methods. We developed a two-layer ONIOM-CCSD(T)/CBS method which treats the high layer with the CCSD(T) method and the low layer with a DFT method. The accuracy of this method was validated by comparing the calculated BDEs of n-hexane with those obtained at the CCSD(T)-F12b/aug-cc-pVTZ level of theory. On this basis, the C-C BDEs of C6-C20 n-alkanes were calculated systematically using the ONIOM [CCSD(T)/CBS(D-T):M06-2x/6-311++G(d,p)] method, showing good agreement with the data available in the literature.
Automatic segmentation of bones from digital hand radiographs
NASA Astrophysics Data System (ADS)
Liu, Brent J.; Taira, Ricky K.; Shim, Hyeonjoon; Keaton, Patricia
1995-05-01
The purpose of this paper is to develop a robust and accurate method that automatically segments phalangeal and epiphyseal bones from digital pediatric hand radiographs exhibiting various stages of growth. The algorithm uses an object-oriented approach comprising several stages, beginning with the most general objects to be segmented, such as the outline of the hand against the background, and proceeding in a succession of stages to the most specific object, such as a specific phalangeal bone of a digit. Each stage carries custom operators unique to the needs of that specific stage, which aid in more accurate results. The method is further aided by a knowledge base in which all model contours and other information, such as age, race, and sex, are stored. Shape models, 1-D wrist profiles, and an interpretation tree are used to map model and data contour segments. Shape analysis is performed using an arc-length orientation transform. The method is tested on close to 340 phalangeal and epiphyseal objects segmented from 17 cases of pediatric hand images obtained from our clinical PACS. Patient ages range from 2 to 16 years. A pediatric radiologist preliminarily assessed the resulting object contours, which were found to be accurate to within 95% for cases with non-fused bones and to within 85% for cases with fused bones. With accurate and robust results, the method can be applied toward areas such as the determination of bone age, the development of a normal hand atlas, and the characterization of many congenital and acquired growth diseases. Furthermore, this method's architecture can be applied to other image segmentation problems.
Zhang, Yi-Bei; DA, Juan; Zhang, Jing-Xian; Li, Shang-Rong; Chen, Xin; Long, Hua-Li; Wang, Qiu-Rong; Cai, Lu-Ying; Yao, Shuai; Hou, Jin-Jun; Wu, Wan-Ying; Guo, De-An
2017-04-01
Aconiti Lateralis Radix Praeparata (Fuzi) is a commonly used traditional Chinese medicine in clinic for its potency in restoring yang and rescuing from collapse. Aconiti alkaloids, mainly including monoester-diterpenoid aconitines (MDAs) and diester-diterpenoid aconitines (DDAs), are considered to act as both bioactive and toxic constituents. In the present study, a feasible, economical, and accurate HPLC method for simultaneous determination of six alkaloid markers using the Single Standard for Determination of Multi-Components (SSDMC) approach was developed and fully validated. Benzoylmesaconine was used as the unique reference standard. The method was proven to be accurate (recovery between 97.5% and 101.8%, RSD < 3%), precise (RSD 0.63%-2.05%), and linear (R > 0.9999) over the concentration ranges, and was subsequently applied to quantitative evaluation of 62 batches of samples, among which 45 batches were from good manufacturing practice (GMP) facilities and 17 batches from the drug market. The contents were then analyzed by principal component analysis (PCA) and a homogeneity test. The present study provides valuable information for improving the quality standard of Aconiti Lateralis Radix Praeparata. The developed method also has potential in the analysis of other Aconitum species, such as Aconitum carmichaelii (prepared parent root) and Aconitum kusnezoffii (prepared root). Copyright © 2017 China Pharmaceutical University. Published by Elsevier B.V. All rights reserved.
Methods to Detect Nitric Oxide and its Metabolites in Biological Samples
Bryan, Nathan S.; Grisham, Matthew B.
2007-01-01
Nitric oxide (NO) methodology is a complex and often confusing science and the focus of many debates and discussions concerning NO biochemistry. NO is involved in many physiological processes including regulation of blood pressure, immune response and neural communication. Therefore, its accurate detection and quantification is critical to understanding health and disease. Due to the extremely short physiological half-life of this gaseous free radical, alternative strategies for the detection of reaction products of NO biochemistry have been developed. The quantification of NO metabolites in biological samples provides valuable information with regard to in vivo NO production, bioavailability and metabolism. Simply sampling a single compartment such as blood or plasma may not always provide an accurate assessment of whole body NO status, particularly in tissues. Therefore, extrapolation of plasma or blood NO status to specific tissues of interest is no longer a valid approach. As a result, methods continue to be developed and validated which allow the detection and quantification of NO and NO-related products/metabolites in multiple compartments of experimental animals in vivo. This review is not an exhaustive or comprehensive discussion of all methods available for the detection of NO, but rather a description of the most commonly used and practical methods which allow accurate and sensitive quantification of NO products/metabolites in multiple biological matrices under normal physiological conditions. PMID:17664129
Finite difference elastic wave modeling with an irregular free surface using ADER scheme
NASA Astrophysics Data System (ADS)
Almuhaidib, Abdulaziz M.; Nafi Toksöz, M.
2015-06-01
In numerical modeling of seismic wave propagation in the earth, we encounter two important issues: the free surface and the topography of the surface (i.e., irregularities). In this study, we develop a 2D finite difference solver for the elastic wave equation that combines a 4th-order ADER scheme (Arbitrary high-order accuracy using DERivatives), which is widely used in aeroacoustics, with the characteristic variable method at the free surface boundary. The idea is to treat the free surface boundary explicitly by using ghost values of the solution for points beyond the free surface to impose the physical boundary condition. The method is based on the velocity-stress formulation. The ultimate goal is to develop a numerical solver for the elastic wave equation that is stable, accurate and computationally efficient. The solver treats smooth arbitrary-shaped boundaries as simple plane boundaries. The computational cost added by treating the topography is negligible compared to a flat free surface because only a small number of grid points near the boundary need to be computed. In the presence of topography, using 10 grid points per shortest shear-wavelength, the solver yields accurate results. Benchmark numerical tests using several complex models that are solved by our method and other independent accurate methods show an excellent agreement, confirming the validity of the method for modeling elastic waves with an irregular free surface.
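The 2D velocity-stress ADER solver with characteristic variables is beyond a short sketch, but the ghost-value idea itself can be shown on the 1D scalar wave equation: extend the solution past the boundary so that the interior stencil automatically sees the physical traction-free (Neumann) condition. Grid sizes, wave speed, and the pulse are illustrative.

```python
import numpy as np

# 1D wave equation u_tt = c^2 u_xx with a traction-free (Neumann) boundary
# at each end, imposed through ghost points: mirroring u makes u_x = 0 there.
nx, dx, c = 201, 1.0, 1500.0
dt = 0.5 * dx / c                              # CFL-stable time step
x = np.arange(nx) * dx
u = np.exp(-((x - 100.0) / 5.0) ** 2)          # Gaussian pulse in the middle
u_prev = u.copy()                              # zero initial velocity

for _ in range(400):
    ug = np.empty(nx + 2)                      # solution padded with ghosts
    ug[1:-1] = u
    ug[0] = u[1]                               # ghost value: u_x(0) = 0
    ug[-1] = u[-2]                             # same treatment at x = L
    lap = (ug[2:] - 2.0 * ug[1:-1] + ug[:-2]) / dx**2
    u_next = 2.0 * u - u_prev + (c * dt) ** 2 * lap
    u_prev, u = u, u_next
```

The interior update never branches on the boundary: the ghost assignment alone encodes the free-surface condition, which is the same principle the paper applies to the elastic velocity-stress system along an irregular surface.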
Forensic 3D Scene Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Little, Charles Q.; Peters, Ralph R.; Rigdon, J. Brian
Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene.
Han, Buhm; Kang, Hyun Min; Eskin, Eleazar
2009-01-01
With the development of high-throughput sequencing and genotyping technologies, the number of markers collected in genetic association studies is growing rapidly, increasing the importance of methods for correcting for multiple hypothesis testing. The permutation test is widely considered the gold standard for accurate multiple testing correction, but it is often computationally impractical for these large datasets. Recently, several studies proposed efficient alternative approaches to the permutation test based on the multivariate normal distribution (MVN). However, they cannot accurately correct for multiple testing in genome-wide association studies for two reasons. First, these methods require partitioning of the genome into many disjoint blocks and ignore all correlations between markers from different blocks. Second, the true null distribution of the test statistic often fails to follow the asymptotic distribution at the tails of the distribution. We propose an accurate and efficient method for multiple testing correction in genome-wide association studies—SLIDE. Our method accounts for all correlation within a sliding window and corrects for the departure of the true null distribution of the statistic from the asymptotic distribution. In simulations using the Wellcome Trust Case Control Consortium data, the error rate of SLIDE's corrected p-values is more than 20 times smaller than the error rate of the previous MVN-based methods' corrected p-values, while SLIDE is orders of magnitude faster than the permutation test and other competing methods. We also extend the MVN framework to the problem of estimating the statistical power of an association study with correlated markers and propose an efficient and accurate power estimation method SLIP. SLIP and SLIDE are available at http://slide.cs.ucla.edu. PMID:19381255
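SLIDE's sliding-window approximation and tail correction are not reproduced here; the sketch below shows only the underlying MVN idea, estimating the family-wise corrected p-value of the most extreme statistic by sampling the null maximum of correlated z-scores. The AR(1)-style correlation matrix is a stand-in for local linkage-disequilibrium structure.

```python
import numpy as np

def mvn_corrected_p(z_obs, corr, n_samples=100_000, seed=0):
    """Corrected p-value for the largest |z| among correlated tests.

    z_obs : the most extreme observed z statistic
    corr  : marker correlation matrix (e.g. built from LD in a window)
    """
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(len(corr)), corr, size=n_samples)
    max_abs = np.abs(z).max(axis=1)        # null distribution of max |Z|
    return np.mean(max_abs >= abs(z_obs))  # Monte Carlo tail probability

# AR(1)-style correlation as a stand-in for local LD structure
m, rho = 50, 0.8
idx = np.arange(m)
corr = rho ** np.abs(idx[:, None] - idx[None, :])
print(mvn_corrected_p(2.9, corr))
```

Because correlated markers are sampled jointly rather than treated as independent tests, the corrected p-value is less conservative than a Bonferroni bound, which is the advantage the MVN framework exploits.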
NASA Astrophysics Data System (ADS)
Abdel-Ghany, Maha F.; Abdel-Aziz, Omar; Ayad, Miriam F.; Tadros, Mariam M.
New, simple, specific, accurate, precise and reproducible spectrophotometric methods have been developed and subsequently validated for determination of vildagliptin (VLG) and metformin (MET) in binary mixture. A zero-order spectrophotometric method was the first method, used for determination of MET in the range of 2-12 μg mL-1 by measuring the absorbance at 237.6 nm. The second method was a derivative spectrophotometric technique, utilized for determination of MET at 247.4 nm in the range of 1-12 μg mL-1. A derivative ratio spectrophotometric method was the third technique, used for determination of VLG in the range of 4-24 μg mL-1 at 265.8 nm. The fourth and fifth methods, adopted for determination of VLG in the range of 4-24 μg mL-1, were ratio subtraction and mean centering spectrophotometric methods, respectively. All the results were statistically compared with the reported methods using one-way analysis of variance (ANOVA). The developed methods were satisfactorily applied to the analysis of the investigated drugs and proved to be specific and accurate for their quality control in pharmaceutical dosage forms.
Breast segmentation in MR images using three-dimensional spiral scanning and dynamic programming
NASA Astrophysics Data System (ADS)
Jiang, Luan; Lian, Yanyun; Gu, Yajia; Li, Qiang
2013-03-01
Magnetic resonance (MR) imaging has been widely used for risk assessment and diagnosis of breast cancer in clinic. To develop a computer-aided diagnosis (CAD) system, breast segmentation is the first important and challenging task. The accuracy of subsequent quantitative measurement of breast density and abnormalities depends on accurate definition of the breast area in the images. The purpose of this study is to develop and evaluate a fully automated method for accurate segmentation of breast in three-dimensional (3-D) MR images. A fast method was developed to identify bounding box, i.e., the volume of interest (VOI), for breasts. A 3-D spiral scanning method was used to transform the VOI of each breast into a single two-dimensional (2-D) generalized polar-coordinate image. Dynamic programming technique was applied to the transformed 2-D image for delineating the "optimal" contour of the breast. The contour of the breast in the transformed 2-D image was utilized to reconstruct the segmentation results in the 3-D MR images using interpolation and lookup table. The preliminary results on 17 cases show that the proposed method can obtain accurate segmentation of the breast based on subjective observation. By comparing with the manually delineated region of 16 breasts in 8 cases, an overlap index of 87.6% ± 3.8% (mean ± SD) and a volume agreement of 93.4% ± 4.5% (mean ± SD) were achieved, respectively. It took approximately 3 minutes for our method to segment the breast in an MR scan of 256 slices.
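A minimal sketch of the dynamic programming step, assuming the transformed 2-D image has the scanning angle along columns and radius along rows, with a one-pixel-per-column smoothness constraint; the paper's actual cost function (e.g. an edge measure) is not specified in the abstract, so random costs stand in.

```python
import numpy as np

def optimal_contour(cost):
    """Minimum-cost left-to-right path through a cost image, allowing the
    row to move by at most one pixel per column (smoothness constraint)."""
    h, w = cost.shape
    acc = cost.copy()                    # accumulated cost table
    back = np.zeros((h, w), dtype=int)   # backpointers for path recovery
    for j in range(1, w):
        for i in range(h):
            lo, hi = max(i - 1, 0), min(i + 2, h)
            k = lo + np.argmin(acc[lo:hi, j - 1])
            acc[i, j] = cost[i, j] + acc[k, j - 1]
            back[i, j] = k
    path = np.empty(w, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for j in range(w - 1, 0, -1):        # trace the optimal path backwards
        path[j - 1] = back[path[j], j]
    return path                          # contour row index in every column

# e.g. cost could be the negative gradient magnitude of the transformed image
cost = np.random.rand(80, 360)
print(optimal_contour(cost)[:10])
```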
Structural Damage Detection Using Changes in Natural Frequencies: Theory and Applications
NASA Astrophysics Data System (ADS)
He, K.; Zhu, W. D.
2011-07-01
A vibration-based method that uses changes in natural frequencies of a structure to detect damage has advantages over conventional nondestructive tests in detecting various types of damage, including loosening of bolted joints, using minimum measurement data. Two major challenges associated with applications of the vibration-based damage detection method to engineering structures are addressed: accurate modeling of structures and the development of a robust inverse algorithm to detect damage, which are defined as the forward and inverse problems, respectively. To resolve the forward problem, new physics-based finite element modeling techniques are developed for fillets in thin-walled beams and for bolted joints, so that complex structures can be accurately modeled with a reasonable model size. To resolve the inverse problem, a logistical function transformation is introduced to convert the constrained optimization problem to an unconstrained one, and a robust iterative algorithm using a trust-region method, called the Levenberg-Marquardt method, is developed to accurately detect the locations and extent of damage. The new methodology can ensure global convergence of the iterative algorithm in solving under-determined system equations and deal with damage detection problems with relatively large modeling error and measurement noise. The vibration-based damage detection method is applied to various structures including lightning masts, a space frame structure and one of its components, and a pipeline. The exact locations and extent of damage can be detected in the numerical simulation where there is no modeling error and measurement noise. The locations and extent of damage can be successfully detected in experimental damage detection.
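A hedged sketch of the two numerical ingredients named above: a logistic transformation that turns constrained damage extents into unconstrained variables, and a Levenberg-Marquardt least-squares fit. The frequency "model" here is a toy stand-in for a real finite element model.

```python
# Hedged sketch: logistic transform maps unconstrained x to damage extents in
# (0, 1); Levenberg-Marquardt least squares then matches measured frequencies.
import numpy as np
from scipy.optimize import least_squares

def to_damage(x):
    return 1.0 / (1.0 + np.exp(-x))    # R -> (0, 1), removes the constraints

f_healthy = np.array([10.0, 25.0, 42.0])          # Hz, illustrative
model = lambda d: f_healthy * (1.0 - 0.3 * d)     # toy damage-to-frequency map
f_measured = model(np.array([0.4, 0.2, 0.6]))     # synthetic "measurements"

residuals = lambda x: model(to_damage(x)) - f_measured
sol = least_squares(residuals, x0=np.zeros(3), method="lm")
print(to_damage(sol.x))                            # recovered damage extents
```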
Wang, Shunhai; Bobst, Cedric E.; Kaltashov, Igor A.
2018-01-01
Transferrin (Tf) is an 80 kDa iron-binding protein which is viewed as a promising drug carrier to target the central nervous system due to its ability to penetrate the blood-brain barrier (BBB). Among the many challenges during the development of Tf-based therapeutics, sensitive and accurate quantitation of the administered Tf in cerebrospinal fluid (CSF) remains particularly difficult due to the presence of abundant endogenous Tf. Herein, we describe the development of a new LC-MS based method for sensitive and accurate quantitation of exogenous recombinant human Tf in rat CSF. By taking advantage of a His-tag present in recombinant Tf and applying Ni affinity purification, the exogenous hTf can be greatly enriched from rat CSF, despite the presence of the abundant endogenous protein. Additionally, we applied a newly developed 18O-labeling technique that can generate internal standards at the protein level, which greatly improved the accuracy and robustness of quantitation. The developed method was investigated for linearity, accuracy, precision and lower limit of quantitation, all of which met the commonly accepted criteria for bioanalytical method validation. PMID:26307718
Salivary biomarker development using genomic, proteomic and metabolomic approaches
2012-01-01
The use of saliva as a diagnostic sample provides a non-invasive, cost-efficient method of sample collection for disease screening without the need for highly trained professionals. Saliva collection is far more practical and safe compared with invasive methods of sample collection, because of the infection risk from contaminated needles during, for example, blood sampling. Furthermore, the use of saliva could increase the availability of accurate diagnostics for remote and impoverished regions. However, the development of salivary diagnostics has required technical innovation to allow stabilization and detection of analytes in the complex molecular mixture that is saliva. The recent development of cost-effective room temperature analyte stabilization methods, nucleic acid pre-amplification techniques and direct saliva transcriptomic analysis have allowed accurate detection and quantification of transcripts found in saliva. Novel protein stabilization methods have also facilitated improved proteomic analyses. Although candidate biomarkers have been discovered using epigenetic, transcriptomic, proteomic and metabolomic approaches, transcriptomic analyses have so far achieved the most progress in terms of sensitivity and specificity, and progress towards clinical implementation. Here, we review recent developments in salivary diagnostics that have been accomplished using genomic, transcriptomic, proteomic and metabolomic approaches. PMID:23114182
Cheng, Xu-Dong; Jia, Xiao-Bin; Feng, Liang; Jiang, Jun
2013-12-01
The secondary development of major traditional Chinese medicine varieties is one of the important links in the modernization, scientification and standardization of traditional Chinese medicines. How to accurately and effectively identify the pharmacodynamic material basis of the original formulae has become the primary problem in secondary development, as well as the bottleneck in the modernization of traditional Chinese medicines. On the basis of existing experimental methods, and following the idea that the multi-component, complex effects of traditional Chinese medicine components require a combination of multi-disciplinary methods and technologies, we propose a study approach to the material basis for the secondary development of major traditional Chinese medicine varieties based on combined in vivo and in vitro experiments. Studies on the material basis are held to need three links, namely identification, screening and verification, with the in vivo and in vitro study methods corresponding to each link being mutually complementary and mutually validating. Finally, an accurate and reliable material basis is selected. This approach provides a reference for the secondary development of major traditional Chinese medicine varieties and for studies on the material basis of compound formulae.
Zawada, Elzabieta; Pirianowicz-Chaber, Elzabieta; Somogi, Aleksander; Pawinski, Tomasz
2017-03-01
Three new methods were developed for the quantitative determination of mesalazine in the form of the pure substance or in the form of suppositories and tablets: a bromatometric method, a diazotization method and a visible-light spectrophotometric method. By optimizing the time and temperature of the bromination reaction (50°C, 50 min), 4-amino-2,3,5,6-tetrabromophenol was obtained. The results obtained were reproducible, accurate and precise. The developed methods were compared to the pharmacopoeial approach, alkalimetry in an aqueous medium. The validation parameters of all methods were comparable. The developed methods for the quantification of mesalazine are a viable alternative to other, more expensive approaches.
Efficient Methods of Estimating Switchgrass Biomass Supplies
USDA-ARS's Scientific Manuscript database
Switchgrass (Panicum virgatum L.) is being developed as a biofuel feedstock for the United States. Efficient and accurate methods to estimate switchgrass biomass feedstock supply within a production area will be required by biorefineries. Our main objective was to determine the effectiveness of in...
EVALUATION OF METHODS FOR SAMPLING, RECOVERY, AND ENUMERATION OF BACTERIA APPLIED TO THE PHYLLOPLANE
Determining the fate and survival of genetically engineered microorganisms released into the environment requires the development and application of accurate and practical methods of detection and enumeration. Several experiments were performed to examine quantitative recovery met...
Emergy Algebra: Improving Matrix Methods for Calculating Transformities
Transformity is one of the core concepts in Energy Systems Theory and it is fundamental to the calculation of emergy. Accurate evaluation of transformities and other emergy per unit values is essential for the broad acceptance, application and further development of emergy method...
Lundegaard, Claus; Lund, Ole; Nielsen, Morten
2008-06-01
Several accurate prediction systems have been developed for prediction of class I major histocompatibility complex (MHC):peptide binding. Most of these are trained on binding affinity data of primarily 9mer peptides. Here, we show how prediction methods trained on 9mer data can be used for accurate binding affinity prediction of peptides of length 8, 10 and 11. The method gives the opportunity to predict peptides with a different length than nine for MHC alleles where no such peptides have been measured. As validation, the performance of this approach is compared to predictors trained on peptides of the peptide length in question. In this validation, the approximation method has an accuracy that is comparable to or better than methods trained on a peptide length identical to the predicted peptides. The algorithm has been implemented in the web-accessible servers NetMHC-3.0: http://www.cbs.dtu.dk/services/NetMHC-3.0, and NetMHCpan-1.1: http://www.cbs.dtu.dk/services/NetMHCpan-1.1
Estimation of Temporal Gait Parameters Using a Human Body Electrostatic Sensing-Based Method.
Li, Mengxuan; Li, Pengfei; Tian, Shanshan; Tang, Kai; Chen, Xi
2018-05-28
Accurate estimation of gait parameters is essential for obtaining quantitative information on motor deficits in Parkinson's disease and other neurodegenerative diseases, which helps determine disease progression and therapeutic interventions. Due to the demand for high accuracy, unobtrusive measurement methods such as optical motion capture systems, foot pressure plates, and other systems have been commonly used in clinical environments. However, the high cost of existing lab-based methods greatly hinders their wider usage, especially in developing countries. In this study, we present a low-cost, noncontact, and accurate temporal gait parameter estimation method based on sensing and analyzing the electrostatic field generated by human foot stepping. The proposed method achieved an average 97% accuracy on gait phase detection and was further validated by comparison to a foot pressure system in 10 healthy subjects. The two results were compared using the Pearson coefficient r and showed excellent consistency (r = 0.99, p < 0.05). The repeatability of the proposed method was assessed between days using intraclass correlation coefficients (ICC) and showed good test-retest reliability (ICC = 0.87, p < 0.01). The proposed method could be an affordable and accurate tool to measure temporal gait parameters in hospital laboratories and in patients' home environments.
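A small sketch of the reported agreement analysis (with illustrative stride-time values, not the study's data): the Pearson coefficient between the electrostatic estimates and a reference foot-pressure system.

```python
# Hedged sketch: agreement between stride times from an electrostatic method
# and a reference foot-pressure system, via the Pearson coefficient.
import numpy as np
from scipy.stats import pearsonr

stride_electrostatic = np.array([1.02, 0.98, 1.05, 1.10, 0.99])  # s, illustrative
stride_pressure      = np.array([1.01, 0.99, 1.04, 1.11, 1.00])  # s, illustrative

r, p = pearsonr(stride_electrostatic, stride_pressure)
print(f"r = {r:.2f}, p = {p:.3f}")
```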
Overview of Aerothermodynamic Loads Definition Study
NASA Technical Reports Server (NTRS)
Povinelli, L. A.
1985-01-01
The Aerothermodynamic Loads Definition Study was conducted to develop methods to more accurately predict the operating environment in space shuttle main engine (SSME) components. Development of steady and time-dependent, three-dimensional viscous computer codes and experimental verification and engine diagnostic testing are considered. The steady, nonsteady, and transient operating loads are defined to accurately predict powerhead life. Improvements in the structural durability of the SSME turbine drive systems depend on knowledge of the aerothermodynamic behavior of the flow through the preburner, turbine, turnaround duct, gas manifold, and injector post regions.
Real-time, haptics-enabled simulator for probing ex vivo liver tissue.
Lister, Kevin; Gao, Zhan; Desai, Jaydev P
2009-01-01
The advent of complex surgical procedures has driven the need for realistic surgical training simulators. Comprehensive simulators that provide realistic visual and haptic feedback during surgical tasks are required to familiarize surgeons with the procedures they are to perform. Complex organ geometry inherent to biological tissues and intricate material properties drive the need for finite element methods to assure accurate tissue displacement and force calculations. Advances in real-time finite element methods have not reached the state where they are applicable to soft tissue surgical simulation. Therefore a real-time, haptics-enabled simulator for probing of soft tissue has been developed which utilizes preprocessed finite element data (derived from accurate constitutive model of the soft-tissue obtained from carefully collected experimental data) to accurately replicate the probing task in real-time.
Some problems of the calculation of three-dimensional boundary layer flows on general configurations
NASA Technical Reports Server (NTRS)
Cebeci, T.; Kaups, K.; Mosinskis, G. J.; Rehn, J. A.
1973-01-01
An accurate solution of the three-dimensional boundary layer equations over general configurations such as those encountered in aircraft and space shuttle design requires a very efficient, fast, and accurate numerical method with suitable turbulence models for the Reynolds stresses. The efficiency, speed, and accuracy of a three-dimensional numerical method together with the turbulence models for the Reynolds stresses are examined. The numerical method is the implicit two-point finite difference approach (Box Method) developed by Keller and applied to the boundary layer equations by Keller and Cebeci. In addition, some of the problems that may arise in the solution of these equations for three-dimensional boundary layer flows over general configurations are studied.
DEVELOPING SITE-SPECIFIC MODELS FOR FORECASTING BACTERIA LEVELS AT COASTAL BEACHES
The U.S.Beaches Environmental Assessment and Coastal Health Act of 2000 authorizes studies of pathogen indicators in coastal recreation waters that develop appropriate, accurate, expeditious, and cost-effective methods (including predictive models) for quantifying pathogens in co...
Gasometric Determination of CO2 Released from Carbonate Materials
ERIC Educational Resources Information Center
Fagerlund, Johan; Zevenhoven, Ron; Hulden, Stig-Goran; Sodergard, Berndt
2010-01-01
To determine the carbonation degree of materials used in mineral carbonation experiments, a fast, simple, and sufficiently accurate method is required. For this purpose, a method based on the reaction between carbonates and hydrochloric acid was developed. It was noted that this method could also be used to teach undergraduate students some basic…
Zhao, Haixiang; Wang, Yongli; Xu, Xiuli; Ren, Heling; Li, Li; Xiang, Li; Zhong, Weike
2015-01-01
A simple and accurate authentication method for the detection of adulterated vegetable oils that contain waste cooking oil (WCO) was developed. This method is based on the determination of cholesterol, β-sitosterol, and campesterol in vegetable oils and WCO by GC/MS without any derivatization. A total of 148 samples involving 12 types of vegetable oil and WCO were analyzed. According to the results, the contents and ratios of cholesterol, β-sitosterol, and campesterol were found to be suitable criteria for detecting vegetable oils adulterated with WCO. This method could accurately detect adulterated vegetable oils containing 5% refined WCO. The developed method has been successfully applied to multilaboratory analysis of 81 oil samples. Seventy-five samples were analyzed correctly, and only six adulterated samples could not be detected. The method cannot yet detect vegetable oils adulterated with WCO that had been used only for frying non-animal foods. It provides a quick method for detecting adulterated edible vegetable oils containing WCO.
Toward structure prediction of cyclic peptides.
Yu, Hongtao; Lin, Yu-Shan
2015-02-14
Cyclic peptides are a promising class of molecules that can be used to target specific protein-protein interactions. A computational method to accurately predict their structures would substantially advance the development of cyclic peptides as modulators of protein-protein interactions. Here, we develop a computational method that integrates bias-exchange metadynamics simulations, a Boltzmann reweighting scheme, dihedral principal component analysis and a modified density peak-based cluster analysis to provide a converged structural description for cyclic peptides. Using this method, we evaluate the performance of a number of popular protein force fields on a model cyclic peptide. All the tested force fields seem to over-stabilize the α-helix and PPII/β regions in the Ramachandran plot, commonly populated by linear peptides and proteins. Our findings suggest that re-parameterization of a force field that well describes the full Ramachandran plot is necessary to accurately model cyclic peptides.
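One step of the pipeline, dihedral principal component analysis, is easy to illustrate. The sketch below (with random illustrative angles, not simulation data) maps each dihedral to its cosine and sine before PCA so that angular periodicity does not distort the components.

```python
# Hedged sketch of dihedral PCA: (phi, psi) angles -> (cos, sin) pairs -> PCA.
import numpy as np

def dihedral_pca(angles_rad):
    """angles_rad: (n_frames, n_dihedrals). Returns projections and axes."""
    X = np.concatenate([np.cos(angles_rad), np.sin(angles_rad)], axis=1)
    X -= X.mean(axis=0)                       # center before PCA
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt.T, Vt                       # PC projections, PC axes

frames = np.random.default_rng(0).uniform(-np.pi, np.pi, size=(1000, 12))
proj, axes = dihedral_pca(frames)
print(proj[:, :2].shape)   # first two PCs, e.g. for a free-energy landscape
```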
DNAzyme based gap-LCR detection of single-nucleotide polymorphism.
Zhou, Li; Du, Feng; Zhao, Yongyun; Yameen, Afshan; Chen, Haodong; Tang, Zhuo
2013-07-15
Fast and accurate detection of single-nucleotide polymorphisms (SNPs) is increasingly important for understanding human physiology and elucidating the molecular basis of disease. A great deal of effort has been devoted to developing accurate, rapid, and cost-effective technologies for SNP analysis. However, most of the methods developed to date incorporate complicated probe labeling and depend on advanced equipment. The DNAzyme-based Gap-LCR detection method avoids any chemical modification of the probes and circumvents these problems by incorporating a short functional DNA sequence into one of the LCR primers. Two kinds of exonuclease are utilized in our strategy to digest all unreacted probes and release the DNAzymes embedded in the LCR product. The DNAzyme applied in our method is a versatile tool that reports the result of SNP detection colorimetrically or fluorometrically for different detection purposes. Copyright © 2013 Elsevier B.V. All rights reserved.
Using leaf optical properties to detect ozone effects on foliar biochemistry
USDA-ARS's Scientific Manuscript database
Efficient methods for accurate and meaningful high-throughput plant phenotyping are limiting the development and breeding of stress-tolerant crops. A number of emerging techniques, specifically remote sensing methods, have been identified as promising tools for plant phenotyping. These remote-sensin...
El-Bagary, Ramzia I.; Elkady, Ehab F.; Ayoub, Bassam M.
2011-01-01
Simple, accurate and precise spectrophotometric methods have been developed for the determination of sitagliptin and vildagliptin in bulk and dosage forms. The proposed methods are based on the charge transfer complexes of sitagliptin phosphate and vildagliptin with 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ), 7,7,8,8-tetracyanoquinodimethane (TCNQ) and tetrachloro-1,4-benzoquinone (p-chloranil). All the variables were studied to optimize the reactions conditions. For sitagliptin, Beer’s law was obeyed in the concentration ranges of 50-300 μg/ml, 20-120 μg/ml and 100-900 μg/ml with DDQ, TCNQ and p-chloranil, respectively. For vildagliptin, Beer’s law was obeyed in the concentration ranges of 50-300 μg/ml, 10-85 μg/ml and 50-350 μg/ml with DDQ, TCNQ and p-chloranil, respectively. The developed methods were validated and proved to be specific and accurate for the quality control of the cited drugs in pharmaceutical dosage forms. PMID:23675221
Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng
2017-01-01
Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines requires a large number of reference substances to identify chromatographic peaks accurately, but reference substances are costly. The relative retention (RR) method has therefore been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that the RR is difficult to reproduce on different columns because of the error between the measured retention time (tR) and the predicted tR in some cases. It is therefore useful to develop an alternative, simple method for accurate prediction of tR. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated on two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, yet more accurate and more robust across different HPLC columns than the RR method. Hence, quality standards using the LCTRS method are easy to reproduce in different laboratories with a lower cost of reference substances.
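The two-point prediction step of LCTRS amounts to a linear calibration, sketched below with illustrative retention times: the measured tR of the two reference substances on the local column fixes a line against their standard tR values, which then predicts tR for other compounds.

```python
# Hedged sketch of two-point linear retention-time calibration (values illustrative).
import numpy as np

t_std = np.array([8.2, 21.5])    # standard tR (min) of the two reference substances
t_meas = np.array([7.9, 20.6])   # their measured tR (min) on the local column

slope = (t_meas[1] - t_meas[0]) / (t_std[1] - t_std[0])
intercept = t_meas[0] - slope * t_std[0]

predict = lambda t: slope * t + intercept
print(predict(np.array([12.4, 16.0])))   # predicted tR of other compounds
```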
Preface: Special Topic: From Quantum Mechanics to Force Fields.
Piquemal, Jean-Philip; Jordan, Kenneth D
2017-10-28
This Special Topic issue entitled "From Quantum Mechanics to Force Fields" is dedicated to the ongoing efforts of the theoretical chemistry community to develop a new generation of accurate force fields based on data from high-level electronic structure calculations and to develop faster electronic structure methods for testing and designing force fields as well as for carrying out simulations. This issue includes a collection of 35 original research articles that illustrate recent theoretical advances in the field. It provides a timely snapshot of recent developments in the generation of approaches to enable more accurate molecular simulations of processes important in chemistry, physics, biophysics, and materials science.
2017-01-01
Mapping gene expression as a quantitative trait using whole-genome sequencing and transcriptome analysis allows discovery of the functional consequences of genetic variation. We developed a novel method and ultra-fast software, Findr, for highly accurate causal inference between gene expression traits using cis-regulatory DNA variations as causal anchors, which improves on current methods by taking into consideration hidden confounders and weak regulations. Findr outperformed existing methods on the DREAM5 Systems Genetics challenge and on the prediction of microRNA and transcription factor targets in human lymphoblastoid cells, while being nearly a million times faster. Findr is publicly available at https://github.com/lingfeiwang/findr. PMID:28821014
Wang, Xue-Yong; Liao, Cai-Li; Liu, Si-Qi; Liu, Chun-Sheng; Shao, Ai-Juan; Huang, Lu-Qi
2013-05-01
This paper puts forward a more accurate method for the identification of Chinese materia medica (CMM), the systematic identification of Chinese materia medica (SICMM), which may resolve difficulties that ordinary traditional approaches to CMM identification cannot. Concepts, mechanisms and methods of SICMM are systematically introduced, and its feasibility is demonstrated by experiments. The establishment of SICMM will solve problems in the identification of Chinese materia medica not only at the level of phenotypic characters such as morphology, microstructure and chemical constituents, but also in the further discovery of the evolution and classification of species, subspecies and populations of medicinal plants. The establishment of SICMM will advance the development of CMM identification and open a broader space for study.
Madhavan, Dinesh B; Baldock, Jeff A; Read, Zoe J; Murphy, Simon C; Cunningham, Shaun C; Perring, Michael P; Herrmann, Tim; Lewis, Tom; Cavagnaro, Timothy R; England, Jacqueline R; Paul, Keryn I; Weston, Christopher J; Baker, Thomas G
2017-05-15
Reforestation of agricultural lands with mixed-species environmental plantings can effectively sequester C. While accurate and efficient methods for predicting soil organic C content and composition have recently been developed for soils under agricultural land uses, such methods under forested land uses are currently lacking. This study aimed to develop a method using infrared spectroscopy for accurately predicting total organic C (TOC) and its fractions (particulate, POC; humus, HOC; and resistant, ROC organic C) in soils under environmental plantings. Soils were collected from 117 paired agricultural-reforestation sites across Australia. TOC fractions were determined in a subset of 38 reforested soils using physical fractionation by automated wet-sieving and 13C nuclear magnetic resonance (NMR) spectroscopy. Mid- and near-infrared spectra (MNIRS, 6000-450 cm-1) were acquired from finely-ground soils from environmental plantings and agricultural land. Satisfactory prediction models based on MNIRS and partial least squares regression (PLSR) were developed for TOC and its fractions. Leave-one-out cross-validations of MNIRS-PLSR models indicated accurate predictions (R2 > 0.90, negligible bias, ratio of performance to deviation > 3) and fraction-specific functional group contributions to beta coefficients in the models. TOC and its fractions were predicted using the cross-validated models and soil spectra for 3109 reforested and agricultural soils. The reliability of predictions determined using k-nearest neighbour score distance indicated that >80% of predictions were within the satisfactory inlier limit. The study demonstrated the utility of infrared spectroscopy (MNIRS-PLSR) to rapidly and economically determine TOC and its fractions and thereby accurately describe the effects of land use change such as reforestation on agricultural soils. Copyright © 2017 Elsevier Ltd. All rights reserved.
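A hedged sketch of the modelling approach (sklearn-based, with random stand-in spectra rather than the study's data): partial least squares regression from mid/near-infrared absorbances to TOC, evaluated by leave-one-out cross-validation.

```python
# Hedged sketch: spectra -> PLSR prediction of TOC with leave-one-out CV.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(38, 1400))    # absorbance at 1400 wavenumbers (illustrative)
y = rng.uniform(5, 60, size=38)    # measured TOC, g C / kg soil (illustrative)

pls = PLSRegression(n_components=8)
y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut())
r2 = 1 - np.sum((y - y_cv.ravel())**2) / np.sum((y - y.mean())**2)
print(f"LOO R2 = {r2:.2f}")
```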
Accuracy and Calibration of High Explosive Thermodynamic Equations of State
NASA Astrophysics Data System (ADS)
Baker, Ernest L.; Capellos, Christos; Stiel, Leonard I.; Pincay, Jack
2010-10-01
The Jones-Wilkins-Lee-Baker (JWLB) equation of state (EOS) was developed to more accurately describe overdriven detonation while maintaining an accurate description of high explosive products expansion work output. The increased mathematical complexity of the JWLB high explosive equations of state provides increased accuracy for practical problems of interest. Increased numbers of parameters are often justified based on improved physics descriptions but can also mean increased calibration complexity. A generalized extent of aluminum reaction Jones-Wilkins-Lee (JWL)-based EOS was developed in order to more accurately describe the observed behavior of aluminized explosives detonation products expansion. A calibration method was developed to describe the unreacted, partially reacted, and completely reacted explosive using nonlinear optimization. A reasonable calibration of a generalized extent of aluminum reaction JWLB EOS as a function of aluminum reaction fraction has not yet been achieved due to the increased mathematical complexity of the JWLB form.
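As a hedged illustration of this kind of calibration (using the standard three-term JWL isentrope rather than the JWLB form, and synthetic data): nonlinear least squares recovers the parameters from products-expansion pressures.

```python
# Hedged sketch: fitting JWL isentrope parameters by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def jwl_isentrope(V, A, R1, B, R2, C, omega):
    """P(V) on the principal isentrope; V is relative volume."""
    return A * np.exp(-R1 * V) + B * np.exp(-R2 * V) + C * V ** (-(1.0 + omega))

V = np.linspace(1.0, 7.0, 30)
P_data = jwl_isentrope(V, 600.0, 4.5, 12.0, 1.2, 1.0, 0.3)  # synthetic "data" (GPa)

p0 = [500.0, 4.0, 10.0, 1.0, 0.8, 0.3]                      # initial guess
popt, _ = curve_fit(jwl_isentrope, V, P_data, p0=p0, maxfev=20000)
print(popt)
```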
Accurate phase measurements for thick spherical objects using optical quadrature microscopy
NASA Astrophysics Data System (ADS)
Warger, William C., II; DiMarzio, Charles A.
2009-02-01
In vitro fertilization (IVF) procedures have resulted in the birth of over three million babies since 1978. Yet the live birth rate in the United States was only 34% in 2005, with 32% of the successful pregnancies resulting in multiple births. These multiple pregnancies were directly attributed to the transfer of multiple embryos to increase the probability that a single, healthy embryo was included. Current viability markers used for IVF, such as the cell number, symmetry, size, and fragmentation, are analyzed qualitatively with differential interference contrast (DIC) microscopy. However, this method is not ideal for quantitative measures beyond the 8-cell stage of development because the cells overlap and obstruct the view within and below the cluster of cells. We have developed the phase-subtraction cell-counting method that uses the combination of DIC and optical quadrature microscopy (OQM) to count the number of cells accurately in live mouse embryos beyond the 8-cell stage. We have also created a preliminary analysis to measure the cell symmetry, size, and fragmentation quantitatively by analyzing the relative dry mass from the OQM image in conjunction with the phase-subtraction count. In this paper, we will discuss the characterization of OQM with respect to measuring the phase accurately for spherical samples that are much larger than the depth of field. Once fully characterized and verified with human embryos, this methodology could provide the means for a more accurate method to score embryo viability.
Moitessier, N; Englebienne, P; Lee, D; Lawandi, J; Corbeil, C R
2008-01-01
Accelerating the drug discovery process requires predictive computational protocols capable of reducing or simplifying the synthetic and/or combinatorial challenge. Docking-based virtual screening methods have been developed and successfully applied to a number of pharmaceutical targets. In this review, we first present the current status of docking and scoring methods, with exhaustive lists of these. We next discuss reported comparative studies, outlining criteria for their interpretation. In the final section, we describe some of the remaining developments that would potentially lead to a universally applicable docking/scoring method. PMID:18037925
NASA Technical Reports Server (NTRS)
Rosenfeld, Moshe
1990-01-01
The development, validation and application of a fractional step solution method of the time-dependent incompressible Navier-Stokes equations in generalized coordinate systems are discussed. A solution method that combines a finite-volume discretization with a novel choice of the dependent variables and a fractional step splitting to obtain accurate solutions in arbitrary geometries was previously developed for fixed-grids. In the present research effort, this solution method is extended to include more general situations, including cases with moving grids. The numerical techniques are enhanced to gain efficiency and generality.
Shrinkage regression-based methods for microarray missing value imputation.
Wang, Hsiuying; Chiu, Chia-Chun; Wu, Yi-Ching; Wu, Wei-Sheng
2013-01-01
Missing values commonly occur in the microarray data, which usually contain more than 5% missing values with up to 90% of genes affected. Inaccurate missing value estimation results in reducing the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than the other types of methods in many testing microarray datasets. To further improve the performances of the regression-based methods, we propose shrinkage regression-based methods. Our methods take the advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. Besides, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation in six testing microarray datasets than the existing regression-based methods do. Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods can provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods.
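A minimal sketch of the shrinkage-regression idea (the neighbor selection and the shrinkage rule here are illustrative simplifications): select the genes most correlated with the target, fit least squares on the observed arrays, shrink the regression weights, and impute the missing entries.

```python
# Hedged sketch of shrinkage-regression imputation for one gene's missing values.
import numpy as np

def impute_gene(X, target, missing_idx, k=10, shrink=0.8):
    """X: (genes, arrays) candidate rows without NaNs;
    target: 1-D expression array with NaNs at missing_idx."""
    obs = np.setdiff1d(np.arange(target.size), missing_idx)
    corr = np.array([abs(np.corrcoef(row[obs], target[obs])[0, 1]) for row in X])
    nbrs = X[np.argsort(corr)[-k:]]                      # k most similar genes
    A = np.column_stack([nbrs[:, obs].T, np.ones(obs.size)])
    coef, *_ = np.linalg.lstsq(A, target[obs], rcond=None)
    coef[:-1] *= shrink                                  # shrink regression weights
    B = np.column_stack([nbrs[:, missing_idx].T, np.ones(missing_idx.size)])
    return B @ coef                                      # imputed values
```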
Xia, Hao; Wang, Xiaogang; Qiao, Yanyou; Jian, Jun; Chang, Yuanfei
2015-01-01
Following the popularity of smart phones and the development of mobile Internet, the demands for accurate indoor positioning have grown rapidly in recent years. Previous indoor positioning methods focused on plane locations on a floor and did not provide accurate floor positioning. In this paper, we propose a method that uses multiple barometers as references for the floor positioning of smart phones with built-in barometric sensors. Some related studies used barometric formula to investigate the altitude of mobile devices and compared the altitude with the height of the floors in a building to obtain the floor number. These studies assume that the accurate height of each floor is known, which is not always the case. They also did not consider the difference in the barometric-pressure pattern at different floors, which may lead to errors in the altitude computation. Our method does not require knowledge of the accurate heights of buildings and stories. It is robust and less sensitive to factors such as temperature and humidity and considers the difference in the barometric-pressure change trends at different floors. We performed a series of experiments to validate the effectiveness of this method. The results are encouraging. PMID:25835189
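A hedged sketch of the multi-reference-barometer idea (threshold-free nearest-match form; data are illustrative): compare the phone's recent pressure readings against each floor's reference barometer and pick the closest match, so that weather-driven pressure drift cancels in the comparison.

```python
# Hedged sketch: floor positioning by matching phone pressure to per-floor
# reference barometers (values illustrative, hPa).
import numpy as np

def estimate_floor(phone_pressure, reference_pressures):
    """phone_pressure: recent phone samples; reference_pressures: {floor: samples}."""
    best_floor, best_dev = None, np.inf
    for floor, ref in reference_pressures.items():
        dev = abs(np.mean(phone_pressure) - np.mean(ref))
        if dev < best_dev:
            best_floor, best_dev = floor, dev
    return best_floor

refs = {1: np.array([1013.2, 1013.1]), 2: np.array([1012.8, 1012.7]),
        3: np.array([1012.4, 1012.3])}
print(estimate_floor(np.array([1012.75, 1012.72]), refs))   # -> 2
```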
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tong, Dudu; Yang, Sichun; Lu, Lanyuan
2016-06-20
Structure modelling via small-angle X-ray scattering (SAXS) data generally requires intensive computations of scattering intensity from any given biomolecular structure, where the accurate evaluation of SAXS profiles using coarse-grained (CG) methods is vital to improve computational efficiency. To date, most CG SAXS computing methods have been based on a single-bead-per-residue approximation but have neglected structural correlations between amino acids. To improve the accuracy of scattering calculations, accurate CG form factors of amino acids are now derived using a rigorous optimization strategy, termed electron-density matching (EDM), to best fit electron-density distributions of protein structures. This EDM method is compared with and tested against other CG SAXS computing methods, and the resulting CG SAXS profiles from EDM agree better with all-atom theoretical SAXS data. By including the protein hydration shell represented by explicit CG water molecules and the correction of protein excluded volume, the developed CG form factors also reproduce the selected experimental SAXS profiles with very small deviations. Taken together, these EDM-derived CG form factors present an accurate and efficient computational approach for SAXS computing, especially when higher molecular details (represented by the q range of the SAXS data) become necessary for effective structure modelling.
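For context, a hedged sketch of the underlying CG scattering computation via the Debye formula, with one bead per residue and constant form factors for brevity (the paper's EDM form factors are q-dependent and fitted to electron densities):

```python
# Hedged sketch: CG SAXS profile via the Debye formula (constant form factors).
import numpy as np

def saxs_debye(coords, f, q_values):
    """coords: (n, 3) bead positions (Angstrom); f: (n,) form factors."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    ff = np.outer(f, f)
    I = np.empty(len(q_values))
    for k, q in enumerate(q_values):
        qd = q * d
        qd_safe = np.where(qd < 1e-12, 1.0, qd)          # avoid 0/0 on the diagonal
        sinc = np.where(qd < 1e-12, 1.0, np.sin(qd_safe) / qd_safe)
        I[k] = (ff * sinc).sum()
    return I

beads = np.random.default_rng(1).normal(scale=15.0, size=(50, 3))
print(saxs_debye(beads, np.ones(50), np.linspace(0.01, 0.5, 5)))
```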
NASA Astrophysics Data System (ADS)
Yu, Jieqing; Wu, Lixin; Hu, Qingsong; Yan, Zhigang; Zhang, Shaoliang
2017-12-01
Visibility computation is of great interest to location optimization, environmental planning, ecology, and tourism. Many algorithms have been developed for visibility computation. In this paper, we propose a novel method of visibility computation, called synthetic visual plane (SVP), to achieve better performance with respect to efficiency, accuracy, or both. The method uses a global horizon, which is a synthesis of line-of-sight information of all nearer points, to determine the visibility of a point, which makes it an accurate visibility method. We used discretization of the horizon to gain good efficiency. After discretization, the accuracy and efficiency of SVP depend on the scale of discretization (i.e., zone width). The method is more accurate at smaller zone widths, but this requires a longer operating time. Users must strike a balance between accuracy and efficiency at their discretion. According to our experiments, SVP is less accurate but more efficient than R2 if the zone width is set to one grid. However, SVP becomes more accurate than R2 when the zone width is set to 1/24 grid, while it continues to perform as fast or faster than R2. Although SVP performs worse than the reference plane and depth map algorithms with respect to efficiency, it is superior in accuracy to these other two algorithms.
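A one-dimensional sketch of the horizon idea behind SVP (profile data illustrative): a point along a terrain profile is visible exactly when its elevation angle from the viewpoint exceeds the running maximum angle of all nearer points.

```python
# Hedged sketch of horizon-based line-of-sight along a 1-D terrain profile.
import numpy as np

def visible_along_profile(z, viewer_height=1.7):
    """z: elevations at unit spacing, z[0] is the viewpoint cell."""
    eye = z[0] + viewer_height
    dist = np.arange(1, z.size)
    angles = (z[1:] - eye) / dist            # tangent of the elevation angle
    horizon = np.maximum.accumulate(angles)  # running max = current horizon
    vis = np.empty(z.size, dtype=bool)
    vis[0] = vis[1] = True                   # nearest point is always visible
    vis[2:] = angles[1:] >= horizon[:-1]     # must rise above all nearer points
    return vis

z = np.array([100, 101, 105, 103, 108, 102, 110.0])
print(visible_along_profile(z))
```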
Development of a Small Chamber Method to Study SVOCs Sink Effect
The transport mechanisms of semivolatile organic compounds (SVOCs) between sources, air,house dust, and interior surfaces in the residential environment needs to be better understood in order to more accurately estimate indoor SVOC exposure and develop risk management strategies ...
NASA Technical Reports Server (NTRS)
Gloss, B. B.; Johnson, F. T.
1976-01-01
The Boeing Commercial Airplane Company developed an inviscid three-dimensional lifting surface method that shows promise in being able to accurately predict loads, subsonic and supersonic, on wings with leading-edge separation and reattachment.
Ara, Perzila; Cheng, Shaokoon; Heimlich, Michael; Dutkiewicz, Eryk
2015-01-01
Recent developments in capsule endoscopy have highlighted the need for accurate techniques to estimate the location of a capsule endoscope. Highly accurate location estimation of a capsule endoscope in the gastrointestinal (GI) tract, to within several millimeters, is a challenging task. This is mainly because the radio-frequency signals encounter high loss and a highly dynamic channel propagation environment. Therefore, an accurate path-loss model is required for the development of accurate localization algorithms. This paper presents an in-body path-loss model for the human abdomen region at 2.4 GHz. To develop the path-loss model, electromagnetic simulations using the Finite-Difference Time-Domain (FDTD) method were carried out on two different anatomical human models. A mathematical expression for the path-loss model was proposed based on analysis of the measured loss at different capsule locations inside the small intestine. The proposed path-loss model is a good approximation for modeling in-body RF propagation, since real measurements are quite infeasible for capsule endoscopy subjects.
Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L
2016-01-01
Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe the usage of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases with relevance to enzymes. © 2016 Elsevier Inc. All rights reserved.
Novel Automated Blood Separations Validate Whole Cell Biomarkers
Burger, Douglas E.; Wang, Limei; Ban, Liqin; Okubo, Yoshiaki; Kühtreiber, Willem M.; Leichliter, Ashley K.; Faustman, Denise L.
2011-01-01
Background Progress in clinical trials in infectious disease, autoimmunity, and cancer is stymied by a dearth of successful whole cell biomarkers for peripheral blood lymphocytes (PBLs). Successful biomarkers could help to track drug effects at early time points in clinical trials to prevent costly trial failures late in development. One major obstacle is the inaccuracy of Ficoll density centrifugation, the decades-old method of separating PBLs from the abundant red blood cells (RBCs) of fresh blood samples. Methods and Findings To replace the Ficoll method, we developed and studied a novel blood-based magnetic separation method. The magnetic method strikingly surpassed Ficoll in viability, purity and yield of PBLs. To reduce labor, we developed an automated platform and compared two magnet configurations for cell separations. These more accurate and labor-saving magnet configurations allowed the lymphocytes to be tested in bioassays for rare antigen-specific T cells. The automated method succeeded at identifying 79% of patients with the rare PBLs of interest as compared with Ficoll's uniform failure. We validated improved upfront blood processing and show accurate detection of rare antigen-specific lymphocytes. Conclusions Improving, automating and standardizing lymphocyte detections from whole blood may facilitate development of new cell-based biomarkers for human diseases. Improved upfront blood processes may lead to broad improvements in monitoring early trial outcome measurements in human clinical trials. PMID:21799852
A novel navigation method used in a ballistic missile
NASA Astrophysics Data System (ADS)
Qian, Hua-ming; Sun, Long; Cai, Jia-nan; Peng, Yu
2013-10-01
The traditional strapdown inertial/celestial integrated navigation method used in a ballistic missile cannot accurately estimate the accelerometer bias. This can cause a divergence of navigation errors. To solve this problem, a new navigation method named strapdown inertial/starlight refractive celestial integrated navigation is proposed. To verify the feasibility of the proposed method, a simulation program for a ballistic missile is presented. The simulation results indicated that, when multiple refraction stars are used, the proposed method can accurately estimate the accelerometer bias and completely suppress the divergence of navigation errors. Specifically, in order to apply this method to a ballistic missile, a novel measurement equation based on stellar refraction was developed. Furthermore, a method to calculate the number of refraction stars observed by the stellar sensor is given. Finally, the relationship between the number of refraction stars used and the navigation accuracy is analysed.
Fast Construction of Near Parsimonious Hybridization Networks for Multiple Phylogenetic Trees.
Mirzaei, Sajad; Wu, Yufeng
2016-01-01
Hybridization networks represent plausible evolutionary histories of species that are affected by reticulate evolutionary processes. An established computational problem on hybridization networks is constructing the most parsimonious hybridization network such that each of the given phylogenetic trees (called gene trees) is "displayed" in the network. There have been several previous approaches, including an exact method and several heuristics, for this NP-hard problem. However, the exact method is only applicable to a limited range of data, and heuristic methods can be less accurate and also slow sometimes. In this paper, we develop a new algorithm for constructing near parsimonious networks for multiple binary gene trees. This method is more efficient for large numbers of gene trees than previous heuristics. This new method also produces more parsimonious results on many simulated datasets as well as a real biological dataset than a previous method. We also show that our method produces topologically more accurate networks for many datasets.
Daniell method for power spectral density estimation in atomic force microscopy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Labuda, Aleksander
An alternative method for power spectral density (PSD) estimation—the Daniell method—is revisited and compared to the most prevalent method used in the field of atomic force microscopy for quantifying cantilever thermal motion—the Bartlett method. Both methods are shown to underestimate the Q factor of a simple harmonic oscillator (SHO) by a predictable, and therefore correctable, amount in the absence of spurious deterministic noise sources. However, the Bartlett method is much more prone to spectral leakage which can obscure the thermal spectrum in the presence of deterministic noise. By the significant reduction in spectral leakage, the Daniell method leads to a more accurate representation of the true PSD and enables clear identification and rejection of deterministic noise peaks. This benefit is especially valuable for the development of automated PSD fitting algorithms for robust and accurate estimation of SHO parameters from a thermal spectrum.
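A hedged sketch contrasting the two estimators with scipy (signal parameters illustrative): Bartlett averages periodograms of non-overlapping segments, while the Daniell method smooths the full-record periodogram with a moving average in frequency.

```python
# Hedged sketch: Bartlett vs. Daniell PSD estimates of a noisy tone.
import numpy as np
from scipy.signal import periodogram, welch

fs = 1e5
t = np.arange(2**16) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 5e3 * t) + rng.normal(scale=0.5, size=t.size)

# Bartlett: average of 16 segment periodograms (boxcar window, no overlap)
f_b, P_b = welch(x, fs, window="boxcar", nperseg=t.size // 16, noverlap=0)

# Daniell: full-record periodogram, then an 11-point moving average in frequency
f_d, P = periodogram(x, fs)
P_d = np.convolve(P, np.ones(11) / 11, mode="same")
```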
Simple method for quick estimation of aquifer hydrogeological parameters
NASA Astrophysics Data System (ADS)
Ma, C.; Li, Y. Y.
2017-08-01
The development of simple and accurate methods to determine aquifer hydrogeological parameters is of importance for groundwater resources assessment and management. To address the problem of estimating aquifer parameters from unsteady pumping test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a unitary linear regression equation was then established. The aquifer parameters can be obtained by solving the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown showed that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters. The proposed method can reliably identify the aquifer parameters from long-distance observed drawdowns and from early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
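For comparison with the proposed regression approach, a hedged sketch of direct Theis fitting (synthetic data; scipy's exp1 is the Theis well function W(u)):

```python
# Hedged sketch: estimating transmissivity T and storativity S from Theis
# drawdowns by nonlinear least squares (synthetic observations).
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import exp1

Q, r = 0.02, 50.0                        # pumping rate (m^3/s), distance (m)

def theis_drawdown(t, T, S):
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)   # s = Q W(u) / (4 pi T)

t = np.logspace(2, 5, 20)                # seconds since pumping started
s_obs = theis_drawdown(t, 5e-3, 2e-4)    # synthetic "observed" drawdowns (m)

(T_fit, S_fit), _ = curve_fit(theis_drawdown, t, s_obs, p0=(1e-3, 1e-4))
print(T_fit, S_fit)
```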
NASA Technical Reports Server (NTRS)
Ross, B. E.
1971-01-01
The Moire method of experimental stress analysis poses a problem similar to one encountered in astrometry: accurate coordinates must be extracted from images on photographic plates. The solution to this common problem, found applicable to the field of experimental stress analysis, is presented to outline the measurement problem. A discussion of the photo-reading device developed to make the measurements follows.
Reddy, M Rami; Singh, U C; Erion, Mark D
2004-05-26
Free-energy perturbation (FEP) is considered the most accurate computational method for calculating relative solvation and binding free-energy differences. Despite some success in applying FEP methods to both drug design and lead optimization, FEP calculations are rarely used in the pharmaceutical industry. One factor limiting the use of FEP is its low throughput, which is attributed in part to the dependence of conventional methods on the user's ability to develop accurate molecular mechanics (MM) force field parameters for individual drug candidates and the time required to complete the process. In an attempt to find an FEP method that could eventually be automated, we developed a method that uses quantum mechanics (QM) for treating the solute, MM for treating the solute surroundings, and the FEP method for computing free-energy differences. The thread technique was used in all transformations and proved to be essential for the successful completion of the calculations. Relative solvation free energies for 10 structurally diverse molecular pairs were calculated, and the results were in close agreement with both the calculated results generated by conventional FEP methods and the experimentally derived values. While considerably more CPU demanding than conventional FEP methods, this method (QM/MM-based FEP) alleviates the need for development of molecule-specific MM force field parameters and therefore may enable future automation of FEP-based calculations. Moreover, calculation accuracy should be improved over conventional methods, especially for calculations reliant on MM parameters derived in the absence of experimental data.
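A minimal sketch of the free-energy estimator underlying FEP, the Zwanzig relation, with illustrative energy-difference samples; a production QM/MM-based FEP staggers this over many intermediate alchemical windows.

```python
# Hedged sketch of the Zwanzig FEP estimator:
# dF = -kT ln < exp(-(U1 - U0)/kT) >_0, averaged over state-0 configurations.
import numpy as np

def fep_zwanzig(dU, kT=0.593):           # kT ~ 0.593 kcal/mol near 298 K
    """dU: U1 - U0 evaluated on configurations sampled from state 0."""
    return -kT * np.log(np.mean(np.exp(-dU / kT)))

dU = np.random.default_rng(0).normal(1.0, 0.5, size=5000)  # illustrative samples
print(fep_zwanzig(dU))
```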
NASA Technical Reports Server (NTRS)
Venkataraman, T. S.; Eidson, W. W.; Cohen, L. D.; Farina, J. D.; Acquista, C.
1983-01-01
The position and velocity of optically levitated glass spheres (radii 10-20 microns) moving in a gas are measured accurately, rapidly, and continuously using a high-speed rotating polygon mirror. The experimental technique developed here has repeatable position accuracies better than 20 microns. Each measurement takes less than 1 microsec and can be repeated every 100 microsec. The position of the levitated glass spheres can be manipulated accurately by modulating the laser power with an acousto-optic modulator. The technique provides a fast and accurate method to study general particle dynamics in a fluid.
Mcclellan, James H.; Ravichandran, Lakshminarayan; Tridandapani, Srini
2013-01-01
Two novel methods for detecting cardiac quiescent phases from B-mode echocardiography using a correlation-based frame-to-frame deviation measure were developed. Accurate knowledge of cardiac quiescence is crucial to the performance of many imaging modalities, including computed tomography coronary angiography (CTCA). Synchronous electrocardiography (ECG) and echocardiography data were obtained from 10 healthy human subjects (four male, six female, 23–45 years) and the interventricular septum (IVS) was observed using the apical four-chamber echocardiographic view. The velocity of the IVS was derived from active contour tracking and verified using tissue Doppler imaging echocardiography methods. In turn, the frame-to-frame deviation methods for identifying quiescence of the IVS were verified using active contour tracking. The timing of the diastolic quiescent phase was found to exhibit both inter- and intra-subject variability, suggesting that the current method of CTCA gating based on the ECG is suboptimal and that gating based on signals derived from cardiac motion are likely more accurate in predicting quiescence for cardiac imaging. Two robust and efficient methods for identifying cardiac quiescent phases from B-mode echocardiographic data were developed and verified. The methods presented in this paper will be used to develop new CTCA gating techniques and quantify the resulting potential improvement in CTCA image quality. PMID:26609501
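A hedged sketch of a correlation-based frame-to-frame deviation measure of this kind (threshold and frames illustrative): deviation is one minus the normalized cross-correlation of consecutive B-mode frames, and quiescence is where the deviation stays small.

```python
# Hedged sketch: frame-to-frame deviation from normalized cross-correlation.
import numpy as np

def frame_deviation(frames):
    """frames: (n, h, w) grayscale sequence. Returns n-1 deviation values."""
    devs = []
    for a, b in zip(frames[:-1], frames[1:]):
        a0, b0 = a - a.mean(), b - b.mean()
        ncc = (a0 * b0).sum() / np.sqrt((a0**2).sum() * (b0**2).sum())
        devs.append(1.0 - ncc)
    return np.array(devs)

frames = np.random.default_rng(0).random((10, 64, 64))   # stand-in B-mode frames
dev = frame_deviation(frames)
quiescent = dev < 0.05                                   # illustrative threshold
```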
Methods for assessing Phytophthora ramorum chlamydospore germination
Joyce Eberhart; Elilzabeth Stamm; Jennifer Parke
2013-01-01
Germination of chlamydospores is difficult to accurately assess when chlamydospores are attached to remnants of supporting hyphae. We developed two approaches for closely observing and rigorously quantifying the frequency of chlamydospore germination in vitro. The plate marking and scanning method was useful for quantifying germination of large...
Multiplexed microsatellite recovery using massively parallel sequencing
T.N. Jennings; B.J. Knaus; T.D. Mullins; S.M. Haig; R.C. Cronn
2011-01-01
Conservation and management of natural populations requires accurate and inexpensive genotyping methods. Traditional microsatellite, or simple sequence repeat (SSR), marker analysis remains a popular genotyping method because of the comparatively low cost of marker development, ease of analysis and high power of genotype discrimination. With the availability of...
ERIC Educational Resources Information Center
Mills, Myron L.
1988-01-01
A system developed for more efficient evaluation of graduate medical students' progress uses numerical scoring and a microcomputer database management system as an alternative to manual methods to produce accurate, objective, and meaningful summaries of resident evaluations. (Author/MSE)
Application of an energy balance method for estimating evapotranspiration in cropping systems
USDA-ARS's Scientific Manuscript database
Accurate quantification of evapotranspiration (ET, consumptive water use) from planting through harvest is critical for managing the limited water resources for crop irrigation. Our objective was to develop and apply an improved land-crop surface residual energy balance (EB) method for quantifying E...
Wan, Xiaomin; Peng, Liubao; Li, Yuanjian
2015-01-01
Background In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers to conduct economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods 1) least squares method, 2) graphical method; and two recently proposed methods by 3) Hoyle and Henley, 4) Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their abilities to estimate mean survival through a simulation study. Methods A number of different scenarios were developed that comprised combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were applied to actual IPD. The uncertainty in the estimate of mean survival time was also captured. Results All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 and the Weibull distribution was used, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; however, more biases were identified in the traditional methods. When a lognormal distribution was used, the Guyot et al. method generated noticeably less bias and a more accurate uncertainty compared with the Hoyle and Henley method. Conclusions The traditional methods should not be preferred because of their remarkable overestimation. When the Weibull distribution was used for a fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method. However, if the lognormal distribution was used, the Guyot et al. method was less biased compared with the Hoyle and Henley method. PMID:25803659
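A hedged sketch of the final step shared by such methods (curve points illustrative): fit a Weibull survival function to values read off a published curve and convert the fitted parameters to a mean survival time.

```python
# Hedged sketch: Weibull fit to digitized survival-curve points, then
# mean survival E[T] = scale * Gamma(1 + 1/shape).
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma

def weibull_surv(t, shape, scale):
    return np.exp(-(t / scale) ** shape)

t = np.array([6, 12, 18, 24, 36])          # months, read off a published figure
S = np.array([0.85, 0.66, 0.50, 0.38, 0.20])

(shape, scale), _ = curve_fit(weibull_surv, t, S, p0=(1.0, 20.0))
mean_survival = scale * gamma(1.0 + 1.0 / shape)
print(mean_survival)
```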
Development of an accurate portable recording peak-flow meter for the diagnosis of asthma.
Hitchings, D J; Dickinson, S A; Miller, M R; Fairfax, A J
1993-05-01
This article describes the systematic design of an electronic recording peak expiratory flow (PEF) meter to provide accurate data for the diagnosis of occupational asthma. Traditional diagnosis of asthma relies on accurate data of PEF tests performed by the patients in their own homes and places of work. Unfortunately there are high error rates in data produced and recorded by the patient, most of these are transcription errors and some patients falsify their records. The PEF measurement itself is not effort independent, the data produced depending on the way in which the patient performs the test. Patients are taught how to perform the test giving maximal effort to the expiration being measured. If the measurement is performed incorrectly then errors will occur. Accurate data can be produced if an electronically recording PEF instrument is developed, thus freeing the patient from the task of recording the test data. It should also be capable of determining whether the PEF measurement has been correctly performed. A requirement specification for a recording PEF meter was produced. A commercially available electronic PEF meter was modified to provide the functions required for accurate serial recording of the measurements produced by the patients. This is now being used in three hospitals in the West Midlands for investigations into the diagnosis of occupational asthma. In investigating current methods of measuring PEF and other pulmonary quantities a greater understanding was obtained of the limitations of current methods of measurement, and quantities being measured.(ABSTRACT TRUNCATED AT 250 WORDS)
Cell-accurate optical mapping across the entire developing heart.
Weber, Michael; Scherf, Nico; Meyer, Alexander M; Panáková, Daniela; Kohl, Peter; Huisken, Jan
2017-12-29
Organogenesis depends on orchestrated interactions between individual cells and morphogenetically relevant cues at the tissue level. This is true for the heart, whose function critically relies on well-ordered communication between neighboring cells, which is established and fine-tuned during embryonic development. For an integrated understanding of the development of structure and function, we need to move from isolated snap-shot observations of either microscopic or macroscopic parameters to simultaneous and, ideally, continuous cell-to-organ scale imaging. We introduce cell-accurate three-dimensional Ca2+-mapping of all cells in the entire electro-mechanically uncoupled heart during the looping stage of live embryonic zebrafish, using high-speed light sheet microscopy and tailored image processing and analysis. We show how myocardial region-specific heterogeneity in cell function emerges during early development and how structural patterning goes hand-in-hand with functional maturation of the entire heart. Our method opens the way to systematic, scale-bridging, in vivo studies of vertebrate organogenesis by cell-accurate structure-function mapping across entire organs.
Yang, Xuming; Ye, Yijun; Xia, Yong; Wei, Xuanzhong; Wang, Zheyu; Ni, Hongmei; Zhu, Ying; Xu, Lingyu
2015-02-01
The aim was to develop a more precise and accurate method of locating acupoints, and to identify a procedure for verifying whether an acupoint has been correctly located. On the face, we collected acupoint locations from different acupuncture experts and obtained the most precise and accurate acupoint location values using a consistency-based information fusion algorithm, within a virtual simulation of a facial orientation coordinate system. Because each expert's original data contained inconsistencies, systematic errors had to be accounted for in the weight calculation. First, we corrected each expert's systematic acupoint-location error, obtaining a rational quantification of the degree to which the experts' acupoint locations consistently support one another and yielding pointwise variable-precision fusion results, so that each expert's acupoint-location fusion error is reduced to pointwise variable precision. This makes more effective use of the measured characteristics of the different experts' acupoint locations and improves both the utilization efficiency of the measurement information and the precision and accuracy of acupoint location. By applying the consistency-matrix pointwise fusion method to the experts' acupoint location values, each expert's location information could be calculated and the most precise and accurate values of each expert's acupoint location obtained.
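The fusion step described above amounts to removing each expert's systematic offset and then weighting experts by how consistent they are with the others. A minimal sketch of one such scheme (bias-corrected, inverse-variance-weighted fusion, with hypothetical array shapes), rather than the paper's exact consistency-matrix formulation:

```python
import numpy as np

def fuse_acupoint_locations(expert_coords):
    """expert_coords: array (n_experts, n_points, 2) of (x, y) locations in a
    common facial coordinate system. Returns fused (n_points, 2) estimates."""
    consensus = expert_coords.mean(axis=0)                 # provisional reference
    bias = (expert_coords - consensus).mean(axis=1, keepdims=True)
    corrected = expert_coords - bias                       # remove systematic offsets
    # weight each expert by consistency (inverse residual variance)
    resid_var = ((corrected - consensus) ** 2).mean(axis=(1, 2))
    w = 1.0 / (resid_var + 1e-12)
    w /= w.sum()
    return np.tensordot(w, corrected, axes=(0, 0))         # weighted fusion
```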
Using structure to explore the sequence alignment space of remote homologs.
Kuziemko, Andrew; Honig, Barry; Petrey, Donald
2011-10-01
Protein structure modeling by homology requires an accurate sequence alignment between the query protein and its structural template. However, sequence alignment methods based on dynamic programming (DP) are typically unable to generate accurate alignments for remote sequence homologs, thus limiting the applicability of modeling methods. A central problem is that the alignment that is "optimal" in terms of the DP score does not necessarily correspond to the alignment that produces the most accurate structural model. That is, the correct alignment based on structural superposition will generally have a lower score than the optimal alignment obtained from sequence. Variations of the DP algorithm have been developed that generate alternative alignments that are "suboptimal" in terms of the DP score, but these still encounter difficulties in detecting the correct structural alignment. We present here a new alternative sequence alignment method that relies heavily on the structure of the template. By initially aligning the query sequence to individual fragments in secondary structure elements and combining high-scoring fragments that pass basic tests for "modelability", we can generate accurate alignments within a small ensemble. Our results suggest that the set of sequences that can currently be modeled by homology can be greatly extended.
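For context, the DP-optimal alignment the abstract refers to is the one maximizing a dynamic-programming score, as in the classic Needleman-Wunsch recurrence; a baseline sketch with hypothetical scoring parameters (not the authors' fragment-based method):

```python
import numpy as np

def needleman_wunsch(a, b, match=2, mismatch=-1, gap=-2):
    """Baseline global alignment DP; returns the optimal DP score."""
    n, m = len(a), len(b)
    F = np.zeros((n + 1, m + 1))
    F[:, 0] = gap * np.arange(n + 1)
    F[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i, j] = max(F[i - 1, j - 1] + s,   # align a[i-1] with b[j-1]
                          F[i - 1, j] + gap,     # gap in b
                          F[i, j - 1] + gap)     # gap in a
    return F[n, m]
```

The paper's point is precisely that the argmax of this score need not coincide with the structurally correct alignment for remote homologs.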
Accuracy of quadrat sampling in studying forest reproduction on cut-over areas
I. T. Haig
1929-01-01
The quadrat method, first introduced into ecological studies by Pound and Clements in 1898, has been adopted by both foresters and ecologists as one of the most accurate means of studying the occurrence, distribution, and development of vegetation (Clements, '05; Weaver, '18). This method is unquestionably more precise than the descriptive method which it...
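As a small illustration of why quadrats support precise estimates, a per-hectare density and its standard error follow directly from the counts; a minimal sketch with hypothetical quadrat data:

```python
import numpy as np

def quadrat_density(counts, quadrat_area_m2):
    """Seedling density (per hectare) and its standard error from quadrat counts."""
    counts = np.asarray(counts, dtype=float)
    per_ha = counts / quadrat_area_m2 * 10_000.0
    return per_ha.mean(), per_ha.std(ddof=1) / np.sqrt(len(counts))

# e.g. counts from ten 4-square-meter quadrats on a cut-over area
mean_density, se = quadrat_density([3, 0, 5, 2, 1, 4, 0, 2, 3, 1], 4.0)
```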
Statistical methods for analysing responses of wildlife to human disturbance.
Haiganoush K. Preisler; Alan A. Ager; Michael J. Wisdom
2006-01-01
Off-road recreation is increasing rapidly in many areas of the world, and effects on wildlife can be highly detrimental. Consequently, we have developed methods for studying wildlife responses to off-road recreation with the use of new technologies that allow frequent and accurate monitoring of human-wildlife interactions. To illustrate these methods, we studied the...
Mukherjee, Ramtanu; Ghosh, Sanchita; Gupta, Bharat; Chakravarty, Tapas
2018-01-22
The effectiveness of any remote healthcare monitoring system depends on how accurate, patient-friendly, versatile, and cost-effective its measurements are. There has long been a huge demand for a long-term, noninvasive, remote blood pressure (BP) measurement system that could be used worldwide in the remote healthcare industry. Thus, noninvasive continuous BP measurement and remote monitoring have become an emerging area in the remote healthcare industry. Photoplethysmography-based (PPG) BP measurement is a continuous, unobtrusive, patient-friendly, and cost-effective solution. However, BP measurements from PPG sensors are not very reliable or accurate, owing to major limitations such as pressure disturbance, motion artifacts, and variations in human skin tone. A novel reflective PPG sensor has been developed to eliminate the abovementioned pressure disturbance and motion artifacts during BP measurement. Considering the variations of human skin tone across demography, a novel algorithm has been developed to make the BP measurement accurate and reliable. The training dataset comprised 186 subjects' data and the trial dataset another new 102 subjects' data. The overall accuracy achieved by the proposed method is nearly 98%, demonstrating its efficacy. The developed BP monitoring system is accurate, reliable, cost-effective, handy, and user-friendly. It is also expected that this system will be useful for monitoring the BP of infants, elderly people, and patients with wounds or burn injuries or in the intensive care unit environment.
Accurate modelling of unsteady flows in collapsible tubes.
Marchandise, Emilie; Flaud, Patrice
2010-01-01
The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers in understanding physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended to the flow in collapsible tubes such as veins. The main difference from cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone if no limiting procedure is introduced. We show that our second-order RK-DG method equipped with an approximate Roe's Riemann solver and a slope-limiting procedure allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is more accurately modelled than with traditional methods such as finite difference or finite volume methods. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when they are available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and a pathological subject. We compare our results with experimental simulations and discuss the sensitivity of our model to its parameters.
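The slope-limiting procedure mentioned above is what keeps the scheme monotone across elastic jumps. One standard choice is the minmod limiter sketched below (a generic limiter applied to per-cell slopes, not necessarily the authors' exact procedure):

```python
import numpy as np

def minmod(a, b, c):
    """Classic minmod: returns the smallest-magnitude argument when all three
    slopes share a sign, otherwise zero (flattens the slope near a jump)."""
    s = np.sign(a)
    agree = (s == np.sign(b)) & (s == np.sign(c))
    mags = np.minimum(np.abs(a), np.minimum(np.abs(b), np.abs(c)))
    return np.where(agree, s * mags, 0.0)

def limit_slopes(u_bar, slopes, dx):
    """Limit per-cell slopes against neighboring cell-average differences."""
    fwd = np.diff(u_bar, append=u_bar[-1]) / dx    # forward differences
    bwd = np.diff(u_bar, prepend=u_bar[0]) / dx    # backward differences
    return minmod(slopes, fwd, bwd)
```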
NASA Astrophysics Data System (ADS)
Sadeghifar, Hamidreza
2015-10-01
Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass transfer and hydraulic relations correlated to laboratory data. Therefore, these methods may not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method was developed for the efficiency estimation of distillation columns equipped with trays. This method can calculate efficiency as well as mass and heat transfer coefficients without using any empirical mass transfer or hydraulic correlations and without the need to estimate operational or hydraulic parameters of the column. For example, the method does not need to estimate the tray interfacial area, which may be its most important advantage over all the available methods. The method can be used for the efficiency prediction of any tray in a distillation column. For the efficiency calculation, the method employs the column data and uses the true rates of the mass and heat transfer occurring inside the operating column. It must be emphasized that estimating the efficiency of an operating column is to be distinguished from that of a column being designed.
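For reference, the tray efficiency such methods report is usually the Murphree vapor-phase efficiency computed from measured compositions; a minimal sketch of that standard definition (not the paper's full rate-based procedure):

```python
def murphree_vapor_efficiency(y_in, y_out, y_eq):
    """Murphree vapor-phase tray efficiency:
        E_MV = (y_out - y_in) / (y* - y_in)
    with y_in/y_out the vapor mole fractions entering/leaving the tray and
    y* the vapor composition in equilibrium with the liquid leaving it."""
    return (y_out - y_in) / (y_eq - y_in)

# e.g. vapor enriched from 0.40 to 0.55 against an equilibrium value of 0.62
E_MV = murphree_vapor_efficiency(0.40, 0.55, 0.62)   # ~0.68
```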
Monte Carlo method for photon heating using temperature-dependent optical properties.
Slade, Adam Broadbent; Aguilar, Guillermo
2015-02-01
The Monte Carlo method for photon transport is often used to predict the volumetric heating that an optical source will induce inside a tissue or material. This method traditionally relies on constant (with respect to temperature) optical properties, specifically the coefficients of scattering and absorption. In reality, optical coefficients are typically temperature-dependent, leading to error in simulation results. The purpose of this study is to develop a method that can incorporate variable properties and accurately simulate systems where the temperature varies greatly, such as in the case of laser-thawing of frozen tissues. A numerical simulation was developed that utilizes the Monte Carlo method for photon transport to simulate the thermal response of a system with temperature-dependent optical and thermal properties. This was done by combining traditional Monte Carlo photon transport with a heat transfer simulation to provide a feedback loop that selects local properties based on current temperatures, for each moment in time. Additionally, photon steps are segmented to accurately obtain path lengths within a homogeneous (but not isothermal) material. Validation of the simulation was done using comparisons to established Monte Carlo simulations with constant properties, and a comparison to the Beer-Lambert law for temperature-variable properties. The simulation is able to accurately predict the thermal response of a system whose properties vary with temperature. The difference in results between the variable-property and constant-property methods for the representative system of laser-heated silicon can become larger than 100 K. This simulation returns more accurate results of optical irradiation absorption in a material which undergoes a large change in temperature. This increased accuracy in simulated results leads to better thermal predictions in living tissues and can provide enhanced planning and improved experimental and procedural outcomes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
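The essence of the feedback loop is that each photon step is sampled against the optical properties at the current local temperature, and the deposited energy then updates that temperature. A deliberately simplified, absorption-only 1D sketch of the coupling, with all parameters hypothetical (no scattering, no heat diffusion):

```python
import numpy as np

def laser_heating_1d(mu_a_of_T, T0=260.0, nz=100, dz=1e-4, n_photons=5000,
                     n_steps=10, pulse_energy=1.0, rho_c=4e6, area=1e-5):
    """Toy absorption-only Monte Carlo with a temperature feedback loop:
    photons march down a 1D grid; each cell's absorption probability is taken
    from the *current* local temperature, and absorbed energy heats the cell."""
    T = np.full(nz, T0)                                       # K
    for _ in range(n_steps):
        absorbed = np.zeros(nz)
        for _ in range(n_photons):
            for z in range(nz):
                p_abs = 1.0 - np.exp(-mu_a_of_T(T[z]) * dz)   # local property
                if np.random.rand() < p_abs:
                    absorbed[z] += 1.0
                    break
        dE = absorbed / n_photons * pulse_energy              # J deposited per cell
        T += dE / (rho_c * area * dz)                         # heat update, no diffusion
    return T

# example: absorption doubling once frozen material (T < 273 K) has thawed
T_final = laser_heating_1d(lambda T: 2000.0 if T < 273.0 else 4000.0)
```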
An adaptive discontinuous Galerkin solver for aerodynamic flows
NASA Astrophysics Data System (ADS)
Burgess, Nicholas K.
This work considers the accuracy, efficiency, and robustness of an unstructured high-order accurate discontinuous Galerkin (DG) solver for computational fluid dynamics (CFD). Recently, there has been a drive to reduce the discretization error of CFD simulations using high-order methods on unstructured grids. However, high-order methods are often criticized for lacking robustness and having high computational cost. The goal of this work is to investigate methods that enhance the robustness of high-order discontinuous Galerkin (DG) methods on unstructured meshes, while maintaining low computational cost and high accuracy of the numerical solutions. This work investigates robustness enhancement of high-order methods by examining effective non-linear solvers, shock capturing methods, turbulence model discretizations and adaptive refinement techniques. The goal is to develop an all-encompassing solver that can simulate a large range of physical phenomena, where all aspects of the solver work together to achieve a robust, efficient and accurate solution strategy. The components and framework for a robust high-order accurate solver that is capable of solving viscous, Reynolds Averaged Navier-Stokes (RANS) and shocked flows is presented. In particular, this work discusses robust discretizations of the turbulence model equation used to close the RANS equations, as well as stable shock capturing strategies that are applicable across a wide range of discretization orders and applicable to very strong shock waves. Furthermore, refinement techniques are considered as both efficiency and robustness enhancement strategies. Additionally, efficient non-linear solvers based on multigrid and Krylov subspace methods are presented. The accuracy, efficiency, and robustness of the solver is demonstrated using a variety of challenging aerodynamic test problems, which include turbulent high-lift and viscous hypersonic flows. Adaptive mesh refinement was found to play a critical role in obtaining a robust and efficient high-order accurate flow solver. A goal-oriented error estimation technique has been developed to estimate the discretization error of simulation outputs. For high-order discretizations, it is shown that functional output error super-convergence can be obtained, provided the discretization satisfies a property known as dual consistency. The dual consistency of the DG methods developed in this work is shown via mathematical analysis and numerical experimentation. Goal-oriented error estimation is also used to drive an hp-adaptive mesh refinement strategy, where a combination of mesh or h-refinement, and order or p-enrichment, is employed based on the smoothness of the solution. The results demonstrate that the combination of goal-oriented error estimation and hp-adaptation yields superior accuracy, as well as enhanced robustness and efficiency for a variety of aerodynamic flows including flows with strong shock waves. This work demonstrates that DG discretizations can be the basis of an accurate, efficient, and robust CFD solver. Furthermore, enhancing the robustness of DG methods does not adversely impact the accuracy or efficiency of the solver for challenging and complex flow problems. In particular, when considering the computation of shocked flows, this work demonstrates that the available shock capturing techniques are sufficiently accurate and robust, particularly when used in conjunction with adaptive mesh refinement.
This work also demonstrates that robust solutions of the Reynolds Averaged Navier-Stokes (RANS) and turbulence model equations can be obtained for complex and challenging aerodynamic flows. In this context, the most robust strategy was determined to be a low-order turbulence model discretization coupled to a high-order discretization of the RANS equations. Although RANS solutions using high-order accurate discretizations of the turbulence model were obtained, the behavior of current-day RANS turbulence models discretized to high-order was found to be problematic, leading to solver robustness issues. This suggests that future work is warranted in the area of turbulence model formulation for use with high-order discretizations. Alternately, the use of Large-Eddy Simulation (LES) subgrid scale models with high-order DG methods offers the potential to leverage the high accuracy of these methods for very high fidelity turbulent simulations. This thesis has developed the algorithmic improvements that will lay the foundation for the development of a three-dimensional high-order flow solution strategy that can be used as the basis for future LES simulations.
Quantifying Error in Survey Measures of School and Classroom Environments
ERIC Educational Resources Information Center
Schweig, Jonathan David
2014-01-01
Developing indicators that reflect important aspects of school and classroom environments has become central in a nationwide effort to develop comprehensive programs that measure teacher quality and effectiveness. Formulating teacher evaluation policy necessitates accurate and reliable methods for measuring these environmental variables. This…
Kleijn, Roelco J.; van Winden, Wouter A.; Ras, Cor; van Gulik, Walter M.; Schipper, Dick; Heijnen, Joseph J.
2006-01-01
In this study we developed a new method for accurately determining the pentose phosphate pathway (PPP) split ratio, an important metabolic parameter in the primary metabolism of a cell. This method is based on simultaneous feeding of unlabeled glucose and trace amounts of [U-13C]gluconate, followed by measurement of the mass isotopomers of the intracellular metabolites surrounding the 6-phosphogluconate node. The gluconate tracer method was used with a penicillin G-producing chemostat culture of the filamentous fungus Penicillium chrysogenum. For comparison, a 13C-labeling-based metabolic flux analysis (MFA) was performed for glycolysis and the PPP of P. chrysogenum. For the first time mass isotopomer measurements of 13C-labeled primary metabolites are reported for P. chrysogenum and used for a 13C-based MFA. Estimation of the PPP split ratio of P. chrysogenum at a growth rate of 0.02 h−1 yielded comparable values for the gluconate tracer method and the 13C-based MFA method, 51.8% and 51.1%, respectively. A sensitivity analysis of the estimated PPP split ratios showed that the 95% confidence interval was almost threefold smaller for the gluconate tracer method than for the 13C-based MFA method (46.0 to 56.5% and 40.0 to 63.5%, respectively). From these results we concluded that the gluconate tracer method permits accurate determination of the PPP split ratio but provides no information about the remaining cellular metabolism, while the 13C-based MFA method permits estimation of multiple fluxes but provides a less accurate estimate of the PPP split ratio. PMID:16820467
Wahlen, Raimund
2004-04-01
A high-performance liquid chromatography-inductively coupled plasma-mass spectrometry (HPLC-ICP-MS) method has been developed for the fast and accurate analysis of arsenobetaine (AsB) in fish samples extracted by accelerated solvent extraction. The combined extraction and analysis approach is validated using certified reference materials for AsB in fish and during a European intercomparison exercise with a blind sample. Up to six species of arsenic (As) can be separated and quantitated in the extracts within a 10-min isocratic elution. The method is optimized so as to minimize time-consuming sample preparation steps and allow for automated extraction and analysis of large sample batches. A comparison of standard addition and external calibration shows no significant difference in the results obtained, which indicates that the LC-ICP-MS method is not influenced by severe matrix effects. The extraction procedure can process up to 24 samples in an automated manner, and the robustness of the developed HPLC-ICP-MS approach is highlighted by the capability to run more than 50 injections per sequence, which equates to a total run time of more than 12 h. The method can therefore be used to rapidly and accurately assess the proportion of nontoxic AsB in fish samples with high total As content during toxicological screening studies.
Development of a PCR Diagnostic System for Iris yellow spot tospovirus in Quarantine
Shin, Yong-Gil; Rho, Jae-Young
2014-01-01
Iris yellow spot virus (IYSV) is a plant pathogenic virus which has been reported to occur continuously in onion bulbs, allium field crops, seed crops, lisianthus, and irises. In South Korea, IYSV is a "controlled" virus that has not been reported, and inspection is performed when crops of the genus Iris are imported into South Korea. In this study, reverse-transcription polymerase chain reaction (RT-PCR) and nested PCR inspection methods that can detect IYSV in imported crops of the genus Iris at quarantine sites were developed. In addition, a modified plasmid that can be used as a positive control during inspection was developed. This modified plasmid can facilitate a more accurate inspection by enabling the detection of laboratory contamination in an inspection system. The inspection methods developed in this study are expected to contribute, through the prompt and accurate inspection of IYSV at quarantine sites, to plant quarantine in South Korea. PMID:25506310
G3X-K theory: A composite theoretical method for thermochemical kinetics
NASA Astrophysics Data System (ADS)
da Silva, Gabriel
2013-02-01
A composite theoretical method for accurate thermochemical kinetics, G3X-K, is described. This method is accurate to around 0.5 kcal mol-1 for barrier heights and 0.8 kcal mol-1 for enthalpies of formation. G3X-K is a modification of G3SX theory using the M06-2X density functional for structures and zero-point energies and parameterized for a test set of 223 heats of formation and 23 barrier heights. A reduced perturbation-order variant, G3X(MP3)-K, is also developed, providing around 0.7 kcal mol-1 accuracy for barrier heights and 0.9 kcal mol-1 accuracy for enthalpies, at reduced computational cost. Some opportunities to further improve Gn composite methods are identified and briefly discussed.
Can the electronegativity equalization method predict spectroscopic properties?
Verstraelen, T; Bultinck, P
2015-02-05
The electronegativity equalization method is classically used as a method allowing the fast generation of atomic charges using a set of calibrated parameters and provided knowledge of the molecular structure. Recently, it has started being used for the calculation of other reactivity descriptors and for the development of polarizable and reactive force fields. For such applications, it is of interest to know whether the method, through the inclusion of the molecular geometry in the Taylor expansion of the energy, would also allow sufficiently accurate predictions of spectroscopic data. In this work, relevant quantities for IR spectroscopy are considered, namely the dipole derivatives and the Cartesian Hessian. Despite careful calibration of parameters for this specific task, it is shown that the current models yield insufficiently accurate results. Copyright © 2013 Elsevier B.V. All rights reserved.
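The classical EEM calculation that the abstract builds on reduces to a single linear solve: equalize the effective electronegativities subject to total-charge conservation. A minimal sketch in atomic units, where the chi and eta parameters would come from a calibrated set (the values below are hypothetical):

```python
import numpy as np

def eem_charges(chi, eta, coords, total_charge=0.0):
    """Solve the EEM conditions  chi_i + 2*eta_i*q_i + sum_{j!=i} q_j/r_ij = chi_bar
    together with sum_i q_i = Q as one linear system for (q, chi_bar)."""
    chi, eta = np.asarray(chi, float), np.asarray(eta, float)
    n = len(chi)
    r = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = 1.0 / np.where(r > 0, r, np.inf)   # Coulomb terms; diagonal -> 0
    A[np.arange(n), np.arange(n)] = 2.0 * eta      # hardness diagonal
    A[:n, n] = -1.0                                # -chi_bar column
    A[n, :n] = 1.0                                 # charge-conservation row
    b = np.concatenate([-chi, [total_charge]])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n]                         # charges, equalized electronegativity

# toy usage with hypothetical parameters for a 3-atom fragment
q, chi_bar = eem_charges(chi=[5.0, 7.5, 7.5], eta=[6.0, 9.0, 9.0],
                         coords=np.array([[0.0, 0, 0], [1.8, 0, 0], [-1.8, 0, 0]]))
```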
Evaluation and comparison of predictive individual-level general surrogates.
Gabriel, Erin E; Sachs, Michael C; Halloran, M Elizabeth
2018-07-01
An intermediate response measure that accurately predicts efficacy in a new setting at the individual level could be used both for prediction and personalized medical decisions. In this article, we define a predictive individual-level general surrogate (PIGS), which is an individual-level intermediate response that can be used to accurately predict individual efficacy in a new setting. While methods for evaluating trial-level general surrogates, which are predictors of trial-level efficacy, have been developed previously, few, if any, methods have been developed to evaluate individual-level general surrogates, and no methods have formalized the use of cross-validation to quantify the expected prediction error. Our proposed method uses existing methods of individual-level surrogate evaluation within a given clinical trial setting in combination with cross-validation over a set of clinical trials to evaluate surrogate quality and to estimate the absolute prediction error that is expected in a new trial setting when using a PIGS. Simulations show that our method performs well across a variety of scenarios. We use our method to evaluate and to compare candidate individual-level general surrogates over a set of multi-national trials of a pentavalent rotavirus vaccine.
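The cross-validation idea can be made concrete as a leave-one-trial-out loop: fit the surrogate-to-efficacy relationship on all but one trial, predict the held-out trial, and average the absolute errors. A schematic sketch with hypothetical fit/predict callables (the paper's actual surrogate-evaluation models are not reproduced):

```python
import numpy as np

def loto_prediction_error(trials, fit, predict):
    """Leave-one-trial-out CV of a candidate surrogate.
    trials: list of (surrogate_values, efficacy) pairs, one per trial;
    fit(train_trials) -> model; predict(model, surrogate_values) -> efficacy."""
    errors = []
    for k in range(len(trials)):
        train = trials[:k] + trials[k + 1:]
        model = fit(train)
        s_k, eff_k = trials[k]
        errors.append(abs(predict(model, s_k) - eff_k))
    # expected absolute prediction error in a new trial setting
    return float(np.mean(errors))
```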
NASA Astrophysics Data System (ADS)
Zhang, Weiai; Ma, Caijuan; Su, Zhengquan; Bai, Yan
2016-11-01
This paper describes a highly sensitive and accurate approach using aniline blue (AB) (water soluble) as a probe to determine chitosan (CTS) through Resonance Rayleigh scattering (RRS). Under optimum experimental conditions, the intensities of RRS were linearly proportional to the concentration of CTS in the range from 0.01 to 3.5 μg/mL, and the limit of detection (LOD) was 6.94 ng/mL. Therefore, a new and highly sensitive method based on RRS for the determination of CTS has been developed. Furthermore, the effect of molecular weight of CTS and the effect of the degree of deacetylation of CTS on the accurate quantification of CTS was studied. The experimental data was analyzed by linear regression analysis, which indicated that the molecular weight and the degree of deacetylation of CTS had no statistical significance and this method could be used to determine CTS accurately. Meanwhile, this assay was applied for CTS determination in health products with satisfactory results.
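The figures of merit quoted here (linear range, LOD) follow from the calibration line; a minimal sketch using hypothetical calibration data and the common 3σ/slope definition of the LOD:

```python
import numpy as np

# hypothetical RRS calibration: CTS concentration (ug/mL) vs scattering intensity
conc = np.array([0.01, 0.5, 1.0, 2.0, 3.5])
intensity = np.array([1.2, 55.0, 110.0, 221.0, 385.0])

slope, intercept = np.polyfit(conc, intensity, 1)   # linear calibration
sd_blank = 0.8                                      # SD of blank signal (assumed)
lod = 3.0 * sd_blank / slope                        # 3-sigma LOD definition
sample_conc = (150.0 - intercept) / slope           # invert calibration for a sample
```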
Dury, Alain Y; Ke, Yuyong; Labrie, Fernand
2016-09-01
A series of steroids present in the brain have been named "neurosteroids" following the possibility of their role in the central nervous system impairments such as anxiety disorders, depression, premenstrual dysphoric disorder (PMDD), addiction, or even neurodegenerative disorders such as Alzheimer's and Parkinson's diseases. Study of their potential role requires a sensitive and accurate assay of their concentration in the monkey brain, the closest model to the human. We have thus developed a robust, precise and accurate liquid chromatography-tandem mass spectrometry method for the assay of pregnenolone, pregnanolone, epipregnanolone, allopregnanolone, epiallopregnanolone, and androsterone in the cynomolgus monkey brain. The extraction method includes a thorough sample cleanup using protein precipitation and phospholipid removal, followed by hexane liquid-liquid extraction and a Girard T ketone-specific derivatization. This method opens the possibility of investigating the potential implication of these six steroids in the most suitable animal model for neurosteroid-related research. Copyright © 2016 Elsevier Inc. All rights reserved.
Some recent developments of the immersed interface method for flow simulation
NASA Astrophysics Data System (ADS)
Xu, Sheng
2017-11-01
The immersed interface method is a general methodology for solving PDEs subject to interfaces. In this talk, I will give an overview of some recent developments of the method toward the enhancement of its robustness for flow simulation. In particular, I will present with numerical results how to capture boundary conditions on immersed rigid objects, how to adopt interface triangulation in the method, and how to parallelize the method for flow with moving objects. With these developments, the immersed interface method can achieve accurate and efficient simulation of a flow involving multiple moving complex objects. Thanks to NSF for the support of this work under Grant NSF DMS 1320317.
NASA Astrophysics Data System (ADS)
Du, Zhifang; Li, Jiequan
2018-02-01
This paper develops a new fifth order accurate Hermite WENO (HWENO) reconstruction method for hyperbolic conservation laws in the framework of the two-stage fourth order accurate temporal discretization in Li and Du (2016) [13]. Instead of additionally computing the first moment of the solution as in the conventional HWENO or DG approach, we can directly take the interface values, which are already available in the numerical flux construction using the generalized Riemann problem (GRP) solver, to approximate the first moment. The resulting scheme is fourth order temporally accurate by invoking the HWENO reconstruction only twice, making it more compact. Numerical experiments show that such compactness has a significant impact on the resolution of nonlinear waves.
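For orientation, the non-Hermite building block that HWENO schemes extend is the fifth-order WENO reconstruction of an interface value from five cell averages; a minimal sketch of the standard WENO-JS version (not the paper's HWENO, which also carries derivative information):

```python
import numpy as np

def weno5_reconstruct(v, eps=1e-6):
    """Conventional fifth-order WENO-JS reconstruction of the left-biased
    interface value v_{i+1/2} from cell averages v = [v_{i-2}, ..., v_{i+2}]."""
    vm2, vm1, v0, vp1, vp2 = v
    # candidate third-order stencil reconstructions
    p0 = (2*vm2 - 7*vm1 + 11*v0) / 6.0
    p1 = (-vm1 + 5*v0 + 2*vp1) / 6.0
    p2 = (2*v0 + 5*vp1 - vp2) / 6.0
    # Jiang-Shu smoothness indicators
    b0 = 13/12*(vm2 - 2*vm1 + v0)**2 + 0.25*(vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12*(vm1 - 2*v0 + vp1)**2 + 0.25*(vm1 - vp1)**2
    b2 = 13/12*(v0 - 2*vp1 + vp2)**2 + 0.25*(3*v0 - 4*vp1 + vp2)**2
    # nonlinear weights built from the linear weights (0.1, 0.6, 0.3)
    a = np.array([0.1, 0.6, 0.3]) / (eps + np.array([b0, b1, b2]))**2
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2
```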
Measuring Distances Using Digital Cameras
ERIC Educational Resources Information Center
Kendal, Dave
2007-01-01
This paper presents a generic method of calculating accurate horizontal and vertical object distances from digital images taken with any digital camera and lens combination, where the object plane is parallel to the image plane or tilted in the vertical plane. This method was developed for a project investigating the size, density and spatial…
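The geometry behind such a method, in the parallel-plane case, is similar triangles through the pinhole model; a minimal sketch with hypothetical camera parameters:

```python
def object_distance_m(real_height_m, image_height_px,
                      focal_length_mm, sensor_height_mm, sensor_height_px):
    """Pinhole model, object plane parallel to image plane:
    distance / real_height = focal_length / image_height (similar triangles)."""
    image_height_mm = image_height_px * sensor_height_mm / sensor_height_px
    return real_height_m * focal_length_mm / image_height_mm

# e.g. a 1.8 m object spanning 300 px on a 24 mm-high, 4000 px sensor at f = 50 mm
d = object_distance_m(1.8, 300, 50.0, 24.0, 4000)   # ~50 m
```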
Wheat mill stream properties for discrete element method modeling
USDA-ARS?s Scientific Manuscript database
A discrete phase approach based on individual wheat kernel characteristics is needed to overcome the limitations of previous statistical models and accurately predict the milling behavior of wheat. As a first step to develop a discrete element method (DEM) model for the wheat milling process, this s...
NASA Astrophysics Data System (ADS)
Ashour, Safwan; Bayram, Roula
2015-04-01
New, accurate, sensitive and reliable kinetic spectrophotometric method for the assay of moxifloxacin hydrochloride (MOXF) in pure form and pharmaceutical formulations has been developed. The method involves the oxidative coupling reaction of MOXF with 3-methyl-2-benzothiazolinone hydrazone hydrochloride monohydrate (MBTH) in the presence of Ce(IV) in an acidic medium to form colored product with λmax at 623 and 660 nm. The reaction is followed spectrophotometrically by measuring the increase in absorbance at 623 nm as a function of time. The initial rate and fixed time methods were adopted for constructing the calibration curves. The linearity range was found to be 1.89-40.0 μg mL^-1 for initial rate and fixed time methods. The limit of detection for initial rate and fixed time methods is 0.644 and 0.043 μg mL^-1, respectively. Molar absorptivity for the method was found to be 0.89 × 10^4 L mol^-1 cm^-1. Statistical treatment of the experimental results indicates that the methods are precise and accurate. The proposed method has been applied successfully for the estimation of moxifloxacin hydrochloride in tablet dosage form with no interference from the excipients. The results are compared with the official method.
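Both calibration approaches reduce to simple line fits; a minimal sketch of the initial-rate variant with hypothetical standards data:

```python
import numpy as np

def initial_rate(t, absorbance, n_points=5):
    """Initial-rate method: slope of the early, linear part of A(t) at 623 nm."""
    return np.polyfit(t[:n_points], absorbance[:n_points], 1)[0]

# hypothetical calibration: initial rates measured for MOXF standards
std_conc = np.array([1.89, 10.0, 20.0, 40.0])      # ug/mL
std_rate = np.array([0.004, 0.021, 0.042, 0.083])  # absorbance units per s (assumed)
m, c = np.polyfit(std_conc, std_rate, 1)           # rate vs concentration line
unknown_conc = (0.050 - c) / m                     # invert for a sample rate of 0.050
```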
Development of higher-order modal methods for transient thermal and structural analysis
NASA Technical Reports Server (NTRS)
Camarda, Charles J.; Haftka, Raphael T.
1989-01-01
A force-derivative method which produces higher-order modal solutions to transient problems is evaluated. These higher-order solutions converge to an accurate response using fewer degrees-of-freedom (eigenmodes) than lower-order methods such as the mode-displacement or mode-acceleration methods. Results are presented for non-proportionally damped structural problems as well as thermal problems modeled by finite elements.
NASA Technical Reports Server (NTRS)
Rosenfeld, Moshe
1990-01-01
The main goals are the development, validation, and application of a fractional step solution method of the time-dependent incompressible Navier-Stokes equations in generalized coordinate systems. A solution method that combines a finite volume discretization with a novel choice of the dependent variables and a fractional step splitting to obtain accurate solutions in arbitrary geometries is extended to include more general situations, including cases with moving grids. The numerical techniques are enhanced to gain efficiency and generality.
A new flux-conserving numerical scheme for the steady, incompressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Scott, James R.
1994-01-01
This paper is concerned with the continued development of a new numerical method, the space-time solution element (STS) method, for solving conservation laws. The present work focuses on the two-dimensional, steady, incompressible Navier-Stokes equations. Using first an integral approach, and then a differential approach, the discrete flux conservation equations presented in a recent paper are rederived. Here a simpler method for determining the flux expressions at cell interfaces is given; a systematic and rigorous derivation of the conditions used to simulate the differential form of the governing conservation law(s) is provided; necessary and sufficient conditions for a discrete approximation to satisfy a conservation law in E2 are derived; and an estimate of the local truncation error is given. A specific scheme is then constructed for the solution of the thin airfoil boundary layer problem. Numerical results are presented which demonstrate the ability of the scheme to accurately resolve the developing boundary layer and wake regions using grids which are much coarser than those employed by other numerical methods. It is shown that ten cells in the cross-stream direction are sufficient to accurately resolve the developing airfoil boundary layer.
Development of Star Tracker System for Accurate Estimation of Spacecraft Attitude
2009-12-01
For a high-cost spacecraft with accurate pointing requirements, the use of a star tracker is the preferred method for attitude determination. The...solutions, however there are certain costs with using this algorithm. There are significantly more features a triangle can provide when compared to an...to the other. The non-rotating geocentric equatorial frame provides an inertial frame for the two-body problem of a satellite in orbit. In this
Wang, Shunhai; Bobst, Cedric E; Kaltashov, Igor A
2015-01-01
Transferrin (Tf) is an 80 kDa iron-binding protein that is viewed as a promising drug carrier to target the central nervous system as a result of its ability to penetrate the blood-brain barrier. Among the many challenges during the development of Tf-based therapeutics, the sensitive and accurate quantitation of the administered Tf in cerebrospinal fluid (CSF) remains particularly difficult because of the presence of abundant endogenous Tf. Herein, we describe the development of a new liquid chromatography-mass spectrometry-based method for the sensitive and accurate quantitation of exogenous recombinant human Tf in rat CSF. By taking advantage of a His-tag present in recombinant Tf and applying Ni affinity purification, the exogenous human serum Tf can be greatly enriched from rat CSF, despite the presence of the abundant endogenous protein. Additionally, we applied a newly developed (18)O-labeling technique that can generate internal standards at the protein level, which greatly improved the accuracy and robustness of quantitation. The developed method was investigated for linearity, accuracy, precision, and lower limit of quantitation, all of which met the commonly accepted criteria for bioanalytical method validation.
NASA Astrophysics Data System (ADS)
Danala, Gopichandh; Wang, Yunzhi; Thai, Theresa; Gunderson, Camille C.; Moxley, Katherine M.; Moore, Kathleen; Mannel, Robert S.; Cheng, Samuel; Liu, Hong; Zheng, Bin; Qiu, Yuchen
2017-02-01
Accurate tumor segmentation is a critical step in the development of computer-aided detection (CAD) based quantitative image analysis schemes for early-stage prognostic evaluation of ovarian cancer patients. The purpose of this investigation is to assess the efficacy of several different methods for segmenting the metastatic tumors occurring in different organs of ovarian cancer patients. In this study, we developed a segmentation scheme consisting of eight different algorithms, which can be divided into three groups: 1) region growing based methods; 2) Canny operator based methods; and 3) partial differential equation (PDE) based methods. A total of 138 tumors acquired from 30 ovarian cancer patients were used to test the performance of these eight segmentation algorithms. The results demonstrate that each of the tested tumors can be successfully segmented by at least one of the eight algorithms without manual boundary correction. Furthermore, the modified region growing, classical Canny detector, fast marching, and threshold level set algorithms are suggested for the future development of ovarian cancer related CAD schemes. This study may provide a meaningful reference for developing novel quantitative image feature analysis schemes to more accurately predict the response of ovarian cancer patients to chemotherapy at an early stage.
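Of the three groups, the region-growing family is the simplest to sketch: grow from a seed while neighboring pixels stay within a tolerance of the evolving region statistics. A generic 2D illustration, not the authors' tuned implementation:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Intensity-based region growing from a seed pixel: accept 4-connected
    neighbors whose intensity lies within tol of the running region mean."""
    mask = np.zeros(img.shape, dtype=bool)
    mask[seed] = True
    mean, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not mask[ny, nx] and abs(img[ny, nx] - mean) <= tol):
                mask[ny, nx] = True
                mean += (img[ny, nx] - mean) / (count + 1)   # update running mean
                count += 1
                queue.append((ny, nx))
    return mask
```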
CAD-based Automatic Modeling Method for Geant4 geometry model Through MCAM
NASA Astrophysics Data System (ADS)
Wang, Dong; Nie, Fanzhi; Wang, Guozhong; Long, Pengcheng; LV, Zhongliang
2014-06-01
Geant4 is a widely used Monte Carlo transport simulation package. Before a Geant4 calculation can be run, the calculation model must be established, described either in Geometry Description Markup Language (GDML) or in C++. However, it is time-consuming and error-prone to describe the models manually in GDML. Automatic modeling methods have been developed recently, but most existing modeling programs have problems; in particular, some are not accurate or are adapted only to specific CAD formats. To convert CAD models into GDML models accurately, a Computer Aided Design (CAD) based modeling method for Geant4 was developed for automatically converting complex CAD geometry models into GDML geometry models. The essence of this method is the translation between CAD models represented with boundary representation (B-REP) and GDML models represented with constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells, and the corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is assembled through a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics & Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for Geant4 automatic modeling.
An adaptive state of charge estimation approach for lithium-ion series-connected battery system
NASA Astrophysics Data System (ADS)
Peng, Simin; Zhu, Xuelai; Xing, Yinjiao; Shi, Hongbing; Cai, Xu; Pecht, Michael
2018-07-01
Due to the incorrect or unknown noise statistics of a battery system and its cell-to-cell variations, state of charge (SOC) estimation of a lithium-ion series-connected battery system is usually inaccurate or even divergent using model-based methods such as the extended Kalman filter (EKF) and unscented Kalman filter (UKF). To resolve this problem, an adaptive unscented Kalman filter (AUKF) based on a noise statistics estimator and a model parameter regulator is developed to accurately estimate the SOC of a series-connected battery system. An equivalent circuit model is first built, based on the model parameter regulator, that captures the influence of cell-to-cell variation on the battery system. A noise statistics estimator is then used to adaptively obtain the estimated noise statistics for the AUKF when its prior noise statistics are inaccurate or not exactly Gaussian. The accuracy and effectiveness of the SOC estimation method are validated by comparing the developed AUKF and the UKF when the model and measurement noise statistics, respectively, are inaccurate. Compared with the UKF and EKF, the developed method shows the highest SOC estimation accuracy.
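A noise statistics estimator of this kind is often implemented as a recursive, innovation-driven update of the measurement-noise covariance. A minimal sketch of one common simplified (Sage-Husa-style) form, which may differ from the exact estimator used here:

```python
import numpy as np

def adapt_measurement_noise(R, innov, H, P_prior, alpha=0.05):
    """Blend the current measurement-noise covariance R toward the value
    implied by the latest innovation nu, using R ~ E[nu nu^T] - H P^- H^T.
    alpha is a forgetting factor; H is the measurement matrix and P_prior
    the prior state covariance of the filter at this step."""
    nu = np.atleast_1d(innov)
    R_inst = np.outer(nu, nu) - H @ P_prior @ H.T   # instantaneous estimate
    return (1.0 - alpha) * R + alpha * R_inst
```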
Ian T. Schmidt; John F. O' Leary; Douglas A. Stow; Kellie A. Uyeda; Philip Riggan
2016-01-01
Development of methods that more accurately estimate spatial distributions of fuel loads in shrublands allows for improved understanding of ecological processes such as wildfire behavior and postburn recovery. The goal of this study is to develop and test
High Accuracy Human Activity Recognition Based on Sparse Locality Preserving Projections.
Zhu, Xiangbin; Qiu, Huiling
2016-01-01
Human activity recognition (HAR) from temporal streams of sensory data has been applied to many fields, such as healthcare services, intelligent environments and cyber security. However, the classification accuracy of most existing methods is not sufficient for some applications, especially healthcare services. To improve accuracy, it is necessary to develop a novel method that takes full account of the intrinsic sequential characteristics of time-series sensory data. Moreover, each human activity may have correlated feature relationships at different levels. Therefore, in this paper, we propose a three-stage continuous hidden Markov model (TSCHMM) approach to recognize human activities. The proposed method comprises coarse, fine and accurate classification. Feature reduction is an important step in classification processing. In this paper, sparse locality preserving projections (SpLPP) is exploited to determine the optimal feature subsets for accurate classification of the stationary-activity data. It can extract more discriminative activity features from the sensor data than locality preserving projections. Furthermore, all of the gyro-based features are used for accurate classification of the moving data. Compared with other methods, our method uses significantly fewer features, and the overall accuracy is markedly improved.
Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models
Anderson, Ryan; Clegg, Samuel M.; Frydenvang, Jens; Wiens, Roger C.; McLennan, Scott M.; Morris, Richard V.; Ehlmann, Bethany L.; Dyar, M. Darby
2017-01-01
Accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element’s emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple “sub-model” method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then “blending” these “sub-models” into a single final result. Tests of the sub-model method show improvement in test set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is being used as part of the current ChemCam quantitative calibration, but the sub-model method is applicable to any multivariate regression method and may yield similar improvements.
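The blending idea can be sketched with off-the-shelf PLS: train one model per composition range plus a full-range model, then weight sub-model predictions around the full-model estimate. The Gaussian weighting below is one simple choice, not necessarily the ChemCam team's exact blending rule:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def fit_submodels(X, y, ranges, n_components=8):
    """Train a full-range PLS model plus one PLS 'sub-model' per limited
    composition range (ranges = list of (lo, hi) concentration bounds)."""
    full = PLSRegression(n_components=n_components).fit(X, y)
    subs = []
    for lo, hi in ranges:
        m = (y >= lo) & (y < hi)
        subs.append((lo, hi, PLSRegression(n_components=n_components).fit(X[m], y[m])))
    return full, subs

def blended_predict(full, subs, x):
    """Route a new spectrum with the full model, then blend sub-model
    predictions with weights centered on the full-model estimate."""
    x = np.asarray(x).reshape(1, -1)
    y0 = full.predict(x).item()
    preds = np.array([m.predict(x).item() for _, _, m in subs])
    centers = np.array([0.5 * (lo + hi) for lo, hi, _ in subs])
    widths = np.array([hi - lo for lo, hi, _ in subs])
    w = np.exp(-((y0 - centers) / widths) ** 2)   # soft assignment (one simple choice)
    return float(np.average(preds, weights=w))
```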
An Economical Semi-Analytical Orbit Theory for Retarded Satellite Motion About an Oblate Planet
NASA Technical Reports Server (NTRS)
Gordon, R. A.
1980-01-01
Brouwer's and Brouwer-Lyddane's use of the Von Zeipel-Delaunay method is employed to develop an efficient analytical orbit theory suitable for microcomputers. A succinctly simple, pseudo-phenomenologically conceptualized algorithm is introduced which accurately and economically synthesizes the modeling of drag effects. The method epitomizes and manifests effortless, efficient computer mechanization. Simulated trajectory data are employed to illustrate the theory's ability to accurately accommodate oblateness and drag effects for microcomputer ground-based or onboard predicted orbital representation. Real tracking data are used to demonstrate that the theory's orbit determination and orbit prediction capabilities are favorably adaptable to and comparable with results obtained utilizing complex definitive Cowell method solutions on satellites experiencing significant drag effects.
Accurate finite difference methods for time-harmonic wave propagation
NASA Technical Reports Server (NTRS)
Harari, Isaac; Turkel, Eli
1994-01-01
Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
Loomis, E N; Grim, G P; Wilde, C; Wilson, D C; Morgan, G; Wilke, M; Tregillis, I; Merrill, F; Clark, D; Finch, J; Fittinghoff, D; Bower, D
2010-10-01
Development of analysis techniques for neutron imaging at the National Ignition Facility is an important and difficult task for the detailed understanding of high-neutron-yield inertial confinement fusion implosions. Once developed, these methods must provide accurate images of the hot and cold fuels so that information about the implosion, such as symmetry and areal density, can be extracted. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations. In this article we present results of source reconstructions based on simulated images that test the method's effectiveness with regard to pinhole misalignment.
Development of a non-invasive LED based device for adipose tissue thickness measurements in vivo
NASA Astrophysics Data System (ADS)
Volceka, K.; Jakovels, D.; Arina, Z.; Zaharans, J.; Kviesis, E.; Strode, A.; Svampe, E.; Ozolina-Moll, L.; Butnere, M. M.
2012-06-01
There are a number of techniques for body composition assessment in clinics and in field surveys, but in all cases the applied methods have advantages and disadvantages. High-precision imaging methods are available, though expensive and non-portable, while the methods devised for the mass population often suffer from a lack of precision. Therefore, the development of a safe, mobile, non-invasive optical method that would be easy to perform, precise and low-cost, and that would also offer an accurate assessment of subcutaneous adipose tissue (SAT) in both lean and obese persons, is required. In this respect, diffuse optical spectroscopy is advantageous over the aforementioned techniques. A prototype device using an optical method for measurement of SAT thickness in vivo has been developed. The probe contains multiple LEDs (660 nm) distributed at various distances from the photo-detector, which allow different light penetration depths into the subcutaneous tissue. The differences in the reflected light intensities were used to create a non-linear model, and the computed values were compared with the corresponding thicknesses of SAT assessed by B-mode ultrasonography. The results show that with the optical system used in this study, accurate results for different SAT thicknesses can be obtained, and imply further potential for the development of a multispectral optical system to observe changes in SAT thickness as well as to determine the percentage of total body fat.
NASA Astrophysics Data System (ADS)
Goddard, William
2013-03-01
For soft materials applications it is essential to obtain accurate descriptions of the weak (London dispersion, electrostatic) interactions between nonbond units, to include interactions with and stabilization by solvent, and to obtain accurate free energies and entropic changes during chemical, physical, and thermal processing. We will describe some of the advances being made in first-principles based methods for treating soft materials, with applications selected from new organic electrodes and electrolytes for batteries and fuel cells, forward osmosis for water cleanup, extended matter stable at ambient conditions, and drugs for modulating activation of GPCR membrane proteins.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peterson, Charles; Penchoff, Deborah A.; Wilson, Angela K., E-mail: wilson@chemistry.msu.edu
2015-11-21
An effective approach for the determination of lanthanide energetics, as demonstrated by application to the third ionization energy (in the gas phase) for the first half of the lanthanide series, has been developed. This approach uses a combination of highly correlated and fully relativistic ab initio methods to accurately describe the electronic structure of heavy elements. Both scalar and fully relativistic methods are used to achieve an approach that is both computationally feasible and accurate. The impact of basis set choice and the number of electrons included in the correlation space has also been examined.
A novel knowledge-based potential for RNA 3D structure evaluation
NASA Astrophysics Data System (ADS)
Yang, Yi; Gu, Qi; Zhang, Ben-Gong; Shi, Ya-Zhou; Shao, Zhi-Gang
2018-03-01
Ribonucleic acids (RNAs) play a vital role in biology, and knowledge of their three-dimensional (3D) structure is required to understand their biological functions. Recently structural prediction methods have been developed to address this issue, but a series of RNA 3D structures are generally predicted by most existing methods. Therefore, the evaluation of the predicted structures is generally indispensable. Although several methods have been proposed to assess RNA 3D structures, the existing methods are not precise enough. In this work, a new all-atom knowledge-based potential is developed for more accurately evaluating RNA 3D structures. The potential not only includes local and nonlocal interactions but also fully considers the specificity of each RNA by introducing a retraining mechanism. Based on extensive test sets generated from independent methods, the proposed potential correctly distinguished the native state and ranked near-native conformations to effectively select the best. Furthermore, the proposed potential precisely captured RNA structural features such as base-stacking and base-pairing. Comparisons with existing potential methods show that the proposed potential is very reliable and accurate in RNA 3D structure evaluation. Project supported by the National Science Foundation of China (Grants Nos. 11605125, 11105054, 11274124, and 11401448).
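At the core of any knowledge-based potential is an inverse-Boltzmann conversion of observed distance statistics into pairwise energies; a minimal, generic sketch (not the paper's all-atom, retrained potential):

```python
import numpy as np

def distance_potential(native_dists, ref_dists, bins=np.linspace(0, 20, 41), kT=1.0):
    """Inverse-Boltzmann statistical potential E(r) = -kT ln(P_obs(r)/P_ref(r)),
    from distance histograms of observed (native-like) and reference pairs."""
    p_obs, _ = np.histogram(native_dists, bins=bins, density=True)
    p_ref, _ = np.histogram(ref_dists, bins=bins, density=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        e = -kT * np.log(p_obs / p_ref)
    return np.where(np.isfinite(e), e, e[np.isfinite(e)].max())   # cap empty bins

def score_structure(pair_dists, energy, bins=np.linspace(0, 20, 41)):
    """Sum the tabulated pair energies over all atom-pair distances in a model;
    lower totals should correspond to more native-like conformations."""
    idx = np.clip(np.digitize(pair_dists, bins) - 1, 0, len(energy) - 1)
    return float(energy[idx].sum())
```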
A Peroxidase-linked Spectrophotometric Assay for the Detection of Monoamine Oxidase Inhibitors
Zhi, Kangkang; Yang, Zhongduo; Sheng, Jie; Shu, Zongmei; Shi, Yin
2016-01-01
To develop a more accurate spectrophotometric method for detecting monoamine oxidase (MAO) inhibitors in plant extracts, a series of amine substrates was selected, their susceptibility to oxidation by MAO was evaluated by HPLC, and a new substrate was used to develop a peroxidase-linked spectrophotometric assay. 4-(Trifluoromethyl)benzylamine (11) proved to be an excellent substrate for the peroxidase-linked assay, so a new peroxidase-linked spectrophotometric assay was set up. The principle of the method is that MAO converts 11 into an aldehyde, ammonia, and hydrogen peroxide. In the presence of peroxidase, the hydrogen peroxide oxidizes 4-aminoantipyrine, and the oxidized 4-aminoantipyrine condenses with vanillic acid to give a red quinoneimine dye. The production of the quinoneimine dye is detected at 490 nm with a microplate reader. The ΔOD value between the blank group and the blank negative-control group in this new method is twice that in Holt's method, which makes the procedure more accurate and avoids the production of false-positive results. The new method will be helpful for researchers screening MAO inhibitors in deep-colored plant extracts. PMID:27610153
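A minimal arithmetic sketch of how such plate reads would typically be turned into an inhibition figure follows; the well layout, variable names, and OD values are hypothetical, and the exact blank/control scheme of the published assay may differ.

```python
# Hedged arithmetic sketch: percent MAO inhibition from 490 nm plate reads.
# The od_* names are hypothetical; "blank" wells lack inhibitor and "ctrl"
# wells lack enzyme, following the delta-OD logic described in the abstract.
def percent_inhibition(od_blank, od_blank_ctrl, od_sample, od_sample_ctrl):
    d_blank = od_blank - od_blank_ctrl     # full (uninhibited) signal window
    d_sample = od_sample - od_sample_ctrl  # signal remaining with inhibitor
    return 100.0 * (1.0 - d_sample / d_blank)

print(percent_inhibition(0.80, 0.10, 0.35, 0.10))  # -> ~64.3% inhibition
```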
Developing a method for estimating AADT on all Louisiana roads.
DOT National Transportation Integrated Search
2015-07-01
Traffic flow volumes present key information needed for making transportation engineering and planning decisions. Accurate traffic volume counts have many applications including: roadway planning, design, air quality compliance, travel model valida...
NASA Astrophysics Data System (ADS)
Date, Kumi; Ishigure, Takaaki
2017-02-01
Polymer optical waveguides with graded-index (GI) circular cores are fabricated using the Mosquito method, in which the positions of parallel cores are accurately controlled. Such accurate arrangement is of great importance for high optical coupling efficiency with other optical components such as fiber ribbons. In the Mosquito method that we developed, a viscous liquid core monomer is dispensed into another liquid cladding monomer via a syringe needle. Hence, the core positions are likely to shift during or after the dispensing process due to several factors. We investigate the factors specifically affecting the core height. When the core and cladding monomers are selected appropriately, the effect of gravity can be negligible, so the core height remains uniform, resulting in accurate core heights. The height variance is kept within +/-2 micrometers for the 12 cores. Meanwhile, a larger shift in the core height is observed when the needle tip is positioned far from the substrate surface. One possible reason for this needle-tip height dependence is asymmetric volume contraction during monomer curing. We find a linear relationship between the original needle-tip height and the observed core height. This relationship is implemented in the needle-scan program to stabilize the core height in different layers. As a result, the core heights are accurately controlled even when the cores are aligned at various heights. These results indicate that the Mosquito method enables the fabrication of waveguides in which the cores are 3-dimensionally aligned with high positional accuracy.
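The reported linear needle-tip-height compensation can be sketched as a simple fit-and-invert step; the numbers below are hypothetical stand-ins, not measurements from the paper.

```python
# Sketch of a linear needle-tip-height compensation (values hypothetical):
# fit measured core height vs. programmed needle-tip height, then invert
# the fit to choose the needle height that yields a target core height.
import numpy as np

needle_um = np.array([50.0, 100.0, 150.0, 200.0, 250.0])  # programmed tip height
core_um = np.array([48.0, 95.0, 143.0, 190.0, 238.0])     # measured core height

slope, intercept = np.polyfit(needle_um, core_um, 1)

def needle_for_target(target_core_um):
    return (target_core_um - intercept) / slope

print(f"core ~= {slope:.3f} * needle + {intercept:.2f}")
print("needle height for a 120 um core:", round(needle_for_target(120.0), 1))
```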
NASA Astrophysics Data System (ADS)
Saye, Robert
2017-09-01
In this two-part paper, a high-order accurate implicit mesh discontinuous Galerkin (dG) framework is developed for fluid interface dynamics, facilitating precise computation of interfacial fluid flow in evolving geometries. The framework uses implicitly defined meshes, wherein a reference quadtree or octree grid is combined with an implicit representation of evolving interfaces and moving domain boundaries, and allows physically prescribed interfacial jump conditions to be imposed or captured with high-order accuracy. Part one discusses the design of the framework, including: (i) high-order quadrature for implicitly defined elements and faces; (ii) high-order accurate discretisation of scalar and vector-valued elliptic partial differential equations with interfacial jumps in ellipticity coefficient, leading to optimal-order accuracy in the maximum norm and discrete linear systems that are symmetric positive (semi)definite; (iii) the design of incompressible fluid flow projection operators, which, except for the influence of small penalty parameters, are discretely idempotent; and (iv) the design of geometric multigrid methods for elliptic interface problems on implicitly defined meshes and their use as preconditioners for the conjugate gradient method. Also discussed is a variety of aspects relating to moving interfaces, including: (v) dG discretisations of the level set method on implicitly defined meshes; (vi) transferring state between evolving implicit meshes; (vii) preserving mesh topology to accurately compute temporal derivatives; (viii) high-order accurate reinitialisation of level set functions; and (ix) the integration of adaptive mesh refinement. In part two, several applications of the implicit mesh dG framework in two and three dimensions are presented, including examples of single phase flow in nontrivial geometry, surface tension-driven two phase flow with phase-dependent fluid density and viscosity, rigid body fluid-structure interaction, and free surface flow. A class of techniques known as interfacial gauge methods is adopted to solve the corresponding incompressible Navier-Stokes equations, which, compared to archetypical projection methods, have a weaker coupling between fluid velocity, pressure, and interface position, and allow high-order accurate numerical methods to be developed more easily. Convergence analyses conducted throughout the work demonstrate high-order accuracy in the maximum norm for all of the applications considered; for example, fourth-order spatial accuracy in fluid velocity, pressure, and interface location is demonstrated for surface tension-driven two phase flow in 2D and 3D. Specific application examples include: vortex shedding in nontrivial geometry, capillary wave dynamics revealing fine-scale flow features, falling rigid bodies tumbling in unsteady flow, and free surface flow over a submersed obstacle, as well as high Reynolds number soap bubble oscillation dynamics and vortex shedding induced by a type of Plateau-Rayleigh instability in water ripple free surface flow. These last two examples compare numerical results with experimental data and serve as an additional means of validation; they also reveal physical phenomena not visible in the experiments, highlight how small-scale interfacial features develop and affect macroscopic dynamics, and demonstrate the wide range of spatial scales often at play in interfacial fluid flow.
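Convergence claims like the fourth-order accuracy cited above are usually verified by computing an observed order from errors on successively refined meshes. The snippet below shows that generic check; the mesh sizes and errors are invented, not the paper's data.

```python
# Generic convergence-order check of the kind used to report high-order
# accuracy: given max-norm errors on successively refined meshes, the
# observed order is p = log(e_coarse/e_fine) / log(h_coarse/h_fine).
import math

h = [0.1, 0.05, 0.025]                 # mesh sizes (illustrative)
err = [2.4e-4, 1.5e-5, 9.4e-7]         # max-norm errors (illustrative)

for (h1, h2, e1, e2) in zip(h, h[1:], err, err[1:]):
    p = math.log(e1 / e2) / math.log(h1 / h2)
    print(f"h {h1} -> {h2}: observed order p = {p:.2f}")
```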
Laser triangulation method for measuring the size of parking claw
NASA Astrophysics Data System (ADS)
Liu, Bo; Zhang, Ming; Pang, Ying
2017-10-01
With the development of science and technology and the maturation of measurement techniques, 3D profile measurement has advanced rapidly. Three-dimensional measurement technology is widely used in mold manufacturing, industrial inspection, automated processing and manufacturing, etc. In many situations in scientific research and industrial production, it is necessary to convert an original mechanical part into a 3D data model on the computer quickly and accurately. Many methods have been developed to measure contour dimensions; laser triangulation is among the most widely used.
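For orientation, the sketch below computes range from a single laser spot under a simplified parallel-axis triangulation geometry; this is a textbook idealization, not the sensor model of the paper, and all values are hypothetical.

```python
# Minimal sketch of single-point laser triangulation under a simplified
# parallel-axis geometry (all values hypothetical): a laser spot imaged at
# offset u on the sensor maps to range Z = f * B / u, where f is the focal
# length and B the laser-camera baseline.
def triangulate_range(f_mm, baseline_mm, offset_mm):
    if offset_mm <= 0:
        raise ValueError("spot offset must be positive in this geometry")
    return f_mm * baseline_mm / offset_mm

# 16 mm lens, 100 mm baseline, spot imaged 2.0 mm off-axis -> 800 mm range
print(triangulate_range(16.0, 100.0, 2.0))
```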
Seo, Miyeong; Kim, Byungjoo; Baek, Song-Yee
2015-07-01
Patulin, a mycotoxin produced by several molds in fruits, has been frequently detected in apple products. Therefore, regulatory bodies have established recommended maximum permitted patulin concentrations for each type of apple product. Although several analytical methods have been adopted to determine patulin in food, quality control of patulin analysis is not easy, as reliable certified reference materials (CRMs) are not available. In this study, as a part of a project for developing CRMs for patulin analysis, we developed isotope dilution liquid chromatography-tandem mass spectrometry (ID-LC/MS/MS) as a higher-order reference method for the accurate value-assignment of CRMs. ¹³C₇-patulin was used as internal standard. Samples were extracted with ethyl acetate to improve recovery. For further sample cleanup with solid-phase extraction (SPE), the HLB SPE cartridge was chosen after comparing with several other types of SPE cartridges. High-performance liquid chromatography was performed on a multimode column for proper retention and separation of highly polar and water-soluble patulin from sample interferences. Sample extracts were analyzed by LC/MS/MS with electrospray ionization in negative ion mode with selected reaction monitoring of patulin and ¹³C₇-patulin at m/z 153→m/z 109 and m/z 160→m/z 115, respectively. The validity of the method was tested by measuring gravimetrically fortified samples of various apple products. In addition, the repeatability and the reproducibility of the method were tested to evaluate the performance of the method. The method was shown to provide accurate measurements in the 3-40 μg/kg range with a relative expanded uncertainty of around 1%.
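The core isotope-dilution arithmetic behind such a method can be sketched as below; the function, its parameters, and the numbers are illustrative assumptions, not the laboratory's actual calibration model.

```python
# Hedged sketch of the isotope-dilution calculation behind ID-LC/MS/MS:
# the analyte amount follows from the measured peak-area ratio of analyte
# (m/z 153->109) to the 13C7 internal standard (m/z 160->115) and the
# gravimetrically known spike amount. Numbers are illustrative only.
def patulin_ug_per_kg(area_analyte, area_istd, spike_ng, sample_g,
                      response_factor=1.0):
    """response_factor calibrates any analyte/ISTD response difference,
    as determined from a gravimetric calibration blend."""
    analyte_ng = response_factor * (area_analyte / area_istd) * spike_ng
    return analyte_ng / sample_g  # ng/g == ug/kg

print(patulin_ug_per_kg(area_analyte=5.2e4, area_istd=8.8e4,
                        spike_ng=25.0, sample_g=1.0))  # ~14.8 ug/kg
```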
Development of a real-time chemical injection system for air-assisted variable-rate sprayers
USDA-ARS?s Scientific Manuscript database
A chemical injection system is an effective method to minimize chemical waste and reduce the environmental pollution in pesticide spray applications. A microprocessor controlled injection system implementing a ceramic piston metering pump was developed to accurately dispense chemicals to be mixed wi...
El-Yazbi, Amira F
2017-07-01
Sofosbuvir (SOFO) was approved by the U.S. Food and Drug Administration in 2013 for the treatment of hepatitis C virus infection, with enhanced antiviral potency compared with earlier analogs. Nevertheless, current editions of the pharmacopeias still do not provide any analytical methods for the quantification of SOFO, so rapid, simple, and ecofriendly methods for the routine analysis of commercial SOFO formulations are desirable. In this study, five accurate methods for the determination of SOFO in pharmaceutical tablets were developed and validated: HPLC, capillary zone electrophoresis, HPTLC, UV spectrophotometry, and derivative spectrometry. The proposed methods proved to be rapid, simple, sensitive, selective, and accurate analytical procedures suitable for the reliable determination of SOFO in pharmaceutical tablets. An analysis of variance with P-value > 0.05 confirmed that there were no significant differences between the proposed assays; thus, any of these methods can be used for the routine analysis of SOFO in commercial tablets.
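The cross-method comparison reported above is a standard one-way ANOVA; the sketch below shows that test on invented assay results (mg/tablet), not the study's data.

```python
# Sketch of the cross-method comparison: one-way ANOVA over assay results
# (mg/tablet) from the five methods; all data values are invented.
from scipy.stats import f_oneway

hplc  = [399.1, 400.4, 398.7, 400.9, 399.6]
cze   = [400.8, 398.9, 399.9, 400.2, 399.0]
hptlc = [398.5, 400.1, 399.3, 400.7, 399.8]
uv    = [399.9, 400.3, 398.8, 399.5, 400.6]
deriv = [400.0, 399.2, 399.7, 400.5, 398.9]

stat, p = f_oneway(hplc, cze, hptlc, uv, deriv)
print(f"F = {stat:.3f}, P = {p:.3f}")  # P > 0.05 -> no significant difference
```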
Scanning moiré and spatial-offset phase-stepping for surface inspection of structures
NASA Astrophysics Data System (ADS)
Yoneyama, S.; Morimoto, Y.; Fujigaki, M.; Ikeda, Y.
2005-06-01
In order to develop a high-speed, accurate surface inspection system for structures such as tunnels, a new surface profile measurement method using linear array sensors is studied. A sinusoidal grating is projected onto the structure surface, and the deformed grating is scanned by linear array sensors that move together with the grating projector. The phase of the grating is analyzed by a spatial-offset phase-stepping method to achieve accurate measurement. Surface profile measurements of a brick wall and the concrete surface of a structure are demonstrated using the proposed method. Changes in the geometry or fabric of structures and defects on structure surfaces can be detected by the proposed method. It is expected that a surface profile inspection system for tunnels, measuring from a running train, can be constructed based on the proposed method.
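For readers unfamiliar with phase-stepping, the sketch below recovers phase from a generic four-step sequence with pi/2 shifts; this is the textbook temporal variant, not the paper's spatial-offset scheme, and the synthetic data are invented.

```python
# Illustrative four-step phase-stepping recovery (not the paper's exact
# spatial-offset variant): from four intensity samples shifted by pi/2,
# the wrapped phase is phi = atan2(I3 - I1, I0 - I2).
import numpy as np

x = np.linspace(0.0, 1.0, 512)
true_phase = 2 * np.pi * 4 * x + 0.7            # synthetic surface phase
I = [100 + 50 * np.cos(true_phase + k * np.pi / 2) for k in range(4)]

wrapped = np.arctan2(I[3] - I[1], I[0] - I[2])  # wrapped to (-pi, pi]
unwrapped = np.unwrap(wrapped)
err = np.max(np.abs(unwrapped - true_phase))
print(f"max recovery error: {err:.2e} rad")
```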
Enhanced dual-frequency pattern scheme based on spatial-temporal fringes method
NASA Astrophysics Data System (ADS)
Wang, Minmin; Zhou, Canlin; Si, Shuchun; Lei, Zhenkun; Li, Xiaolei; Li, Hui; Li, YanJie
2018-07-01
One of the major challenges of employing a dual-frequency phase-shifting algorithm for phase retrieval is its sensitivity to noise. Yun et al. proposed a dual-frequency method based on Fourier transform profilometry, yet its low-frequency lobes lie too close to each other for accurate band-pass filtering. In light of this problem, a novel dual-frequency pattern based on the spatial-temporal fringes (STF) method is developed in this paper. Three fringe patterns with two different frequencies are required. The low-frequency phase is obtained from two low-frequency fringe patterns by the STF method, so the signal lobes can be extracted accurately because they are far away from each other. The high-frequency phase is retrieved from another fringe pattern without the impact of the DC component. Simulations and experiments are conducted to demonstrate the excellent precision of the proposed method.
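The role of the low-frequency phase in schemes like this is to resolve the fringe order of the wrapped high-frequency phase. The sketch below shows the generic dual-frequency temporal unwrapping step under that assumption; the STF-specific phase recovery is not reproduced here.

```python
# Hedged sketch of generic dual-frequency temporal unwrapping (the STF
# details differ): the low-frequency phase, unambiguous over the field,
# resolves the 2*pi fringe order of the wrapped high-frequency phase.
import numpy as np

f_lo, f_hi = 1.0, 8.0
x = np.linspace(0.0, 1.0, 400)
phi_true_hi = 2 * np.pi * f_hi * x                   # ground-truth phase
phi_lo = 2 * np.pi * f_lo * x                        # assumed recovered, unwrapped
phi_hi_wrapped = np.angle(np.exp(1j * phi_true_hi))  # wrapped to (-pi, pi]

order = np.round((phi_lo * (f_hi / f_lo) - phi_hi_wrapped) / (2 * np.pi))
phi_hi = phi_hi_wrapped + 2 * np.pi * order
print("max unwrapping error:", np.max(np.abs(phi_hi - phi_true_hi)))
```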
A review of virtual cutting methods and technology in deformable objects.
Wang, Monan; Ma, Yuzheng
2018-06-05
Virtual cutting of deformable objects has been a research topic for more than a decade and has been used in many areas, especially in surgery simulation. We survey the relevant literature and briefly describe the related research. Virtual cutting methods are introduced, and we discuss their benefits and limitations and explore possible research directions. Virtual cutting is a category of object deformation: it must represent the deformation of models in real time as accurately, robustly, and efficiently as possible. To accurately represent models, a method must be able to (1) model objects with different material properties; (2) handle collision detection and collision response; and (3) update the geometry and topology of the deformable model as caused by cutting. Virtual cutting is widely used in surgery simulation, and research on cutting methods is important to the development of surgery simulation.
Kim, Kwang Baek; Park, Hyun Jun; Song, Doo Heon; Han, Sang-suk
2015-01-01
Ultrasound examination (US) plays a key role in the diagnosis and management of patients with clinically suspected appendicitis, the most common abdominal surgical emergency. Among the various sonographic findings of appendicitis, the outer diameter of the appendix is the most important, so clear delineation of the appendix on US images is essential. In this paper, we propose a new intelligent method to extract the appendix automatically from abdominal sonographic images, as a basic building block of an intelligent tool for medical practitioners. Knowing that the appendix is located in the lower organ area below the bottom fascia line, we apply a series of image processing techniques to find the fascia line correctly, and then apply the fuzzy ART learning algorithm to the organ area in order to extract the appendix accurately. Experiments verify that the proposed method is highly accurate (successful in 38 of 40 cases) in extracting the appendix.
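For reference, a compact version of the standard fuzzy ART learning rule is sketched below. This is the textbook formulation, not the paper's tuned configuration; parameters and the toy inputs are illustrative.

```python
# Compact fuzzy ART sketch (standard formulation; parameters illustrative):
# complement-coded inputs, choice function T_j = |I ^ w_j| / (alpha + |w_j|),
# vigilance test |I ^ w_j| / |I| >= rho, and learning
# w_j <- beta*(I ^ w_j) + (1 - beta)*w_j.
import numpy as np

class FuzzyART:
    def __init__(self, alpha=0.001, beta=1.0, rho=0.75):
        self.alpha, self.beta, self.rho = alpha, beta, rho
        self.w = []                                # one weight vector per category

    def _complement(self, x):
        return np.concatenate([x, 1.0 - x])        # complement coding

    def train(self, x):
        i = self._complement(np.asarray(x, float))
        scores = [np.minimum(i, w).sum() / (self.alpha + w.sum()) for w in self.w]
        for j in np.argsort(scores)[::-1]:         # best-matching category first
            match = np.minimum(i, self.w[j]).sum() / i.sum()
            if match >= self.rho:                  # vigilance passed: resonate
                self.w[j] = self.beta * np.minimum(i, self.w[j]) \
                            + (1 - self.beta) * self.w[j]
                return j
        self.w.append(i.copy())                    # else commit a new category
        return len(self.w) - 1

art = FuzzyART()
for px in ([0.9, 0.8], [0.88, 0.82], [0.1, 0.2]):  # toy "pixel feature" inputs
    print("category:", art.train(px))
```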
Bishop, Michael Jason; Crow, Brian S; Kovalcik, Kasey D; George, Joe; Bralley, James A
2007-04-01
A rapid and accurate quantitative method was developed and validated for the analysis of four urinary organic acids with nitrogen-containing functional groups, formiminoglutamic acid (FIGLU), pyroglutamic acid (PYRGLU), 5-hydroxyindoleacetic acid (5-HIAA), and 2-methylhippuric acid (2-METHIP) by liquid chromatography tandem mass spectrometry (LC/MS/MS). The chromatography was developed using a weak anion-exchange amino column that provided mixed-mode retention of the analytes. The elution gradient relied on changes in mobile phase pH over a concave gradient, without the use of counter-ions or concentrated salt buffers. A simple sample preparation was used, only requiring the dilution of urine prior to instrumental analysis. The method was validated based on linearity (r² ≥ 0.995), accuracy (85-115%), precision (CV < 12%), sample preparation stability (
NASA Technical Reports Server (NTRS)
Brown, Andrew M.
2014-01-01
Numerical and analytical methods were developed to determine damage accumulation in specific engine components when speed variation is included. The Dither Life Ratio (DLR) is shown to be well over a factor of 2 for a specific example. The steady-state assumption is shown to be accurate for most turbopump cases, allowing rapid calculation of DLR. If hot-fire speed data are unknown, a Monte Carlo method was developed that uses speed statistics for similar engines. Application of these techniques allows the analyst to reduce both uncertainty and excess conservatism. High values of DLR could allow a previously unacceptable part to pass HCF criteria without redesign. Given the benefit and ease of implementation, it is recommended that any finite-life turbomachine component analysis adopt these techniques. Probability values were calculated, compared, and evaluated for several industry-proposed methods for combining random and harmonic loads. Two new Excel macros were written to calculate the combined load for any specific probability level. Closed-form curve fits were generated for the widely used 3σ and 2σ probability levels. For design of lightweight aerospace components, obtaining an accurate, reproducible, statistically meaningful answer is critical.
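A Monte Carlo version of the random-plus-harmonic load combination can be sketched directly; the distributions, amplitudes, and probability levels below are illustrative assumptions, not the report's closed-form fits.

```python
# Hedged Monte Carlo sketch of combining a harmonic load of known amplitude
# (random phase) with a Gaussian random load to find the combined value at
# a given probability level (closed-form fits target the same quantiles).
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
harmonic_amp = 1.0          # deterministic amplitude, random phase
random_sigma = 1.0          # RMS of the Gaussian random component

combined = harmonic_amp * np.sin(rng.uniform(0, 2 * np.pi, n)) \
           + rng.normal(0.0, random_sigma, n)
for level in (97.725, 99.865):          # 2-sigma and 3-sigma one-sided levels
    print(f"P{level}: {np.percentile(combined, level):.3f}")
```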
Estimation of Wheat Plant Density at Early Stages Using High Resolution Imagery
Liu, Shouyang; Baret, Fred; Andrieu, Bruno; Burger, Philippe; Hemmerlé, Matthieu
2017-01-01
Crop density is a key agronomical trait used to manage wheat crops and estimate yield. Visual counting of plants in the field is currently the most common method used, but it is tedious and time consuming. The main objective of this work is to develop a machine-vision-based method to automate the density survey of wheat at early stages. Images taken with a high-resolution RGB camera are classified to identify the green pixels corresponding to the plants. Crop rows are extracted and the connected components (objects) are identified. A neural network is then trained to estimate the number of plants in the objects using the object features. The method was evaluated over three experiments showing contrasting conditions, with sowing densities ranging from 100 to 600 seeds·m⁻². Results demonstrate that the density is accurately estimated, with an average relative error of 12%. The pipeline developed here provides an efficient and accurate estimate of wheat plant density at early stages. PMID:28559901
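The first two pipeline stages can be sketched with a common vegetation index and connected-component labelling; the index choice (excess green), threshold, and stand-in image are assumptions, and the trained counting network is not reproduced.

```python
# Sketch of the first two pipeline stages under stated assumptions: green
# pixels are segmented with an excess-green index (ExG = 2g - r - b) and
# objects are labelled as connected components; per-object plant counts
# would then come from the trained neural network.
import numpy as np
from scipy import ndimage

rgb = np.random.default_rng(2).random((120, 160, 3))   # stand-in field image
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
exg = 2 * g - r - b
mask = exg > 0.25                                      # illustrative threshold

labels, n_objects = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, index=np.arange(1, n_objects + 1))
print(n_objects, "objects; mean size", sizes.mean() if n_objects else 0)
```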
Analytical procedures for water-soluble vitamins in foods and dietary supplements: a review.
Blake, Christopher J
2007-09-01
Water-soluble vitamins include the B-group vitamins and vitamin C. In order to correctly monitor water-soluble vitamin content in fortified foods for compliance monitoring, as well as to establish accurate data banks, an accurate and precise analytical method is a prerequisite. For many years microbiological assays have been used for the analysis of B vitamins; however, they are no longer considered the gold standard in vitamin analysis, as many studies have exposed their deficiencies. This review describes the current status of analytical methods, including microbiological assays and spectrophotometric, biosensor, and chromatographic techniques. In particular it describes the current status of the official methods and highlights some new developments in chromatographic procedures and detection methods. An overview is given of multivitamin extractions and analyses for foods and supplements.
Modal identification of dynamic mechanical systems
NASA Astrophysics Data System (ADS)
Srivastava, R. K.; Kundra, T. K.
1992-07-01
This paper reviews modal identification techniques which are now helping designers all over the world to improve the dynamic behavior of vibrating engineering systems. In this context, the need for more accurate and faster parameter identification is ever increasing. A new identification method based on the dynamic stiffness matrix, which is highly accurate, fast, and compatible with system dynamic modification, is presented. The technique is applicable to all multi-degree-of-freedom systems for which the full receptance matrix can be experimentally measured.
NASA Technical Reports Server (NTRS)
Prinn, Ronald G.
2001-01-01
For interpreting observational data, and in particular for use in inverse methods, accurate and realistic chemical transport models are essential. Toward this end we have, in recent years, helped develop and utilize a number of three-dimensional models including the Model for Atmospheric Transport and Chemistry (MATCH).
USDA-ARS?s Scientific Manuscript database
Various technologies have been developed for pathogen detection using optical, electrochemical, biochemical and physical properties. Conventional microbiological methods need time from days to week to get the result. Though this method is very sensitive and accurate, a rapid detection of pathogens i...
Statistical methods for analysing responses of wildlife to human disturbance
Haiganoush K. Preisler; Alan A. Ager; Michael J. Wisdom
2006-01-01
Off-road recreation is increasing rapidly in many areas of the world, and effects on wildlife can be highly detrimental. Consequently, we have developed methods for studying wildlife responses to off-road recreation with the use of new technologies that allow frequent and accurate monitoring of human-wildlife interactions. To...
Genie M. Fleming; Joseph M. Wunderle; David N. Ewert; Joseph O' Brien
2014-01-01
Aim: Non-destructive methods for quantifying above-ground plant biomass are important tools in many ecological studies and management endeavours, but estimation methods can be labour intensive and particularly difficult in structurally diverse vegetation types. We aimed to develop a low-cost, but reasonably accurate, estimation technique within early-successional...
Ion mobility spectrometry: A personal view of its development at UCSB
2014-09-15
...molecules. As we progressed we realized that new, more accurate algorithms were needed to augment our early projection approximation (PA) for determining... required. The goal was to maintain some of the speed of the projection approximation and retain the accuracy of the trajectory method. Christian Bleiholder, while a postdoc in my group, did just that by developing the projection superposition approximation (PSA) [31-35]. This new method is 100...
Establishment of a high accuracy geoid correction model and geodata edge match
NASA Astrophysics Data System (ADS)
Xi, Ruifeng
This research has developed a theoretical and practical methodology for efficiently and accurately determining sub-decimeter-level regional geoids and centimeter-level local geoids to meet regional surveying and local engineering requirements. It also provides a highly accurate method for pre-processing, post-processing, and adjusting static DGPS network data, along with a procedure for large GPS networks such as the state-level HARN project. In addition, an efficient and accurate methodology was developed for joining soil coverages in GIS ARC/INFO. A total of 181 GPS stations was pre-processed and post-processed to obtain an absolute accuracy better than 1.5 cm at 95% of the stations, with all stations achieving a 0.5 ppm average relative accuracy. A total of 167 GPS stations in and around Iowa were included in the adjustment. After evaluation of GEOID96 and GEOID99, a more accurate and suitable geoid model was established for Iowa. This new Iowa regional geoid model improved the accuracy from the sub-decimeter (10-20 cm) level to 5-10 cm. The local kinematic geoid model, developed using Kalman filtering, gives results better than the third-order leveling accuracy requirement, with a 1.5 cm standard deviation.
NASA Technical Reports Server (NTRS)
Liever, Peter A.; West, Jeffrey S.
2016-01-01
A hybrid Computational Fluid Dynamics and Computational Aero-Acoustics (CFD/CAA) modeling framework has been developed for launch vehicle liftoff acoustic environment predictions. The framework couples the existing highly-scalable NASA production CFD code, Loci/CHEM, with a high-order accurate discontinuous Galerkin solver developed in the same production framework, Loci/THRUST, to accurately resolve and propagate acoustic physics across the entire launch environment. Time-accurate, Hybrid RANS/LES CFD modeling is applied for predicting the acoustic generation physics at the plume source, and a high-order accurate unstructured discontinuous Galerkin (DG) method is employed to propagate acoustic waves away from the source across large distances using high-order accurate schemes. The DG solver is capable of solving 2nd, 3rd, and 4th order Euler solutions for non-linear, conservative acoustic field propagation. Initial application testing and validation has been carried out against high resolution acoustic data from the Ares Scale Model Acoustic Test (ASMAT) series to evaluate the capabilities and production readiness of the CFD/CAA system to resolve the observed spectrum of acoustic frequency content. This paper presents results from this validation and outlines efforts to mature and improve the computational simulation framework.
Efficient alignment-free DNA barcode analytics.
Kuksa, Pavel; Pavlovic, Vladimir
2009-11-10
In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectra) of barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens the possibility of accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. The new alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species, with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, which are important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running-time improvements over the state-of-the-art methods. Our results show that the newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding.
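A minimal version of the spectrum idea is easy to state in code: map each sequence to a k-mer count vector and compare vectors directly, with no alignment. The k value, toy sequences, and cosine similarity choice below are illustrative, not the paper's exact kernel.

```python
# Minimal sketch of the fixed-length spectrum representation: each barcode
# is mapped to a k-mer count vector and sequences are compared with cosine
# similarity, with no alignment step.
from collections import Counter
import math

def spectrum(seq, k=4):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

s1 = spectrum("ACGTACGTTAGCCGATAGCT")
s2 = spectrum("ACGTACGATAGCCGATTGCT")
print(f"spectrum similarity: {cosine(s1, s2):.3f}")
```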
Automated Tumor Volumetry Using Computer-Aided Image Segmentation
Bilello, Michel; Sadaghiani, Mohammed Salehi; Akbari, Hamed; Atthiah, Mark A.; Ali, Zarina S.; Da, Xiao; Zhan, Yiqang; O'Rourke, Donald; Grady, Sean M.; Davatzikos, Christos
2015-01-01
Rationale and Objectives: Accurate segmentation of brain tumors, and quantification of tumor volume, is important for diagnosis, monitoring, and planning therapeutic intervention. Manual segmentation is not widely used because of time constraints. Previous efforts have mainly produced methods that are tailored to a particular type of tumor or acquisition protocol and have mostly failed to produce a method that functions on different tumor types and is robust to changes in scanning parameters, resolution, and image quality, thereby limiting their clinical value. Herein, we present a semiautomatic method for tumor segmentation that is fast, accurate, and robust to a wide variation in image quality and resolution. Materials and Methods: A semiautomatic segmentation method based on the geodesic distance transform was developed and validated by using it to segment 54 brain tumors. Glioblastomas, meningiomas, and brain metastases were segmented. Qualitative validation was based on physician ratings provided by three clinical experts. Quantitative validation was based on comparing semiautomatic and manual segmentations. Results: Tumor segmentations obtained using manual and automatic methods were compared quantitatively using the Dice measure of overlap. Subjective evaluation was performed by having human experts rate the computerized segmentations on a 0-5 rating scale, where 5 indicated perfect segmentation. Conclusions: The proposed method addresses a significant, unmet need in the field of neuro-oncology. Specifically, this method enables clinicians to obtain accurate and reproducible tumor volumes without the need for manual segmentation. PMID:25770633
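The Dice overlap used for the quantitative validation has a one-line definition, shown below on invented toy masks.

```python
# The Dice overlap used for quantitative validation: for binary masks
# A and B, Dice = 2|A n B| / (|A| + |B|); 1.0 means perfect agreement.
import numpy as np

def dice(mask_a, mask_b):
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

manual = np.zeros((64, 64), bool); manual[20:40, 20:40] = True
auto = np.zeros((64, 64), bool);   auto[22:42, 21:41] = True
print(f"Dice = {dice(manual, auto):.3f}")
```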
Meade, Rhiana D; Murray, Anna L; Mittelman, Anjuliee M; Rayner, Justine; Lantagne, Daniele S
2017-02-01
Locally manufactured ceramic water filters are one effective household drinking water treatment technology. During manufacturing, silver nanoparticles or silver nitrate are applied to prevent microbiological growth within the filter and increase bacterial removal efficacy. Currently, there is no recommendation for manufacturers to test silver concentrations of application solutions or filtered water. We identified six commercially available silver test strips, kits, and meters and evaluated them by: (1) measuring in quintuplicate six samples from 100 to 1,000 mg/L (application range) and six samples from 0.0 to 1.0 mg/L (effluent range) of silver nanoparticles and silver nitrate to determine accuracy and precision; (2) conducting volunteer testing to assess ease of use; and (3) comparing costs. We found that no method accurately detected silver nanoparticles, and measurement error ranged from 4 to 91% for silver nitrate samples. Most methods were precise, but only one method could test both the application and effluent concentration ranges of silver nitrate. Volunteers considered the test-strip methods easiest. The cost for 100 tests ranged from 36 to 1,600 USD. No currently available method measured both silver types accurately and precisely at reasonable cost and ease of use, so these methods are not recommended to manufacturers. We recommend development of field-appropriate methods that accurately and precisely measure silver nanoparticle and silver nitrate concentrations.
Remaining dischargeable time prediction for lithium-ion batteries using unscented Kalman filter
NASA Astrophysics Data System (ADS)
Dong, Guangzhong; Wei, Jingwen; Chen, Zonghai; Sun, Han; Yu, Xiaowei
2017-10-01
To overcome range anxiety, one important strategy is to accurately predict the range or dischargeable time of the battery system. To accurately predict the remaining dischargeable time (RDT) of a battery, an RDT prediction framework based on accurate battery modeling and state estimation is presented in this paper. Firstly, a simplified linearized equivalent circuit model is developed to simulate the dynamic characteristics of a battery. Then, an online recursive least-squares method and an unscented Kalman filter are employed to estimate the system matrices and SOC at every prediction point. Besides, a discrete wavelet transform technique is employed to capture the statistical information of past input currents, which is utilized to predict the future battery currents. Finally, the RDT can be predicted based on the battery model, the SOC estimation results, and the predicted future battery currents. The performance of the proposed methodology has been verified on a lithium-ion battery cell. Experimental results indicate that the proposed method provides accurate SOC and parameter estimation, and the predicted RDT can address range anxiety issues.
Microseismic imaging using Geometric-mean Reverse-Time Migration in Hydraulic Fracturing Monitoring
NASA Astrophysics Data System (ADS)
Yin, J.; Ng, R.; Nakata, N.
2017-12-01
Unconventional oil and gas exploration techniques such as hydraulic fracturing are associated with microseismic events related to the generation and development of fractures. For example, hydraulic fracturing, which is common in southern Oklahoma, produces earthquakes greater than magnitude 2.0. Finding accurate locations and mechanisms for these events provides important information on local stress conditions, fracture distribution, hazard assessment, and economic impact. Accurate source locations are also important for separating fracking-induced from wastewater-disposal-induced seismicity. Here, we implement a wavefield-based imaging method called Geometric-mean Reverse-Time Migration (GmRTM), which takes advantage of wavefield back projection for accurate microseismic location. We apply GmRTM to microseismic data collected during hydraulic fracturing to image microseismic source locations and, potentially, fractures. Given an accurate velocity model, GmRTM can improve the spatial resolution of source locations compared to HypoDD or P/S travel-time-based methods. We will discuss the results from GmRTM and HypoDD using this field dataset and synthetic data.
A Unified Development of Basis Reduction Methods for Rotor Blade Analysis
NASA Technical Reports Server (NTRS)
Ruzicka, Gene C.; Hodges, Dewey H.; Rutkowski, Michael (Technical Monitor)
2001-01-01
The axial foreshortening effect plays a key role in rotor blade dynamics, but approximating it accurately in reduced basis models has long posed a difficult problem for analysts. Recently, though, several methods have been shown to be effective in obtaining accurate, reduced basis models for rotor blades. These methods are the axial elongation method, the mixed finite element method, and the nonlinear normal mode method. The main objective of this paper is to demonstrate the close relationships among these methods, which are seemingly disparate at first glance. First, the difficulties inherent in obtaining reduced basis models of rotor blades are illustrated by examining the modal reduction accuracy of several blade analysis formulations. It is shown that classical, displacement-based finite elements are ill-suited for rotor blade analysis because they cannot accurately represent the axial strain in modal space, and that this problem may be solved by employing the axial force as a variable in the analysis. It is shown that the mixed finite element method is a convenient means for accomplishing this, and the derivation of a mixed finite element for rotor blade analysis is outlined. A shortcoming of the mixed finite element method is that it increases the number of variables in the analysis. It is demonstrated that this problem may be rectified by solving for the axial displacements in terms of the axial forces and the bending displacements. Effectively, this procedure constitutes a generalization of the widely used axial elongation method to blades of arbitrary topology. The procedure is developed first for a single element, and then extended to an arbitrary assemblage of elements of arbitrary type. Finally, it is shown that the generalized axial elongation method is essentially an approximate solution for an invariant manifold that can be used as the basis for a nonlinear normal mode.
Ford, Jennifer Lynn; Green, Joanne Balmer; Lietz, Georg; Oxley, Anthony; Green, Michael H
2017-09-01
Background: Provitamin A carotenoids are an important source of dietary vitamin A for many populations. Thus, accurate and simple methods for estimating carotenoid bioefficacy are needed to evaluate the vitamin A value of test solutions and plant sources. β-Carotene bioefficacy is often estimated from the ratio of the areas under plasma isotope response curves after subjects ingest labeled β-carotene and a labeled retinyl acetate reference dose [isotope reference method (IRM)], but to our knowledge, the method has not yet been evaluated for accuracy. Objectives: Our objectives were to develop and test a physiologically based compartmental model that includes both absorptive and postabsorptive β-carotene bioconversion and to use the model to evaluate the accuracy of the IRM and of a simple plasma retinol isotope ratio [RIR; labeled β-carotene-derived retinol/labeled reference-dose-derived retinol in one plasma sample] for estimating relative bioefficacy. Methods: We used model-based compartmental analysis (Simulation, Analysis and Modeling software) to develop and apply a model that provided known values for β-carotene bioefficacy. Theoretical data for 10 subjects were generated by the model and used to determine bioefficacy by RIR and IRM; predictions were compared with known values. We also applied RIR and IRM to previously published data. Results: Plasma RIR accurately predicted β-carotene relative bioefficacy at 14 d or later. IRM also accurately predicted bioefficacy by 14 d, except that, when there was substantial postabsorptive bioconversion, IRM underestimated bioefficacy. Based on our model, 1-d predictions of relative bioefficacy include absorptive plus a portion of early postabsorptive conversion. Conclusion: The plasma RIR is a simple tracer method that accurately predicts β-carotene relative bioefficacy based on analysis of one blood sample obtained at ≥14 d after co-ingestion of labeled β-carotene and retinyl acetate. The method also provides information about the contributions of absorptive and postabsorptive conversion to total bioefficacy if an additional sample is taken at 1 d.
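The RIR arithmetic reduces to a dose-scaled tracer ratio; the sketch below illustrates that reduction with hypothetical variable names and numbers, not the study's compartmental model.

```python
# Hedged sketch of the plasma retinol isotope ratio (RIR) calculation:
# relative bioefficacy is estimated from labeled beta-carotene-derived
# retinol vs. labeled reference-dose-derived retinol in one >=14-d sample,
# scaled by the doses given; variable names and values are illustrative.
def rir_bioefficacy(bc_derived_retinol, ref_derived_retinol,
                    bc_dose_umol, ref_dose_umol):
    isotope_ratio = bc_derived_retinol / ref_derived_retinol
    return isotope_ratio * (ref_dose_umol / bc_dose_umol)

# e.g. plasma tracer concentrations (nmol/L) and equimolar doses
print(f"relative bioefficacy: {rir_bioefficacy(1.8, 3.6, 2.0, 2.0):.2f}")
```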
Alcohol-related hot-spot analysis and prediction : final report.
DOT National Transportation Integrated Search
2017-05-01
This project developed methods to more accurately identify alcohol-related crash hot spots, ultimately allowing for more effective and efficient enforcement and safety campaigns. Advancements in accuracy came from improving the calculation of spatial...
Wei, Cong; Grace, James E; Zvyaga, Tatyana A; Drexler, Dieter M
2012-08-01
The polar nucleoside drug ribavirin (RBV) combined with IFN-α is a front-line treatment for chronic hepatitis C virus infection. RBV acts as a prodrug and exerts its broad antiviral activity primarily through its active phosphorylated metabolite ribavirin 5′-triphosphate (RTP), and also possibly through ribavirin 5′-monophosphate (RMP). To study RBV transport, diffusion, metabolic clearance, and its impact on drug-metabolizing enzymes, an LC-MS method is needed to simultaneously quantify RBV and its phosphorylated metabolites (RTP, ribavirin 5′-diphosphate, and RMP). In a recombinant human UGT1A1 assay, the assay buffer components uridine and its phosphorylated derivatives are isobaric with RBV and its phosphorylated metabolites, leading to significant interference when analyzed by LC-MS in the nominal mass resolution mode. Presented here is an LC-MS method employing LC coupled with full-scan high-resolution accurate MS analysis for the simultaneous quantitative determination of RBV, RMP, ribavirin 5′-diphosphate, and RTP by differentiating RBV and its phosphorylated metabolites from uridine and its phosphorylated derivatives by accurate mass, thus avoiding interference. The developed LC-high-resolution accurate MS method allows for quantitation of RBV and its phosphorylated metabolites, eliminating the interferences from uridine and its phosphorylated derivatives in recombinant human UGT1A1 assays.
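To see why accurate mass resolves the interference, the exact masses of the two isobaric neutrals can be compared directly; the sketch below uses standard monoisotopic masses and estimates the resolving power needed to separate them (the instrument settings of the study are not given here).

```python
# Why accurate mass resolves the interference: ribavirin (C8H12N4O5) and
# uridine (C9H12N2O6) share nominal mass 244 but differ in exact mass, so
# the required resolving power m/dm can be estimated directly.
MONO = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052, "O": 15.9949146221}

def exact_mass(formula):  # formula given as {element: count}
    return sum(MONO[el] * n for el, n in formula.items())

rbv = exact_mass({"C": 8, "H": 12, "N": 4, "O": 5})
urd = exact_mass({"C": 9, "H": 12, "N": 2, "O": 6})
print(f"RBV {rbv:.4f}, uridine {urd:.4f}, "
      f"required resolution ~ {rbv / abs(rbv - urd):,.0f}")
```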
NASA Astrophysics Data System (ADS)
Sellers, Michael; Lisal, Martin; Brennan, John
2015-06-01
Investigating the ability of a molecular model to accurately represent a real material is crucial to model development and use. When the model simulates materials in extreme conditions, one such property worth evaluating is the phase transition point. However, phase transitions are often overlooked or approximated because of difficulty or inaccuracy when simulating them. Techniques such as super-heating or super-squeezing a material to induce a phase change suffer from inherent timescale limitations leading to "over-driving," and dual-phase simulations require many long-time runs to seek out what frequently results in an inexact location of phase coexistence. We present a compilation of methods for determining solid-solid and solid-liquid phase transition points through accurate calculation of the chemical potential. The methods are applied to the Smith-Bharadwaj atomistic potential's representation of cyclotrimethylene trinitramine (RDX) to accurately determine its melting point (Tm) and the alpha-to-gamma solid phase transition pressure. We also determine Tm for a coarse-grain model of RDX and compare its value to experiment and to its atomistic counterpart. All methods are employed via the LAMMPS simulator, resulting in 60-70 simulations totaling 30-50 ns. Approved for public release. Distribution is unlimited.
Electricity Markets, Smart Grids and Smart Buildings
NASA Astrophysics Data System (ADS)
Falcey, Jonathan M.
A smart grid is an electricity network that accommodates two-way power flows and utilizes two-way communications and increased measurement, in order to provide more information to customers and aid in the development of a more efficient electricity market. The current electrical network is outdated and has many shortcomings relating to power flows, inefficient electricity markets, generation/supply balance, a lack of information for the consumer, and insufficient consumer interaction with electricity markets. Many of these challenges can be addressed with a smart grid, but significant barriers to its implementation remain. This paper proposes a novel method for the development of a smart grid utilizing a bottom-up approach (starting with smart buildings/campuses), with the goal of providing the framework and infrastructure necessary for a smart grid, instead of the more traditional approach (installing many smart meters and hoping a smart grid emerges). This novel approach involves combining deterministic and statistical methods in order to accurately estimate building electricity use down to the device level. It provides model users with a cheaper alternative to energy audits and extensive sensor networks (the current methods of quantifying electrical use at this level), which increases their ability to modify energy consumption and respond to price signals. The results of this method are promising, but they are still preliminary, so there is still room for improvement. On days when there were no missing or inaccurate data, this approach has an R² of about 0.84, sometimes as high as 0.94, when compared to measured results. However, there were many days when missing data brought overall accuracy down significantly. In addition, the development and implementation of the calibration process is still underway, and some functional additions must be made in order to maximize accuracy. The calibration process must be completed before a reliable accuracy can be determined. While this work shows that a combination of deterministic and statistical methods can accurately forecast building energy usage, the ability to produce accurate results is heavily dependent upon software availability, accurate data, and proper calibration of the model. Creating the software required for a smart building model is time-consuming and expensive. Bad or missing data have significant negative impacts on the accuracy of the results and can be caused by a hodgepodge of equipment and communication protocols. Proper calibration of the model is essential to ensure that the device-level estimations are sufficiently accurate. Any building model which is to be successful at creating a smart building must be able to overcome these challenges.
NASA Technical Reports Server (NTRS)
Morduchow, Morris
1955-01-01
A survey of integral methods in laminar-boundary-layer analysis is first given. A simple method, sufficiently accurate for practical purposes, is presented for calculating the properties (including stability) of the laminar compressible boundary layer in an axial pressure gradient with heat transfer at the wall. For flow over a flat plate, the method is applicable for an arbitrarily prescribed distribution of temperature along the surface and for any given constant Prandtl number close to unity. For flow in a pressure gradient, the method is based on a Prandtl number of unity and a uniform wall temperature. A simple and accurate method of determining the separation point in a compressible flow with an adverse pressure gradient over a surface at a given uniform wall temperature is developed. The analysis is based on an extension of the Karman-Pohlhausen method to the momentum and thermal-energy equations, in conjunction with fourth- and especially higher-degree velocity and stagnation-enthalpy profiles.
A 3-D enlarged cell technique (ECT) for elastic wave modelling of a curved free surface
NASA Astrophysics Data System (ADS)
Wei, Songlin; Zhou, Jianyang; Zhuang, Mingwei; Liu, Qing Huo
2016-09-01
The conventional finite-difference time-domain (FDTD) method for elastic waves suffers from staircasing error when applied to model a curved free surface because of its structured grid. In this work, an improved, stable, and accurate 3-D FDTD method for elastic wave modelling on a curved free surface is developed based on the finite volume method and the enlarged cell technique (ECT). To achieve a sufficiently accurate implementation, a finite volume scheme is applied at the curved free surface to remove the staircasing error; in the meantime, to achieve the same stability as the FDTD method without reducing the time step increment, the ECT is introduced to preserve solution stability by enlarging small irregular cells into adjacent cells under the condition of conservation of force. This method is verified by several 3-D numerical examples. Results show that the method is stable at the Courant stability limit for a regular FDTD grid and has much higher accuracy than the conventional FDTD method.
Nakatsuji, Hiroshi
2012-09-18
Just as Newtonian law governs classical physics, the Schrödinger equation (SE) and the relativistic Dirac equation (DE) rule the world of chemistry. So, if we can solve these equations accurately, we can use computation to predict chemistry precisely. However, for approximately 80 years after the discovery of these equations, chemists believed that they could not solve the SE and DE for atoms and molecules that include many electrons. This Account reviews ideas developed over the past decade to further the goal of predictive quantum chemistry. Between 2000 and 2005, I discovered a general method of solving the SE and DE accurately. As a first inspiration, I formulated the structure of the exact wave function of the SE in a compact mathematical form. The explicit inclusion of the exact wave function's structure within the variational space allows for the calculation of the exact wave function as a solution of the variational method. Although this process sounds almost impossible, it is indeed possible, and I have published several formulations and applied them to solve the full configuration interaction (CI) with a very small number of variables. However, when I examined analytical solutions for atoms and molecules, the Hamiltonian integrals in their secular equations diverged. This singularity problem occurred in all atoms and molecules because it originates from the singularity of the Coulomb potential in their Hamiltonians. To overcome this problem, I first introduced the inverse SE and then the scaled SE. The latter, simpler idea led to immediate and surprisingly accurate solutions for the SEs of the hydrogen atom, helium atom, and hydrogen molecule. The free complement (FC) method, also called the free iterative CI (free ICI) method, was efficient for solving the SEs. In the FC method, the basis functions that span the exact wave function are produced by the Hamiltonian of the system and the zeroth-order wave function. These basis functions are called complement functions because they are the elements of the complete functions for the system under consideration. We extended this idea to solve the relativistic DE and applied it to the hydrogen and helium atoms, without observing any problems such as variational collapse. Thereafter, we obtained very accurate solutions of the SE for the ground and excited states, in both the Born-Oppenheimer (BO) and non-BO frameworks, of very small systems like He, H₂⁺, H₂, and their analogues. For larger systems, however, the overlap and Hamiltonian integrals over the complement functions are not always known mathematically (integration difficulty); therefore we formulated the local SE (LSE) method as an integral-free method. Without any integration, the LSE method gave fairly accurate energies and wave functions for small atoms and molecules. We also calculated continuous potential curves of the ground and excited states of small diatomic molecules by introducing the transferable local sampling method. Although the FC-LSE method is simple, the achievement of chemical accuracy in the absolute energy of larger systems remains time-consuming. The development of more efficient methods for the calculations of ordinary molecules would allow researchers to make these calculations more easily.
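A compact restatement of the two central ideas described above may help; the equations below follow my reading of the cited Account (notation and the form of the recursion are reconstructed, so treat them as a sketch rather than the definitive formulation).

```latex
% Sketch of the scaled Schrodinger equation and the simplest
% free-complement (free ICI) recursion that generates the complement basis.
\begin{align}
  g\,(H - E)\,\psi &= 0,
    && \text{scaled SE, with } g > 0 \text{ taming the Coulomb singularity} \\
  \psi_{n+1} &= \bigl[\,1 + C_n\, g\,(H - E_n)\,\bigr]\,\psi_n,
    && \text{simplest ICI iteration from a zeroth-order } \psi_0 \\
  \psi_n &= \sum_{k} c_k\, \phi_k,
    && \text{expansion over the generated complement functions } \phi_k
\end{align}
```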
Chambert, Thierry A.; Waddle, J. Hardin; Miller, David A.W.; Walls, Susan; Nichols, James D.
2018-01-01
The development and use of automated species-detection technologies, such as acoustic recorders, for monitoring wildlife are rapidly expanding. Automated classification algorithms provide a cost- and time-effective means to process information-rich data, but often at the cost of additional detection errors. Appropriate methods are necessary to analyse such data while dealing with the different types of detection errors. We developed a hierarchical modelling framework for estimating species occupancy from automated species-detection data. We explore design and optimization of data post-processing procedures to account for detection errors and generate accurate estimates. Our proposed method accounts for both imperfect detection and false positive errors and utilizes information about both occurrence and abundance of detections to improve estimation. Using simulations, we show that our method provides much more accurate estimates than models ignoring the abundance of detections. The same findings are reached when we apply the methods to two real datasets on North American frogs surveyed with acoustic recorders. When false positives occur, estimator accuracy can be improved when a subset of detections produced by the classification algorithm is post-validated by a human observer. We use simulations to investigate the relationship between accuracy and effort spent on post-validation, and found that very accurate occupancy estimates can be obtained with as little as 1% of data being validated. Automated monitoring of wildlife provides opportunity and challenges. Our methods for analysing automated species-detection data help to meet key challenges unique to these data and will prove useful for many wildlife monitoring programs.
Multicomponent, Tumor-Homing Chitosan Nanoparticles for Cancer Imaging and Therapy
Key, Jaehong; Park, Kyeongsoon
2017-01-01
Current clinical methods for cancer diagnosis and therapy have limitations, although survival periods are increasing as medical technologies develop. In most cancer cases, patient survival is closely related to cancer stage. Late-stage cancer after metastasis is very challenging to cure because current surgical removal of cancer is not precise enough and significantly affects bystander normal tissues. Moreover, the subsequent chemotherapy and radiation therapy affect not only malignant tumors, but also healthy tissues. Nanotechnologies for cancer treatment have the clear objective of solving these issues. Nanoparticles have been developed to more accurately differentiate early-stage malignant tumors and to treat only the tumors while dramatically minimizing side effects. In this review, we focus on recent chitosan-based nanoparticles developed with the goal of accurate cancer imaging and effective treatment. Regarding imaging applications, we review optical and magnetic resonance cancer imaging in particular. Regarding cancer treatments, we review various therapeutic methods that use chitosan-based nanoparticles, including chemo-, gene, photothermal, photodynamic and magnetic therapies. PMID:28282891
Justification of Estimates for Fiscal Year 1984 Submitted to Congress.
1983-01-01
sponsoring different aspects related to unique manufacturing methods than those pursued by DARPA, and duplication of effort is prevented by direct...weapons systems. Rapid and economical methods of satisfying these requirements must significantly precede weapons systems developments to prevent... methods for obtaining accurate and efficient geodetic measurements. Also, a major advanced sensor/G&G data collection capability is being undertaken by DNA
A. M. S. Smith; N. A. Drake; M. J. Wooster; A. T. Hudak; Z. A. Holden; C. J. Gibbons
2007-01-01
Accurate production of regional burned area maps is necessary to reduce uncertainty in emission estimates from African savannah fires. Numerous methods have been developed that map burned and unburned surfaces. These methods are typically applied to coarse spatial resolution (1 km) data to produce regional estimates of the area burned, while higher spatial resolution...
Improved Modeling of Finite-Rate Turbulent Combustion Processes in Research Combustors
NASA Technical Reports Server (NTRS)
VanOverbeke, Thomas J.
1998-01-01
The objective of this thesis is to further develop and test a stochastic model of turbulent combustion in recirculating flows. There is a requirement to increase the accuracy of multi-dimensional combustion predictions. As turbulence affects reaction rates, this interaction must be more accurately evaluated. In this work a more physically correct way of handling the interaction of turbulence on combustion is further developed and tested. As turbulence involves randomness, stochastic modeling is used. Averaged values such as temperature and species concentration are found by integrating the probability density function (pdf) over the range of the scalar. The model in this work does not assume the pdf type, but solves for the evolution of the pdf using the Monte Carlo solution technique. The model is further developed by including a more robust reaction solver, by using accurate thermodynamics, and by using more accurate transport elements. The stochastic method is used with the Semi-Implicit Method for Pressure-Linked Equations (SIMPLE). The SIMPLE method is used to solve for velocity, pressure, turbulent kinetic energy, and dissipation; the pdf solver solves for temperature and species concentration. Thus, the method is partially familiar to combustor engineers. The method is compared to benchmark experimental data and baseline calculations. The baseline method was tested on isothermal flows, evaporating sprays, and combusting sprays. Pdf and baseline predictions were performed for three diffusion flames and one premixed flame. The pdf method predicted lower combustion rates than the baseline method, in agreement with the data, except for the premixed flame, where the baseline and stochastic predictions bounded the experimental data. The use of a continuous mixing model or a relax-to-mean mixing model had little effect on the prediction of average temperature. Two grids were used in a hydrogen diffusion flame simulation. Grid density did not affect the predictions except for peak temperature and tangential velocity. The hybrid pdf method did take longer and required more memory, but it has a theoretical basis that extends to many reaction steps, which cannot be said of current turbulent combustion models.
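As a concrete illustration of the "relax to mean" mixing model named above, the toy Monte Carlo sketch below (all constants invented) evolves an ensemble of particle scalars: the mean is conserved while the scalar variance decays at a rate set by the turbulence frequency.

```python
# Toy version of the "relax to mean" (IEM) mixing model: each Monte Carlo
# particle's scalar relaxes toward the ensemble mean at a rate set by the
# turbulence frequency. Illustrative only; constants are invented.
import numpy as np

rng = np.random.default_rng(3)
phi = rng.normal(0.5, 0.2, 10_000)     # particle scalars (e.g., mixture fraction)
c_phi, omega, dt = 2.0, 50.0, 1e-4     # model constant, turbulence frequency, step

for _ in range(200):
    phi += -0.5 * c_phi * omega * (phi - phi.mean()) * dt

# Variance decays toward zero while the mean is conserved.
print("mean:", phi.mean(), "variance:", phi.var())
```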
Shen, Xiaomeng; Hu, Qiang; Li, Jun; Wang, Jianmin; Qu, Jun
2015-10-02
Comprehensive and accurate evaluation of data quality and false-positive biomarker discovery is critical to direct the method development/optimization for quantitative proteomics, which nonetheless remains challenging largely due to the high complexity and unique features of proteomic data. Here we describe an experimental null (EN) method to address this need. Because the method experimentally measures the null distribution (either technical or biological replicates) using the same proteomic samples, the same procedures, and the same batch as the case-vs-control experiment, it correctly reflects the collective effects of technical variability (e.g., variation/bias in sample preparation, LC-MS analysis, and data processing) and project-specific features (e.g., characteristics of the proteome and biological variation) on the performance of quantitative analysis. As a proof of concept, we employed the EN method to assess the quantitative accuracy and precision and the ability to quantify subtle ratio changes between groups using different experimental and data-processing approaches and in various cellular and tissue proteomes. It was found that choices of quantitative features, sample size, experimental design, data-processing strategies, and quality of chromatographic separation can profoundly affect quantitative precision and accuracy of label-free quantification. The EN method was also demonstrated as a practical tool to determine the optimal experimental parameters and a rational ratio cutoff for reliable protein quantification in specific proteomic experiments, for example, to identify the necessary number of technical/biological replicates per group that affords sufficient power for discovery. Furthermore, we assessed the ability of the EN method to estimate levels of false positives in the discovery of altered proteins, using two concocted sample sets mimicking proteomic profiling using technical and biological replicates, respectively, where the true positives/negatives are known and span a wide concentration range. It was observed that the EN method correctly reflects the null distribution in a proteomic system and accurately measures the false altered-protein discovery rate (FADR). In summary, the EN method provides a straightforward, practical, and accurate alternative to statistics-based approaches for the development and evaluation of proteomic experiments and can be universally adapted to various types of quantitative techniques.
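The core logic of the EN idea can be sketched numerically (synthetic data, invented cutoffs): measure the ratio distribution between replicates of the same sample, take a quantile of that null as the ratio cutoff, and count how many proteins called in the case-vs-control comparison are actually unchanged.

```python
# Hedged sketch of the experimental-null idea: compare two technical
# replicates of the SAME sample to measure the null distribution of
# protein ratios, then use its quantiles to judge case-vs-control ratios.
# Data and cutoffs are synthetic; only the logic follows the abstract.
import numpy as np

rng = np.random.default_rng(1)
n_prot = 5000
truth = np.zeros(n_prot, dtype=bool)
truth[:100] = True                               # 100 truly changed proteins

cv = 0.15                                        # assumed technical variation
rep_a = rng.lognormal(sigma=cv, size=n_prot)     # replicate 1 of the same sample
rep_b = rng.lognormal(sigma=cv, size=n_prot)     # replicate 2 of the same sample
null_ratio = np.log2(rep_a / rep_b)              # experimental null distribution

case = rng.lognormal(sigma=cv, size=n_prot) * np.where(truth, 2.0, 1.0)
ctrl = rng.lognormal(sigma=cv, size=n_prot)
obs_ratio = np.log2(case / ctrl)

cut = np.quantile(np.abs(null_ratio), 0.99)      # ratio cutoff from the null
called = np.abs(obs_ratio) > cut
fadr = np.mean(~truth[called])                   # false altered-protein rate
print(f"cutoff = {cut:.2f} log2 units, called = {called.sum()}, FADR = {fadr:.2f}")
```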
Analysis of New Composite Architectures
NASA Technical Reports Server (NTRS)
Whitcomb, John D.
1996-01-01
Efficient and accurate specialty finite element methods to analyze textile composites were developed and are described. Textile composites present unique challenges to the analyst because of the large, complex 'microstructure'. The geometry of the microstructure is difficult to model and it introduces unusual free surface effects. The size of the microstructure complicates the use of traditional homogenization methods. The methods developed constitute considerable progress in addressing the modeling difficulties. The details of the methods, and the attendant results obtained therefrom, are described in the various chapters included in Part 1 of the report. Specific conclusions and computer codes generated are included in Part 2 of the report.
Luo, Jun; Li, Junhua; Yang, Hang; Yu, Junping; Wei, Hongping
2017-10-01
Accurate and rapid identification of methicillin-resistant Staphylococcus aureus (MRSA) is needed to screen MRSA carriers and improve treatment. The current widely used duplex PCR methods are not able to differentiate MRSA from coexisting methicillin-susceptible S. aureus (MSSA) or other methicillin-resistant staphylococci. In this study, we aimed to develop a direct method for accurate and rapid detection of MRSA in clinical samples from open environments, such as nasal swabs. The new molecular assay is based on detecting the cooccurrence of nuc and mecA markers in a single bacterial cell by utilizing droplet digital PCR (ddPCR) with the chimeric lysin ClyH for cell lysis. The method consists of (i) dispersion of an intact single bacterium into nanoliter droplets, (ii) temperature-controlled release of genomic DNA (gDNA) by ClyH at 37°C, and (iii) amplification and detection of the markers (nuc and mecA) using standard TaqMan chemistries with ddPCR. Results were analyzed based on MRSA index ratios used for indicating the presence of the duplex-positive markers in droplets. The method was able to achieve an absolute limit of detection (LOD) of 2,900 CFU/ml for MRSA in nasal swabs spiked with excess amounts of Escherichia coli, MSSA, and other mecA-positive bacteria within 4 h. Initial testing of 104 nasal swabs showed that the method had 100% agreement with the standard culture method, while the normal duplex qPCR method had only about 87.5% agreement. The single-bacterium duplex ddPCR assay is rapid and powerful for more accurate detection of MRSA directly from clinical specimens. Copyright © 2017 American Society for Microbiology.
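The droplet-level statistics behind such an assay can be illustrated with a short calculation; the abstract does not give the exact definition of the MRSA index ratio, so the co-occurrence statistic below (observed double-positive droplets over the count expected under independent Poisson loading) is only a plausible stand-in, together with the standard ddPCR Poisson correction.

```python
# Sketch of a droplet co-occurrence check (illustrative; the paper's
# exact "MRSA index ratio" definition is not reproduced here). Under
# random Poisson loading, nuc+/mecA+ double-positive droplets arise by
# chance at a predictable rate; an excess signals that both markers sit
# in the same cell, i.e., true MRSA.
import math

n_droplets = 20_000            # hypothetical ddPCR partition count
n_nuc = 900                    # droplets positive for nuc
n_mecA = 800                   # droplets positive for mecA
n_both = 300                   # droplets positive for both markers

# Expected double positives if the two markers were loaded independently:
expected_both = n_nuc * n_mecA / n_droplets
index_ratio = n_both / expected_both
print(f"expected by chance: {expected_both:.1f}, observed: {n_both}")
print(f"co-occurrence index: {index_ratio:.1f}  (>1 suggests linked markers)")

# Poisson-corrected copies per droplet for one marker (standard ddPCR
# quantification), for reference:
lam_nuc = -math.log(1 - n_nuc / n_droplets)
print(f"nuc copies/droplet: {lam_nuc:.4f}")
```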
Arienzo, Alyexandra; Sobze, Martin Sanou; Wadoum, Raoul Emeric Guetiya; Losito, Francesca; Colizzi, Vittorio; Antonini, Giovanni
2015-01-01
According to the World Health Organization (WHO) guidelines, “safe drinking-water must not represent any significant risk to health over a lifetime of consumption, including different sensitivities that may occur between life stages”. Traditional methods of water analysis are usually complex, time consuming and require an appropriately equipped laboratory, specialized personnel and expensive instrumentation. The aim of this work was to apply an alternative method, the Micro Biological Survey (MBS), to analyse for contaminants in drinking water. Preliminary experiments were carried out to demonstrate the linearity and accuracy of the MBS method and to verify the possibility of using the evaluation of total coliforms in 1 mL of water as a sufficient parameter to roughly, though accurately, determine water microbiological quality. The MBS method was then tested “on field” to assess the microbiological quality of water sources in the city of Douala (Cameroon, Central Africa). Analyses were performed on both dug and drilled wells in different periods of the year. Results confirm that the MBS method appears to be a valid and accurate method to evaluate the microbiological quality of many water sources and it can be of valuable aid in developing countries. PMID:26308038
Radiation Heat Transfer Between Diffuse-Gray Surfaces Using Higher Order Finite Elements
NASA Technical Reports Server (NTRS)
Gould, Dana C.
2000-01-01
This paper presents recent work on developing methods for analyzing radiation heat transfer between diffuse-gray surfaces using p-version finite elements. The work was motivated by a thermal analysis of a High Speed Civil Transport (HSCT) wing structure which showed the importance of radiation heat transfer throughout the structure. The analysis also showed that refining the finite element mesh to accurately capture the temperature distribution on the internal structure led to very large meshes with unacceptably long execution times. Traditional methods for calculating surface-to-surface radiation are based on assumptions that are not appropriate for p-version finite elements. Two methods for determining internal radiation heat transfer are developed for one and two-dimensional p-version finite elements. In the first method, higher-order elements are divided into a number of sub-elements. Traditional methods are used to determine radiation heat flux along each sub-element and then mapped back to the parent element. In the second method, the radiation heat transfer equations are numerically integrated over the higher-order element. Comparisons with analytical solutions show that the integration scheme is generally more accurate than the sub-element method. Comparison to results from traditional finite elements shows that significant reduction in the number of elements in the mesh is possible using higher-order (p-version) finite elements.
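For orientation, the classical diffuse-gray relation that such surface-to-surface analyses ultimately evaluate can be written for a two-surface enclosure; the sketch below is the textbook network formula with invented plate temperatures and emissivities, not the p-version finite element implementation.

```python
# Classical diffuse-gray exchange between two surfaces forming an
# enclosure (textbook network relation), shown only to make the physical
# model concrete -- this is not the p-version finite element code.
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)

def two_surface_exchange(T1, T2, eps1, eps2, A1, A2, F12):
    """Net radiative heat flow Q12 [W] between diffuse-gray surfaces."""
    resistance = ((1 - eps1) / (eps1 * A1)
                  + 1.0 / (A1 * F12)
                  + (1 - eps2) / (eps2 * A2))
    return SIGMA * (T1**4 - T2**4) / resistance

# Hypothetical numbers: two 1 m^2 parallel plates with view factor F12 = 1.
print(two_surface_exchange(600.0, 400.0, 0.8, 0.8, 1.0, 1.0, 1.0))
```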
Fast and accurate quantum molecular dynamics of dense plasmas across temperature regimes
Sjostrom, Travis; Daligault, Jerome
2014-10-10
Here, we develop and implement a new quantum molecular dynamics approximation that allows fast and accurate simulations of dense plasmas from cold to hot conditions. The method is based on a carefully designed orbital-free implementation of density functional theory. The results for hydrogen and aluminum are in very good agreement with Kohn-Sham (orbital-based) density functional theory and path integral Monte Carlo calculations for microscopic features such as the electron density as well as the equation of state. The present approach does not scale with temperature and hence extends to higher temperatures than are accessible in the Kohn-Sham method and lower temperatures than are accessible by path integral Monte Carlo calculations, while being significantly less computationally expensive than either of those two methods.
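The defining ingredient of an orbital-free approach is an explicit kinetic-energy density functional. As a minimal illustration (the paper's functional is finite-temperature and more elaborate), the sketch below evaluates the zero-temperature Thomas-Fermi functional on a density grid and checks it against the uniform-gas limit.

```python
# Minimal orbital-free ingredient: the Thomas-Fermi kinetic-energy
# functional evaluated on a density grid (atomic units). This sketch only
# shows the orbital-free idea of computing E[n] without orbitals.
import numpy as np

C_TF = 0.3 * (3.0 * np.pi**2) ** (2.0 / 3.0)   # Thomas-Fermi constant

def tf_kinetic_energy(n, dv):
    """T_TF[n] = C_TF * sum n^{5/3} dV over the real-space grid."""
    return C_TF * np.sum(n ** (5.0 / 3.0)) * dv

# Uniform electron gas check: a box of volume V with constant density.
V, N = 100.0, 64.0
grid = np.full(1000, N / V)                    # constant density samples
print(tf_kinetic_energy(grid, V / grid.size))  # ~ C_TF * V * (N/V)^(5/3)
```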
NASA Technical Reports Server (NTRS)
Leser, William P.; Yuan, Fuh-Gwo; Leser, William P.
2013-01-01
A method of numerically estimating dynamic Green's functions using the finite element method is proposed. These Green's functions are accurate in a limited frequency range dependent on the mesh size used to generate them. This range can often match or exceed the frequency sensitivity of the traditional acoustic emission sensors. An algorithm is also developed to characterize an acoustic emission source by obtaining information about its strength and temporal dependence. This information can then be used to reproduce the source in a finite element model for further analysis. Numerical examples are presented that demonstrate the ability of the band-limited Green's functions approach to determine the moment tensor coefficients of several reference signals to within seven percent, as well as accurately reproduce the source-time function.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Z J
2012-12-06
The overriding objective for this project is to develop an efficient and accurate method for capturing strong discontinuities and fine smooth flow structures of disparate length scales with unstructured grids, and demonstrate its potential for problems relevant to DOE. More specifically, we plan to achieve the following objectives: 1. Extend the SV method to three dimensions, and develop a fourth-order accurate SV scheme for tetrahedral grids. Optimize the SV partition by minimizing a form of the Lebesgue constant. Verify the order of accuracy using scalar conservation laws with an analytical solution; 2. Extend the SV method to the Navier-Stokes equations for the simulation of viscous flow problems. Two promising approaches to compute the viscous fluxes will be tested and analyzed; 3. Parallelize the 3D viscous SV flow solver using domain decomposition and message passing. Optimize the cache performance of the flow solver by designing data structures that minimize data access times; 4. Demonstrate the SV method on a wide range of flow problems including both discontinuities and complex smooth structures. The objectives remain the same as those outlined in the original proposal. We anticipate no technical obstacles in meeting these objectives.
Geologic Carbon Sequestration Leakage Detection: A Physics-Guided Machine Learning Approach
NASA Astrophysics Data System (ADS)
Lin, Y.; Harp, D. R.; Chen, B.; Pawar, R.
2017-12-01
One of the risks of large-scale geologic carbon sequestration is the potential migration of fluids out of the storage formations. Accurate and fast detection of this fluid migration is not only important but also challenging, due to the large subsurface uncertainty and complex governing physics. Traditional leakage detection and monitoring techniques rely on geophysical observations, including pressure. However, the accuracy of these methods is limited because the indirect information they provide requires expert interpretation, yielding inaccurate estimates of leakage rates and locations. In this work, we develop a novel machine-learning technique based on support vector regression to effectively and efficiently predict leakage locations and leakage rates from a limited number of pressure observations. Compared to conventional data-driven approaches, which can usually be seen as a "black box" procedure, we develop a physics-guided machine learning method that incorporates the governing physics into the learning procedure. To validate the performance of our proposed leakage detection method, we apply it to both 2D and 3D synthetic subsurface models. Our novel CO2 leakage detection method has shown high detection accuracy in the example problems.
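A purely data-driven baseline in the same spirit is easy to sketch with scikit-learn: train support vector regression to map sparse pressure observations to a leakage rate. The toy forward model, sensor layout, and hyperparameters below are invented, and the physics-guided constraints that distinguish the paper's method are not reproduced.

```python
# Illustrative data-driven baseline: learn a map from sparse pressure
# observations to leakage rate with support vector regression. The
# synthetic "forward model" below stands in for a reservoir simulator.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_runs, n_sensors = 400, 8
rate = rng.uniform(0.1, 5.0, n_runs)                   # leakage rate (arb.)
loc = rng.uniform(0.0, 10.0, n_runs)                   # leak location (arb.)
sensors = np.linspace(0.0, 10.0, n_sensors)

# Toy pressure anomaly: amplitude ~ rate, decaying with sensor distance.
press = rate[:, None] / (1.0 + (sensors - loc[:, None]) ** 2)
press += rng.normal(0.0, 0.02, press.shape)            # observation noise

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.05))
model.fit(press[:300], rate[:300])
pred = model.predict(press[300:])
print("mean abs. error on held-out runs:", np.mean(np.abs(pred - rate[300:])))
```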
DOE Office of Scientific and Technical Information (OSTI.GOV)
Millis, Andrew
Understanding the behavior of interacting electrons in molecules and solids, so that one can predict new superconductors, catalysts, light harvesters, energy and battery materials and optimize existing ones, is the "quantum many-body problem". This is one of the scientific grand challenges of the 21st century. A complete solution to the problem has been proven to be exponentially hard, meaning that straightforward numerical approaches fail. New insights and new methods are needed to provide accurate yet feasible approximate solutions. This CMSCN project brought together chemists and physicists to combine insights from the two disciplines to develop innovative new approaches. Outcomes included the Density Matrix Embedding method, a new, computationally inexpensive and extremely accurate approach that may enable first-principles treatment of superconducting and magnetic properties of strongly correlated materials; new techniques for existing methods, including an Adaptively Truncated Hilbert Space approach that will vastly expand the capabilities of the dynamical mean field method; a self-energy embedding theory; and a new memory-function based approach to calculations of the behavior of driven systems. The methods developed under this project are now being applied to improve our understanding of superconductivity, to calculate novel topological properties of materials, and to characterize and improve the properties of nanoscale devices.
NASA Technical Reports Server (NTRS)
Stremel, Paul M.
1995-01-01
A method has been developed to accurately compute the viscous flow in three-dimensional (3-D) enclosures. This method is the 3-D extension of a two-dimensional (2-D) method developed for the calculation of flow over airfoils. The 2-D method has been tested extensively and has been shown to accurately reproduce experimental results. As in the 2-D method, the 3-D method provides for the non-iterative solution of the incompressible Navier-Stokes equations by means of a fully coupled implicit technique. The solution is calculated on a body-fitted computational mesh incorporating a staggered grid methodology. In the staggered grid method, the three components of vorticity are defined at the centers of the computational cell sides, while the velocity components are defined as normal vectors at the centers of the computational cell faces. The staggered grid orientation provides for the accurate definition of the vorticity components at the vorticity locations, the divergence of vorticity at the mesh cell nodes, and the conservation of mass at the mesh cell centers. The solution is obtained by utilizing a fractional step solution technique in the three coordinate directions. The boundary conditions for the vorticity and velocity are calculated implicitly as part of the solution. The method provides for the non-iterative solution of the flow field and satisfies the conservation of mass and divergence of vorticity to machine zero at each time step. To test the method, simple driven cavity flows have been computed. The driven cavity flow is defined as the flow in an enclosure driven by a moving plate at the top of the enclosure. To demonstrate the ability of the method to predict the flow in arbitrary cavities, results will be shown for both cubic and curved cavities.
Calculation of transonic flows using an extended integral equation method
NASA Technical Reports Server (NTRS)
Nixon, D.
1976-01-01
An extended integral equation method for transonic flows is developed. In the extended integral equation method velocities in the flow field are calculated in addition to values on the aerofoil surface, in contrast with the less accurate 'standard' integral equation method in which only surface velocities are calculated. The results obtained for aerofoils in subcritical flow and in supercritical flow when shock waves are present compare satisfactorily with the results of recent finite difference methods.
Kamal, Abid; Khan, Washim; Ahmad, Sayeed; Ahmad, F. J.; Saleem, Kishwar
2015-01-01
Objective: The present study was designed to develop simple, accurate and sensitive reversed-phase high-performance liquid chromatography (RP-HPLC) and high-performance thin-layer chromatography (HPTLC) methods for the quantification of khellin present in the seeds of Ammi visnaga. Materials and Methods: RP-HPLC analysis was performed on a C18 column with methanol:water (75:25, v/v) as the mobile phase. The HPTLC method involved densitometric evaluation of khellin after resolving it on a silica gel plate using ethyl acetate:toluene:formic acid (5.5:4.0:0.5, v/v/v) as the mobile phase. Results: The developed HPLC and HPTLC methods were validated for precision (interday, intraday and intersystem), robustness, accuracy, limit of detection and limit of quantification. The relationship between the concentration of standard solutions and the peak response was linear in both the HPLC and HPTLC methods, with concentration ranges of 10–80 μg/mL in HPLC and 25–1,000 ng/spot in HPTLC for khellin. The relative standard deviation values for method precision were found to be 0.63–1.97% in HPLC and 0.62–2.05% in HPTLC for khellin. Accuracy of the methods was checked by recovery studies conducted at three different concentration levels, and the average percentage recovery was found to be 100.53% in HPLC and 100.08% in HPTLC for khellin. Conclusions: The developed HPLC and HPTLC methods for the quantification of khellin were found to be simple, precise, specific, sensitive and accurate, and can be used for routine analysis and quality control of A. visnaga and several formulations containing it as an ingredient. PMID:26681890
Automated seeding-based nuclei segmentation in nonlinear optical microscopy.
Medyukhina, Anna; Meyer, Tobias; Heuke, Sandro; Vogler, Nadine; Dietzek, Benjamin; Popp, Jürgen
2013-10-01
Nonlinear optical (NLO) microscopy based, e.g., on coherent anti-Stokes Raman scattering (CARS) or two-photon-excited fluorescence (TPEF) is a fast label-free imaging technique with great potential for biomedical applications. However, NLO microscopy as a diagnostic tool is still in its infancy; there is a lack of robust and durable nuclei segmentation methods capable of accurate image processing in cases of variable image contrast, nuclear density, and type of investigated tissue. Nonetheless, such algorithms, specifically adapted to NLO microscopy, are one prerequisite for the technology to be routinely used, e.g., in pathology or intraoperatively for surgical guidance. In this paper, we compare the applicability of different seeding and boundary detection methods to NLO microscopic images in order to develop an optimal seeding-based approach capable of accurate segmentation of both TPEF and CARS images. Among the different methods, the Laplacian of Gaussian filter showed the best accuracy for seeding the image, while a modified seeded watershed segmentation was the most accurate in the task of boundary detection. The resulting combination of these methods, followed by verification of the detected nuclei, achieves high average sensitivity and specificity when applied to various types of NLO microscopy images.
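The winning combination reported above (Laplacian-of-Gaussian seeding followed by a seeded watershed) can be sketched with scikit-image on a synthetic image; the sigmas and thresholds are illustrative, not the tuned values from the study.

```python
# Seeding-based nuclei segmentation in the spirit of the paper's best
# combination: Laplacian-of-Gaussian seeding followed by seeded
# watershed, demonstrated on a synthetic image with scikit-image.
import numpy as np
from skimage.draw import disk
from skimage.feature import blob_log
from skimage.filters import gaussian
from skimage.segmentation import watershed

# Synthetic "nuclei": a few bright blobs on a dark background.
img = np.zeros((128, 128))
for center in [(30, 30), (60, 80), (95, 40)]:
    img[disk(center, 12)] = 1.0
img = gaussian(img, sigma=2)

# 1) Seeding: LoG blob detection gives one seed per nucleus.
blobs = blob_log(img, min_sigma=5, max_sigma=15, threshold=0.05)
markers = np.zeros(img.shape, dtype=int)
for i, (y, x, s) in enumerate(blobs, start=1):
    markers[int(y), int(x)] = i

# 2) Boundary detection: seeded watershed on the inverted intensity,
#    restricted to a foreground mask.
mask = img > 0.2
labels = watershed(-img, markers=markers, mask=mask)
print("nuclei found:", labels.max())
```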
A robust recognition and accurate locating method for circular coded diagonal target
NASA Astrophysics Data System (ADS)
Bao, Yunna; Shang, Yang; Sun, Xiaoliang; Zhou, Jiexin
2017-10-01
As a category of special control points that can be automatically identified, artificial coded targets have been widely developed in the fields of computer vision, photogrammetry, augmented reality, etc. In this paper, a new circular coded target designed by RockeTech technology Corp. Ltd is analyzed and studied, called the circular coded diagonal target (CCDT). A novel detection and recognition method with good robustness is proposed and implemented in Visual Studio. In this algorithm, the ellipse features of the center circle are first used for rough positioning. Then, according to the characteristics of the center diagonal target, a circular frequency filter is designed to choose the correct center circle and eliminate non-target noise. The precise positioning of the coded target is done by the correlation-coefficient-fitting extreme value method. Finally, coded target recognition is achieved by decoding the binary sequence in the outer ring of the extracted target. To test the proposed algorithm, simulation experiments and real experiments were carried out. The results show that the CCDT recognition and accurate locating method proposed in this paper can robustly recognize and accurately locate targets in complex and noisy backgrounds.
Nonnegative methods for bilinear discontinuous differencing of the S N equations on quadrilaterals
Maginot, Peter G.; Ragusa, Jean C.; Morel, Jim E.
2016-12-22
Historically, matrix lumping and ad hoc flux fixups have been the only methods used to eliminate or suppress negative angular flux solutions associated with the unlumped bilinear discontinuous (UBLD) finite element spatial discretization of the two-dimensional S N equations. Though matrix lumping inhibits negative angular flux solutions of the S N equations, it does not guarantee strictly positive solutions. In this paper, we develop and define a strictly nonnegative, nonlinear, Petrov-Galerkin finite element method that fully preserves the bilinear discontinuous spatial moments of the transport equation. Additionally, we define two ad hoc fixups that maintain particle balance and explicitly set negative nodes of the UBLD finite element solution to zero but use different auxiliary equations to fully define their respective solutions. We assess the ability to inhibit negative angular flux solutions and the accuracy of every spatial discretization that we consider using a glancing void test problem with a discontinuous solution known to stress numerical methods. Though significantly more computationally intense, the nonlinear Petrov-Galerkin scheme results in a strictly nonnegative solution and is a more accurate solution than all the other methods considered. One fixup, based on shape preserving, results in a strictly nonnegative final solution but has increased numerical diffusion relative to the Petrov-Galerkin scheme and is less accurate than the UBLD solution. The second fixup, which preserves as many spatial moments as possible while setting negative values of the unlumped solution to zero, is less accurate than the Petrov-Galerkin scheme but is more accurate than the other fixup. However, it fails to guarantee a strictly nonnegative final solution. As a result, the fully lumped bilinear discontinuous finite element solution is the least accurate method, with significantly more numerical diffusion than the Petrov-Galerkin scheme and both fixups.
NASA Astrophysics Data System (ADS)
Su, Wei; Lindsay, Scott; Liu, Haihu; Wu, Lei
2017-08-01
Rooted in gas kinetics, the lattice Boltzmann method (LBM) is a powerful tool for modeling hydrodynamics. In the past decade, it has been extended to simulate rarefied gas flows beyond the Navier-Stokes level, either by using high-order Gauss-Hermite quadrature, or by introducing a relaxation time that is a function of the gas-wall distance. While the former method, with a limited number of discrete velocities (e.g., D2Q36), is accurate up to the early transition flow regime, the latter method (especially the multiple relaxation time (MRT) LBM), with the same discrete velocities as those used in simulating hydrodynamics (i.e., D2Q9), is accurate up to the free-molecular flow regime in the planar Poiseuille flow. This is quite astonishing in the sense that fewer discrete velocities are more accurate. In this paper, by solving the Bhatnagar-Gross-Krook kinetic equation accurately via the discrete velocity method, we find that the high-order Gauss-Hermite quadrature cannot describe the large variation in the velocity distribution function when the rarefaction effect is strong, but the MRT-LBM can capture the flow velocity well because it is equivalent to solving the Navier-Stokes equations with an effective shear viscosity. Since the MRT-LBM has only been validated in simple channel flows, and for complex geometries it is difficult to find the effective viscosity, it is necessary to assess its performance for the simulation of rarefied gas flows. Our numerical simulations based on the accurate discrete velocity method suggest that the accuracy of the MRT-LBM is reduced significantly in the simulation of rarefied gas flows through rough surfaces and porous media. Our simulation results could serve as benchmark cases for future development of the LBM for modeling and simulation of rarefied gas flows in complex geometries.
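For readers unfamiliar with the machinery being discussed, a bare-bones single-relaxation-time (BGK) D2Q9 lattice Boltzmann step for a body-force-driven periodic channel is sketched below; this is the hydrodynamic baseline, not the MRT variant with wall-distance-dependent relaxation times studied in the paper, and all parameters are illustrative.

```python
# Minimal D2Q9 BGK lattice Boltzmann loop (collide, stream, bounce-back)
# for a periodic channel driven by a small body force. Illustrative only.
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and opposite directions.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]

tau, g = 0.8, 1e-6                 # relaxation time, body acceleration
nx, ny = 16, 33                    # periodic in x; solid walls at y = 0, ny-1
f = w[:, None, None] * np.ones((9, ny, nx))
wall = np.zeros((ny, nx), dtype=bool)
wall[0], wall[-1] = True, True

def feq(rho, ux, uy):
    cu = c[:, 0, None, None]*ux + c[:, 1, None, None]*uy
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))

for _ in range(4000):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho + tau * g  # velocity-shift forcing
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (feq(rho, ux, uy) - f) / tau                # BGK collision
    for q in range(9):                               # streaming
        f[q] = np.roll(np.roll(f[q], c[q, 1], axis=0), c[q, 0], axis=1)
    f[:, wall] = f[opp][:, wall]                     # full-way bounce-back

print("centerline velocity (approaching Poiseuille):", ux[ny // 2].mean())
```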
NASA Astrophysics Data System (ADS)
An, Shengpei; Hu, Tianyue; Liu, Yimou; Peng, Gengxin; Liang, Xianghao
2017-12-01
Static correction is a crucial step in seismic data processing for onshore plays, which frequently have complex near-surface conditions. The effectiveness of the static correction depends on an accurate determination of first-arrival traveltimes. However, it is difficult to accurately auto-pick the first arrivals for data with low signal-to-noise ratios (SNR), especially for those measured in areas with a complex near-surface. The technique of super-virtual interferometry (SVI) has the potential to enhance the SNR of first arrivals. In this paper, we develop an extended SVI with (1) the application of reverse correlation to improve the capability of SNR enhancement at near offsets, and (2) the usage of a multi-domain method to partially overcome the limitation of the current method given insufficient available source-receiver combinations. Compared to the standard SVI, the SNR enhancement of the extended SVI can be up to 40%. In addition, we propose a quality control procedure based on the statistical characteristics of multichannel recordings of first arrivals. It can auto-correct mispicks, which might be spurious events generated by the SVI. This procedure is very robust, highly automatic, and able to accommodate large data volumes in batches. Finally, we develop an automatic first-arrival picking method that combines the extended SVI and the quality control procedure. Both the synthetic and field data examples demonstrate that the proposed method is able to accurately auto-pick first arrivals in seismic traces with low SNR. The quality of the stacked seismic sections obtained from this method is much better than that obtained from an auto-picking method commonly employed by commercial software.
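The interferometric principle underlying SVI, that cross-correlating a trace pair and stacking over many sources reinforces the coherent inter-receiver delay while incoherent noise cancels, can be demonstrated with a toy numpy model (impulsive arrivals, invented geometry); the full SVI workflow with its convolution step is not reproduced.

```python
# Simplified numerical illustration of the interferometric principle
# behind SVI: cross-correlating a noisy trace pair and stacking over
# many sources reinforces the coherent first-arrival delay while random
# noise averages out. This is a toy model, not the full SVI workflow.
import numpy as np

rng = np.random.default_rng(7)
nt, n_sources, true_lag = 500, 200, 18      # samples; inter-receiver delay

stack = np.zeros(2 * nt - 1)
for _ in range(n_sources):
    wavelet_pos = rng.integers(50, 300)      # arrival time varies per source
    trace_a = np.zeros(nt)
    trace_a[wavelet_pos] = 1.0               # impulsive first arrival at A
    trace_b = np.roll(trace_a, true_lag)     # same arrival later at B
    trace_a += rng.normal(0, 0.8, nt)        # strong random noise
    trace_b += rng.normal(0, 0.8, nt)
    stack += np.correlate(trace_b, trace_a, mode="full")

lags = np.arange(-(nt - 1), nt)
print("recovered lag:", lags[np.argmax(stack)], "(true:", true_lag, ")")
```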
Quantitative estimation of itopride hydrochloride and rabeprazole sodium from capsule formulation.
Pillai, S; Singhvi, I
2008-09-01
Two simple, accurate, economical and reproducible UV spectrophotometric methods and one HPLC method for the simultaneous estimation of the two-component drug mixture of itopride hydrochloride and rabeprazole sodium from a combined capsule dosage form have been developed. The first method involves the formation and solving of simultaneous equations using 265.2 nm and 290.8 nm as the two wavelengths. The second method is based on two-wavelength calculation; the wavelengths selected for the estimation of itopride hydrochloride were 278.0 nm and 298.8 nm, and for rabeprazole sodium 253.6 nm and 275.2 nm. The developed HPLC method is a reverse-phase chromatographic method using a Phenomenex C18 column and acetonitrile:phosphate buffer (35:65, v/v), pH 7.0, as the mobile phase. All the developed methods obey Beer's law in the concentration ranges employed. Results of analysis were validated statistically and by recovery studies.
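The "simultaneous equations" step of the first method is a 2x2 linear solve: two absorbance readings at the two wavelengths, with known absorptivities of each drug at each wavelength, determine both concentrations. The coefficients below are placeholders, not the paper's calibration data.

```python
# Worked example of the simultaneous-equations (Vierordt) step: two
# absorbance readings at two wavelengths give a 2x2 linear system in the
# two concentrations. Absorptivity values are placeholders.
import numpy as np

# Rows: wavelengths (265.2 nm, 290.8 nm); columns: itopride, rabeprazole.
# A_COEF[i, j] = absorptivity of drug j at wavelength i (assumed values).
A_COEF = np.array([[0.045, 0.032],
                   [0.012, 0.058]])
absorbances = np.array([0.52, 0.47])          # measured mixture absorbances

conc = np.linalg.solve(A_COEF, absorbances)   # path length b = 1 cm assumed
print(f"itopride hydrochloride: {conc[0]:.2f}, "
      f"rabeprazole sodium: {conc[1]:.2f} (arbitrary concentration units)")
```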
Quantification of HTLV-1 Clonality and TCR Diversity
Laydon, Daniel J.; Melamed, Anat; Sim, Aaron; Gillet, Nicolas A.; Sim, Kathleen; Darko, Sam; Kroll, J. Simon; Douek, Daniel C.; Price, David A.; Bangham, Charles R. M.; Asquith, Becca
2014-01-01
Estimation of immunological and microbiological diversity is vital to our understanding of infection and the immune response. For instance, what is the diversity of the T cell repertoire? These questions are partially addressed by high-throughput sequencing techniques that enable identification of immunological and microbiological “species” in a sample. Estimators of the number of unseen species are needed to estimate population diversity from sample diversity. Here we test five widely used non-parametric estimators, and develop and validate a novel method, DivE, to estimate species richness and distribution. We used three independent datasets: (i) viral populations from subjects infected with human T-lymphotropic virus type 1; (ii) T cell antigen receptor clonotype repertoires; and (iii) microbial data from infant faecal samples. When applied to datasets with rarefaction curves that did not plateau, existing estimators systematically increased with sample size. In contrast, DivE consistently and accurately estimated diversity for all datasets. We identify conditions that limit the application of DivE. We also show that DivE can be used to accurately estimate the underlying population frequency distribution. We have developed a novel method that is significantly more accurate than commonly used biodiversity estimators in microbiological and immunological populations. PMID:24945836
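Among the widely used non-parametric estimators such studies benchmark against is Chao1, which estimates unseen species from the singleton and doubleton counts; a minimal implementation, with a hypothetical clone-size vector, follows.

```python
# Minimal Chao1 richness estimator for a vector of per-species counts
# (e.g., clone sizes). One of the classic non-parametric estimators that
# methods like DivE are compared against; sample data are hypothetical.
from collections import Counter

def chao1(counts):
    """Lower-bound richness estimate from singleton/doubleton counts."""
    abundance = Counter(counts)
    s_obs = len(counts)
    f1, f2 = abundance[1], abundance[2]
    if f2 == 0:                      # bias-corrected form when f2 = 0
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)

sample = [1, 1, 1, 2, 2, 3, 5, 8, 21]    # hypothetical clone sizes
print(f"observed species: {len(sample)}, Chao1 estimate: {chao1(sample):.1f}")
```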
El-Bagary, Ramzia I; Elkady, Ehab F; Ayoub, Bassam M
2011-03-01
Simple, accurate and precise spectrofluorometric and spectrophotometric methods have been developed and validated for the determination of sitagliptin phosphate monohydrate (STG) and metformin HCl (MET). Zero-order, first-derivative and ratio-derivative spectrophotometric methods and fluorometric methods have been developed. The zero-order spectrophotometric method was used for the determination of STG in the range of 50-300 μg mL(-1). The first-derivative spectrophotometric method was used for the determination of MET in the range of 2-12 μg mL(-1) and STG in the range of 50-300 μg mL(-1) by measuring the peak amplitude at 246.5 nm and 275 nm, respectively. The first derivative of ratio spectra spectrophotometric method used the peak amplitudes at 232 nm and 239 nm for the determination of MET in the range of 2-12 μg mL(-1). The fluorometric method was used for the determination of STG in the range of 0.25-110 μg mL(-1). The proposed methods were used to determine each drug in a binary mixture with metformin and in a ternary mixture with metformin and the sitagliptin alkaline degradation product obtained after alkaline hydrolysis of sitagliptin. The results were statistically compared using one-way analysis of variance (ANOVA). The developed methods were satisfactorily applied to the analysis of the pharmaceutical formulations and proved to be specific and accurate for the quality control of the cited drugs in pharmaceutical dosage forms.
Automated tumor volumetry using computer-aided image segmentation.
Gaonkar, Bilwaj; Macyszyn, Luke; Bilello, Michel; Sadaghiani, Mohammed Salehi; Akbari, Hamed; Atthiah, Mark A; Ali, Zarina S; Da, Xiao; Zhan, Yiqang; O'Rourke, Donald; Grady, Sean M; Davatzikos, Christos
2015-05-01
Accurate segmentation of brain tumors, and quantification of tumor volume, is important for diagnosis, monitoring, and planning therapeutic intervention. Manual segmentation is not widely used because of time constraints. Previous efforts have mainly produced methods that are tailored to a particular type of tumor or acquisition protocol and have mostly failed to produce a method that functions on different tumor types and is robust to changes in scanning parameters, resolution, and image quality, thereby limiting their clinical value. Herein, we present a semiautomatic method for tumor segmentation that is fast, accurate, and robust to a wide variation in image quality and resolution. A semiautomatic segmentation method based on the geodesic distance transform was developed and validated by using it to segment 54 brain tumors. Glioblastomas, meningiomas, and brain metastases were segmented. Qualitative validation was based on physician ratings provided by three clinical experts. Quantitative validation was based on comparing semiautomatic and manual segmentations. Tumor segmentations obtained using manual and automatic methods were compared quantitatively using the Dice measure of overlap. Subjective evaluation was performed by having human experts rate the computerized segmentations on a 0-5 rating scale where 5 indicated perfect segmentation. The proposed method addresses a significant, unmet need in the field of neuro-oncology. Specifically, this method enables clinicians to obtain accurate and reproducible tumor volumes without the need for manual segmentation. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
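The quantitative validation metric named above, the Dice overlap between manual and semiautomatic masks, is simple to compute; the masks in the sketch below are synthetic placeholders.

```python
# Dice overlap between two boolean segmentation masks, the quantitative
# validation metric named in the abstract. Masks here are synthetic.
import numpy as np

def dice(mask_a, mask_b):
    """Dice = 2|A & B| / (|A| + |B|) for boolean segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

manual = np.zeros((64, 64), dtype=bool)
manual[20:40, 20:40] = True               # hypothetical manual tumor mask
auto = np.zeros_like(manual)
auto[22:42, 21:41] = True                 # hypothetical semiautomatic mask
print(f"Dice overlap: {dice(manual, auto):.3f}")
```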
NASA Astrophysics Data System (ADS)
Ban, Yunyun; Chen, Tianqin; Yan, Jun; Lei, Tingwu
2017-04-01
The measurement of sediment concentration in water is of great importance in soil erosion research and soil and water loss monitoring systems. The traditional weighing method has long been the foundation of all other measuring methods and of instrument calibration. The development of a new method to replace the traditional oven-drying method is of interest in research and practice for quick and efficient measurement of sediment concentration, especially in the field. A new method is advanced in this study for accurately measuring sediment concentration based on the accurate measurement of the mass of the sediment-water mixture in a confined constant-volume container (CVC). A sediment-laden water sample is put into the CVC to determine its mass before the CVC is filled with water and weighed again for the total mass of the water and sediments in the container. The known volume of the CVC, the mass of sediment-laden water, and the sediment particle density are used to calculate the mass of water that is replaced by sediments, and therefore the sediment concentration of the sample. The influence of water temperature was corrected for by determining the water density from the measured water temperature before measurements were conducted. The CVC was used to eliminate the surface tension effect so as to obtain an accurate volume of the water and sediment mixture. Experimental results showed that the method was capable of measuring sediment concentrations from 0.5 up to 1200 kg m-3. A good linear relationship existed between the designed and measured sediment concentrations, with all coefficients of determination greater than 0.999 and an averaged relative error less than 0.2%. All of this indicates that the new method is capable of measuring the full range of sediment concentrations above 0.5 kg m-3 and can replace the traditional oven-drying method as a standard method for evaluating and calibrating other methods.
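The arithmetic described above follows from mass balance: after topping the container up, the total mass exceeds that of a water-only container by the sediment mass times (1 - rho_w/rho_s). The worked example below uses invented masses and an assumed particle density, and refers the concentration to the original sample volume as one possible convention.

```python
# Worked version of the constant-volume-container (CVC) arithmetic the
# abstract describes. Numbers are invented for illustration; the key
# relation is m_total = rho_w*V + m_s*(1 - rho_w/rho_s).
RHO_W = 998.2      # water density at 20 C, kg/m^3 (temperature-corrected)
RHO_S = 2650.0     # assumed sediment particle density, kg/m^3
V_CVC = 1.0e-3     # container volume, m^3 (1 L)

m_sample = 0.300   # kg, sediment-laden sample put into the CVC
m_total = 1.030    # kg, mass after topping the CVC up with water

# Solve m_total = rho_w*V + m_s*(1 - rho_w/rho_s) for the sediment mass:
m_s = (m_total - RHO_W * V_CVC) / (1.0 - RHO_W / RHO_S)

# Concentration referred to the original sample volume (one convention):
v_sample = (m_sample - m_s) / RHO_W + m_s / RHO_S
print(f"sediment mass: {m_s*1e3:.1f} g, concentration: {m_s / v_sample:.1f} kg/m^3")
```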
NASA Astrophysics Data System (ADS)
Ansory, Achmad; Prajitno, Prawito; Wijaya, Sastra Kusuma
2018-02-01
Electrical Impedance Tomography (EIT) is an imaging method that is able to estimate the electrical impedance distribution inside an object. This EIT system was developed using 32 electrodes and a microcontroller-based module. From a pair of electrodes, a sinusoidal current of 3 mA is injected, and the voltage differences between other pairs of electrodes are measured. The voltage measurement data are then sent to MATLAB and the EIDORS software, where they are used to reconstruct a two-dimensional image. The system can detect and determine the position of a phantom in the tank. The object's position is accurately reconstructed, with an average shift of 0.69 cm, but the object's area cannot be accurately reconstructed. The object's image is more accurately reconstructed when the object is located near the electrodes, has a larger size, and when the current injected into the system has a frequency of 100 kHz or 200 kHz.
Next Generation of Leaching Tests
A corresponding abstract has been cleared for this presentation. The four methods comprising the Leaching Environmental Assessment Framework are described along with the tools to support implementation of the more rigorous and accurate source terms that are developed using LEAF ...
Development of a binder fracture test to determine fracture energy.
DOT National Transportation Integrated Search
2012-04-01
It has been found that binder testing methods in current specifications do not accurately predict cracking performance at intermediate temperatures. Fracture energy has been determined to be strongly correlated to fracture resistance of asphalt mixtu...
ERIC Educational Resources Information Center
Sullivan, Amanda L.; Kohli, Nidhi; Farnsworth, Elyse M.; Sadeh, Shanna; Jones, Leila
2017-01-01
Objective: Accurate estimation of developmental trajectories can inform instruction and intervention. We compared the fit of linear, quadratic, and piecewise mixed-effects models of reading development among students with learning disabilities relative to their typically developing peers. Method: We drew an analytic sample of 1,990 students from…
Development of advanced acreage estimation methods
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr. (Principal Investigator)
1982-01-01
The development of an accurate and efficient algorithm for analyzing the structure of MSS data, the application of the Akaike information criterion to mixture models, and a research plan to delineate some of the technical issues and associated tasks in the area of rice scene radiation characterization are discussed. The AMOEBA clustering algorithm is refined and documented.
PROGRAMMED INSTRUCTION IN THE BRITISH ARMED FORCES, A REPORT ON RESEARCH AND DEVELOPMENT.
ERIC Educational Resources Information Center
WALLIS, D.; AND OTHERS
THE BRITISH ARMED SERVICES HAVE APPLIED PROGRAMING IN SCHOLASTIC SUBJECTS. A MARKED IMPROVEMENT IN THE TECHNOLOGY OF TRAINING HAS RESULTED IN THE DEVELOPMENT OF A MORE SYSTEMATIC DERIVATION OF TRAINING OBJECTIVES, CLOSER ASSESSMENT OF KNOWLEDGE AND ABILITY OF POTENTIAL STUDENTS, AND MORE ACCURATE SPECIFICATION OF CONTENTS, METHODS, AND MATERIALS…
Environmental dynamics at orbital altitudes
NASA Technical Reports Server (NTRS)
Karr, G. R.
1976-01-01
The influence of real satellite aerodynamics on the determination of upper atmospheric density was investigated. A method of analysis of satellite drag data is presented which includes the effect of satellite lift and the variation in aerodynamic properties around the orbit. The studies indicate that satellite lift may be responsible for the observed orbit precession rather than a super-rotation of the upper atmosphere. The influence of simplifying assumptions concerning the aerodynamics of objects in falling sphere analysis was evaluated, and an improved method of analysis was developed. Wind tunnel data were used to develop more accurate drag coefficient relationships for studying altitudes between 80 and 120 km. The improved drag coefficient relationships revealed a considerable error in previous falling sphere drag interpretations; these data were reanalyzed using the more accurate relationships. Theoretical investigations of the drag coefficient in the very low speed ratio region were also conducted.
NASA Technical Reports Server (NTRS)
Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung
2016-01-01
Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured, or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the space-time conservation element solution element (CESE) numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework is assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.
A Kosloff/Basal method, 3D migration program implemented on the CYBER 205 supercomputer
NASA Technical Reports Server (NTRS)
Pyle, L. D.; Wheat, S. R.
1984-01-01
Conventional finite difference migration has relied on approximations to the acoustic wave equation which allow energy to propagate only downwards. Although generally reliable, such approaches usually do not yield an accurate migration for geological structures with strong lateral velocity variations or with steeply dipping reflectors. An earlier study by D. Kosloff and E. Baysal (Migration with the Full Acoustic Wave Equation) examined an alternative approach based on the full acoustic wave equation. The 2D, Fourier type algorithm which was developed was tested by Kosloff and Baysal against synthetic data and against physical model data. The results indicated that such a scheme gives accurate migration for complicated structures. This paper describes the development and testing of a vectorized, 3D migration program for the CYBER 205 using the Kosloff/Baysal method. The program can accept as many as 65,536 zero offset (stacked) traces.
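The building block that distinguishes such Fourier-type full-wave schemes from one-way finite-difference migration is the spectral evaluation of spatial derivatives, which is exact for band-limited data; a minimal sketch:

```python
# Core building block of Fourier-type full-wave migration: spatial
# derivatives evaluated spectrally. A second x-derivative computed with
# the FFT is exact for band-limited data, which is what lets such
# methods honor the full acoustic wave equation. Minimal sketch:
import numpy as np

n, L = 256, 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
p = np.sin(3.0 * x)                            # sample wavefield slice

k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
d2p = np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(p)))

print("max error vs. analytic -9 sin(3x):",
      np.max(np.abs(d2p - (-9.0 * np.sin(3.0 * x)))))
```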
Spectral estimation of received phase in the presence of amplitude scintillation
NASA Technical Reports Server (NTRS)
Vilnrotter, V. A.; Brown, D. H.; Hurd, W. J.
1988-01-01
A technique is demonstrated for obtaining the spectral parameters of the received carrier phase in the presence of carrier amplitude scintillation, by means of a digital phased locked loop. Since the random amplitude fluctuations generate time-varying loop characteristics, straightforward processing of the phase detector output does not provide accurate results. The method developed here performs a time-varying inverse filtering operation on the corrupted observables, thus recovering the original phase process and enabling accurate estimation of its underlying parameters.
Hybrid Theory of Electron-Hydrogenic Systems Elastic Scattering
NASA Technical Reports Server (NTRS)
Bhatia, A. K.
2007-01-01
Accurate electron-hydrogen and electron-hydrogenic cross sections are required to interpret fusion experiments, laboratory plasma physics and properties of the solar and astrophysical plasmas. We have developed a method in which the short-range and long-range correlations can be included at the same time in the scattering equations. The phase shifts have rigorous lower bounds and the scattering lengths have rigorous upper bounds. The phase shifts in the resonance region can be used to calculate very accurately the resonance parameters.
NASA Technical Reports Server (NTRS)
Wu, S. T.
1987-01-01
The goal for the SAMEX magnetograph's optical system is to accurately measure the polarization state of sunlight in a narrow spectral bandwidth over the field of view of an active region to make an accurate determination of the magnetic field in that region. The instrumental polarization is characterized. The optics and coatings were designed to minimize this spurious polarization introduced by foreoptics. The method developed to calculate the instrumental polarization of the SAMEX optics is described.
Funnel metadynamics as accurate binding free-energy method
Limongelli, Vittorio; Bonomi, Massimiliano; Parrinello, Michele
2013-01-01
A detailed description of the events governing ligand/protein interaction and an accurate estimation of drug affinity for its target are of great help in speeding up drug discovery strategies. We have developed a metadynamics-based approach, named funnel metadynamics, that enhances sampling of the ligand at the target binding site and in its solvated states. This method leads to an efficient characterization of the binding free-energy surface and an accurate calculation of the absolute protein-ligand binding free energy. We illustrate our protocol on two systems, benzamidine/trypsin and SC-558/cyclooxygenase 2. In both cases, the X-ray conformation was found to be the lowest free-energy pose, and the computed protein-ligand binding free energy is in good agreement with experiments. Furthermore, funnel metadynamics unveils important information about the binding process, such as the presence of alternative binding modes and the role of waters. The results, achieved at an affordable computational cost, make funnel metadynamics a valuable method for drug discovery and for dealing with a variety of problems in chemistry, physics, and materials science. PMID:23553839
System for routine surface anthropometry using reprojection registration
NASA Astrophysics Data System (ADS)
Sadleir, R. J.; Owens, R. A.; Hartmann, P. E.
2003-11-01
Range data measurement can be usefully applied to non-invasive monitoring of anthropometric changes due to disease, healing, or normal physiological processes. We have developed a computer vision system that allows routine capture of biological surface shapes and accurate measurement of anthropometric changes, using a structured light stripe triangulation system. In many applications involving relocation of soft tissue for image-guided surgery or anthropometry, it is neither accurate nor practical to apply fiducial markers directly to the body. This system features a novel method of subject re-registration in which fiducials are applied by a standard data projector. Calibration of this reprojector is achieved using a variation of structured lighting techniques. The method allows accurate and comparable repositioning of elastic surfaces. Tests of repositioning using the reprojector found a significant improvement in subject registration compared with an earlier method that used video overlay comparison only. The system is currently applied to the measurement of breast volume changes in lactating mothers, but may be extended to any application where repeatable positioning and measurement are required.
Monitoring for airborne allergens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burge, H.A.
1992-07-01
Monitoring for allergens can provide some information on the kinds and levels of exposure experienced by local patient populations, provided volumetric methods are used for sample collection and the analysis is accurate and consistent. Such data can also be used to develop standards for the specific environment and to begin to develop predictive models. Comparing outdoor allergen aerosols between different monitoring sites requires identical collection and analysis methods and some kind of rational standard, whether arbitrary or based on recognized health effects.
NASA Astrophysics Data System (ADS)
McLaughlin, P. W.; Kaihatu, J. M.; Irish, J. L.; Taylor, N. R.; Slinn, D.
2013-12-01
Recent hurricane activity in the Gulf of Mexico has led to a need for accurate, computationally efficient prediction of hurricane damage so that communities can better assess the risk of local socio-economic disruption. This study focuses on developing robust, physics-based non-dimensional equations that accurately predict maximum significant wave height at different locations near a given hurricane track. These equations (denoted Wave Response Functions, or WRFs) were developed from presumed physical dependencies between wave heights and hurricane characteristics and fit with data from numerical models of waves and surge under hurricane conditions. After curve fitting, constraints that correct for fully developed sea state were used to limit wind-wave growth. When applied to the region near Gulfport, MS, back-prediction of maximum significant wave height yielded root mean square errors of 0.22-0.42 m at open-coast stations and 0.07-0.30 m at bay stations when compared with the numerical model data. The WRF method was also applied to Corpus Christi, TX and Panama City, FL with similar results. Back-prediction errors will be included in uncertainty evaluations connected to risk calculations using joint probability methods. These methods require thousands of simulations to quantify extreme value statistics, thus requiring the use of reduced methods such as the WRF to represent the relevant physical processes.
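A hedged sketch of the WRF idea follows; the functional form, parameters, and cap value are illustrative assumptions, not the published equations. It fits a presumed non-dimensional dependence to model data and limits growth at a fully developed sea state:

```python
import numpy as np
from scipy.optimize import curve_fit

def wrf(X, a, b, c):
    """Hypothetical wave response: max significant wave height from pressure
    deficit dp (Pa), radius to maximum winds rmax (m), across-track x (m)."""
    dp, rmax, x = X
    return a * np.sqrt(dp) * (rmax / (rmax + np.abs(x))) ** b + c

rng = np.random.default_rng(0)
dp = rng.uniform(2e3, 1e4, 200)
rmax = rng.uniform(2e4, 6e4, 200)
x = rng.uniform(-1e5, 1e5, 200)
# Stand-in for wave heights taken from numerical wave/surge simulations:
hs = 0.05 * np.sqrt(dp) * (rmax / (rmax + np.abs(x))) ** 1.5 + rng.normal(0, 0.1, 200)

popt, _ = curve_fit(wrf, (dp, rmax, x), hs, p0=[0.05, 1.0, 0.0])
hs_pred = np.minimum(wrf((dp, rmax, x), *popt), 15.0)  # cap: fully developed sea
rmse = np.sqrt(np.mean((hs_pred - hs) ** 2))
```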
Olson, Daniel D.; Bissonette, John A.; Cramer, Patricia C.; Green, Ashley D.; Davis, Scott T.; Jackson, Patrick J.; Coster, Daniel C.
2014-01-01
Background Currently there is a critical need for accurate and standardized wildlife-vehicle collision data, because it is the underpinning of mitigation projects that protect both drivers and wildlife. Gathering data can be challenging because wildlife-vehicle collisions occur over broad areas, during all seasons of the year, and in large numbers. Collecting data of this magnitude requires an efficient data collection system. Presently there is no widely adopted system that is both efficient and accurate. Methodology/Principal Findings Our objective was to develop and test an integrated smartphone-based system for reporting wildlife-vehicle collision data. The WVC Reporter system we developed consisted of a mobile web application for data collection, a database for centralized storage of data, and a desktop web application for viewing data. The smartphones that we tested for use with the application produced accurate locations (median error = 4.6–5.2 m), and reduced location error 99% versus reporting only the highway/marker. Additionally, mean times for data entry using the mobile web application (22.0–26.5 s) were substantially shorter than using the pen/paper method (52 s). We also found the pen/paper method had a data entry error rate of 10% and those errors were virtually eliminated using the mobile web application. During the first year of use, 6,822 animal carcasses were reported using WVC Reporter. The desktop web application improved access to WVC data and allowed users to easily visualize wildlife-vehicle collision patterns at multiple scales. Conclusions/Significance The WVC Reporter integrated several modern technologies into a seamless method for collecting, managing, and using WVC data. As a result, the system increased efficiency in reporting, improved accuracy, and enhanced visualization of data. The development costs for the system were minor relative to the potential benefits of having spatially accurate and temporally current wildlife-vehicle collision data. PMID:24897502
Improved accuracy in quantitative laser-induced breakdown spectroscopy using sub-models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Ryan B.; Clegg, Samuel M.; Frydenvang, Jens
We report that accurate quantitative analysis of diverse geologic materials is one of the primary challenges faced by the Laser-Induced Breakdown Spectroscopy (LIBS)-based ChemCam instrument on the Mars Science Laboratory (MSL) rover. The SuperCam instrument on the Mars 2020 rover, as well as other LIBS instruments developed for geochemical analysis on Earth or other planets, will face the same challenge. Consequently, part of the ChemCam science team has focused on the development of improved multivariate analysis calibration methods. Developing a single regression model capable of accurately determining the composition of very different target materials is difficult because the response of an element's emission lines in LIBS spectra can vary with the concentration of other elements. We demonstrate a conceptually simple "sub-model" method for improving the accuracy of quantitative LIBS analysis of diverse target materials. The method is based on training several regression models on sets of targets with limited composition ranges and then "blending" these sub-models into a single final result. Tests of the sub-model method show improvement in test-set root mean squared error of prediction (RMSEP) for almost all cases. The sub-model method, using partial least squares regression (PLS), is part of the current ChemCam quantitative calibration, but it is applicable to any multivariate regression method and may yield similar improvements.
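The sub-model blending idea can be sketched as follows; the composition threshold, blending window, and synthetic data are assumptions for illustration, not the ChemCam calibration itself:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Train a full-range PLS model plus models restricted to low/high
# composition ranges, then blend the sub-model predictions.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))                    # stand-in for LIBS spectra
y = np.abs(X[:, 0] * 10 + rng.normal(size=300))   # stand-in for wt% of one element

full = PLSRegression(n_components=5).fit(X, y)
low = PLSRegression(n_components=5).fit(X[y < 5], y[y < 5])
high = PLSRegression(n_components=5).fit(X[y >= 5], y[y >= 5])

def blended_predict(Xnew, lo=4.0, hi=6.0):
    """Use the full-range prediction to choose/blend sub-models; linear
    blending across [lo, hi] avoids jumps at the range boundary."""
    y_full = full.predict(Xnew).ravel()
    y_low = low.predict(Xnew).ravel()
    y_high = high.predict(Xnew).ravel()
    w = np.clip((y_full - lo) / (hi - lo), 0.0, 1.0)
    return (1 - w) * y_low + w * y_high
```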
RapGene: a fast and accurate strategy for synthetic gene assembly in Escherichia coli
Zampini, Massimiliano; Stevens, Pauline Rees; Pachebat, Justin A.; Kingston-Smith, Alison; Mur, Luis A. J.; Hayes, Finbarr
2015-01-01
The ability to assemble DNA sequences de novo through efficient and powerful DNA fabrication methods is one of the foundational technologies of synthetic biology. Gene synthesis, in particular, has been considered the main driver for the emergence of this new scientific discipline. Here we describe RapGene, a rapid gene assembly technique which was successfully tested for the synthesis and cloning of both prokaryotic and eukaryotic genes through a ligation-independent approach. The method developed in this study is a complete bacterial gene synthesis platform for the quick, accurate and cost-effective fabrication and cloning of gene-length sequences in the widely used host Escherichia coli. PMID:26062748
Spectro-photometric determinations of Mn, Fe and Cu in aluminum master alloys
NASA Astrophysics Data System (ADS)
Rehan; Naveed, A.; Shan, A.; Afzal, M.; Saleem, J.; Noshad, M. A.
2016-08-01
Highly reliable, fast and cost-effective spectrophotometric methods have been developed for the determination of Mn, Fe and Cu in aluminum master alloys, based on calibration curves prepared from laboratory standards. The calibration curves are designed to give maximum sensitivity and minimum instrumental error (Mn 1-2 mg/100 ml, Fe 0.01-0.2 mg/100 ml and Cu 2-10 mg/100 ml). The developed spectrophotometric methods produce accurate results when analyzing Mn, Fe and Cu in certified reference materials. In particular, these methods are suitable for all types of Al-Mn, Al-Fe and Al-Cu master alloys (5%, 10%, 50%, etc.). Moreover, the suggested sampling practices use a reasonable amount of analytical sample that truly represents the whole lot of a particular master alloy. Successive dilution was used to bring samples within the calibration curve range. The methods were also found suitable for the analysis of these elements in ordinary aluminum alloys. However, Cu showed considerable interference with Fe; the latter may not be accurately measured in the presence of Cu above 0.01%.
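A minimal sketch of the calibration-curve workflow, with placeholder absorbance values rather than the paper's data, looks like this:

```python
import numpy as np

# Absorbances of laboratory standards vs. concentration (illustrative values).
conc = np.array([1.0, 1.25, 1.5, 1.75, 2.0])       # Mn, mg/100 ml
absorbance = np.array([0.210, 0.262, 0.315, 0.368, 0.420])

slope, intercept = np.polyfit(conc, absorbance, 1)  # Beer-Lambert: A = k*c + b

def concentration(a_sample, dilution_factor=1.0):
    """Invert the calibration line; successive dilutions are folded back in."""
    return (a_sample - intercept) / slope * dilution_factor

print(concentration(0.300, dilution_factor=10.0))
```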
Zhang, Ying; Li, Yan; Zhu, Xiao-Juan; Li, Min; Chen, Hao-Yu; Lv, Xiao-Ling; Zhang, Jian
2017-07-01
A reliable and accurate method for the determination of seven biogenic amines (BAs) was developed and validated with Chinese rice wine samples. The BAs were derivatised with dansyl chloride, cleaned up using solid-phase extraction (SPE) and separated by high-performance liquid chromatography (HPLC) coupled with ultraviolet (UV) detection. The optimised derivatisation reaction, conducted at pH 9.6 and 60°C for 30 min, ensured baseline separation and peak symmetry for each BA. SPE clean-up using Oasis MCX cartridges yielded good recovery rates for all BAs and effectively reduced matrix effects. The developed method shows good linearity with determination coefficients of more than 0.9989 over a concentration range of 0.1-100 mg l⁻¹. The limits of detection (LODs) for the investigated BAs ranged from 2.07 to 5.56 µg l⁻¹. The intra- and inter-day relative standard deviations (RSDs) ranged from 0.86% to 3.81% and from 2.13% to 3.82%, respectively. Spiking experiments showed that the overall recovery rates ranged from 85% to 113%. Thus, the proposed method was demonstrated as being suitable for simultaneous detection, with accurate and precise quantification, of BAs in Chinese rice wine.
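For illustration, linearity and ICH-style detection limits can be estimated from a calibration series as sketched below (placeholder numbers, not the study's data):

```python
import numpy as np
from scipy import stats

conc = np.array([0.1, 1.0, 5.0, 10.0, 50.0, 100.0])   # mg/l
area = np.array([12.1, 119.8, 601.5, 1203.0, 5998.2, 12010.4])  # peak areas

res = stats.linregress(conc, area)
print("r^2 =", res.rvalue ** 2)                        # linearity check

# Detection/quantification limits from the residual standard deviation:
resid = area - (res.slope * conc + res.intercept)
sigma = resid.std(ddof=2)                              # 2 fitted parameters
lod = 3.3 * sigma / res.slope
loq = 10.0 * sigma / res.slope
print(f"LOD = {lod:.3g} mg/l, LOQ = {loq:.3g} mg/l")
```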
Rapid detection of potyviruses from crude plant extracts.
Silva, Gonçalo; Oyekanmi, Joshua; Nkere, Chukwuemeka K; Bömer, Moritz; Kumar, P Lava; Seal, Susan E
2018-04-01
Potyviruses (genus Potyvirus; family Potyviridae) are widely distributed and represent one of the most economically important genera of plant viruses. Their accurate detection is therefore a key factor in developing efficient control strategies. However, this can sometimes be problematic, particularly in plant species containing high amounts of polysaccharides and polyphenols, such as yam (Dioscorea spp.). Here, we report the development of a reliable, rapid and cost-effective detection method for the two most important potyviruses infecting yam, based on reverse transcription-recombinase polymerase amplification (RT-RPA). The developed method, named 'Direct RT-RPA', detects each target virus directly from plant leaf extracts prepared with a simple and inexpensive extraction method, avoiding laborious extraction of high-quality RNA. Direct RT-RPA enables the detection of virus-positive samples in under 30 min at a single low operation temperature (37 °C) without the need for any expensive instrumentation. The Direct RT-RPA tests constitute robust, accurate, sensitive and quick methods for detection of potyviruses from recalcitrant plant species. The minimal sample preparation requirements and the possibility of storing RPA reagents without cold chain storage allow Direct RT-RPA to be adopted in minimally equipped laboratories, with potential use in plant clinic laboratories and seed certification facilities worldwide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lei, Huan; Yang, Xiu; Zheng, Bin
Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational "active space" random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in the solvent-accessible surface area of the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
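A minimal sketch of a sparse gPC surrogate, assuming probabilists' Hermite polynomials in a few active variables and an l1-regularized fit standing in for the compressive sensing step (dimensions and data are illustrative):

```python
from itertools import product
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_samples, n_dims, order = 200, 4, 3
xi = rng.standard_normal((n_samples, n_dims))       # conformational variables
y = xi[:, 0] ** 2 + 0.5 * xi[:, 1] + rng.normal(0, 0.01, n_samples)  # target

def design_matrix(xi, order):
    """Tensor-product Hermite evaluations, truncated to total degree <= order."""
    cols = []
    for deg in product(range(order + 1), repeat=xi.shape[1]):
        if sum(deg) <= order:
            col = np.ones(xi.shape[0])
            for k, d in enumerate(deg):
                c = np.zeros(d + 1); c[d] = 1.0
                col = col * hermeval(xi[:, k], c)
            cols.append(col)
    return np.column_stack(cols)

A = design_matrix(xi, order)
coeffs = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(A, y).coef_
print("nonzero gPC coefficients:", np.count_nonzero(coeffs))
```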
Darrington, Richard T; Jiao, Jim
2004-04-01
Rapid and accurate stability prediction is essential to pharmaceutical formulation development. Commonly used stability prediction methods include monitoring parent drug loss at intended storage conditions or initial rate determination of degradants under accelerated conditions. Monitoring parent drug loss at the intended storage condition does not provide a rapid and accurate stability assessment because often <0.5% drug loss is all that can be observed in a realistic time frame, while the accelerated initial rate method in conjunction with extrapolation of rate constants using the Arrhenius or Eyring equations often introduces large errors in shelf-life prediction. In this study, the shelf life prediction of a model pharmaceutical preparation utilizing sensitive high-performance liquid chromatography-mass spectrometry (LC/MS) to directly quantitate degradant formation rates at the intended storage condition is proposed. This method was compared to traditional shelf life prediction approaches in terms of time required to predict shelf life and associated error in shelf life estimation. Results demonstrated that the proposed LC/MS method using initial rates analysis provided significantly improved confidence intervals for the predicted shelf life and required less overall time and effort to obtain the stability estimation compared to the other methods evaluated.
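The initial-rates calculation can be sketched as follows, with placeholder measurements and a hypothetical specification limit standing in for the study's data:

```python
import numpy as np
from scipy import stats

# Degradant formation measured by LC/MS at the intended storage condition.
t_months = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0])
degradant_pct = np.array([0.002, 0.010, 0.019, 0.031, 0.041, 0.060])

fit = stats.linregress(t_months, degradant_pct)
spec_limit = 0.5                                   # % degradant allowed
shelf_life = (spec_limit - fit.intercept) / fit.slope

# Approximate conservative bound from the slope's standard error:
slope_hi = fit.slope + 1.96 * fit.stderr           # faster-degradation bound
shelf_life_lo = (spec_limit - fit.intercept) / slope_hi
print(f"shelf life ~ {shelf_life:.1f} months (conservative {shelf_life_lo:.1f})")
```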
NASA Technical Reports Server (NTRS)
Maccormack, R. W.
1978-01-01
The calculation of flow fields past aircraft configurations at flight Reynolds numbers is considered. Progress in devising accurate and efficient numerical methods, in understanding and modeling the physics of turbulence, and in developing reliable and powerful computer hardware is discussed. Emphasis is placed on efficient solutions to the Navier-Stokes equations.
Highly Accurate Beam Torsion Solutions Using the p-Version Finite Element Method
NASA Technical Reports Server (NTRS)
Smith, James P.
1996-01-01
A new treatment of the classical beam torsion boundary value problem is applied. Using the p-version finite element method with shape functions based on Legendre polynomials, torsion solutions for generic cross-sections comprised of isotropic materials are developed. Element shape functions for quadrilateral and triangular elements are discussed, and numerical examples are provided.
Mapping ecological systems with a random forest model: tradeoffs between errors and bias
Emilie Grossmann; Janet Ohmann; James Kagan; Heather May; Matthew Gregory
2010-01-01
New methods for predictive vegetation mapping allow improved estimation of plant community composition across large regions. Random Forest (RF) models limit the over-fitting problems of other methods and are known for making accurate classification predictions from noisy, non-normal data, but can be biased when plot samples are unbalanced. We developed two contrasting...
Methane emissions from underground pipeline leaks remain an ongoing issue in the development of accurate methane emission inventories for the natural gas supply chain. Application of mobile methods during routine street surveys would help address this issue, but there are large ...
Efficient and accurate adverse outcome pathway (AOP) based high-throughput screening (HTS) methods use a systems biology based approach to computationally model in vitro cellular and molecular data for rapid chemical prioritization; however, not all HTS assays are grounded by rel...
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
2004-01-01
This project investigated the development of discontinuous Galerkin (DG) finite element methods, for general geometries and triangulations, for solving convection-dominated problems, with applications to aeroacoustics. Related issues in high-order WENO finite difference and finite volume methods were also investigated. DG and WENO methods are two classes of high-order, high-resolution methods suitable for convection-dominated simulations with possibly discontinuous or sharp-gradient solutions. In [18], we first review these two classes of methods, pointing out their similarities and differences in algorithm formulation, theoretical properties, implementation issues, applicability, and relative advantages. We then present quantitative comparisons of third-order finite volume WENO methods and DG methods for a series of test problems to assess their relative merits in accuracy and CPU timing. In [3], we review the development of the Runge-Kutta discontinuous Galerkin (RKDG) methods for nonlinear convection-dominated problems. These robust and accurate methods have made their way into the mainstream of computational fluid dynamics and are quickly finding use in a wide variety of applications. They combine a special class of Runge-Kutta time discretizations, which allows the method to be nonlinearly stable regardless of its accuracy, with a finite element space discretization by discontinuous approximations that incorporates the ideas of numerical fluxes and slope limiters coined during the remarkable development of high-resolution finite difference and finite volume schemes. The resulting RKDG methods are stable, high-order accurate, and highly parallelizable schemes that can easily handle complicated geometries and boundary conditions. We review the theoretical and algorithmic aspects of these methods and show several applications, including nonlinear conservation laws, the compressible and incompressible Navier-Stokes equations, and Hamilton-Jacobi-like equations.
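The time discretization at the heart of RKDG schemes can be shown compactly; in this hedged sketch the third-order strong-stability-preserving (SSP) Runge-Kutta method drives a deliberately simple first-order upwind operator in place of a full DG/WENO spatial discretization:

```python
import numpy as np

def L(u, a, dx):
    """Semi-discrete upwind right-hand side for u_t + a u_x = 0, a > 0,
    on a periodic grid."""
    return -a * (u - np.roll(u, 1)) / dx

def ssp_rk3(u, a, dx, dt):
    """Third-order SSP Runge-Kutta step (the RK part of RKDG schemes)."""
    u1 = u + dt * L(u, a, dx)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1, a, dx))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2, a, dx))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)   # discontinuous initial data
for _ in range(400):
    u = ssp_rk3(u, a=1.0, dx=x[1] - x[0], dt=0.002)
```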
A Hybrid On-line Verification Method of Relay Setting
NASA Astrophysics Data System (ADS)
Gao, Wangyuan; Chen, Qing; Si, Ji; Huang, Xin
2017-05-01
With the rapid development of the power industry, grid structures have become more sophisticated. The validity and rationality of protective relay settings are vital to the security of power systems, so it is essential to verify relay setting values online. Traditional verification methods mainly include comparison of the protection range and comparison of the calculated setting value. For on-line verification, verification speed is the key. Comparing protection ranges gives accurate results, but the computational burden is heavy and verification is slow. Comparing calculated setting values is much faster, but the result is conservative and inaccurate. Taking overcurrent protection as an example, this paper analyses the advantages and disadvantages of the two traditional methods and proposes a hybrid on-line verification method that synthesizes their advantages. The hybrid method meets the requirements of accurate on-line verification.
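A sketch of the hybrid screening logic is given below; the setting formulas, tolerances, and relay fields are illustrative assumptions, not the paper's algorithm:

```python
# Screen every relay with the fast calculated-setting comparison, and run the
# slower protection-range comparison only for relays the screen cannot clear.
def calculated_setting_check(relay, tolerance=0.10):
    """Fast screen: recompute the pickup current from current grid data and
    compare with the stored setting (conservative by design)."""
    expected = relay["k_rel"] * relay["max_load_current"]
    return abs(relay["setting"] - expected) / expected <= tolerance

def protection_range_check(relay):
    """Slow, accurate check: verify the setting still covers the required
    protection range under present fault-current levels."""
    reach = relay["fault_current_end_of_zone"] / relay["setting"]
    return reach >= relay["required_sensitivity"]

def hybrid_verify(relays):
    suspect = [r for r in relays if not calculated_setting_check(r)]
    return {r["id"]: protection_range_check(r) for r in suspect}

relays = [{"id": "OC-12", "setting": 480.0, "k_rel": 1.2,
           "max_load_current": 400.0, "fault_current_end_of_zone": 900.0,
           "required_sensitivity": 1.5}]
print(hybrid_verify(relays))   # empty dict: the fast screen cleared the relay
```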
Seo, Jung Hee; Mittal, Rajat
2010-01-01
A new sharp-interface immersed boundary approach for the computation of low-Mach-number flow-induced sound around complex geometries is described. The underlying approach is based on a hydrodynamic/acoustic splitting technique in which the incompressible flow is first computed using a second-order accurate immersed boundary solver. This is followed by the computation of sound using the linearized perturbed compressible equations (LPCE). The primary contribution of the current work is the development of a versatile, high-order accurate immersed boundary method for solving the LPCE in complex domains. The new method applies the boundary condition on the immersed boundary to high order by combining the ghost-cell approach with a weighted least-squares error method based on a high-order approximating polynomial. The method is validated for canonical acoustic wave scattering and flow-induced noise problems. Applications of this technique to relatively complex cases of practical interest are also presented. PMID:21318129
Chen, X.; Ashcroft, I. A.; Wildman, R. D.; Tuck, C. J.
2015-01-01
A method using experimental nanoindentation and inverse finite-element analysis (FEA) has been developed that enables the spatial variation of material constitutive properties to be accurately determined. The method was used to measure property variation in a three-dimensional printed (3DP) polymeric material. The accuracy of the method is dependent on the applicability of the constitutive model used in the inverse FEA, hence four potential material models: viscoelastic, viscoelastic–viscoplastic, nonlinear viscoelastic and nonlinear viscoelastic–viscoplastic were evaluated, with the latter enabling the best fit to experimental data. Significant changes in material properties were seen in the depth direction of the 3DP sample, which could be linked to the degree of cross-linking within the material, a feature inherent in a UV-cured layer-by-layer construction method. It is proposed that the method is a powerful tool in the analysis of manufacturing processes with potential spatial property variation that will also enable the accurate prediction of final manufactured part performance. PMID:26730216
Abou-Attia, F M; Issa, Y M; Abdel-Gawad, F M; Abdel-Hamid, S M
2003-08-01
A simple, accurate and sensitive spectrophotometric method has been developed for the determination of three pharmaceutical piperazine derivatives, namely ketoconazole (KC), trimetazidine hydrochloride (TMH) and piribedil (PD). This method is based on the formation of yellow-orange complexes between iron(III) chloride and the investigated drugs. The optimum reaction conditions, spectral characteristics, conditional stability constants and composition of the water-soluble complexes have been established. The method permits the determination of KC, TMH and PD over concentration ranges of 1-15, 1-12 and 1-12 µg ml⁻¹, respectively. Sandell sensitivity is found to be 0.016, 0.013 and 0.013 µg cm⁻² for KC, TMH and PD, respectively. The method was sensitive, simple, reproducible and accurate within ±1.5%. The method is applicable to the assay of the three drugs under investigation in different dosage forms, and the results are in good agreement with those obtained by the official methods (USP and JP).
Bodiwala, Kunjan; Shah, Shailesh; Patel, Yogini; Prajapati, Pintu; Marolia, Bhavin; Kalyankar, Gajanan
2017-01-01
Two sensitive, accurate, and precise spectrophotometric methods have been developed and validated for the simultaneous estimation of ofloxacin (OFX), clotrimazole (CLZ), and lignocaine hydrochloride (LGN) in their combined dosage form (ear drops) without prior separation. The derivative ratio spectra method (method 1) includes the measurement of OFX and CLZ at zero-crossing points (ZCPs) of each other obtained from the ratio derivative spectra using standard LGN as a divisor, whereas the measurement of LGN at the ZCP of CLZ is obtained from the ratio derivative spectra using standard OFX as a divisor. The double divisor-ratio derivative method (method 2) includes the measurement of each drug at its amplitude in the double divisor-ratio spectra obtained using a standard mixture of the other two drugs as the divisor. Both methods were found to be linear (correlation coefficients of >0.996) over the ranges of 3-15, 10-50, and 20-100 μg/mL for OFX, CLZ, and LGN, respectively; precise (RSD of <2%); and accurate (recovery of >98%) for the estimation of each drug. The developed methods were successfully applied for the estimation of these drugs in a marketed ear-drop formulation. Excipients and other ingredients did not interfere with the estimation of these drugs. Both methods were statistically compared using the t-test.
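The ratio-derivative step common to both methods can be sketched as follows, with synthetic spectra standing in for measured ones:

```python
import numpy as np
from scipy.signal import savgol_filter

# Divide the mixture spectrum by a standard spectrum of the divisor drug,
# then take a smoothed first derivative (Savitzky-Golay). Amplitudes at the
# other drugs' zero-crossing points would then be referred to calibration lines.
wavelengths = np.linspace(220, 320, 501)
gauss = lambda mu, s: np.exp(-0.5 * ((wavelengths - mu) / s) ** 2)
mixture = 0.8 * gauss(260, 12) + 0.5 * gauss(285, 10)   # two-component mixture
divisor = gauss(285, 10) + 1e-6                          # standard of drug 2

ratio = mixture / divisor
deriv = savgol_filter(ratio, window_length=21, polyorder=3, deriv=1,
                      delta=wavelengths[1] - wavelengths[0])
```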
Accurate evaluation and analysis of functional genomics data and methods
Greene, Casey S.; Troyanskaya, Olga G.
2016-01-01
The development of technology capable of inexpensively performing large-scale measurements of biological systems has generated a wealth of data. Integrative analysis of these data holds the promise of uncovering gene function, regulation, and, in the longer run, understanding complex disease. However, their analysis has proved very challenging, as it is difficult to quickly and effectively assess the relevance and accuracy of these data for individual biological questions. Here, we identify biases that present challenges for the assessment of functional genomics data and methods. We then discuss evaluation methods that, taken together, begin to address these issues. We also argue that the funding of systematic data-driven experiments and of high-quality curation efforts will further improve evaluation metrics so that they more accurately assess functional genomics data and methods. Such metrics will allow researchers in the field of functional genomics to continue to answer important biological questions in a data-driven manner. PMID:22268703
Analysis of Glycosaminoglycans Using Mass Spectrometry
Staples, Gregory O.; Zaia, Joseph
2015-01-01
The glycosaminoglycans (GAGs) are linear polysaccharides expressed on animal cell surfaces and in extracellular matrices. Their biosynthesis is under complex control and confers a domain structure that is essential to their ability to bind to protein partners. Key to understanding the functions of GAGs are methods to determine accurately and rapidly patterns of sulfation, acetylation and uronic acid epimerization that correlate with protein binding or other biological activities. Mass spectrometry (MS) is particularly suitable for the analysis of GAGs for biomedical purposes. Using modern ionization techniques it is possible to accurately determine molecular weights of GAG oligosaccharides and their distributions within a mixture. Methods for direct interfacing with liquid chromatography have been developed to permit online mass spectrometric analysis of GAGs. New tandem mass spectrometric methods for fine structure determination of GAGs are emerging. This review summarizes MS-based approaches for analysis of GAGs, including tissue extraction and chromatographic methods compatible with LC/MS and tandem MS. PMID:25705143
Methods of Comprehensive Assessment for China’s Energy Sustainability
NASA Astrophysics Data System (ADS)
Xu, Zhijin; Song, Yankui
2018-02-01
To assess the sustainable development of China's energy objectively and accurately, a reasonable indicator system for energy sustainability must be established and a targeted comprehensive assessment made with scientific methods. This paper constructs a comprehensive indicator system for energy sustainability covering five aspects (economy, society, environment, energy resources and energy technology), based on the theories of sustainable development and symbiosis. On this basis, it establishes and discusses assessment models and general assessment methods for energy sustainability with the help of fuzzy mathematics. The work provides a reference for promoting the sustainable development of China's energy, economy and society.
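A minimal fuzzy comprehensive evaluation sketch, with illustrative weights and membership values rather than the paper's model, looks like this:

```python
import numpy as np

# Indicator weights for: economy, society, environment, energy resources,
# energy technology (illustrative values).
weights = np.array([0.25, 0.15, 0.25, 0.20, 0.15])

# Membership of each indicator group in the grades (good, fair, poor).
R = np.array([[0.6, 0.3, 0.1],
              [0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.4, 0.4, 0.2],
              [0.5, 0.3, 0.2]])

B = weights @ R                      # composite membership vector
grades = ["good", "fair", "poor"]
print(dict(zip(grades, np.round(B, 3))))
print("overall grade:", grades[int(np.argmax(B))])   # max-membership principle
```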
A risk assessment method for multi-site damage
NASA Astrophysics Data System (ADS)
Millwater, Harry Russell, Jr.
This research focused on developing probabilistic methods suitable for computing small probabilities of failure, e.g., 10⁻⁶, of structures subject to multi-site damage (MSD). MSD is defined as the simultaneous development of fatigue cracks at multiple sites in the same structural element such that the fatigue cracks may coalesce to form one large crack. MSD is modeled as an array of collinear cracks with random initial crack lengths, with the centers of the initial cracks spaced uniformly apart. The data used were chosen to be representative of aluminum structures. The structure is considered failed whenever any two adjacent cracks link up. A fatigue computer model was developed that can accurately and efficiently grow a collinear array of arbitrary-length cracks from initial size until failure. An algorithm was developed to compute the stress intensity factors of all cracks considering all interaction effects. The probability of failure of two to 100 cracks is studied. Lower bounds on the probability of failure are developed from the probability of the largest crack exceeding a critical crack size, where the critical size is the initial crack size that will grow across the ligament when the neighboring crack has zero length; this probability is evaluated using extreme value theory. An upper bound is based on the probability of the maximum sum of adjacent initial cracks exceeding a critical crack size. A weakest-link sampling approach was developed that can accurately and efficiently compute small probabilities of failure. This methodology is based on predicting the weakest link, i.e., the two cracks that will link up first, for a realization of initial crack sizes, and computing the cycles-to-failure using these two cracks. Criteria to determine the weakest link are discussed. Probability results using the weakest-link sampling method are compared to Monte Carlo-based benchmark results. The results indicate that very small probabilities can be computed accurately in a few minutes on a Hewlett-Packard workstation.
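A hedged sketch of the weakest-link sampling idea follows; it ignores crack interaction and uses illustrative Paris-law constants, so it is a caricature of the dissertation's fatigue model:

```python
import numpy as np

rng = np.random.default_rng(2)
C, m, dsigma, pitch = 1e-11, 3.0, 100.0, 0.05   # Paris constants (m, MPa units)

def cycles_to_linkup(a1, a2, gap, dN=500):
    """Grow two facing crack tips with the Paris law until they meet
    across the ligament (interaction effects ignored here)."""
    n = 0
    while a1 + a2 < gap:
        a1 += dN * C * (dsigma * np.sqrt(np.pi * a1)) ** m
        a2 += dN * C * (dsigma * np.sqrt(np.pi * a2)) ** m
        n += dN
    return n

def weakest_link_sample(n_cracks=10):
    """Sample initial crack sizes, pick the adjacent pair predicted to link
    up first, and grow only that pair."""
    a = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=n_cracks)
    k = np.argmax(a[:-1] + a[1:])        # proxy criterion for the weakest link
    return cycles_to_linkup(a[k], a[k + 1], pitch)

failures = np.array([weakest_link_sample() for _ in range(500)])
print("P(failure before 1e6 cycles) ~", np.mean(failures < 1e6))
```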
[A new method of processing quantitative PCR data].
Ke, Bing-Shen; Li, Guang-Yun; Chen, Shi-Min; Huang, Xiang-Yan; Chen, Ying-Jian; Xu, Jun
2003-05-01
Standard PCR can no longer satisfy the needs of biotechnology development and clinical research. After extensive kinetic studies, the PE company found a linear relation between the initial template number and the cycle number at which the accumulating fluorescent product becomes detectable, and on this basis developed the quantitative PCR technique used in the PE7700 and PE5700. However, the error of this technique is too large for many applications, and a better quantitative PCR method is needed. The mathematical model presented here draws on results from related fields and is based on the PCR principle and a careful analysis of the molecular relationships among the main components of the PCR reaction system. The model describes the functional relation between product quantity (or fluorescence intensity) and the initial template number and other reaction conditions, and accurately reflects the accumulation of PCR product molecules. Accurate quantitative PCR analysis can be performed using this functional relation, and the accumulated PCR product quantity can be obtained from the initial template number. With this model, the result error depends only on the accuracy of the fluorescence intensity measurement, i.e., on the instrument used. For example, when the fluorescence intensity is accurate to six digits and the template size is between 100 and 1,000,000, the quantitative accuracy exceeds 99%. Under the same conditions and on the same instrument, different analysis methods give markedly different errors; processing the data with this model can yield results up to 80 times more accurate than the CT method.
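The underlying relation N_c = N_0(1 + E)^c can be illustrated with a simple fit over the exponential phase (synthetic readings below, not the paper's full model):

```python
import numpy as np

rng = np.random.default_rng(3)
cycles = np.arange(12, 22)                 # cycles within the exponential phase
E_true, N0_true = 0.95, 1e4
fluor = N0_true * (1 + E_true) ** cycles * (1 + rng.normal(0, 0.01, cycles.size))

# log F = log N0 + c * log(1 + E): a straight line in cycle number.
slope, intercept = np.polyfit(cycles, np.log(fluor), 1)
E_est = np.exp(slope) - 1                  # amplification efficiency per cycle
N0_est = np.exp(intercept)                 # initial template (fluorescence units)
print(f"E = {E_est:.3f}, N0 = {N0_est:.3g}")
```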
New Developments in Cathodoluminescence Spectroscopy for the Study of Luminescent Materials
den Engelsen, Daniel; Fern, George R.; Harris, Paul G.; Ireland, Terry G.; Silver, Jack
2017-01-01
Herein, we describe three advanced techniques for cathodoluminescence (CL) spectroscopy that have recently been developed in our laboratories. The first is a new method to accurately determine the CL efficiency of thin layers of phosphor powders. When a wide-band-gap phosphor (Eg > 5 eV) is bombarded with electrons, charging of the phosphor particles occurs, which eventually leads to erroneous results in the determination of the luminous efficacy. To overcome this charging problem, a comparison method has been developed that enables accurate measurement of the current density of the electron beam. The study of CL from phosphor specimens in a scanning electron microscope (SEM) is the second subject treated. A detailed description of a measuring method to determine the overall decay time of single phosphor crystals in a SEM without beam blanking is presented. The third technique is based on the unique combination of microscopy and spectrometry in the transmission electron microscope (TEM) at Brunel University London (UK). This combination enables the recording of CL spectra of nanometre-sized specimens and the determination of spatial variations in CL emission across individual particles by superimposing the scanning TEM and CL images. PMID:28772671
NASA Astrophysics Data System (ADS)
Kim, Sungtae; Lee, Soogab; Kim, Kyu Hong
2008-04-01
A new numerical method for accurate and efficient aeroacoustic computations of multi-dimensional compressible flows has been developed. The core idea of the scheme is to unite the advantages of the wavenumber-extended optimized scheme and the M-AUSMPW+/MLP schemes by predicting the physical distribution of flow variables more accurately in multiple space dimensions. A wavenumber-extended optimization procedure for the finite volume approach, based on the conservation requirement, is newly proposed to enhance the accuracy needed to capture the acoustic portion of the solution in smooth regions. Furthermore, a new mechanism for distinguishing between continuous and discontinuous regions, based on the Gibbs phenomenon at discontinuities, is introduced to eliminate excessive numerical dissipation in continuous regions by restricting the application of MLP according to the distinguishing function. To investigate the effectiveness of the developed method, a sequence of benchmark simulations such as spherical wave propagation, nonlinear wave propagation, shock tube and vortex preservation test problems is executed. Through more realistic shock-vortex interaction and muzzle blast flow problems, the utility of the new method for aeroacoustic applications is verified by comparison with previous numerical and experimental results.
Prediction of essential oil content of oregano by hand-held and Fourier transform NIR spectroscopy.
Camps, Cédric; Gérard, Marianne; Quennoz, Mélanie; Brabant, Cécile; Oberson, Carine; Simonnet, Xavier
2014-05-01
In the framework of a breeding programme, the analysis of hundreds of oregano samples to determine their essential oil content (EOC) is time-consuming and expensive in terms of labour. A method that is rapid, accurate and less expensive would therefore be an asset to breeders. The aim of the present study was to develop a method based on near-infrared (NIR) spectroscopy to determine the EOC of oregano dried powder. Two spectroscopic approaches were compared, the first using a hand-held NIR device and the second a Fourier transform (FT) NIR spectrometer. Hand-held NIR (1000-1800 nm) measurements and partial least squares regression allowed determination of EOC with R² and SEP values of 0.58 and 0.81 mL per 100 g dry matter (DM), respectively. FT-NIR (1000-2500 nm) measurements allowed determination of EOC with R² and SEP values of 0.91 and 0.68 mL per 100 g DM, respectively. RPD, RER and RPIQ values for the model built with FT-NIR data were satisfactory for screening applications, while those obtained with hand-held NIR data were below the level required to consider the model accurate enough for screening. The FT-NIR approach thus allowed the development of an accurate model for EOC prediction. Although the hand-held NIR approach is promising, it needs additional development before it can be used in practice.
Adhikari, Puspa L; Wong, Roberto L; Overton, Edward B
2017-10-01
Accurate characterization of petroleum hydrocarbons in complex and weathered oil residues is analytically challenging. This is primarily due to the chemical compositional complexity of both the oil residues and environmental matrices, and to the lack of instrumental selectivity caused by co-elution of interferences with the target analytes. To overcome these selectivity issues, we used enhanced-resolution gas chromatography coupled with triple quadrupole mass spectrometry in Multiple Reaction Monitoring (MRM) mode (GC/MS/MS-MRM) to eliminate interferences within the ion chromatograms of target analytes found in environmental samples. This new GC/MS/MS-MRM method was developed and used for forensic fingerprinting of deep-water and marsh sediment samples containing oily residues from the Deepwater Horizon oil spill. The results showed that the GC/MS/MS-MRM method increases selectivity, eliminates interferences, and provides more accurate quantitation and characterization of trace levels of alkyl-PAHs and biomarker compounds from weathered oil residues in complex sample matrices. The higher selectivity of the new method, even at low detection limits, provides greater insight into isomer and homolog compositional patterns and the extent of oil weathering under various environmental conditions. The method also provides flat chromatographic baselines for accurate and unambiguous calculation of petroleum forensic biomarker compound ratios. Thus, the GC/MS/MS-MRM method can be a reliable analytical strategy for more accurate and selective trace-level analyses in petroleum forensic studies and for tracking the continuous weathering of oil residues.
Ukwatta, Eranga; Arevalo, Hermenegild; Li, Kristina; Yuan, Jing; Qiu, Wu; Malamas, Peter; Wu, Katherine C.
2016-01-01
Accurate representation of myocardial infarct geometry is crucial to patient-specific computational modeling of the heart in ischemic cardiomyopathy. We have developed a methodology for segmentation of left ventricular (LV) infarct from clinically acquired, two-dimensional (2D), late-gadolinium-enhanced cardiac magnetic resonance (LGE-CMR) images, for personalized modeling of ventricular electrophysiology. The infarct segmentation was expressed as a continuous min-cut optimization problem, which was solved using its dual formulation, the continuous max-flow (CMF). The optimization objective comprised a smoothness term and a data term that quantified the similarity between image intensity histograms of segmented regions and those of a set of training images. A manual segmentation of the LV myocardium was used to initialize and constrain the developed method. The three-dimensional geometry of the infarct was reconstructed from its segmentation using an implicit, shape-based interpolation method. The proposed methodology was extensively evaluated using metrics based on geometry and on outcomes of individualized electrophysiological simulations of cardiac (dys)function. Several existing LV infarct segmentation approaches were implemented and compared with the proposed method. Our results demonstrated that the CMF method was more accurate than the existing approaches in reproducing expert manual LV infarct segmentations and in electrophysiological simulations. The infarct segmentation method we have developed and comprehensively evaluated in this study constitutes an important step in advancing clinical applications of personalized simulations of cardiac electrophysiology. PMID:26731693
Method development and validation of potent pyrimidine derivative by UV-VIS spectrophotometer.
Chaudhary, Anshu; Singh, Anoop; Verma, Prabhakar Kumar
2014-12-01
A rapid and sensitive ultraviolet-visible (UV-VIS) spectroscopic method was developed for the estimation of the pyrimidine derivative 6-bromo-3-(6-(2,6-dichlorophenyl)-2-(morpholinomethylamino)pyrimidin-4-yl)-2H-chromen-2-one (BT10M) in bulk form. The pyrimidine derivative was monitored at 275 nm with UV detection, with no interference from diluents at 275 nm. The method was found to be linear in the range of 50 to 150 μg/ml. The accuracy and precision were determined and validated statistically, and the method was validated in accordance with standard guidelines. The results showed that the proposed method is suitable for the accurate, precise, and rapid determination of the pyrimidine derivative.
Kangani, Cyrous O.; Kelley, David E.; DeLany, James P.
2008-01-01
A simple, direct and accurate method for the determination of concentration and enrichment of free fatty acids in human plasma was developed. The validation and comparison to a conventional method are reported. Three amide derivatives, dimethyl, diethyl and pyrrolidide, were investigated in order to achieve optimal resolution of the individual fatty acids. The method uses dimethylamine/Deoxo-Fluor to derivatize plasma free fatty acids to their dimethylamides. This derivatization is very mild and efficient, and is selective towards free fatty acids, so that no separation from a total lipid extract is required. The direct method gave lower concentrations for palmitic acid and stearic acid and higher concentrations for oleic acid and linoleic acid in plasma compared with the methyl ester derivative after thin-layer chromatography. The [13C]palmitate isotope enrichment measured using the direct method was significantly higher than that observed with the BF3/MeOH-TLC method. The present method provided accurate and precise measures of concentration as well as enrichment when analyzed with gas chromatography-combustion-isotope ratio mass spectrometry. PMID:18757250
Free energy landscape for the binding process of Huperzine A to acetylcholinesterase
Bai, Fang; Xu, Yechun; Chen, Jing; Liu, Qiufeng; Gu, Junfeng; Wang, Xicheng; Ma, Jianpeng; Li, Honglin; Onuchic, José N.; Jiang, Hualiang
2013-01-01
Drug-target residence time (t = 1/koff, where koff is the dissociation rate constant) has become an important index in discovering better- or best-in-class drugs. However, little effort has been dedicated to developing computational methods that can accurately predict this kinetic parameter or the related parameters koff and the activation free energy of dissociation (ΔG(off)≠). In this paper, energy landscape theory, which was developed to understand protein folding and function, is extended to a generally applicable computational framework that is able to construct a complete ligand-target binding free energy landscape. This enables both the binding affinity and the binding kinetics to be accurately estimated. We applied this method to simulate the binding of the anti-Alzheimer's disease drug (−)-Huperzine A to its target acetylcholinesterase (AChE). The computational results are in excellent agreement with our concurrent experimental measurements: all of the predicted values of binding free energy and of the activation free energies of association and dissociation deviate from the experimental data by less than 1 kcal/mol. The method also provides atomic-resolution information on the (−)-Huperzine A binding pathway, which may be useful in designing more potent AChE inhibitors. We expect this methodology to be widely applicable to drug discovery and development. PMID:23440190
Theoretical research program to study chemical reactions in AOTV bow shock tubes
NASA Technical Reports Server (NTRS)
Taylor, Peter R.
1993-01-01
The main focus was the development, implementation, and calibration of methods for performing molecular electronic structure calculations to high accuracy. These various methods were then applied to a number of chemical reactions and species of interest to NASA, notably in the area of combustion chemistry. Among the development work undertaken was a collaborative effort to develop a program to efficiently predict molecular structures and vibrational frequencies using energy derivatives. Another major development effort involved the design of new atomic basis sets for use in chemical studies: these sets were considerably more accurate than those previously in use. Much effort was also devoted to calibrating methods for computing accurate molecular wave functions, including the first reliable calibrations for realistic molecules using full CI results. A wide variety of application calculations were undertaken. One area of interest was the spectroscopy and thermochemistry of small molecules, including establishing small molecule binding energies to an accuracy rivaling, or even on occasion surpassing, the experiment. Such binding energies are essential input to modeling chemical reaction processes, such as combustion. Studies of large molecules and processes important in both hydrogen and hydrocarbon combustion chemistry were also carried out. Finally, some effort was devoted to the structure and spectroscopy of small metal clusters, with applications to materials science problems.
NASA Astrophysics Data System (ADS)
Magdy, Nancy; Ayad, Miriam F.
2015-02-01
Two simple, accurate, precise, sensitive and economic spectrophotometric methods were developed for the simultaneous determination of Simvastatin and Ezetimibe in fixed dose combination products without prior separation. The first method depends on a new chemometrics-assisted ratio spectra derivative method using moving window polynomial least square fitting method (Savitzky-Golay filters). The second method is based on a simple modification for the ratio subtraction method. The suggested methods were validated according to USP guidelines and can be applied for routine quality control testing.
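For readers unfamiliar with the first method, the ratio-spectra derivative step can be sketched with an off-the-shelf Savitzky-Golay filter. A minimal illustration on synthetic Gaussian "spectra"; the wavelengths, band shapes, and window settings are placeholders, not the paper's values:

```python
import numpy as np
from scipy.signal import savgol_filter

# Ratio-spectra first derivative via Savitzky-Golay (moving-window polynomial
# least-squares) filtering, the chemometric step named in the abstract.
wavelengths = np.linspace(220, 320, 501)
mixture = (np.exp(-((wavelengths - 260) / 15) ** 2)
           + 0.5 * np.exp(-((wavelengths - 280) / 10) ** 2))
divisor = np.exp(-((wavelengths - 280) / 10) ** 2) + 1e-6  # one pure component

ratio = mixture / divisor                  # ratio spectrum
# First derivative of the ratio spectrum with a 2nd-order, 21-point window
deriv = savgol_filter(ratio, window_length=21, polyorder=2, deriv=1,
                      delta=wavelengths[1] - wavelengths[0])
# Peak amplitudes of `deriv` at selected wavelengths would then be regressed
# against standard concentrations to build the calibration.
print(deriv[:5])
```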
NASA Astrophysics Data System (ADS)
Guo, H.; Zhang, H.
2016-12-01
Relocating earthquakes with high precision is a central task for monitoring seismicity and studying the structure of the Earth's interior. The most popular location method is the event-pair double-difference (DD) relative location method, which uses catalog and/or more accurate waveform cross-correlation (WCC) differential times from event pairs with small inter-event separations to common stations to reduce the effect of velocity uncertainties outside the source region. Similarly, Zhang et al. [2010] developed a station-pair DD location method, which uses differential times from common events to pairs of stations to reduce the effect of velocity uncertainties near the source region, and applied it to relocate the non-volcanic tremors (NVT) beneath the San Andreas Fault (SAF). To combine the advantages of both DD location methods, we have proposed and developed a new double-pair DD location method that uses differential times from pairs of events to pairs of stations. The new method removes the event origin time and station correction terms from the inversion system and cancels out the effects of velocity uncertainties near and outside the source region simultaneously. We tested and applied the new method to regular earthquakes in northern California to validate its performance. Among the three DD location methods, the new double-pair DD method determines more accurate relative locations, while the station-pair DD method better improves the absolute locations. Thus, we further propose a location strategy that combines station-pair and double-pair differential times to determine accurate absolute and relative locations at the same time. For NVTs, it is difficult to pick first arrivals and derive WCC event-pair differential times, so the general practice is to measure station-pair envelope WCC differential times. However, station-pair tremor locations are scattered owing to their low-precision relative locations. Because double-pair data can be constructed directly from station-pair data, the double-pair DD method can also be used to improve NVT locations. We have applied the new method to the NVTs beneath the SAF near Cholame, California. Compared with previous results, the new double-pair DD tremor locations are more concentrated and show more detailed structures.
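The key datum of the double-pair method can be written down directly. A minimal sketch with hypothetical arrival times, showing how the four-time combination cancels both the event origin times and the station correction terms:

```python
# Double-pair differential time: for events a, b observed at stations i, j,
# (t_a,i - t_b,i) - (t_a,j - t_b,j). The origin time of each event appears in
# both of its arrivals and cancels; so does each station's correction term.
# Arrival times below are hypothetical values.
t = {("a", "i"): 12.34, ("a", "j"): 15.01,
     ("b", "i"): 12.80, ("b", "j"): 15.55}

def double_pair_dt(t, ev1, ev2, st1, st2):
    """(t_ev1,st1 - t_ev2,st1) - (t_ev1,st2 - t_ev2,st2), in seconds."""
    return (t[(ev1, st1)] - t[(ev2, st1)]) - (t[(ev1, st2)] - t[(ev2, st2)])

print(double_pair_dt(t, "a", "b", "i", "j"))  # -> 0.08 s
```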
A Method to Improve Electron Density Measurement of Cone-Beam CT Using Dual Energy Technique
Men, Kuo; Dai, Jian-Rong; Li, Ming-Hui; Chen, Xin-Yuan; Zhang, Ke; Tian, Yuan; Huang, Peng; Xu, Ying-Jie
2015-01-01
Purpose. To develop a dual energy imaging method to improve the accuracy of electron density measurement with a cone-beam CT (CBCT) device. Materials and Methods. The imaging system is the XVI CBCT system on an Elekta Synergy linac. Projection data were acquired with the high and low energy X-ray, respectively, to set up a basis material decomposition model. Virtual phantom simulation and phantom experiments were carried out for quantitative evaluation of the method. Phantoms were also scanned twice with the high and low energy X-ray, respectively. The data were decomposed into projections of the two basis material coefficients according to the model set up earlier. The two sets of decomposed projections were used to reconstruct CBCT images of the basis material coefficients. Then, the images of electron densities were calculated with these CBCT images. Results. The difference between the calculated and theoretical values was within 2%, and their correlation coefficient was about 1.0. The dual energy imaging method obtained more accurate electron density values and noticeably reduced the beam-hardening artifacts. Conclusion. A novel dual energy CBCT imaging method to calculate electron densities was developed. It can acquire more accurate values and potentially provide a platform for dose calculation. PMID:26346510
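The basis material decomposition at the heart of this method reduces, per voxel (or per projection ray), to a small linear solve. A minimal sketch with hypothetical attenuation coefficients for two basis materials at two energies (illustrative values, not the paper's calibration):

```python
import numpy as np

# Two-material basis decomposition for dual-energy CT: measured attenuation
# at high/low energy is modeled as a linear combination of two basis
# materials (e.g., water and bone). All values below are hypothetical.
mu_basis = np.array([[0.20, 0.50],    # [water, bone] at low energy, 1/cm
                     [0.18, 0.30]])   # [water, bone] at high energy, 1/cm
mu_meas = np.array([0.26, 0.21])      # measured voxel attenuation (low, high)

a_water, a_bone = np.linalg.solve(mu_basis, mu_meas)   # basis coefficients

# Electron density from the coefficients and the basis materials' relative
# electron densities (water = 1.00, bone ~ 1.70; illustrative).
rho_e = a_water * 1.00 + a_bone * 1.70
print(f"coefficients: water={a_water:.2f}, bone={a_bone:.2f}; rho_e={rho_e:.2f}")
```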
Eboigbodin, Kevin; Filén, Sanna; Ojalehto, Tuomas; Brummer, Mirko; Elf, Sonja; Pousi, Kirsi; Hoser, Mark
2016-06-01
Rapid and accurate diagnosis of influenza viruses plays an important role in infection control, as well as in preventing the misuse of antibiotics. Isothermal nucleic acid amplification methods offer significant advantages over the polymerase chain reaction (PCR), since they are more rapid and do not require the sophisticated instruments needed for thermal cycling. We previously described a novel isothermal nucleic acid amplification method, 'Strand Invasion Based Amplification' (SIBA®), with high analytical sensitivity and specificity, for the detection of DNA. In this study, we describe the development of a variant of the SIBA method, namely, reverse transcription SIBA (RT-SIBA), for the rapid detection of viral RNA targets. The RT-SIBA method includes a reverse transcriptase enzyme that allows one-step reverse transcription of RNA to complementary DNA (cDNA) and simultaneous amplification and detection of the cDNA by SIBA under isothermal reaction conditions. The RT-SIBA method was found to be more sensitive than PCR for the detection of influenza A and B and could detect 100 copies of influenza RNA within 15 min. The development of RT-SIBA will enable rapid and accurate diagnosis of viral RNA targets within point-of-care or central laboratory settings.
Xue, Xiaonan; Kim, Mimi Y; Castle, Philip E; Strickler, Howard D
2014-03-01
Studies to evaluate clinical screening tests often face the problem that the "gold standard" diagnostic approach is costly and/or invasive. It is therefore common to verify only a subset of negative screening tests using the gold standard method. However, undersampling the screen negatives can lead to substantial overestimation of the sensitivity and underestimation of the specificity of the diagnostic test. Our objective was to develop a simple and accurate statistical method to address this "verification bias." We developed a weighted generalized estimating equation approach to estimate, in a single model, the accuracy (eg, sensitivity/specificity) of multiple assays and simultaneously compare results between assays while addressing verification bias. This approach can be implemented using standard statistical software. Simulations were conducted to assess the proposed method. An example is provided using a cervical cancer screening trial that compared the accuracy of human papillomavirus and Pap tests, with histologic data as the gold standard. The proposed approach performed well in estimating and comparing the accuracy of multiple assays in the presence of verification bias. The proposed approach is an easy to apply and accurate method for addressing verification bias in studies of multiple screening methods. Copyright © 2014 Elsevier Inc. All rights reserved.
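The weighting idea behind the proposed approach can be illustrated in a simplified form: if only a known fraction of screen-negatives is verified, weighting each verified negative by the inverse of that fraction removes the bias. A sketch on simulated data (a bare-bones inverse-probability-weighted estimate, not the full weighted-GEE model):

```python
import numpy as np

# Verification-bias correction by inverse-probability weighting: all
# screen-positives are verified, only a fraction f_neg of screen-negatives.
rng = np.random.default_rng(0)
n = 100_000
disease = rng.random(n) < 0.05
test_pos = np.where(disease, rng.random(n) < 0.85, rng.random(n) < 0.10)

f_neg = 0.10                                  # verification rate among negatives
verified = test_pos | (rng.random(n) < f_neg)
w = np.where(test_pos, 1.0, 1.0 / f_neg)      # inverse-probability weights

v = verified
sens = np.sum(w[v] * (test_pos & disease)[v]) / np.sum(w[v] * disease[v])
spec = np.sum(w[v] * (~test_pos & ~disease)[v]) / np.sum(w[v] * (~disease)[v])
print(f"weighted sensitivity = {sens:.3f}, specificity = {spec:.3f}")
```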
A two dimensional power spectral estimate for some nonstationary processes. M.S. Thesis
NASA Technical Reports Server (NTRS)
Smith, Gregory L.
1989-01-01
A two dimensional estimate for the power spectral density of a nonstationary process is being developed. The estimate will be applied to helicopter noise data, which are clearly nonstationary. The acoustic pressure from the isolated main rotor and isolated tail rotor is known to be periodically correlated (PC), and the combined noise from the main and tail rotors is assumed to be correlation autoregressive (CAR). The results of this nonstationary analysis will be compared with the current method of assuming that the data are stationary and analyzing them as such. Another method of analysis is to introduce a random phase shift into the data, as shown by Papoulis, to produce a time history which can then be accurately modeled as stationary. This method will also be investigated for the helicopter data. A method used to determine the period of a PC process when the period is not known is discussed. The period of a PC process must be known in order to produce an accurate spectral representation for the process. The spectral estimate is developed. The bias and variability of the estimate are also discussed. Finally, the current method for analyzing nonstationary data is compared to that of using a two dimensional spectral representation. In addition, the method of phase shifting the data is examined.
Proximal sensing for soil carbon accounting
NASA Astrophysics Data System (ADS)
England, Jacqueline R.; Viscarra Rossel, Raphael A.
2018-05-01
Maintaining or increasing soil organic carbon (C) is vital for securing food production and for mitigating greenhouse gas (GHG) emissions, climate change, and land degradation. Some land management practices in cropping, grazing, horticultural, and mixed farming systems can be used to increase organic C in soil, but to assess their effectiveness, we need accurate and cost-efficient methods for measuring and monitoring the change. To determine the stock of organic C in soil, one requires measurements of soil organic C concentration, bulk density, and gravel content, but using conventional laboratory-based analytical methods is expensive. Our aim here is to review the current state of proximal sensing for the development of new soil C accounting methods for emissions reporting and in emissions reduction schemes. We evaluated sensing techniques in terms of their rapidity, cost, accuracy, safety, readiness, and their state of development. The most suitable method for measuring soil organic C concentrations appears to be visible-near-infrared (vis-NIR) spectroscopy and, for bulk density, active gamma-ray attenuation. Sensors for measuring gravel have not been developed, but an interim solution with rapid wet sieving and automated measurement appears useful. Field-deployable, multi-sensor systems are needed for cost-efficient soil C accounting. Proximal sensing can be used for soil organic C accounting, but the methods need to be standardized and procedural guidelines need to be developed to ensure proficient measurement and accurate reporting and verification. These are particularly important if the schemes use financial incentives for landholders to adopt management practices to sequester soil organic C. We list and discuss requirements for developing new soil C accounting methods based on proximal sensing, including requirements for recording, verification, and auditing.
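The vis-NIR route highlighted above typically relies on multivariate calibration from spectra to laboratory-measured organic C. A minimal sketch of that chemometric step using partial least squares regression on simulated spectra (all data and settings are placeholders):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

# PLS calibration from vis-NIR spectra to soil organic C, the standard
# workflow behind spectroscopic C accounting. Data are simulated stand-ins.
rng = np.random.default_rng(1)
n_samples, n_bands = 200, 500
spectra = rng.normal(size=(n_samples, n_bands)).cumsum(axis=1)  # smooth-ish curves
soc = spectra[:, 120] * 0.01 + rng.normal(scale=0.05, size=n_samples)  # g C/100 g

pls = PLSRegression(n_components=10)
r2 = cross_val_score(pls, spectra, soc, cv=5, scoring="r2")
print("cross-validated R2:", r2.mean())
```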
Saracevic, Andrea; Simundic, Ana-Maria; Celap, Ivana; Luzanic, Valentina
2013-07-01
Rigat and colleagues were the first to develop a rapid PCR-based assay for identifying the angiotensin-converting enzyme insertion/deletion (I/D) polymorphism. Due to the large difference in length between the wild-type and mutant alleles, the PCR method is prone to mistyping: preferential amplification of the D allele can cause I/D heterozygotes to be typed as D/D homozygotes. The aim of this study was to investigate whether this preferential amplification can be suppressed by amplifying a longer DNA fragment in a so-called Long PCR protocol. We also aimed to compare the results of genotyping using five different PCR protocols and to estimate the mistyping rate. The study included 200 samples, which were genotyped using the standard method used in our laboratory, a stepdown PCR, a PCR protocol with the inclusion of 4% DMSO, a PCR with insertion-specific primers, and the new Long PCR method. The results of this study show that accurate ACE I/D polymorphism genotyping can be accomplished with both the standard and the Long PCR method; indeed, accurate genotyping was accomplished regardless of the method used. Therefore, if the standard method is optimized carefully, accurate results can be obtained with this simple, inexpensive, and rapid PCR protocol.
Efficient alignment-free DNA barcode analytics
Kuksa, Pavel; Pavlovic, Vladimir
2009-01-01
Background: In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectrum) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens the possibility of accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. Results: New alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species, with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Conclusion: Our results show that the newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding. PMID:19900305
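The spectrum representation is simple to state: each barcode becomes a fixed-length vector of k-mer counts, and sequences are compared through these vectors rather than by alignment. A minimal sketch (k = 3; the example sequences are arbitrary):

```python
from collections import Counter
from itertools import product

# Fixed-length k-mer "spectrum" representation for alignment-free comparison.
def spectrum(seq, k=3):
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return [counts[km] for km in kmers]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

s1 = spectrum("ACGTACGTTGCA")
s2 = spectrum("ACGTTCGTTGCA")
cos = dot(s1, s2) / (dot(s1, s1) ** 0.5 * dot(s2, s2) ** 0.5)
print(f"spectrum-kernel cosine similarity: {cos:.3f}")
```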
Accelerated aging: prediction of chemical stability of pharmaceuticals.
Waterman, Kenneth C; Adami, Roger C
2005-04-11
Methods of rapidly and accurately assessing the chemical stability of pharmaceutical dosage forms are reviewed with respect to the major degradation mechanisms generally observed in pharmaceutical development. Methods are discussed, with the appropriate caveats, for accelerated aging of liquid and solid dosage forms, including small and large molecule active pharmaceutical ingredients. In particular, this review covers general thermal methods, as well as accelerated aging methods appropriate to oxidation, hydrolysis, reaction with reactive excipient impurities, photolysis and protein denaturation.
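The general thermal methods referenced here usually rest on Arrhenius extrapolation from stress temperatures to the storage temperature. A minimal sketch with a hypothetical activation energy (illustrative values only):

```python
import math

# Arrhenius-based accelerated aging: the rate ratio between stress and
# storage temperatures gives an acceleration factor for shelf-life
# extrapolation. Ea is a hypothetical value for illustration.
R = 8.314          # gas constant, J/(mol*K)
Ea = 83_000        # activation energy, J/mol (illustrative)

def acceleration_factor(T_stress, T_store):
    """k(T_stress)/k(T_store) from the Arrhenius law (temperatures in K)."""
    return math.exp(-Ea / R * (1 / T_stress - 1 / T_store))

AF = acceleration_factor(T_stress=323.15, T_store=298.15)  # 50 C vs 25 C
t_stress_days = 90                                         # time to spec at 50 C
print(f"acceleration factor ~ {AF:.1f}; "
      f"predicted shelf life ~ {AF * t_stress_days:.0f} days at 25 C")
```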
Cognitive task analysis-based design and authoring software for simulation training.
Munro, Allen; Clark, Richard E
2013-10-01
The development of more effective medical simulators requires a collaborative team effort where three kinds of expertise are carefully coordinated: (1) exceptional medical expertise focused on providing complete and accurate information about the medical challenges (i.e., critical skills and knowledge) to be simulated; (2) instructional expertise focused on the design of simulation-based training and assessment methods that produce maximum learning and transfer to patient care; and (3) software development expertise that permits the efficient design and development of the software required to capture expertise, present it in an engaging way, and assess student interactions with the simulator. In this discussion, we describe a method of capturing more complete and accurate medical information for simulators and combine it with new instructional design strategies that emphasize the learning of complex knowledge. Finally, we describe three different types of software support (Development/Authoring, Run Time, and Post Run Time) required at different stages in the development of medical simulations and the instructional design elements of the software required at each stage. We describe the contributions expected of each kind of software and the different instructional control authoring support required. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Jianfei; Wang, Shijun; Turkbey, Evrim B.
Purpose: Renal calculi are common extracolonic incidental findings on computed tomographic colonography (CTC). This work aims to develop a fully automated computer-aided diagnosis system to accurately detect renal calculi on CTC images. Methods: The authors developed a total variation (TV) flow method to reduce image noise within the kidneys while maintaining the characteristic appearance of renal calculi. Maximally stable extremal region (MSER) features were then calculated to robustly identify calculi candidates. Finally, the authors computed texture and shape features that were imported to support vector machines for calculus classification. The method was validated on a dataset of 192 patients and compared to a baseline approach that detects calculi by thresholding. The authors also compared their method with the detection approaches using anisotropic diffusion and nonsmoothing. Results: At a false positive rate of 8 per patient, the sensitivities of the new method and the baseline thresholding approach were 69% and 35% (p < 0.001) on all calculi from 1 to 433 mm³ in the testing dataset. The sensitivities of the detection methods using anisotropic diffusion and nonsmoothing were 36% and 0%, respectively. The sensitivity of the new method increased to 90% if only larger and more clinically relevant calculi were considered. Conclusions: Experimental results demonstrated that TV-flow and MSER features are efficient means to robustly and accurately detect renal calculi on low-dose, high noise CTC images. Thus, the proposed method can potentially improve diagnosis.
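A rough sketch of this detection pipeline can be assembled from standard libraries: total-variation denoising, MSER candidate extraction, and an SVM over candidate features. This is an approximation of the described approach (the authors' TV-flow formulation and feature set are not reproduced), with a placeholder image:

```python
import cv2
import numpy as np
from skimage.restoration import denoise_tv_chambolle
from sklearn.svm import SVC

# TV denoise -> MSER candidates -> features for SVM classification.
kidney_roi = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in slice

smoothed = denoise_tv_chambolle(kidney_roi.astype(float) / 255.0, weight=0.1)
smoothed_u8 = (smoothed * 255).astype(np.uint8)

mser = cv2.MSER_create()
regions, _ = mser.detectRegions(smoothed_u8)      # calculus candidates

def features(region):
    # simple shape/intensity features per candidate region (x, y point list)
    xs, ys = region[:, 0], region[:, 1]
    return [len(region), xs.ptp() + 1, ys.ptp() + 1,
            smoothed_u8[ys, xs].mean()]

X = np.array([features(r) for r in regions]) if len(regions) else np.empty((0, 4))
print(f"{len(regions)} candidate regions, feature matrix {X.shape}")
# clf = SVC().fit(X_train, y_train)  # trained elsewhere on labeled candidates
```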
Data Mining for Efficient and Accurate Large Scale Retrieval of Geophysical Parameters
NASA Astrophysics Data System (ADS)
Obradovic, Z.; Vucetic, S.; Peng, K.; Han, B.
2004-12-01
Our effort is devoted to developing data mining technology for improving the efficiency and accuracy of geophysical parameter retrievals by learning a mapping from observation attributes to the corresponding parameters within the framework of classification and regression. We will describe a method for efficient learning of neural network-based classification and regression models from high-volume data streams. The proposed procedure automatically learns a series of neural networks of different complexities on smaller data stream chunks and then properly combines them into an ensemble predictor through averaging. Based on the idea of progressive sampling, the proposed approach starts with a very simple network trained on a very small chunk and then gradually increases the model complexity and the chunk size until the learning performance no longer improves. Our empirical study on aerosol retrievals from data obtained with the MISR instrument mounted on the Terra satellite suggests that the proposed method is successful in learning complex concepts from large data streams with near-optimal computational effort. We will also report on a method that complements deterministic retrievals by constructing accurate predictive algorithms and applying them to appropriately selected subsets of observed data. The method is based on developing more accurate predictors aimed at capturing global and local properties synthesized in a region. The procedure starts by learning the global properties of data sampled over the entire space, and continues by constructing specialized models on selected localized regions. The global and local models are integrated through an automated procedure that determines the optimal trade-off between the two components with the objective of minimizing the overall mean square errors over a specific region. Our experimental results on MISR data showed that the combined model can increase the retrieval accuracy significantly. The preliminary results on various large heterogeneous spatial-temporal datasets provide evidence that the benefits of the proposed methodology for efficient and accurate learning exist beyond the area of retrieval of geophysical parameters.
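The progressive-sampling ensemble described in the first part of the abstract can be sketched as a simple loop: train networks of growing complexity on growing chunks, average them, and stop when held-out error stops improving. A toy version on simulated stream data (network sizes and chunk schedule are arbitrary choices):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Progressive-sampling ensemble: grow chunk size and model complexity until
# validation error of the averaged ensemble no longer improves.
rng = np.random.default_rng(0)
X = rng.normal(size=(20_000, 8))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=len(X))
X_val, y_val = X[-2000:], y[-2000:]          # held-out tail of the stream

ensemble, best_err, chunk, hidden = [], np.inf, 500, 4
while chunk <= 8000:
    net = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=500,
                       random_state=0).fit(X[:chunk], y[:chunk])
    ensemble.append(net)
    pred = np.mean([m.predict(X_val) for m in ensemble], axis=0)
    err = np.mean((pred - y_val) ** 2)
    if err >= best_err:                       # no further improvement -> stop
        break
    best_err, chunk, hidden = err, chunk * 2, hidden * 2

print(f"ensemble size = {len(ensemble)}, validation MSE = {best_err:.4f}")
```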
Kim, Jong-Oh; Kim, Wi-Sik; Kim, Si-Woo; Han, Hyun-Ja; Kim, Jin Woo; Park, Myoung Ae; Oh, Myung-Joo
2014-01-01
Viral hemorrhagic septicemia virus (VHSV) is a problematic pathogen in olive flounder (Paralichthys olivaceus) aquaculture farms in Korea. Thus, it is necessary to develop a rapid and accurate diagnostic method to detect this virus. We developed a quantitative RT-PCR (qRT-PCR) method based on the nucleocapsid (N) gene sequence of a Korean VHSV isolate (Genogroup IVa). The slope and R² values of the primer set developed in this study were −0.2928 (96% efficiency) and 0.9979, respectively. Its comparison with viral infectivity calculated by the traditional quantification method (TCID50) showed a similar pattern of kinetic changes in vitro and in vivo. The qRT-PCR method reduced detection time compared to that of TCID50, making it a very useful tool for VHSV diagnosis. PMID:24859343
Comparison of Predictive Modeling Methods of Aircraft Landing Speed
NASA Technical Reports Server (NTRS)
Diallo, Ousmane H.
2012-01-01
Expected increases in air traffic demand have stimulated the development of air traffic control tools intended to assist the air traffic controller in accurately and precisely spacing aircraft landing at congested airports. Such tools will require an accurate landing-speed prediction to increase throughput while decreasing the controller interventions necessary to avoid separation violations. There are many practical challenges to developing an accurate landing-speed model that has acceptable prediction errors. This paper discusses the development of a near-term implementation, using readily available information, to estimate/model final approach speed from the top of the descent phase of flight to the landing runway. As a first approach, all variables found to contribute directly to the landing-speed prediction model are used to build a multi-regression technique of the response surface equation (RSE). Data obtained from operations of a major airline for a passenger transport aircraft type at the Dallas/Fort Worth International Airport are used to predict the landing speed. The approach was promising because it decreased the standard deviation of the landing-speed prediction error by at least 18% from the standard deviation of the baseline error, depending on the gust condition at the airport. However, when the number of variables is reduced to those most likely obtainable at other major airports, the RSE model shows little improvement over the existing methods. Consequently, a neural network that relies on a nonlinear regression technique is utilized as an alternative modeling approach. For the reduced-variable cases, the standard deviation of the neural-network model errors represents more than a 5% reduction relative to the RSE model errors, and at least a 10% reduction relative to the baseline predicted landing-speed error standard deviation. Overall, the constructed models predict the landing speed more accurately and precisely than the current state of the art.
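The two modeling approaches compared above can be contrasted in a few lines: a second-order response surface (polynomial regression) versus a neural network. A sketch on simulated data with hypothetical stand-in variables (not the operational dataset):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# RSE (degree-2 polynomial regression) vs. neural network on a synthetic
# landing-speed-like response; the three features are hypothetical stand-ins.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))
v_ref = 130 + 8 * X[:, 0] + 3 * X[:, 1] * X[:, 2] + rng.normal(scale=2, size=5000)
X_tr, X_te, y_tr, y_te = train_test_split(X, v_ref, random_state=0)

rse = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X_tr, y_tr)
nn = make_pipeline(StandardScaler(),
                   MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000,
                                random_state=0)).fit(X_tr, y_tr)

for name, model in [("RSE", rse), ("NN", nn)]:
    err = model.predict(X_te) - y_te
    print(f"{name}: error std = {err.std():.2f} kt")
```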
2016 KIVA-hpFE Development: A Robust and Accurate Engine Modeling Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrington, David Bradley; Waters, Jiajia
Los Alamos National Laboratory and its collaborators are facilitating engine modeling by improving the accuracy and robustness of the modeling and the robustness of the software. We also continue to improve the physical modeling methods. We are developing and implementing new mathematical algorithms that represent the physics within an engine. We provide software that others may use directly or alter with various models, e.g., sophisticated chemical kinetics, different turbulence closure methods, or other fuel injection and spray systems.
Young, Mariel; Johannesdottir, Fjola; Poole, Ken; Shaw, Colin; Stock, J T
2018-02-01
Femoral head diameter is commonly used to estimate body mass from the skeleton. The three most frequently employed methods, designed by Ruff, Grine, and McHenry, were developed using different populations to address different research questions. They were not specifically designed for application to female remains, and their accuracy for this purpose has rarely been assessed or compared in living populations. This study analyzes the accuracy of these methods using a sample of modern British women through the use of pelvic CT scans (n = 97) and corresponding information about the individuals' known height and weight. Results showed that all methods provided reasonably accurate body mass estimates (average percent prediction errors under 20%) for the normal weight and overweight subsamples, but were inaccurate for the obese and underweight subsamples (average percent prediction errors over 20%). When women of all body mass categories were combined, the methods provided reasonable estimates (average percent prediction errors between 16 and 18%). The results demonstrate that different methods provide more accurate results within specific body mass index (BMI) ranges. The McHenry Equation provided the most accurate estimation for women of small body size, while the original Ruff Equation is most likely to be accurate if the individual was obese or severely obese. The refined Ruff Equation was the most accurate predictor of body mass on average for the entire sample, indicating that it should be utilized when there is no knowledge of the individual's body size or if the individual is assumed to be of a normal body size. The study also revealed a correlation between pubis length and body mass, and an equation for body mass estimation using pubis length was accurate in a dummy sample, suggesting that pubis length can also be used to acquire reliable body mass estimates. This has implications for how we interpret body mass in fossil hominins and has particular relevance to the interpretation of the long pubic ramus that is characteristic of Neandertals. Copyright © 2017 Elsevier Ltd. All rights reserved.
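The accuracy metric used throughout this study is the percent prediction error. A minimal sketch; the linear equation form and its coefficients here are hypothetical placeholders, not the published Ruff, Grine, or McHenry values:

```python
# Percent prediction error (%PE) for a femoral-head-diameter body mass
# equation. Coefficients a, b are hypothetical, for illustration only.
def estimate_body_mass(fhd_mm, a=2.2, b=-40.0):
    return a * fhd_mm + b            # kg, generic linear FHD equation

def percent_prediction_error(estimated, actual):
    return 100.0 * (estimated - actual) / actual

est = estimate_body_mass(44.0)
print(f"estimate = {est:.1f} kg, %PE vs 62 kg actual = "
      f"{percent_prediction_error(est, 62.0):+.1f}%")
```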
Milne, Marjorie E; Steward, Christopher; Firestone, Simon M; Long, Sam N; O'Brien, Terrence J; Moffat, Bradford A
2016-04-01
To develop representative MRI atlases of the canine brain and to evaluate 3 methods of atlas-based segmentation (ABS). 62 dogs without clinical signs of epilepsy and without MRI evidence of structural brain disease. The MRI scans from 44 dogs were used to develop 4 templates on the basis of brain shape (brachycephalic, mesaticephalic, dolichocephalic, and combined mesaticephalic and dolichocephalic). Atlas labels were generated by segmenting the brain, ventricular system, hippocampal formation, and caudate nuclei. The MRI scans from the remaining 18 dogs were used to evaluate 3 methods of ABS (manual brain extraction and application of a brain shape-specific template [A], automatic brain extraction and application of a brain shape-specific template [B], and manual brain extraction and application of a combined template [C]). The performance of each ABS method was compared by calculation of the Dice and Jaccard coefficients, with manual segmentation used as the gold standard. Method A had the highest mean Jaccard coefficient and was the most accurate ABS method assessed. Measures of overlap for ABS methods that used manual brain extraction (A and C) ranged from 0.75 to 0.95 and compared favorably with repeated measures of overlap for manual extraction, which ranged from 0.88 to 0.97. Atlas-based segmentation was an accurate and repeatable method for segmentation of canine brain structures. It could be performed more rapidly than manual segmentation, which should allow the application of computer-assisted volumetry to large data sets and clinical cases and facilitate neuroimaging research and disease diagnosis.
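The two overlap scores used to rank the ABS methods are straightforward to compute from binary masks. A minimal sketch:

```python
import numpy as np

# Dice and Jaccard overlap between an automatic segmentation and a manual
# gold-standard mask.
def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def jaccard(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

auto = np.zeros((64, 64), dtype=bool); auto[10:40, 10:40] = True
manual = np.zeros((64, 64), dtype=bool); manual[15:45, 12:42] = True
print(f"Dice = {dice(auto, manual):.3f}, Jaccard = {jaccard(auto, manual):.3f}")
```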
El-Yazbi, Amira F
2017-01-20
Sofosbuvir (SOFO) was approved by the U.S. Food and Drug Administration in 2013 for the treatment of hepatitis C virus infection, with enhanced antiviral potency compared with earlier analogs. Notwithstanding, all current editions of the pharmacopeias still do not present any analytical methods for the quantification of SOFO. Thus, rapid, simple, and ecofriendly methods for the routine analysis of commercial formulations of SOFO are desirable. In this study, five accurate methods for the determination of SOFO in pharmaceutical tablets were developed and validated. These methods include HPLC, capillary zone electrophoresis, HPTLC, and UV spectrophotometric and derivative spectrometry methods. The proposed methods proved to be rapid, simple, sensitive, selective, and accurate analytical procedures suitable for the reliable determination of SOFO in pharmaceutical tablets. An analysis of variance test with P-value > 0.05 confirmed that there were no significant differences between the proposed assays. Thus, any of these methods can be used for the routine analysis of SOFO in commercial tablets.
NASA Technical Reports Server (NTRS)
Liever, Peter A.; West, Jeffrey S.; Harris, Robert E.
2016-01-01
A hybrid Computational Fluid Dynamics and Computational Aero-Acoustics (CFD/CAA) modeling framework has been developed for launch vehicle liftoff acoustic environment predictions. The framework couples the existing highly-scalable NASA production CFD code, Loci/CHEM, with a high-order accurate Discontinuous Galerkin solver developed in the same production framework, Loci/THRUST, to accurately resolve and propagate acoustic physics across the entire launch environment. Time-accurate, Hybrid RANS/LES CFD modeling is applied for predicting the acoustic generation physics at the plume source, and a high-order accurate unstructured mesh Discontinuous Galerkin (DG) method is employed to propagate acoustic waves away from the source across large distances using high-order accurate schemes. The DG solver is capable of solving 2nd, 3rd, and 4th order Euler solutions for non-linear, conservative acoustic field propagation. Initial application testing and validation has been carried out against high resolution acoustic data from the Ares Scale Model Acoustic Test (ASMAT) series to evaluate the capabilities and production readiness of the CFD/CAA system to resolve the observed spectrum of acoustic frequency content. This paper presents results from this validation and outlines efforts to mature and improve the computational simulation framework.
Compensation of the sheath effects in cylindrical floating probes
NASA Astrophysics Data System (ADS)
Park, Ji-Hwan; Chung, Chin-Wook
2018-05-01
In cylindrical floating probe measurements, the plasma density and electron temperature are overestimated due to sheath expansion and oscillation. To reduce these sheath effects, a compensation method based on well-developed floating sheath theories is proposed and applied to the floating harmonic method. The iterative calculation of the Allen-Boyd-Reynolds equation can derive the floating sheath thickness, which can be used to calculate the effective ion collection area; in this way, an accurate ion density is obtained. The Child-Langmuir law is used to calculate the ion harmonic currents caused by sheath oscillation of the alternating-voltage-biased probe tip. Accurate plasma parameters can be obtained by subtracting these ion harmonic currents from the total measured harmonic currents. Herein, the measurement principles and compensation method are discussed in detail and an experimental demonstration is presented.
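The area correction at the core of this compensation can be sketched with the textbook collisionless Child-law sheath thickness, s = (√2/3)·λD·(2V0/Te)^(3/4); note this is a generic matrix-sheath refinement for illustration, not the paper's iterative ABR solution. Values are illustrative:

```python
import math

# Sheath-expansion correction to the ion collection area of a cylindrical
# probe, using the Child-law sheath thickness estimate. Te is in volts.
eps0, e = 8.854e-12, 1.602e-19
Te = 3.0              # electron temperature, eV
n = 1e16              # plasma density, m^-3
V0 = 15.0             # voltage across the sheath, V
r, L = 5e-5, 1e-2     # probe radius and length, m

lam_D = math.sqrt(eps0 * Te / (e * n))                 # Debye length
s = (math.sqrt(2) / 3) * lam_D * (2 * V0 / Te) ** 0.75 # sheath thickness
area_eff = 2 * math.pi * (r + s) * L                   # expanded collection area
print(f"lambda_D = {lam_D*1e6:.0f} um, sheath = {s*1e6:.0f} um, "
      f"area ratio = {(r + s) / r:.2f}")
```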
Hori, Kenta; Kuroda, Tomohiro; Oyama, Hiroshi; Ozaki, Yasuhiko; Nakamura, Takehiko; Takahashi, Takashi
2005-12-01
For faultless collaboration among the surgeon, surgical staff, and surgical robots in telesurgery, communication must include environmental information about the remote operating room, such as the behavior of robots and staff and the patient's vital information (termed supporting information), in addition to the view of the surgical field. The "Surgical Cockpit System," a telesurgery support system developed by the authors, is mainly focused on supporting information exchange between remote sites. Live video presentation is an important technology for the Surgical Cockpit System, and a visualization method that conveys the precise location/posture of surgical instruments is indispensable for accurate control and faultless operation. In this paper, the authors propose a three-side-view presentation method for precise location/posture control of surgical instruments in telesurgery. The experimental results show that the proposed method improved the positioning accuracy of a telemanipulator.
NASA Astrophysics Data System (ADS)
Bart, Gerhard; Aerne, Ernst Tino; Burri, Martin; Zwicky, Hans-Urs
1986-11-01
Cladding carburization during irradiation of advanced mixed uranium-plutonium carbide fast breeder reactor fuel is possibly a life-limiting fuel pin factor. The quantitative assessment of such clad carbon embrittlement is difficult to perform by electron microprobe analysis because of sample surface contamination and the very low energy of the carbon Kα X-ray transition. The work presented here describes a method developed at the Swiss Federal Institute for Reactor Research (EIR) that uses shielded secondary ion mass spectrometry (SIMS) as an accurate tool to determine radial distribution profiles of carbon in radioactive stainless steel fuel pin cladding. Compared with nuclear microprobe analysis (NMA) [1], which is also an accurate method for carbon analysis, the SIMS method distinguishes itself by its versatility for the simultaneous determination of additional impurities.
Rose, John P; Wang, Bi-Cheng; Weiss, Manfred S
2015-07-01
Native SAD phasing uses the anomalous scattering signal of light atoms in the crystalline, native samples of macromolecules collected from single-wavelength X-ray diffraction experiments. These atoms include sodium, magnesium, phosphorus, sulfur, chlorine, potassium and calcium. Native SAD phasing is challenging and is critically dependent on the collection of accurate data. Over the past five years, advances in diffraction hardware, crystallographic software, data-collection methods and strategies, and the use of data statistics have been witnessed which allow 'highly accurate data' to be routinely collected. Today, native SAD sits on the verge of becoming a 'first-choice' method for both de novo and molecular-replacement structure determination. This article will focus on advances that have caught the attention of the community over the past five years. It will also highlight both de novo native SAD structures and recent structures that were key to methods development.
Overview of aerothermodynamic loads definition study
NASA Technical Reports Server (NTRS)
Gaugler, Raymond E.
1991-01-01
The objective of the Aerothermodynamic Loads Definition Study is to develop methods of accurately predicting the operating environment in advanced Earth-to-Orbit (ETO) propulsion systems, such as the Space Shuttle Main Engine (SSME) powerhead. Development of time averaged and time dependent three dimensional viscous computer codes as well as experimental verification and engine diagnostic testing are considered to be essential in achieving that objective. Time-averaged, nonsteady, and transient operating loads must all be well defined in order to accurately predict powerhead life. Described here is work in unsteady heat flow analysis, improved modeling of preburner flow, turbulence modeling for turbomachinery, computation of three dimensional flow with heat transfer, and unsteady viscous multi-blade row turbine analysis.
Houts, Carrie R; Edwards, Michael C; Wirth, R J; Deal, Linda S
2016-11-01
There has been a notable increase in the advocacy of using small-sample designs as an initial quantitative assessment of item and scale performance during the scale development process. This is particularly true in the development of clinical outcome assessments (COAs), where Rasch analysis has been advanced as an appropriate statistical tool for evaluating the developing COAs using a small sample. We review the benefits such methods are purported to offer from both a practical and statistical standpoint and detail several problematic areas, including both practical and statistical theory concerns, with respect to the use of quantitative methods, including Rasch-consistent methods, with small samples. The feasibility of obtaining accurate information and the potential negative impacts of misusing large-sample statistical methods with small samples during COA development are discussed.
Salis, Howard; Kaznessis, Yiannis N
2005-12-01
Stochastic chemical kinetics more accurately describes the dynamics of "small" chemical systems, such as biological cells. Many real systems contain dynamical stiffness, which causes the exact stochastic simulation algorithm or other kinetic Monte Carlo methods to spend the majority of their time executing frequently occurring reaction events. Previous methods have successfully applied a type of probabilistic steady-state approximation by deriving an evolution equation, such as the chemical master equation, for the relaxed fast dynamics and using the solution of that equation to determine the slow dynamics. However, because the solution of the chemical master equation is limited to small, carefully selected, or linear reaction networks, an alternate equation-free method would be highly useful. We present a probabilistic steady-state approximation that separates the time scales of an arbitrary reaction network, detects the convergence of a marginal distribution to a quasi-steady-state, directly samples the underlying distribution, and uses those samples to accurately predict the state of the system, including the effects of the slow dynamics, at future times. The numerical method produces an accurate solution of both the fast and slow reaction dynamics while, for stiff systems, reducing the computational time by orders of magnitude. The developed theory makes no approximations on the shape or form of the underlying steady-state distribution and only assumes that it is ergodic. We demonstrate the accuracy and efficiency of the method using multiple interesting examples, including a highly nonlinear protein-protein interaction network. The developed theory may be applied to any type of kinetic Monte Carlo simulation to more efficiently simulate dynamically stiff systems, including existing exact, approximate, or hybrid stochastic simulation techniques.
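For contrast with the approximation described above, the baseline it accelerates is Gillespie's exact SSA, whose cost is dominated by the fast reactions in stiff networks. A minimal SSA for a fast reversible binding step plus a slow decay (rate constants are arbitrary):

```python
import math, random

# Exact stochastic simulation algorithm (Gillespie) for a stiff toy network:
# fast A + B <-> C, slow C -> 0. Nearly all steps are spent on the fast pair,
# which is what the steady-state approximation above avoids.
random.seed(0)
x = {"A": 100, "B": 100, "C": 0}
rates = {"bind": 10.0, "unbind": 10.0, "decay": 0.01}
t, t_end = 0.0, 1.0

while t < t_end:
    a = [rates["bind"] * x["A"] * x["B"],   # propensities
         rates["unbind"] * x["C"],
         rates["decay"] * x["C"]]
    a0 = sum(a)
    if a0 == 0:
        break
    t += -math.log(1.0 - random.random()) / a0   # exponential waiting time
    r = random.random() * a0                     # choose which reaction fires
    if r < a[0]:
        x["A"] -= 1; x["B"] -= 1; x["C"] += 1
    elif r < a[0] + a[1]:
        x["A"] += 1; x["B"] += 1; x["C"] -= 1
    else:
        x["C"] -= 1

print(t, x)
```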
Pang, Guo-Fang; Fan, Chun-Lin; Chang, Qiao-Ying; Li, Jian-Xun; Kang, Jian; Lu, Mei-Ling
2018-03-22
This paper uses the LC-quadrupole-time-of-flight MS technique to evaluate the MS behavioral characteristics of 485 pesticides under different conditions and develops an accurate mass database and spectra library. A high-throughput screening and confirmation method has been developed for the 485 pesticides in fruits and vegetables. Through the optimization of parameters such as accurate mass number, retention time window, and ionization form, the method has improved the accuracy of pesticide screening, avoiding false-positive and false-negative results. The method features a full scan of fragments, with 80% of pesticides having more than 10 qualitative points, which increases qualitative accuracy. The abundant differences among fragment categories enable the effective separation and qualitative identification of isomeric pesticides. Four different fruits and vegetables (apples, grapes, celery, and tomatoes) were chosen to evaluate the efficiency of the method at three fortification levels of 5, 10, and 20 μg/kg, and satisfactory results were obtained. With this method, a national survey of pesticide residues was conducted between 2012 and 2015 on 12 551 samples of 146 different fruits and vegetables collected from 638 sampling points in 284 counties across 31 provincial capitals/cities directly under the central government, providing scientific data for ensuring the pesticide residue safety of the fruits and vegetables consumed daily by the public. Meanwhile, big-data statistical analysis of the new technique further proves it to be of high speed, high throughput, high accuracy, high reliability, and a high degree of informatization.
[Recent advances in sample preparation methods of plant hormones].
Wu, Qian; Wang, Lus; Wu, Dapeng; Duan, Chunfeng; Guan, Yafeng
2014-04-01
Plant hormones are a group of naturally occurring trace substances which play a crucial role in controlling plant development, growth, and environmental response. With the development of chromatography and mass spectrometry techniques, chromatographic analysis has become a widely used approach for plant hormone analysis. Among the steps of chromatographic analysis, sample preparation is undoubtedly the most vital one. Thus, a highly selective and efficient sample preparation method is critical for the accurate identification and quantification of phytohormones. For the three major kinds of plant hormones (acidic and basic plant hormones, brassinosteroids, and plant polypeptides), sample preparation methods are reviewed in sequence, with emphasis on recently developed methods. The review covers novel methods, devices, extraction materials, and derivatization reagents for sample preparation in phytohormone analysis, including some related works of our group. Finally, future developments in this field are discussed.
NASA Technical Reports Server (NTRS)
Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.
1973-01-01
High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local linearization procedure of integrating dynamic system equations when using a digital computer in real-time. The procedure is specifically applied to the integration of the quaternion rate equations. For this application, results are compared to a classical second-order method. The local linearization approach is shown to have desirable stability characteristics and gives significant improvement in accuracy over the classical second-order integration methods.
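The spirit of the local-linearization approach, holding the rate constant over a step and propagating the quaternion with the resulting exact increment rather than a low-order polynomial update, can be sketched briefly (this is an illustration of the idea, not the paper's exact formulation):

```python
import numpy as np

# Propagate a unit quaternion under a body rate w held constant over each
# step, using the exact rotation increment; stays stable and unit-norm even
# at high angular rates, unlike low-order polynomial updates.
def quat_mul(q, p):
    w1, x1, y1, z1 = q; w2, x2, y2, z2 = p
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def step(q, w, dt):
    """One integration step: rotate q by |w|*dt about axis w/|w|."""
    th = np.linalg.norm(w) * dt
    if th < 1e-12:
        return q
    axis = w / np.linalg.norm(w)
    dq = np.concatenate(([np.cos(th / 2)], np.sin(th / 2) * axis))
    return quat_mul(q, dq)

q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):                        # high-rate spin: 10 rad/s about z
    q = step(q, np.array([0.0, 0.0, 10.0]), dt=0.01)
print(q, np.linalg.norm(q))                 # 10 rad rotation, norm stays 1
```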
Formińska, Kamila; Zasada, Aleksandra A; Rastawicki, Waldemar; Śmietańska, Karolina; Bander, Dorota; Wawrzynowicz-Syczewska, Marta; Yanushevych, Mariya; Niścigórska-Olsen, Jolanta; Wawszczak, Marek
2015-01-01
The study describes four cases of tularaemia - one developed after contact with rabbits and three developed after an arthropod bite. Due to non-specific clinical symptoms, accurate diagnosis of tularaemia may be difficult. The increasing contribution of arthropod vectors to the transmission of the disease indicates that special effort should be made to apply sensitive and specific diagnostic methods for tularaemia, and to remind health-care workers about this route of Francisella tularensis infection. The advantages and disadvantages of various diagnostic methods - molecular, serological, and microbiological culture - are discussed. PCR is presented as a rapid and appropriate diagnostic method for ulceroglandular tularaemia.
Just-in-Time Correntropy Soft Sensor with Noisy Data for Industrial Silicon Content Prediction.
Chen, Kun; Liang, Yu; Gao, Zengliang; Liu, Yi
2017-08-08
Development of accurate data-driven quality prediction models for industrial blast furnaces encounters several challenges, mainly because the collected data are nonlinear, non-Gaussian, and unevenly distributed. A just-in-time correntropy-based local soft sensing approach is presented to predict the silicon content in this work. Without cumbersome efforts for outlier detection, a correntropy support vector regression (CSVR) modeling framework is proposed to deal with soft sensor development and outlier detection simultaneously. Moreover, with a continuously updated database and a clustering strategy, a just-in-time CSVR (JCSVR) method is developed. Consequently, more accurate prediction and efficient implementation of JCSVR can be achieved. The better prediction performance of JCSVR is validated on online silicon content prediction, in comparison with traditional soft sensors.
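The just-in-time portion of the scheme can be sketched independently of the correntropy loss: for each query, select the most similar historical samples and fit a local regressor on them. A simplified version using plain epsilon-SVR on simulated data (the CSVR robust loss is not reproduced):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.neighbors import NearestNeighbors

# Just-in-time local soft sensor: per query, fit a local SVR on the k nearest
# historical samples from a continuously updated database. Data simulated.
rng = np.random.default_rng(0)
X_db = rng.normal(size=(2000, 6))                       # historical inputs
y_db = X_db[:, 0] ** 2 + X_db[:, 1] + rng.normal(scale=0.1, size=2000)

def jit_predict(x_query, k=100):
    nn = NearestNeighbors(n_neighbors=k).fit(X_db)
    idx = nn.kneighbors(x_query.reshape(1, -1), return_distance=False)[0]
    local = SVR(C=10.0, epsilon=0.01).fit(X_db[idx], y_db[idx])
    return local.predict(x_query.reshape(1, -1))[0]

x_new = rng.normal(size=6)
print("predicted silicon content (a.u.):", jit_predict(x_new))
```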
Gall-ID: Tools for genotyping gall-causing phytopathogenic bacteria
USDA-ARS?s Scientific Manuscript database
Understanding the population structure and genetic diversity of plant pathogens, as well as the effect of agricultural practices on pathogen evolution, are important for disease management. Developments in molecular methods have contributed to increasing the resolution for accurate pathogen identifi...
CASE STUDIES EXAMINING LCA STREAMLINING TECHNIQUES
Pressure is mounting for more streamlined Life Cycle Assessment (LCA) methods that allow for evaluations that are quick and simple, but accurate. As part of an overall research effort to develop and demonstrate streamlined LCA, the U.S. Environmental Protection Agency has funded ...
[Study on Accurately Controlling Discharge Energy Method Used in External Defibrillator].
Song, Biao; Wang, Jianfei; Jin, Lian; Wu, Xiaomei
2016-01-01
This paper introduces a new method for accurately controlling discharge energy. It is achieved by calculating the target voltage based on transthoracic impedance and accurately controlling the charging voltage and discharge pulse width. A new defibrillator was designed and programmed using this method. The test results show that the method is valid and applicable to all kinds of external defibrillators.
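The voltage-energy bookkeeping behind such control follows from the stored capacitor energy E = ½CV². A minimal sketch; the capacitance and the impedance-based pulse-width rule are hypothetical placeholders:

```python
import math

# Charging voltage for a target discharge energy from E = 1/2 * C * V^2,
# plus a hypothetical linear impedance-based pulse-width rule.
C = 100e-6                     # storage capacitance, F (typical order)

def charge_voltage(target_energy_J):
    return math.sqrt(2.0 * target_energy_J / C)

def pulse_width_ms(impedance_ohm, base_ms=8.0, ref_ohm=50.0):
    # hypothetical scaling: wider pulse into higher transthoracic impedance
    return base_ms * impedance_ohm / ref_ohm

print(f"{charge_voltage(150):.0f} V for 150 J, "
      f"{pulse_width_ms(75):.1f} ms pulse at 75 ohm")
```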
Tichauer, Kenneth M.; Wang, Yu; Pogue, Brian W.; Liu, Jonathan T. C.
2015-01-01
The development of methods to accurately quantify cell-surface receptors in living tissues would have a seminal impact in oncology. For example, accurate measures of receptor density in vivo could enhance early detection or surgical resection of tumors via protein-based contrast, allowing removal of cancer with high phenotype specificity. Alternatively, accurate receptor expression estimation could be used as a biomarker to guide patient-specific clinical oncology targeting of the same molecular pathway. Unfortunately, conventional molecular contrast-based imaging approaches are not well adapted to accurately estimating the nanomolar-level cell-surface receptor concentrations in tumors, as most images are dominated by nonspecific sources of contrast such as high vascular permeability and lymphatic inhibition. This article reviews approaches for overcoming these limitations based upon tracer kinetic modeling and the use of emerging protocols to estimate binding potential and the related receptor concentration. Methods such as using single time point imaging or a reference-tissue approach tend to have low accuracy in tumors, whereas paired-agent methods or advanced kinetic analyses are more promising to eliminate the dominance of interstitial space in the signals. Nuclear medicine and optical molecular imaging are the primary modalities used, as they have the nanomolar level sensitivity needed to quantify cell-surface receptor concentrations present in tissue, although each likely has a different clinical niche. PMID:26134619
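One of the paired-agent ideas referenced above can be caricatured in a few lines: image a receptor-targeted and a matched untargeted agent together and estimate the bound fraction from their late-time ratio. A heavily simplified sketch on synthetic curves (real protocols involve full kinetic modeling):

```python
import numpy as np

# Simplified late-time paired-agent estimate: binding potential from the
# ratio of targeted to untargeted agent concentrations, BP ~ ratio - 1.
t = np.linspace(0, 60, 121)                 # minutes
c_untargeted = np.exp(-t / 15)              # clears freely
c_targeted = np.exp(-t / 15) * (1 + 1.5)    # retained by binding (true BP = 1.5)

late = t > 40
bp_est = np.mean(c_targeted[late] / c_untargeted[late]) - 1
print(f"estimated binding potential ~ {bp_est:.2f}")
```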
William J. Zielinski; Fredrick V. Schlexer; T. Luke George; Kristine L. Pilgrim; Michael K. Schwartz
2013-01-01
The Point Arena mountain beaver (Aplodontia rufa nigra) is federally listed as an endangered subspecies that is restricted to a small geographic range in coastal Mendocino County, California. Management of this imperiled taxon requires accurate information on its demography and vital rates. We developed noninvasive survey methods, using hair snares to sample DNA and to...
ERIC Educational Resources Information Center
Supej, Matej; Holmberg, Hans-Christer
2011-01-01
Accurate time measurement is essential to temporal analysis in sport. This study aimed to (a) develop a new method for time computation from surveyed trajectories using a high-end global navigation satellite system (GNSS), (b) validate its precision by comparing GNSS with photocells, and (c) examine whether gate-to-gate times can provide more…
Development of a new flux splitting scheme
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Steffen, Christopher J., Jr.
1991-01-01
The use of a new splitting scheme, the advection upstream splitting method, for model aerodynamic problems where Van Leer and Roe schemes had failed previously is discussed. The present scheme is based on splitting in which the convective and pressure terms are separated and treated differently depending on the underlying physical conditions. The present method is found to be both simple and accurate.
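A one-dimensional flux routine in the spirit of this scheme, Mach-number-split convective flux plus a separately split pressure term for a perfect gas, can be sketched as follows (first-order interface states; an illustration of the splitting, not the production implementation):

```python
import numpy as np

# AUSM-style interface flux: split interface Mach number carries the
# convective quantities upwind; pressure is split separately.
gamma = 1.4

def m_split(M, sign):                       # van Leer-type Mach splitting
    if abs(M) <= 1.0:
        return sign * 0.25 * (M + sign) ** 2
    return 0.5 * (M + sign * abs(M))

def p_split(M, sign):                       # matching pressure splitting
    if abs(M) <= 1.0:
        return 0.25 * (M + sign) ** 2 * (2.0 - sign * M)
    return 0.5 * (1.0 + sign * np.sign(M))

def ausm_flux(rho_L, u_L, p_L, rho_R, u_R, p_R):
    a_L = np.sqrt(gamma * p_L / rho_L); a_R = np.sqrt(gamma * p_R / rho_R)
    H_L = a_L**2 / (gamma - 1) + 0.5 * u_L**2    # total enthalpy
    H_R = a_R**2 / (gamma - 1) + 0.5 * u_R**2
    M_half = m_split(u_L / a_L, +1) + m_split(u_R / a_R, -1)
    # convective part, upwinded by the sign of the interface Mach number
    phi = (np.array([rho_L, rho_L*u_L, rho_L*H_L]) * a_L if M_half >= 0
           else np.array([rho_R, rho_R*u_R, rho_R*H_R]) * a_R)
    p_half = p_split(u_L / a_L, +1) * p_L + p_split(u_R / a_R, -1) * p_R
    return M_half * phi + np.array([0.0, p_half, 0.0])

print(ausm_flux(1.0, 0.5, 1.0, 0.8, 0.4, 0.9))   # Sod-like interface states
```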
New methods and materials for molding and casting ice formations
NASA Technical Reports Server (NTRS)
Reehorst, Andrew L.; Richter, G. Paul
1987-01-01
This study was designed to find improved materials and techniques for molding and casting natural or simulated ice shapes that could replace the wax and plaster method. By utilizing modern molding and casting materials and techniques, a new methodology was developed that provides excellent reproduction, low-temperature capability, and reasonable turnaround time. The resulting casts are accurate and tough.
Ye, Yuanyuan; Deng, Yin; Mao, Jinju; Yan, Qin; Huang, Yidan; Zhang, Jun; Zheng, Jian; Li, Yue; Chen, Weixian
2018-05-01
Fecal occult blood test (FOBT) plays an important role in the diagnosis of gastrointestinal diseases. The sensitivities of current FOBT methods are still not satisfactory. The aim of this study is to develop a combined human transferrin (HTf)-hemoglobin (HHb) lateral flow assay (LFA) for accurate and rapid FOBT. Monoclonal antibodies (MAbs) targeting HTf were developed by conventional methods and paired using LFA strips. The best HTf MAb pair was chosen according to the overall performance on testing limit and specificity. Meanwhile, HHb LFA strips were prepared using previously developed HHb MAbs. The testing limit and specificity were characterized. Based on the selected HTf MAb pair and the verified HHb MAb pair, combined HTf-HHb strips were developed. The combined HTf-HHb strips were used for FOBT of 400 human fecal samples, including 200 gastrointestinal bleeding specimens and 200 healthy subjects. For comparison, the homemade individual HTf and HHb strips, as well as three kinds of commercial FOBT strips, were also used for the FOBT. Two MAb pairs targeting HTf were developed for LFA. Two types of HTf strips were prepared accordingly. The type I was chosen due to its lower detection limit. Using the type I HTf MAb pair and the verified HHb MAb pair, the combined HTf-HHb strips could detect HTf at concentrations between 1 ng/mL and 1 × 10⁶ ng/mL and HHb between 10 ng/mL and 2.5 × 10⁶ ng/mL. Compared to individual HTf and HHb strips and three kinds of commercial strips, the combined strips showed the highest diagnostic sensitivity in FOBT (96.0%). The specificity was a satisfactory 99%. Our combined HTf-HHb test strips are a very promising product for accurate and rapid FOBT.
Innovations in energy expenditure assessment.
Achamrah, Najate; Oshima, Taku; Genton, Laurence
2018-06-15
Optimal nutritional therapy has been associated with better clinical outcomes and requires providing energy as close as possible to measured energy expenditure. We reviewed the current innovations in energy expenditure assessment in humans, focusing on indirect calorimetry and other new alternative methods. Although considered the reference method for measuring energy expenditure, the use of indirect calorimetry is currently limited by the lack of an adequate device. However, recent technical developments may allow a broader use of indirect calorimetry for in-patients and out-patients. An ongoing international academic initiative to develop a new indirect calorimeter aims to provide innovative and affordable technical solutions for many of the current limitations of indirect calorimetry. New alternatives to indirect calorimetry, including CO2 measurements in mechanically ventilated patients, isotopic approaches, and accelerometry-based fitness equipment, show promise but have either been poorly studied or are not accurate compared with indirect calorimetry. Therefore, to date, energy expenditure measured by indirect calorimetry remains the gold standard to guide nutritional therapy. Some new innovative methods show promise in energy expenditure assessment but still need to be validated. There is an ongoing need for an easy-to-use, accurate, and affordable indirect calorimeter for daily use in in-patients and out-patients.
Numerical experiments with a symmetric high-resolution shock-capturing scheme
NASA Technical Reports Server (NTRS)
Yee, H. C.
1986-01-01
Characteristic-based explicit and implicit total variation diminishing (TVD) schemes for the two-dimensional compressible Euler equations have recently been developed. This is a generalization of recent work of Roe and Davis to a wider class of symmetric (non-upwind) TVD schemes other than Lax-Wendroff. The Roe and Davis schemes can be viewed as a subset of the class of explicit methods. The main properties of the present class of schemes are that they can be implicit, and, when steady-state calculations are sought, the numerical solution is independent of the time step. In a recent paper, a comparison of a linearized form of the present implicit symmetric TVD scheme with an implicit upwind TVD scheme originally developed by Harten and modified by Yee was given. Results favored the symmetric method. It was found that the latter is just as accurate as the upwind method while requiring less computational effort. Currently, more numerical experiments are being conducted on time-accurate calculations and on the effect of grid topology, numerical boundary condition procedures, and different flow conditions on the behavior of the method for steady-state applications. The purpose here is to report experiences with this type of scheme and give guidelines for its use.
Akram, Usman M; Khan, Shoab A
2012-10-01
There is ever-increasing interest in the development of automatic medical diagnosis systems, driven by advances in computing technology and the desire to improve the service provided by the medical community. Reliable and accurate medical diagnosis requires knowledge about health and disease. Diabetic retinopathy (DR) is one of the most common causes of blindness, and it can be prevented if detected and treated early. DR presents several signs; the most distinctive are microaneurysms and haemorrhages, which are dark lesions, and hard exudates and cotton wool spots, which are bright lesions. The location and structure of blood vessels and the optic disk play an important role in accurate detection and classification of dark and bright lesions for early detection of DR. In this article, we propose a computer-aided system for the early detection of DR. The article presents algorithms for retinal image preprocessing, blood vessel enhancement and segmentation, and optic disk localization and detection, which eventually lead to detection of different DR lesions using a proposed hybrid fuzzy classifier. The developed methods are tested on four different publicly available databases. The presented methods are compared with recently published methods, and the results show that the presented methods outperform all others.
Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics
Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna
2016-01-01
Determining thawing times of frozen foods is a challenging problem because the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed, ranging from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always possible, as the calculations take time and the specialized software and equipment are not always affordable. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for thawing time estimation of agricultural and food products. The review reveals the need for further improvement of the existing solutions or development of new ones that will enable accurate determination of thawing time within a wide range of practical heat transfer conditions during processing. PMID:27904387
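For a concrete sense of the simplest analytical family reviewed, a Plank-type slab formula can be evaluated directly; the geometry factors (P = 1/2, R = 1/8 for a slab) are the classical ones, while the property values below are illustrative assumptions rather than data from the review:

    # A minimal Plank-type estimate for a one-dimensional slab (hypothetical
    # property values; real products need the refinements reviewed above).
    def plank_type_time(rho, latent_heat, T_m, T_a, d, h, k, P=0.5, R=0.125):
        """Phase-change time [s] for a slab of full thickness d [m].
        rho [kg/m^3], latent_heat [J/kg], T_m melting temperature [C],
        T_a medium temperature [C], h surface heat-transfer coefficient
        [W/m^2 K], k thermal conductivity [W/m K]."""
        return rho * latent_heat / abs(T_a - T_m) * (P * d / h + R * d ** 2 / k)

    # Illustrative: a 5 cm slab of lean meat thawed in 15 C circulating water
    t = plank_type_time(rho=1050.0, latent_heat=250e3, T_m=-1.0, T_a=15.0,
                        d=0.05, h=500.0, k=0.5)
    print(f"estimated thawing time: {t / 3600:.1f} h")   # roughly 3 h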
A method to accelerate creation of plasma etch recipes using physics and Bayesian statistics
NASA Astrophysics Data System (ADS)
Chopra, Meghali J.; Verma, Rahul; Lane, Austin; Willson, C. G.; Bonnecaze, Roger T.
2017-03-01
Next generation semiconductor technologies like high density memory storage require precise 2D and 3D nanopatterns. Plasma etching processes are essential to achieving the nanoscale precision required for these structures. Current plasma process development methods rely primarily on iterative trial and error or factorial design of experiment (DOE) to define the plasma process space. Here we evaluate the efficacy of the software tool Recipe Optimization for Deposition and Etching (RODEo) against standard industry methods at determining the process parameters of a high density O2 plasma system with three case studies. In the first case study, we demonstrate that RODEo is able to predict etch rates more accurately than a regression model based on a full factorial design while using 40% fewer experiments. In the second case study, we demonstrate that RODEo performs significantly better than a full factorial DOE at identifying optimal process conditions to maximize anisotropy. In the third case study we experimentally show how RODEo maximizes etch rates while using half the experiments of a full factorial DOE method. With enhanced process predictions and more accurate maps of the process space, RODEo reduces the number of experiments required to develop and optimize plasma processes.
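RODEo itself is not public, but the general style of Bayesian experiment selection it represents can be sketched: fit a probabilistic surrogate to the etch responses measured so far, then choose the next recipe by an acquisition function such as expected improvement. The recipe variables, ranges, and mock data below are assumptions for illustration only:

    # A generic Bayesian-optimization sketch (not the RODEo tool): a Gaussian
    # process surrogate over [RF power, pressure] plus expected improvement.
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    rng = np.random.default_rng(0)
    X = rng.uniform([50, 5], [500, 50], size=(8, 2))          # 8 recipes run so far
    y = 0.4 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0, 5, 8)   # mock etch rates [nm/min]

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)

    cand = rng.uniform([50, 5], [500, 50], size=(1000, 2))    # candidate recipes
    mu, sigma = gp.predict(cand, return_std=True)
    z = (mu - y.max()) / np.maximum(sigma, 1e-9)
    ei = (mu - y.max()) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    print("next recipe to run:", cand[np.argmax(ei)])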
2012-01-01
Background Efficient, robust, and accurate genotype imputation algorithms make large-scale application of genomic selection cost effective. An algorithm that imputes alleles or allele probabilities for all animals in the pedigree and for all genotyped single nucleotide polymorphisms (SNP) provides a framework to combine all pedigree, genomic, and phenotypic information into a single-stage genomic evaluation. Methods An algorithm was developed for imputation of genotypes in pedigreed populations that allows imputation for completely ungenotyped animals and for low-density genotyped animals, accommodates a wide variety of pedigree structures for genotyped animals, imputes unmapped SNP, and works for large datasets. The method involves simple phasing rules, long-range phasing and haplotype library imputation and segregation analysis. Results Imputation accuracy was high and computational cost was feasible for datasets with pedigrees of up to 25 000 animals. The resulting single-stage genomic evaluation increased the accuracy of estimated genomic breeding values compared to a scenario in which phenotypes on relatives that were not genotyped were ignored. Conclusions The developed imputation algorithm and software and the resulting single-stage genomic evaluation method provide powerful new ways to exploit imputation and to obtain more accurate genetic evaluations. PMID:22462519
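Of the components listed in the Methods, only the first ("simple phasing rules") lends itself to a few-line sketch; everything beyond it (long-range phasing, haplotype libraries, segregation analysis) is the paper's actual contribution. A hypothetical minimal rule:

    # A sketch of the simplest Mendelian imputation rule only. Genotypes are
    # coded 0/1/2 = copies of the alternate allele; None = not inferable.
    def impute_from_parents(sire, dam):
        """Offspring genotype inferable from parental genotypes alone."""
        if sire in (0, 2) and dam in (0, 2):
            return sire // 2 + dam // 2   # both transmitted alleles are certain
        return None                       # requires haplotype/segregation analysis

    print(impute_from_parents(0, 2))      # -> 1: offspring must be heterozygous
    print(impute_from_parents(2, 1))      # -> None: dam may transmit either allele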
A flexible new method for 3D measurement based on multi-view image sequences
NASA Astrophysics Data System (ADS)
Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu
2016-11-01
Three-dimensional measurement is a base component of reverse engineering. This paper develops a new flexible and fast optical measurement method based on multi-view geometry theory. First, feature points are detected and matched with an improved SIFT algorithm; the Hellinger kernel is used to estimate the histogram distance instead of the traditional Euclidean distance, which makes the matching robust to weakly textured images. Then a new three-principle filter for the essential-matrix calculation is designed, and the essential matrix is computed using an improved a contrario RANSAC filtering method. A point cloud for one view is constructed accurately from two view images. After this, the overlapping features are used to eliminate the accumulated errors introduced as views are added, which improves the precision of the camera positions. Finally, the method is verified in a dental restoration CAD/CAM application; experimental results show that the proposed method is fast, accurate and flexible for tooth 3D measurement.
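Matching SIFT descriptors under the Hellinger kernel is commonly implemented as "RootSIFT": L1-normalize each descriptor and take its element-wise square root, after which ordinary Euclidean matching is equivalent to Hellinger-distance matching. A sketch with OpenCV (file names hypothetical; requires a build that ships SIFT, e.g. opencv-python >= 4.4):

    import cv2
    import numpy as np

    def root_sift(gray):
        sift = cv2.SIFT_create()
        kps, desc = sift.detectAndCompute(gray, None)
        desc /= (desc.sum(axis=1, keepdims=True) + 1e-7)  # L1 normalize
        return kps, np.sqrt(desc)                         # Hellinger mapping

    kp1, d1 = root_sift(cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE))
    kp2, d2 = root_sift(cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE))

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < 0.8 * n.distance]             # Lowe ratio test
    print(len(good), "tentative correspondences")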
Computation of infrared cooling rates in the water vapor bands
NASA Technical Reports Server (NTRS)
Chou, M. D.; Arking, A.
1978-01-01
A fast but accurate method for calculating the infrared radiative terms due to water vapor has been developed. It makes use of the far wing approximation to scale transmission along an inhomogeneous path to an equivalent homogeneous path. Rather than using standard conditions for scaling, the reference temperatures and pressures are chosen in this study to correspond to the regions where cooling is most significant. This greatly increased the accuracy of the new method. Compared to line by line calculations, the new method has errors up to 4% of the maximum cooling rate, while a commonly used method based upon the Goody band model (Rodgers and Walshaw, 1966) introduces errors up to 11%. The effect of temperature dependence of transmittance has also been evaluated; the cooling rate errors range up to 11% when the temperature dependence is ignored. In addition to being more accurate, the new method is much faster than those based upon the Goody band model.
Blancett, Candace D; Fetterer, David P; Koistinen, Keith A; Morazzani, Elaine M; Monninger, Mitchell K; Piper, Ashley E; Kuehl, Kathleen A; Kearney, Brian J; Norris, Sarah L; Rossi, Cynthia A; Glass, Pamela J; Sun, Mei G
2017-10-01
A method for accurate quantitation of virus particles has long been sought, but a perfect method still eludes the scientific community. Electron Microscopy (EM) quantitation is a valuable technique because it provides direct morphology information and counts of all viral particles, whether or not they are infectious. In the past, EM negative stain quantitation methods have been cited as inaccurate, non-reproducible, and with detection limits that were too high to be useful. To improve accuracy and reproducibility, we have developed a method termed Scanning Transmission Electron Microscopy - Virus Quantitation (STEM-VQ), which simplifies sample preparation and uses a high throughput STEM detector in a Scanning Electron Microscope (SEM) coupled with commercially available software. In this paper, we demonstrate STEM-VQ with an alphavirus stock preparation to present the method's accuracy and reproducibility, including a comparison of STEM-VQ to viral plaque assay and the ViroCyt Virus Counter. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Measurement of bedload transport in sand-bed rivers: a look at two indirect sampling methods
Holmes, Robert R.; Gray, John R.; Laronne, Jonathan B.; Marr, Jeffrey D.G.
2010-01-01
Sand-bed rivers present unique challenges to accurate measurement of the bedload transport rate using traditional direct sampling methods such as direct traps (for example the Helley-Smith bedload sampler). The two major issues are: (1) oversampling of sand transport caused by “mining” of sand due to the flow disturbance induced by the presence of the sampler, and (2) clogging of the mesh bag with sand particles, which reduces the hydraulic efficiency of the sampler. Indirect measurement methods hold promise in that, unlike direct methods, they introduce no transport-altering flow disturbance near the bed. The bedform velocimetry method uses a measure of the bedform geometry and the speed of bedform translation to estimate the bedload transport through mass balance. The method is readily applied for the estimation of bedload transport in large sand-bed rivers so long as prominent bedforms are present and the streamflow discharge is steady for long enough to provide sufficient bedform translation between successive bathymetric data sets. Bedform velocimetry in small sand-bed rivers is often problematic due to rapid variation within the hydrograph. The bottom-track bias feature of the acoustic Doppler current profiler (ADCP) has been utilized to accurately estimate the virtual velocities of sand-bed rivers. Coupling measurement of the virtual velocity with an accurate determination of the active depth of streambed sediment movement is another way to measure bedload transport, which will be termed the “virtual velocity” method. Much research remains to develop methods and determine the accuracy of the virtual velocity method in small sand-bed rivers.
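For idealized triangular dunes, the bedform-velocimetry mass balance reduces to a one-line relation between dune height, dune celerity, and unit bedload rate; the shape factor of 0.5 and the porosity below are conventional assumed values, not measurements from this paper:

    # A sketch of the dune-tracking mass balance for triangular bedforms.
    def bedload_rate(height_m, celerity_m_s, porosity=0.4, shape=0.5):
        """Volumetric bedload transport per unit width [m^2/s]."""
        return (1.0 - porosity) * shape * height_m * celerity_m_s

    # Illustrative: 0.5 m high dunes translating about 1 m per hour
    qb = bedload_rate(0.5, 1.0 / 3600.0)
    print(f"qb = {qb:.2e} m^2/s per unit width")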
Porru, Marcella; Özkan, Leyla
2017-05-24
This paper develops a new simulation model for crystal size distribution dynamics in industrial batch crystallization. The work is motivated by the necessity of accurate prediction models for online monitoring purposes. The proposed numerical scheme is able to handle growth, nucleation, and agglomeration kinetics by means of the population balance equation and the method of characteristics. The former offers a detailed description of the solid phase evolution, while the latter provides an accurate and efficient numerical solution. In particular, the accuracy of the prediction of the agglomeration kinetics, which cannot be ignored in industrial crystallization, has been assessed by comparing it with solutions in the literature. The efficiency of the solution has been tested on a simulation of a seeded flash cooling batch process. Since the proposed numerical scheme can accurately simulate the system behavior more than a hundred times faster than the batch duration, it is suitable for online applications such as process monitoring tools based on state estimators.
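The appeal of the method of characteristics is easiest to see in the growth-only case (nucleation and agglomeration, which the paper does handle, are omitted here): each characteristic carries a fixed number density while its size coordinate evolves as dL/dt = G(t), so the seed distribution is transported without numerical diffusion. A sketch under those simplifying assumptions:

    import numpy as np

    L = np.linspace(1e-6, 100e-6, 400)                    # crystal sizes [m]
    n = np.exp(-0.5 * ((L - 30e-6) / 5e-6) ** 2)          # seed density (arbitrary units)

    def G(t):
        """Hypothetical size-independent growth rate [m/s], decaying as
        supersaturation is consumed."""
        return 1e-8 * np.exp(-t / 600.0)

    dt, t = 1.0, 0.0
    for _ in range(1800):                                 # 30 min of batch time
        L = L + dt * G(t)                                 # move the characteristics
        t += dt
    # n is unchanged along characteristics; only the size grid has shifted
    print(f"distribution mode moved to {L[np.argmax(n)] * 1e6:.1f} um")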
Error Reduction Program. [combustor performance evaluation codes
NASA Technical Reports Server (NTRS)
Syed, S. A.; Chiappetta, L. M.; Gosman, A. D.
1985-01-01
The details of a study to select, incorporate and evaluate the best available finite difference scheme to reduce numerical error in combustor performance evaluation codes are described. The combustor performance computer programs chosen were the two-dimensional and three-dimensional versions of Pratt & Whitney's TEACH code. The criteria used to select schemes required that the difference equations mirror the properties of the governing differential equation, be more accurate than the current hybrid difference scheme, be stable and economical, be compatible with TEACH codes, use only modest amounts of additional storage, and be relatively simple. The methods of assessment used in the selection process consisted of examination of the difference equation, evaluation of the properties of the coefficient matrix, Taylor series analysis, and performance on model problems. Five schemes from the literature and three schemes developed during the course of the study were evaluated. This effort resulted in the incorporation of a scheme in 3D-TEACH which is usually more accurate than the hybrid differencing method and never less accurate.
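For context on the baseline being replaced, the hybrid scheme's well-known weakness is that it reverts to first-order upwinding whenever the cell Peclet number exceeds 2. A sketch of the east-face coefficient in Patankar's form (stated here from the standard finite-volume literature, not from the report itself):

    # Hybrid differencing: a_E = max(-F, D - F/2, 0), where F is the convective
    # mass flux through the face and D the diffusive conductance.
    def hybrid_coefficient_east(F, D):
        return max(-F, D - 0.5 * F, 0.0)

    for peclet in (0.5, 2.0, 10.0):
        print(peclet, hybrid_coefficient_east(peclet * 1.0, 1.0))
    # |Pe| < 2 behaves like central differencing; beyond that, pure upwinding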
Operative record using intraoperative digital data in neurosurgery.
Houkin, K; Kuroda, S; Abe, H
2000-01-01
The purpose of this study was to develop a new method for more efficient and accurate operative records in neurosurgery using intra-operative digital data, covering both macroscopic procedures and microscopic procedures under an operating microscope. Macroscopic procedures were recorded using a digital camera, and microscopic procedures were recorded using a micro digital camera attached to an operating microscope. Operative records were then digitally compiled and filed in a computer using image-retouch and database software. The time necessary for editing the digital data and completing the record was less than 30 minutes. Once these operative records are digitally filed, they are easily transferred and used as a database. Using digital operative records along with digital photography, neurosurgeons can document their procedures more accurately and efficiently than by the conventional method (handwriting). A complete digital operative record is not only accurate but also time saving. Construction of a database, data transfer and desktop publishing can all be achieved using the intra-operative data, including intra-operative photographs.
Measurement of compressed breast thickness by optical stereoscopic photogrammetry.
Tyson, Albert H; Mawdsley, Gordon E; Yaffe, Martin J
2009-02-01
The determination of volumetric breast density (VBD) from mammograms requires accurate knowledge of the thickness of the compressed breast. In attempting to accurately determine VBD from images obtained on conventional mammography systems, the authors found that the thickness reported by a number of mammography systems in the field varied by as much as 15 mm when compressing the same breast or phantom. In order to evaluate the behavior of mammographic compression systems and to be able to predict the thickness at different locations in the breast on patients, they have developed a method for measuring the local thickness of the breast at all points of contact with the compression paddle using optical stereoscopic photogrammetry. On both flat (solid) and compressible phantoms, the measurements were accurate to better than 1 mm with a precision of 0.2 mm. In a pilot study, this method was used to measure thickness on 108 volunteers who were undergoing mammography examination. This measurement tool will allow us to characterize paddle surface deformations, deflections and calibration offsets for mammographic units.
Sun, Jianwei; Remsing, Richard C; Zhang, Yubo; Sun, Zhaoru; Ruzsinszky, Adrienn; Peng, Haowei; Yang, Zenghui; Paul, Arpita; Waghmare, Umesh; Wu, Xifan; Klein, Michael L; Perdew, John P
2016-09-01
One atom or molecule binds to another through various types of bond, the strengths of which range from several meV to several eV. Although some computational methods can provide accurate descriptions of all bond types, those methods are not efficient enough for many studies (for example, large systems, ab initio molecular dynamics and high-throughput searches for functional materials). Here, we show that the recently developed non-empirical strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) within the density functional theory framework predicts accurate geometries and energies of diversely bonded molecules and materials (including covalent, metallic, ionic, hydrogen and van der Waals bonds). This represents a significant improvement at comparable efficiency over its predecessors, the GGAs that currently dominate materials computation. Often, SCAN matches or improves on the accuracy of a computationally expensive hybrid functional, at almost-GGA cost. SCAN is therefore expected to have a broad impact on chemistry and materials science.
Wyatt, Mark F; Stein, Bridget K; Brenton, A Gareth
2006-05-01
Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOFMS) has been shown to be an effective technique for the characterization of organometallic, coordination, and highly conjugated compounds. The preferred matrix is 2-[(2E)-3-(4-tert-butylphenyl)-2-methylprop-2-enylidene]malononitrile (DCTB), with radical ions observed. However, MALDI-TOFMS is generally not favored for accurate mass measurement. A specific method had to be developed for such compounds to assure the quality of our accurate mass results. Therefore, in this preliminary study, two methods of data acquisition, and both even-electron (EE+) ion and odd-electron (OE+.) radical ion mass calibration standards, have been investigated to establish the basic measurement technique. The benefit of this technique is demonstrated for a copper compound for which ions were observed by MALDI, but not by electrospray (ESI) or liquid secondary ion mass spectrometry (LSIMS); a mean mass accuracy error of -1.2 ppm was obtained.
Semiautomated Segmentation of Polycystic Kidneys in T2-Weighted MR Images.
Kline, Timothy L; Edwards, Marie E; Korfiatis, Panagiotis; Akkus, Zeynettin; Torres, Vicente E; Erickson, Bradley J
2016-09-01
The objective of the present study is to develop and validate a fast, accurate, and reproducible method that will increase and improve institutional measurement of total kidney volume and thereby avoid the higher costs, increased operator processing time, and inherent subjectivity associated with manual contour tracing. We developed a semiautomated segmentation approach, known as the minimal interaction rapid organ segmentation (MIROS) method, which results in human interaction during measurement of total kidney volume on MR images being reduced to a few minutes. This software tool automatically steps through slices and requires rough definition of kidney boundaries supplied by the user. The approach was verified on T2-weighted MR images of 40 patients with autosomal dominant polycystic kidney disease of varying degrees of severity. The MIROS approach required less than 5 minutes of user interaction in all cases. When compared with the ground-truth reference standard, MIROS showed no significant bias and had low variability (mean ± 2 SD, 0.19% ± 6.96%). The MIROS method will greatly facilitate future research studies in which accurate and reproducible measurements of cystic organ volumes are needed.
Directional Histogram Ratio at Random Probes: A Local Thresholding Criterion for Capillary Images
Lu, Na; Silva, Jharon; Gu, Yu; Gerber, Scott; Wu, Hulin; Gelbard, Harris; Dewhurst, Stephen; Miao, Hongyu
2013-01-01
With the development of micron-scale imaging techniques, capillaries can be conveniently visualized using methods such as two-photon and whole mount microscopy. However, the presence of background staining, leaky vessels and the diffusion of small fluorescent molecules can lead to significant complexity in image analysis and loss of information necessary to accurately quantify vascular metrics. One solution to this problem is the development of accurate thresholding algorithms that reliably distinguish blood vessels from surrounding tissue. Although various thresholding algorithms have been proposed, our results suggest that without appropriate pre- or post-processing, the existing approaches may fail to obtain satisfactory results for capillary images that include areas of contamination. In this study, we propose a novel local thresholding algorithm, called directional histogram ratio at random probes (DHR-RP). This method explicitly considers the geometric features of tube-like objects in conducting image binarization, and has a reliable performance in distinguishing small vessels from either clean or contaminated background. Experimental and simulation studies suggest that our DHR-RP algorithm is superior over existing thresholding methods. PMID:23525856
Development of a Global Multilayered Cloud Retrieval System
NASA Technical Reports Server (NTRS)
Huang, J.; Minnis, P.; Lin, B.; Yi, Y.; Ayers, J. K.; Khaiyer, M. M.; Arduini, R.; Fan, T.-F
2004-01-01
A more rigorous multilayered cloud retrieval system (MCRS) has been developed to improve the determination of high cloud properties in multilayered clouds. The MCRS attempts a more realistic interpretation of the radiance field than earlier methods because it explicitly resolves the radiative transfer that would produce the observed radiances. A two-layer cloud model was used to simulate multilayered cloud radiative characteristics. Despite the use of a simplified two-layer cloud reflectance parameterization, the MCRS clearly produced a more accurate retrieval of ice water path than the simple differencing techniques used in the past. More satellite data and ground observations need to be used to test the MCRS. The MCRS methods are quite appropriate for interpreting the radiances when the high cloud has a relatively large optical depth (tau_I greater than 2). For thinner ice clouds, a more accurate retrieval might be possible using infrared methods. Selection of an ice cloud retrieval and a variety of other issues must be explored before a complete global application of this technique can be implemented. Nevertheless, the initial results look promising.
NASA Technical Reports Server (NTRS)
Roth, Don J.
1998-01-01
NASA Lewis Research Center's Life Prediction Branch, in partnership with Sonix, Inc., and Cleveland State University, recently advanced the development of, refined, and commercialized an advanced nondestructive evaluation (NDE) inspection method entitled the Single Transducer Thickness-Independent Ultrasonic Imaging Method. Selected by R&D Magazine as one of the 100 most technologically significant new products of 1996, the method uses a single transducer to eliminate the superimposing effects of thickness variation in the ultrasonic images of materials. As a result, any variation seen in the image is due solely to microstructural variation. This nondestructive method precisely and accurately characterizes material gradients (pore fraction, density, or chemical) that affect the uniformity of a material's physical performance (mechanical, thermal, or electrical). Advantages of the method over conventional ultrasonic imaging include (1) elimination of machining costs (for precision thickness control) during the quality control stages of material processing and development and (2) elimination of labor costs and subjectivity involved in further image processing and image interpretation. At NASA Lewis, the method has been used primarily for accurate inspections of high temperature structural materials including monolithic ceramics, metal matrix composites, and polymer matrix composites. Data were published this year for platelike samples, and current research is focusing on applying the method to tubular components. The initial publicity regarding the development of the method generated 150 requests for further information from a wide variety of institutions and individuals including the Federal Bureau of Investigation (FBI), Lockheed Martin Corporation, Rockwell International, Hewlett Packard Company, and Procter & Gamble Company. In addition, NASA has been solicited by the 3M Company and Allison Abrasives to use this method to inspect composite materials that are manufactured by these companies.
The goal of this Funding Opportunity Announcement (FOA) is to advance surveillance science by supporting the development of new and innovative tools and methods for more efficient, detailed, timely, and accurate data collection by cancer registries. Specifically, the FOA seeks applications for projects to develop, adapt, apply, scale up, and validate tools and methods to improve the collection and integration of cancer registry data and to expand the data items collected. Eligible applicants are population-based central cancer registries (a partnership must involve at least two different registries).
Application of artificial neural networks in nonlinear analysis of trusses
NASA Technical Reports Server (NTRS)
Alam, J.; Berke, L.
1991-01-01
A method is developed to incorporate a neural network model of material response, based upon the backpropagation algorithm, into nonlinear elastic truss analysis using the initial stiffness method. Different network configurations are developed to assess the accuracy of neural network modeling of nonlinear material response. In addition, a scheme based upon linear interpolation of material data is implemented for comparison purposes. It is found that the neural network approach can yield very accurate results if used with care. For the type of problems under consideration, it offers a viable alternative to other material modeling methods.
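A minimal modern sketch of the same idea, with scikit-learn's multilayer perceptron standing in for the backpropagation network and a hypothetical Ramberg-Osgood-like curve standing in for the material data:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    strain = np.linspace(0.0, 0.02, 200).reshape(-1, 1)
    stress = 200e3 * strain.ravel() / (1 + (strain.ravel() / 0.005) ** 4) ** 0.25

    net = MLPRegressor(hidden_layer_sizes=(20, 20), activation="tanh",
                       solver="lbfgs", max_iter=5000)
    net.fit(strain, stress / stress.max())        # scale target to O(1)

    # Query the trained material model inside an analysis loop
    eps = np.array([[0.004], [0.012]])
    print(net.predict(eps) * stress.max())        # predicted stresses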
Development of a nonlinear vortex method
NASA Technical Reports Server (NTRS)
Kandil, O. A.
1982-01-01
A steady and unsteady Nonlinear Hybrid Vortex (NHV) method is developed for low-aspect-ratio wings at large angles of attack. The method uses vortex panels with a first-order vorticity distribution (equivalent to a second-order doublet distribution) to calculate the induced velocity in the near field using closed-form expressions. In the far field, the distributed vorticity is reduced to concentrated vortex lines and the simpler Biot-Savart law is employed. The method is applied to rectangular wings in steady and unsteady flows without any restriction on the order of magnitude of the disturbances in the flow field. The numerical results show that the method accurately predicts the distributed aerodynamic loads and that it is of acceptable computational efficiency.
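The far-field building block mentioned above, the velocity induced by a straight concentrated vortex segment via the Biot-Savart law, has a standard closed form (the Katz & Plotkin expression); a sketch:

    import numpy as np

    def segment_velocity(P, A, B, gamma, eps=1e-9):
        """Velocity induced at P by a vortex segment A->B of circulation gamma."""
        r1, r2 = P - A, P - B
        r1xr2 = np.cross(r1, r2)
        denom = np.dot(r1xr2, r1xr2)
        if denom < eps:                       # P lies (nearly) on the segment axis
            return np.zeros(3)
        r0 = B - A
        k = np.dot(r0, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2))
        return gamma / (4.0 * np.pi) * k * r1xr2 / denom

    v = segment_velocity(np.array([0.0, 1.0, 0.0]),     # evaluation point
                         np.array([-0.5, 0.0, 0.0]),    # segment start
                         np.array([0.5, 0.0, 0.0]),     # segment end
                         gamma=1.0)
    print(v)   # finite-segment counterpart of the 2D result gamma/(2*pi*h)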
Value and Methods for Molecular Subtyping of Bacteria
NASA Astrophysics Data System (ADS)
Moorman, Mark; Pruett, Payton; Weidman, Martin
Tracking sources of microbial contaminants has been a concern since the early days of commercial food processing; however, recent advances in the development of molecular subtyping methods have provided tools that allow more rapid and highly accurate determinations of these sources. Only individuals with an understanding of the molecular subtyping methods, and the epidemiological techniques used, can evaluate the reliability of a link between a food-manufacturing plant, a food, and a foodborne disease outbreak.
Blomqvist, Maria; Borén, Jan; Zetterberg, Henrik; Blennow, Kaj; Månsson, Jan-Eric; Ståhlman, Marcus
2017-07-01
Sulfatides (STs) are a group of glycosphingolipids that are highly expressed in brain. Due to their importance for normal brain function and their potential involvement in neurological diseases, development of accurate and sensitive methods for their determination is needed. Here we describe a high-throughput oriented and quantitative method for the determination of STs in cerebrospinal fluid (CSF). The STs were extracted using a fully automated liquid/liquid extraction method and quantified using ultra-performance liquid chromatography coupled to tandem mass spectrometry. With the high sensitivity of the developed method, quantification of 20 ST species from only 100 μl of CSF was performed. Validation of the method showed that the STs were extracted with high recovery (90%) and could be determined with low inter- and intra-day variation. Our method was applied to a patient cohort of subjects with an Alzheimer's disease biomarker profile. Although the total ST levels were unaltered compared with an age-matched control group, we show that the ratio of hydroxylated/nonhydroxylated STs was increased in the patient cohort. In conclusion, we believe that the fast, sensitive, and accurate method described in this study is a powerful new tool for the determination of STs in clinical as well as preclinical settings. Copyright © 2017 by the American Society for Biochemistry and Molecular Biology, Inc.
A fast method to compute Three-Dimensional Infrared Radiative Transfer in non scattering medium
NASA Astrophysics Data System (ADS)
Makke, Laurent; Musson-Genon, Luc; Carissimo, Bertrand
2014-05-01
The atmospheric radiation field has seen the development of ever more accurate and faster methods to account for absorption in participating media. Radiative fog forms under clear-sky conditions due to significant nighttime cooling, so scattering is left out. Modelling fog formation requires methods accurate enough to compute cooling rates. Thanks to high-performance computing, a multi-spectral approach to solving the radiative transfer equation (RTE) is most often used. Nevertheless, coupling three-dimensional radiative transfer with fluid dynamics is very detrimental to the computational cost. To reduce the time spent in radiation calculations, the following method uses analytical absorption functions fitted by Sasamori (1968) to Yamamoto's charts (Yamamoto, 1956) to compute a local linear absorption coefficient. By averaging radiative properties, this method eliminates the spectral integration. For an isothermal atmosphere, analytical calculations lead to an explicit formula relating the emissivity functions to the linear absorption coefficient. In the case of the cooling-to-space approximation, this analytical expression gives very accurate results compared with a correlated k-distribution. For non-homogeneous paths, we propose a two-step algorithm: one-dimensional radiative quantities and the linear absorption coefficient are computed by a two-flux method; then the three-dimensional RTE under the grey-medium assumption is solved with the DOM. Comparisons with measurements of radiative quantities from the ParisFOG field campaign (2006) show the capability of this method to handle strong vertical variations of pressure, temperature and gas concentrations.
Ota, Hiroyuki; Lim, Tae-Kyu; Tanaka, Tsuyoshi; Yoshino, Tomoko; Harada, Manabu; Matsunaga, Tadashi
2006-09-18
A novel, automated system, PNE-1080, equipped with eight automated pestle units and a spectrophotometer was developed for genomic DNA extraction from maize using aminosilane-modified bacterial magnetic particles (BMPs). The use of aminosilane-modified BMPs allowed highly accurate DNA recovery. The (A260 - A320):(A280 - A320) ratio of the extracted DNA was 1.9 +/- 0.1. The DNA quality was sufficiently pure for PCR analysis. The PNE-1080 offered rapid assay completion (30 min) with high accuracy. Furthermore, the results of real-time PCR confirmed that our proposed method permitted the accurate determination of genetically modified DNA composition and correlated well with results obtained by conventional cetyltrimethylammonium bromide (CTAB)-based methods.
Fernández-Carrobles, M. Milagro; Tadeo, Irene; Bueno, Gloria; Noguera, Rosa; Déniz, Oscar; Salido, Jesús; García-Rojo, Marcial
2013-01-01
Given that angiogenesis and lymphangiogenesis are strongly related to prognosis in neoplastic and other pathologies, and that the many existing methods provide differing results, we aim to construct a morphometric tool that measures different aspects of the shape and size of vascular vessels in a complete and accurate way. The tool presented is based on vessel closing, an essential property for properly characterizing the size and shape of vascular and lymphatic vessels. The method is fast and accurate, improving on existing tools for angiogenesis analysis. The tool also improves the accuracy of vascular density measurements, since the set of endothelial cells forming a vessel is considered as a single object. PMID:24489494
Development of Improved Surface Integral Methods for Jet Aeroacoustic Predictions
NASA Technical Reports Server (NTRS)
Pilon, Anthony R.; Lyrintzis, Anastasios S.
1997-01-01
The accurate prediction of aerodynamically generated noise has become an important goal over the past decade. Aeroacoustics must now be an integral part of the aircraft design process. The direct calculation of aerodynamically generated noise with CFD-like algorithms is plausible. However, large computer time and memory requirements often make these predictions impractical. It is therefore necessary to separate the aeroacoustics problem into two parts, one in which aerodynamic sound sources are determined, and another in which the propagating sound is calculated. This idea is applied in acoustic analogy methods. However, in the acoustic analogy, the determination of far-field sound requires the solution of a volume integral. This volume integration again leads to impractical computer requirements. An alternative to the volume integrations can be found in the Kirchhoff method. In this method, Green's theorem for the linear wave equation is used to determine sound propagation based on quantities on a surface surrounding the source region. The change from volume to surface integrals represents a tremendous savings in the computer resources required for an accurate prediction. This work is concerned with the development of enhancements of the Kirchhoff method for use in a wide variety of aeroacoustics problems. This enhanced method, the modified Kirchhoff method, is shown to be a Green's function solution of Lighthill's equation. It is also shown rigorously to be identical to the methods of Ffowcs Williams and Hawkings. This allows for development of versatile computer codes which can easily alternate between the different Kirchhoff and Ffowcs Williams-Hawkings formulations, using the most appropriate method for the problem at hand. The modified Kirchhoff method is developed primarily for use in jet aeroacoustics predictions. Applications of the method are shown for two dimensional and three dimensional jet flows. Additionally, the enhancements are generalized so that they may be used in any aeroacoustics problem.
NASA Technical Reports Server (NTRS)
Roth, Don J.; Hendricks, J. Lynne; Whalen, Mike F.; Bodis, James R.; Martin, Katherine
1996-01-01
This article describes the commercial implementation of ultrasonic velocity imaging methods developed and refined at NASA Lewis Research Center on the Sonix c-scan inspection system. Two velocity imaging methods were implemented: thickness-based and non-thickness-based reflector plate methods. The article demonstrates capabilities of the commercial implementation and gives the detailed operating procedures required for Sonix customers to achieve optimum velocity imaging results. This commercial implementation of velocity imaging provides a 100x speed increase in scanning and processing over the lab-based methods developed at LeRC. The significance of this cooperative effort is that the aerospace and other materials development-intensive industries which use extensive ultrasonic inspection for process control and failure analysis will now have an alternative, highly accurate imaging method commercially available.
Improved perturbation method for gadolinia worth calculation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, R.T.; Congdon, S.P.
1986-01-01
Gadolinia is utilized in light water power reactors as a burnable poison to compensate for excess reactivity. Good gadolinia worth estimation is useful for evaluating fuel bundle designs, core operating strategies, and fuel cycle economics. The authors have developed an improved perturbation method, based on exact perturbation theory, for gadolinia worth calculations in fuel bundles. The method predicts gadolinia worth much more accurately than the first-order perturbation method (commonly used to estimate nuclide worths) for bundles containing fresh or partly burned gadolinia.
NASA Astrophysics Data System (ADS)
Hu, Jianqiang; Liu, Ahdi; Zhou, Chu; Zhang, Xiaohui; Wang, Mingyuan; Zhang, Jin; Feng, Xi; Li, Hong; Xie, Jinlin; Liu, Wandong; Yu, Changxuan
2017-08-01
A new integrated technique for fast and accurate measurement of quasi-optics, especially for the microwave/millimeter-wave diagnostic systems of fusion plasma, has been developed. Using a LabVIEW-based comprehensive scanning system, the measurement is not only automatic but also fast and accurate, which helps to eliminate the effects of temperature drift and standing waves/multi-reflections. With a MATLAB-based asymmetric two-dimensional Gaussian fitting method, all the desired parameters of the microwave beam can be obtained. This technique can be used in the design and testing of microwave diagnostic systems such as reflectometers and the electron cyclotron emission imaging diagnostic systems of the Experimental Advanced Superconducting Tokamak.
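The fitting step can be sketched in a few lines (the paper's implementation is MATLAB-based; this SciPy version with hypothetical synthetic data shows the asymmetric model, where sigma_x != sigma_y captures the beam asymmetry and (x0, y0) locates the beam axis):

    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2d(xy, amp, x0, y0, sx, sy, offset):
        x, y = xy
        return (amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                               + (y - y0) ** 2 / (2 * sy ** 2))) + offset).ravel()

    x, y = np.meshgrid(np.linspace(-30, 30, 61), np.linspace(-30, 30, 61))
    data = gauss2d((x, y), 1.0, 2.0, -1.0, 8.0, 5.0, 0.05)      # synthetic scan
    data = data + np.random.default_rng(1).normal(0, 0.01, data.shape)

    popt, _ = curve_fit(gauss2d, (x, y), data,
                        p0=(1.0, 0.0, 0.0, 10.0, 10.0, 0.0))
    print("beam center and widths:", popt[1:5])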
NASA Technical Reports Server (NTRS)
Kochevar, H. J.
1972-01-01
A new technique has been developed to accurately measure the G/T of a small aperture antenna using geostationary satellites and the well-established radio star method. A large aperture antenna, whose G/T can be measured accurately using a radio star of known power density, provides the reference. The carrier-to-noise ratios (C/N) of both the large and small aperture antennas are then measured using an Applications Technology Satellite (ATS). After normalizing the two C/N ratios to the large antenna's system noise temperature, the G/T, or the gain G, of the small aperture antenna can be determined.
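The normalization step amounts to simple decibel arithmetic: once both antennas view the same satellite carrier under identical downlink conditions and their C/N values are referred to a common noise bandwidth, the C/N difference carries the G/T difference directly. A sketch with hypothetical numbers:

    def small_antenna_gt(gt_ref_db, cn_ref_db, cn_small_db):
        """G/T [dB/K] of the small antenna, transferred from the calibrated
        reference antenna; assumes identical carrier and noise bandwidth."""
        return gt_ref_db + (cn_small_db - cn_ref_db)

    print(small_antenna_gt(gt_ref_db=35.0, cn_ref_db=22.4, cn_small_db=6.1))
    # -> 18.7 dB/K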
Scribbans, T D; Berg, K; Narazaki, K; Janssen, I; Gurd, B J
2015-09-01
There is currently little information regarding the ability of metabolic prediction equations to accurately predict oxygen uptake and exercise intensity from heart rate (HR) during intermittent sport. The purpose of the present study was to develop and cross-validate equations for accurately predicting oxygen cost (VO2) and energy expenditure from HR during intermittent sport participation. Eleven healthy adult males (19.9 ± 1.1 yrs) were recruited to establish the relationship between %VO2peak and %HRmax during low-intensity steady-state endurance (END), moderate-intensity interval (MOD) and high-intensity interval (HI) exercise, as performed on a cycle ergometer. Three equations (END, MOD, and HI) for predicting %VO2peak from %HRmax were developed. HR and VO2 were directly measured during basketball games (6 males, 20.8 ± 1.0 yrs; 6 females, 20.0 ± 1.3 yrs) and volleyball drills (12 females, 20.8 ± 1.0 yrs). Comparisons were made between measured and predicted VO2 and energy expenditure using the three equations developed and two previously published equations. The END and MOD equations accurately predicted VO2 and energy expenditure, while the HI equation underestimated, and the previously published equations systematically overestimated, VO2 and energy expenditure. Intermittent sport VO2 and energy expenditure can be accurately predicted from heart rate data using either the END (%VO2peak = %HRmax x 1.008 - 17.17) or MOD (%VO2peak = %HRmax x 1.2 - 32) equations. These two simple equations provide an accessible and cost-effective method for accurate estimation of exercise intensity and energy expenditure during intermittent sport.
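The two validated equations reported above translate directly into code (inputs and outputs in percent):

    def pct_vo2peak_end(pct_hrmax):
        return pct_hrmax * 1.008 - 17.17     # END equation from the study

    def pct_vo2peak_mod(pct_hrmax):
        return pct_hrmax * 1.2 - 32.0        # MOD equation from the study

    # Example: a player averaging 80% of HRmax during play
    print(pct_vo2peak_end(80.0))             # -> 63.5 %VO2peak
    print(pct_vo2peak_mod(80.0))             # -> 64.0 %VO2peak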
Arashida, Naoko; Nishimoto, Rumi; Harada, Masashi; Shimbo, Kazutaka; Yamada, Naoyuki
2017-02-15
Amino acids and their related metabolites play important roles in various physiological processes and have consequently become biomarkers for diseases. However, accurate quantification methods have only been established for major compounds, such as amino acids and a limited number of target metabolites. We previously reported a highly sensitive high-throughput method for the simultaneous quantification of amines using 3-aminopyridyl-N-succinimidyl carbamate as a derivatization reagent combined with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Herein, we report the successful development of a practical and accurate LC-MS/MS method to analyze low concentrations of 40 physiological amines in 19 min. Thirty-five of these amines showed good linearity, limits of quantification, accuracy, precision, and recovery characteristics in plasma, with scheduled selected reaction monitoring acquisitions. Plasma samples from 10 healthy volunteers were evaluated using our newly developed method. The results revealed that 27 amines were detected in one of the samples, and that 24 of these compounds could be quantified. Notably, this new method successfully quantified metabolites with high accuracy across three orders of magnitude, with lowest and highest averaged concentrations of 31.7 nM (for spermine) and 18.3 μM (for α-aminobutyric acid), respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
Gonzalez, Aroa Garcia; Taraba, Lukáš; Hraníček, Jakub; Kozlík, Petr; Coufal, Pavel
2017-01-01
Dasatinib is a novel oral prescription drug for treating adult patients with chronic myeloid leukemia. Three analytical methods, namely ultra-high-performance liquid chromatography, capillary zone electrophoresis, and sequential injection analysis, were developed, validated, and compared for determination of the drug in the tablet dosage form. The total analysis time of the optimized ultra-high-performance liquid chromatography and capillary zone electrophoresis methods was 2.0 and 2.2 min, respectively. Direct ultraviolet detection at a wavelength of 322 nm was employed in both cases. The optimized sequential injection analysis method was based on spectrophotometric detection of dasatinib after a simple colorimetric reaction with the Folin-Ciocalteu reagent, which forms a blue-colored complex with an absorbance maximum at 745 nm; the total analysis time was 2.5 min. The ultra-high-performance liquid chromatography method provided the lowest detection and quantitation limits and the most precise and accurate results. All three newly developed methods were demonstrated to be specific, linear, sensitive, precise, and accurate, providing results that satisfactorily meet the requirements of the pharmaceutical industry, and can be employed for the routine determination of the active pharmaceutical ingredient in the tablet dosage form. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung
1993-01-01
A new numerical framework for solving conservation laws is being developed. This new approach differs substantially in both concept and methodology from the well-established methods--i.e., finite difference, finite volume, finite element, and spectral methods. It is conceptually simple and designed to avoid several key limitations to the above traditional methods. An explicit model scheme for solving a simple 1-D unsteady convection-diffusion equation is constructed and used to illuminate major differences between the current method and those mentioned above. Unexpectedly, its amplification factors for the pure convection and pure diffusion cases are identical to those of the Leapfrog and the DuFort-Frankel schemes, respectively. Also, this explicit scheme and its Navier-Stokes extension have the unusual property that their stabilities are limited only by the CFL condition. Moreover, despite the fact that it does not use any flux-limiter or slope-limiter, the Navier-Stokes solver is capable of generating highly accurate shock tube solutions with shock discontinuities being resolved within one mesh interval. An accurate Euler solver also is constructed through another extension. It has many unusual properties, e.g., numerical diffusion at all mesh points can be controlled by a set of local parameters.
NASA Astrophysics Data System (ADS)
Zhang, Rui; Newhauser, Wayne D.
2009-03-01
In proton therapy, the radiological thickness of a material is commonly expressed in terms of water equivalent thickness (WET) or water equivalent ratio (WER). However, the WET calculations required either iterative numerical methods or approximate methods of unknown accuracy. The objective of this study was to develop a simple deterministic formula to calculate WET values with an accuracy of 1 mm for materials commonly used in proton radiation therapy. Several alternative formulas were derived in which the energy loss was calculated based on the Bragg-Kleeman rule (BK), the Bethe-Bloch equation (BB) or an empirical version of the Bethe-Bloch equation (EBB). Alternative approaches were developed for targets that were 'radiologically thin' or 'thick'. The accuracy of these methods was assessed by comparison to values from an iterative numerical method that utilized evaluated stopping power tables. In addition, we also tested the approximate formula given in the International Atomic Energy Agency's dosimetry code of practice (Technical Report Series No 398, 2000, IAEA, Vienna) and stopping power ratio approximation. The results of these comparisons revealed that most methods were accurate for cases involving thin or low-Z targets. However, only the thick-target formulas provided accurate WET values for targets that were radiologically thick and contained high-Z material.
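One of the simple approximate routes evaluated, the stopping-power-ratio approximation, is a one-line calculation; the PMMA numbers below are illustrative values, not the paper's benchmark data:

    def wet_from_spr(thickness_cm, rho_m, rho_w, mass_sp_ratio):
        """WET [cm] ~= t * (rho_m / rho_w) * (S/rho)_m / (S/rho)_w, i.e. the
        target thickness scaled by density and mass stopping-power ratios."""
        return thickness_cm * (rho_m / rho_w) * mass_sp_ratio

    # e.g. 2 cm of PMMA: rho ~ 1.19 g/cm^3, mass stopping-power ratio to water
    # ~ 0.97 at therapeutic proton energies (illustrative values)
    print(wet_from_spr(2.0, 1.19, 1.0, 0.97))   # -> about 2.31 cm of water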
Earthquake Rupture Dynamics using Adaptive Mesh Refinement and High-Order Accurate Numerical Methods
NASA Astrophysics Data System (ADS)
Kozdon, J. E.; Wilcox, L.
2013-12-01
Our goal is to develop scalable and adaptive (spatial and temporal) numerical methods for coupled, multiphysics problems using high-order accurate numerical methods. To do so, we are developing an open-source, parallel library known as bfam (available at http://bfam.in). The first application to be developed on top of bfam is an earthquake rupture dynamics solver using high-order discontinuous Galerkin methods and summation-by-parts finite difference methods. In earthquake rupture dynamics, wave propagation in the Earth's crust is coupled to frictional sliding on fault interfaces. This coupling is two-way, requiring the simultaneous simulation of both processes. The use of laboratory-measured friction parameters requires near-fault resolution that is 4-5 orders of magnitude higher than that needed to resolve the frequencies of interest in the volume. This, along with earlier simulations using a low-order, finite-volume-based adaptive mesh refinement framework, suggests that adaptive mesh refinement is ideally suited for this problem. The use of high-order methods is motivated by the high level of resolution required off the fault in the earlier low-order finite-volume simulations; we believe this need for resolution is a result of the excessive numerical dissipation of low-order methods. In bfam, spatial adaptivity is handled using the p4est library, and temporal adaptivity will be accomplished through local time stepping. In this presentation we describe the guiding principles behind the library as well as verification of the code against the Southern California Earthquake Center dynamic rupture code validation test problems.
Improvements to robotics-inspired conformational sampling in rosetta.
Stein, Amelie; Kortemme, Tanja
2013-01-01
To accurately predict protein conformations in atomic detail, a computational method must be capable of sampling models sufficiently close to the native structure. All-atom sampling is difficult because of the vast number of possible conformations and extremely rugged energy landscapes. Here, we test three sampling strategies to address these difficulties: conformational diversification, intensification of torsion and omega-angle sampling and parameter annealing. We evaluate these strategies in the context of the robotics-based kinematic closure (KIC) method for local conformational sampling in Rosetta on an established benchmark set of 45 12-residue protein segments without regular secondary structure. We quantify performance as the fraction of sub-Angstrom models generated. While improvements with individual strategies are only modest, the combination of intensification and annealing strategies into a new "next-generation KIC" method yields a four-fold increase over standard KIC in the median percentage of sub-Angstrom models across the dataset. Such improvements enable progress on more difficult problems, as demonstrated on longer segments, several of which could not be accurately remodeled with previous methods. Given its improved sampling capability, next-generation KIC should allow advances in other applications such as local conformational remodeling of multiple segments simultaneously, flexible backbone sequence design, and development of more accurate energy functions.
Life prediction technologies for aeronautical propulsion systems
NASA Technical Reports Server (NTRS)
Mcgaw, Michael A.
1990-01-01
Fatigue and fracture problems continue to occur in aeronautical gas turbine engines. Components whose useful life is limited by these failure modes include turbine hot-section blades, vanes, and disks. Safety considerations dictate that catastrophic failures be avoided, while economic considerations dictate that noncatastrophic failures occur as infrequently as possible. The design decision is therefore a tradeoff between engine performance and durability. LeRC has contributed to the aeropropulsion industry in the area of life prediction technology for over 30 years, developing creep and fatigue life prediction methodologies for hot-section materials. At the present time, emphasis is being placed on the development of methods capable of handling both thermal and mechanical fatigue under severe environments. Recent accomplishments include the development of more accurate creep-fatigue life prediction methods such as the total strain version of LeRC's strain-range partitioning (SRP) and the HOST-developed cyclic damage accumulation (CDA) model. Another example is a more accurate cumulative fatigue damage rule, the double damage curve approach (DDCA), which provides greatly improved accuracy in comparison with the usual cumulative fatigue design rules. Accomplishments in the area of high-temperature fatigue crack growth may also be mentioned. Finally, we are looking to the future and beginning research on the advanced methods that will be required for the development of advanced materials and propulsion systems over the next 10-20 years.
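For orientation, the classical baseline that the DDCA improves upon is Miner's linear damage rule, which simply sums cycle ratios; the mission mix below is hypothetical:

    def miner_damage(blocks):
        """blocks: iterable of (applied cycles n_i, cycles to failure N_i)."""
        return sum(n / N for n, N in blocks)

    mission = [(1_000, 50_000),       # thermal-mechanical cycles
               (20_000, 2_000_000)]   # high-frequency vibratory cycles
    D = miner_damage(mission)
    print(f"accumulated damage D = {D:.3f}; failure predicted at D >= 1")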
GlobalSoilMap France: High-resolution spatial modelling the soils of France up to two meter depth.
Mulder, V L; Lacoste, M; Richer-de-Forges, A C; Arrouays, D
2016-12-15
This work presents the first GlobalSoilMap (GSM) products for France. We developed an automatic procedure for mapping the primary soil properties (clay, silt, sand, coarse elements, pH, soil organic carbon (SOC), cation exchange capacity (CEC) and soil depth). The procedure employed a data-mining technique and a straightforward method for estimating the 90% confidence intervals (CIs). The most accurate models were obtained for pH, sand and silt. CEC, clay and SOC were predicted with reasonable accuracy, while the models for coarse elements and soil depth were the least accurate. Overall, all models were considered robust; important indicators for this were (1) the small difference in model diagnostics between the calibration and cross-validation sets, (2) the unbiased mean predictions, (3) the smaller spatial structure of the prediction residuals in comparison to the observations, and (4) the similar performance compared to other GlobalSoilMap products. Nevertheless, the confidence intervals were rather wide for all soil properties. The median predictions became less reliable with increasing depth, as indicated by the widening of the CIs with depth. In addition, model accuracy and the corresponding CIs varied with the soil variable of interest, soil depth and geographic location. These findings indicate that the CIs are as informative as the model diagnostics. In conclusion, the presented method produced reasonably accurate predictions for the majority of the soil properties. End users can employ the products for different purposes, as demonstrated with some practical examples. The mapping routine is flexible for cloud computing and provides ample opportunity for further development by its users. This allows regional and international GSM partners with fewer resources to develop their own products or, otherwise, to improve the current routine and work together towards a robust high-resolution digital soil map of the world. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Avila, Edwin M. Martinez; Muniz, Ricardo; Szafran, Jamie; Dalton, Adam
2011-01-01
Lines of code (LOC) analysis is one of the methods used to measure programmer productivity and estimate schedules of programming projects. The Launch Control System (LCS) had previously used this method to estimate the amount of work and to plan development efforts. The disadvantage of using LOC as a measure of effort is that only 30% to 35% of the total effort of software projects involves coding [8]. Because of this disadvantage, Jamie Szafran of the System Software Branch of Control And Data Systems (NE-C3) at Kennedy Space Center developed a web application called Function Point Analysis (FPA) Depot, which uses function points instead of LOC to better estimate the hours required to develop each piece of software. The objective of this web application is that the LCS software architecture team can use the data to more accurately estimate the effort required to implement customer requirements. This paper describes the evolution of the domain model used for function point analysis as project managers continually strive to generate more accurate estimates.
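To make the contrast with LOC concrete, the sketch below tallies unadjusted function points from component counts. It is a minimal illustration using common IFPUG-style average-complexity weights, not the FPA Depot's actual model; the component names, weights, and counts here are assumptions for the example.

```python
# Minimal sketch of unadjusted function point (UFP) counting; IFPUG-style
# average-complexity weights and hypothetical counts, not the FPA Depot model.
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_function_points(counts: dict) -> int:
    """Sum each component count times its average-complexity weight."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical requirement: 12 inputs, 8 outputs, 5 inquiries,
# 3 internal logical files, 2 external interface files.
ufp = unadjusted_function_points({
    "external_inputs": 12,
    "external_outputs": 8,
    "external_inquiries": 5,
    "internal_logical_files": 3,
    "external_interface_files": 2,
})
print(ufp)  # 152; hours are then estimated from a calibrated hours-per-FP rate
```

Effort in hours would then follow from a team-specific, historically calibrated hours-per-function-point rate rather than from raw code size.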
A comparison of optical gradation analysis devices to current test methods--phase 2.
DOT National Transportation Integrated Search
2012-04-01
Optical devices are being developed to deliver accurate size and shape of aggregate particles with less labor, less consistency error, and greater reliability. This study was initiated to review the existing technology and generate basic data to ...
Iyer, Janani; Wang, Qingyu; Le, Thanh; Pizzo, Lucilla; Grönke, Sebastian; Ambegaokar, Surendra S.; Imai, Yuzuru; Srivastava, Ashutosh; Troisí, Beatriz Llamusí; Mardon, Graeme; Artero, Ruben; Jackson, George R.; Isaacs, Adrian M.; Partridge, Linda; Lu, Bingwei; Kumar, Justin P.; Girirajan, Santhosh
2016-01-01
About two-thirds of the vital genes in the Drosophila genome are involved in eye development, making the fly eye an excellent genetic system to study cellular function and development, neurodevelopment/degeneration, and complex diseases such as cancer and diabetes. We developed a novel computational method, implemented as Flynotyper software (http://flynotyper.sourceforge.net), to quantitatively assess the morphological defects in the Drosophila eye resulting from genetic alterations affecting basic cellular and developmental processes. Flynotyper utilizes a series of image processing operations to automatically detect the fly eye and the individual ommatidium, and calculates a phenotypic score as a measure of the disorderliness of ommatidial arrangement in the fly eye. As a proof of principle, we tested our method by analyzing the defects due to eye-specific knockdown of Drosophila orthologs of 12 neurodevelopmental genes to accurately document differential sensitivities of these genes to dosage alteration. We also evaluated eye images from six independent studies assessing the effect of overexpression of repeats, candidates from peptide library screens, and modifiers of neurotoxicity and developmental processes on eye morphology, and show strong concordance with the original assessment. We further demonstrate the utility of this method by analyzing 16 modifiers of sine oculis obtained from two genome-wide deficiency screens of Drosophila and accurately quantifying the effect of its enhancers and suppressors during eye development. Our method will complement existing assays for eye phenotypes, and increase the accuracy of studies that use fly eyes for functional evaluation of genes and genetic interactions. PMID:26994292
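As a rough illustration of how a disorderliness-type score can be computed from detected ommatidium centres, the sketch below uses the coefficient of variation of nearest-neighbour distances; this is a simplified stand-in, not Flynotyper's published scoring, and the grid data are hypothetical.

```python
# Illustrative disorderliness-style score from ommatidium centroids, not the
# actual Flynotyper algorithm: the coefficient of variation of nearest-
# neighbour distances is ~0 for a perfect lattice and grows with disorder.
import numpy as np
from scipy.spatial import cKDTree

def disorderliness(centroids: np.ndarray) -> float:
    """centroids: (N, 2) array of detected ommatidium centres."""
    tree = cKDTree(centroids)
    # k=2 returns each point itself plus its nearest neighbour.
    dists, _ = tree.query(centroids, k=2)
    nn = dists[:, 1]
    return float(np.std(nn) / np.mean(nn))

# Hypothetical example: a regular grid vs. a jittered copy of it.
grid = np.array([(x, y) for x in range(10) for y in range(10)], float)
rng = np.random.default_rng(0)
print(disorderliness(grid))                                   # ~0.0
print(disorderliness(grid + rng.normal(0, 0.2, grid.shape)))  # > 0
```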
Summary of Research 1997, Department of Mechanical Engineering.
1999-01-01
Excerpted contents: Maintenance for Diesel Engines; Control Architectures and Non-Linear Controllers for Unmanned Underwater Vehicles; Creep of Fiber Reinforced Metal... Technology Demonstration (ATD); Development of Delphi Visual Performance Model; Diffraction Methods for the Accurate Measurement of Structure Factors. ...literature. If this could be done, a U.S. version of ORACLE (to be called DELPHI) could be developed and used. The result has been the development of a
NASA Astrophysics Data System (ADS)
Chen, Liang-Chia; Ho, Hsuan-Wei; Nguyen, Xuan-Loc
2010-02-01
This article presents a novel band-pass filter for Fourier transform profilometry (FTP) for accurate 3-D surface reconstruction. FTP can be employed to obtain 3-D surface profiles from one-shot images to achieve high-speed measurement. However, its measurement accuracy is significantly influenced by the spectrum filtering process required to extract the phase information representing various surface heights. Using the commonly applied 2-D Hanning filter, the measurement errors can be up to 5-10% of the overall measuring height, which is unacceptable for various industrial applications. To resolve this issue, the article proposes an elliptical band-pass filter for extracting the spectral region possessing the essential phase information for reconstructing accurate 3-D surface profiles. The elliptical band-pass filter was developed and optimized to reconstruct 3-D surface models with improved measurement accuracy. Experimental results verify that the accuracy can be effectively enhanced by using the elliptical filter. Accuracy improvements of 44.1% and 30.4% were achieved in 3-D and sphericity measurement, respectively, when the elliptical filter replaced the traditional filter as the band-pass filtering method. Employing the developed method, the maximum measured error can be kept within 3.3% of the overall measuring range.
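A minimal sketch of the core filtering step follows: the fringe image is Fourier-transformed, an elliptical region around the carrier lobe is retained, and the wrapped phase is taken from the inverse transform. The window centre and semi-axes are placeholders, not the optimised parameters from the article.

```python
# Sketch of elliptical band-pass filtering for FTP; the window centre
# (fx0, fy0) and semi-axes (a, b) are placeholders, not the article's
# optimised parameters.
import numpy as np

def ftp_phase(fringe_img, fx0, fy0, a, b):
    """Return the wrapped phase of a single fringe image.

    fx0, fy0: carrier-frequency centre (cycles/pixel); a, b: ellipse semi-axes.
    """
    ny, nx = fringe_img.shape
    F = np.fft.fftshift(np.fft.fft2(fringe_img))
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(ny)),
                         np.fft.fftshift(np.fft.fftfreq(nx)), indexing="ij")
    # Keep only the elliptical region around the fundamental carrier lobe.
    mask = ((fx - fx0) / a) ** 2 + ((fy - fy0) / b) ** 2 <= 1.0
    analytic = np.fft.ifft2(np.fft.ifftshift(F * mask))
    return np.angle(analytic)  # wrapped phase; unwrap before height conversion

# Synthetic fringes with a 0.1 cycles/pixel carrier along x.
img = 1 + 0.5 * np.cos(2 * np.pi * 0.1 * np.arange(256))[None, :] * np.ones((256, 1))
phase = ftp_phase(img, fx0=0.1, fy0=0.0, a=0.05, b=0.05)
```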
Seashols-Williams, Sarah; Green, Raquel; Wohlfahrt, Denise; Brand, Angela; Tan-Torres, Antonio Limjuco; Nogales, Francy; Brooks, J Paul; Singh, Baneshwar
2018-05-17
Sequencing and classification of microbial taxa within forensically relevant biological fluids has the potential for applications in the forensic science and biomedical fields. The quantity of bacterial DNA from human samples is currently estimated based on quantity of total DNA isolated. This method can miscalculate bacterial DNA quantity due to the mixed nature of the sample, and consequently library preparation is often unreliable. We developed an assay that can accurately and specifically quantify bacterial DNA within a mixed sample for reliable 16S ribosomal DNA (16S rDNA) library preparation and high throughput sequencing (HTS). A qPCR method was optimized using universal 16S rDNA primers, and a commercially available bacterial community DNA standard was used to develop a precise standard curve. Following qPCR optimization, 16S rDNA libraries from saliva, vaginal and menstrual secretions, urine, and fecal matter were amplified and evaluated at various DNA concentrations; successful HTS data were generated with as low as 20 pg of bacterial DNA. Changes in bacterial DNA quantity did not impact observed relative abundances of major bacterial taxa, but relative abundance changes of minor taxa were observed. Accurate quantification of microbial DNA resulted in consistent, successful library preparations for HTS analysis. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
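The standard-curve arithmetic behind this kind of qPCR assay can be sketched as follows; the dilution series and Cq values are hypothetical, illustrating the mechanics rather than the authors' optimised protocol.

```python
# Sketch of qPCR standard-curve quantification; dilution series and Cq
# values are hypothetical, not the authors' assay data.
import numpy as np

conc_pg = np.array([20000, 2000, 200, 20, 2], float)  # bacterial DNA standard
cq = np.array([14.1, 17.5, 20.9, 24.3, 27.8])         # measured Cq values

slope, intercept = np.polyfit(np.log10(conc_pg), cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0  # 1.0 corresponds to 100% efficiency

def quantify(cq_unknown):
    """Back-calculate bacterial DNA quantity (pg) from a Cq value."""
    return 10 ** ((cq_unknown - intercept) / slope)

print(f"slope={slope:.2f}, efficiency={efficiency:.0%}")
print(f"unknown sample at Cq 22.0: {quantify(22.0):.0f} pg")
```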
Biosensors for spatiotemporal detection of reactive oxygen species in cells and tissues.
Erard, Marie; Dupré-Crochet, Sophie; Nüße, Oliver
2018-05-01
Redox biology has become a major issue in numerous areas of physiology. Reactive oxygen species (ROS) have a broad range of roles, from signal transduction to growth control and cell death. To understand the nature of these roles, accurate measurement of the reactive compounds is required. An increasing number of tools for ROS detection is available; however, the specificity and sensitivity of these tools are often insufficient. Furthermore, their specificity has rarely been evaluated in complex physiological conditions. Many ROS probes are sensitive to environmental conditions, in particular pH, which may interfere with ROS detection and cause misleading results. Accurate detection of ROS in physiology and pathophysiology faces additional challenges concerning the precise localization of the ROS and the timing of their production and disappearance. Certain ROS are membrane permeable, and certain ROS probes move across cells and organelles. Targetable ROS probes such as fluorescent protein-based biosensors are required for accurate localization. Here we analyze these challenges in more detail, provide indications of the strengths and weaknesses of current tools for ROS detection, and point out developments that will provide improved ROS detection methods in the future. There is no universal method that fits all situations in physiology and cell biology. A detailed knowledge of the ROS probes is required to choose the appropriate method for a given biological problem. Knowledge of the shortcomings of these probes should also guide the development of new sensors.
Doorn, J; Storteboom, T T R; Mulder, A M; de Jong, W H A; Rottier, B L; Kema, I P
2015-07-01
Measurement of chloride in sweat is an essential part of the diagnostic algorithm for cystic fibrosis. The lack of sensitivity and reproducibility of current methods led us to develop an ion chromatography/high-performance liquid chromatography (IC/HPLC) method suitable for the analysis of both chloride and sodium in small volumes of sweat. The precision, linearity and limit of detection of the in-house developed IC/HPLC method were established. A method comparison between the newly developed IC/HPLC method and the traditional Chlorocounter was performed, and trueness was determined using Passing-Bablok method comparison with external quality assurance material (Royal College of Pathologists of Australasia). Precision and linearity fulfill the criteria established by UK guidelines and are comparable with inductively coupled plasma-mass spectrometry methods. Passing-Bablok analysis demonstrated excellent correlation between IC/HPLC measurements and external quality assessment target values, for both chloride and sodium. With a limit of quantitation of 0.95 mmol/L, our method is suitable for the analysis of small amounts of sweat and can thus be used in combination with the Macroduct collection system. Although a chromatographic application results in a somewhat more expensive test compared to a Chlorocounter test, more accurate measurements are achieved. In addition, simultaneous measurement of sodium concentrations will result in better detection of false positives, less test repeating and thus faster, more accurate and more effective diagnosis. The described IC/HPLC method, therefore, provides a precise, relatively cheap and easy-to-handle application for the analysis of both chloride and sodium in sweat. © The Author(s) 2014.
Mathew, B; Schmitz, A; Muñoz-Descalzo, S; Ansari, N; Pampaloni, F; Stelzer, E H K; Fischer, S C
2015-06-08
Due to the large amount of data produced by advanced microscopy, automated image analysis is crucial in modern biology. Most applications require reliable cell nuclei segmentation. However, in many biological specimens cell nuclei are densely packed and appear to touch one another in the images. Therefore, a major difficulty of three-dimensional cell nuclei segmentation is the decomposition of cell nuclei that apparently touch each other. Current methods are highly adapted to a certain biological specimen or a specific microscope. They do not ensure similarly accurate segmentation performance, i.e. their robustness for different datasets is not guaranteed. Hence, these methods require elaborate adjustments to each dataset. We present an advanced three-dimensional cell nuclei segmentation algorithm that is accurate and robust. Our approach combines local adaptive pre-processing with decomposition based on Lines-of-Sight (LoS) to separate apparently touching cell nuclei into approximately convex parts. We demonstrate the superior performance of our algorithm using data from different specimens recorded with different microscopes. The three-dimensional images were recorded with confocal and light sheet-based fluorescence microscopes. The specimens are an early mouse embryo and two different cellular spheroids. We compared the segmentation accuracy of our algorithm with ground truth data for the test images and results from state-of-the-art methods. The analysis shows that our method is accurate throughout all test datasets (mean F-measure: 91%) whereas the other methods each failed for at least one dataset (F-measure≤69%). Furthermore, nuclei volume measurements are improved for LoS decomposition. The state-of-the-art methods required laborious adjustments of parameter values to achieve these results. Our LoS algorithm did not require parameter value adjustments. The accurate performance was achieved with one fixed set of parameter values. We developed a novel and fully automated three-dimensional cell nuclei segmentation method incorporating LoS decomposition. LoS are easily accessible features that ensure correct splitting of apparently touching cell nuclei independent of their shape, size or intensity. Our method showed superior performance compared to state-of-the-art methods, performing accurately for a variety of test images. Hence, our LoS approach can be readily applied to quantitative evaluation in drug testing, developmental and cell biology.
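The Lines-of-Sight idea can be illustrated with a minimal visibility test on a binary mask (2-D here for brevity, whereas the paper works in 3-D): two foreground points are in line of sight if the straight segment between them stays inside the foreground. The mask and points below are hypothetical.

```python
# Illustrative line-of-sight test on a 2-D binary mask: clusters of mutually
# visible points indicate approximately convex parts, so two touching nuclei
# split where visibility fails across the concave 'waist'.
import numpy as np

def in_line_of_sight(mask: np.ndarray, p: tuple, q: tuple) -> bool:
    """mask: 2-D boolean array; p, q: (row, col) foreground points."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    n = int(np.ceil(np.linalg.norm(q - p))) + 1
    for t in np.linspace(0.0, 1.0, n):
        r, c = np.rint(p + t * (q - p)).astype(int)
        if not mask[r, c]:
            return False
    return True

# Two slightly overlapping discs standing in for touching nuclei.
yy, xx = np.mgrid[:40, :80]
mask = ((yy - 20) ** 2 + (xx - 20) ** 2 <= 144) | \
       ((yy - 20) ** 2 + (xx - 42) ** 2 <= 144)
print(in_line_of_sight(mask, (10, 20), (30, 20)))  # True: same disc
print(in_line_of_sight(mask, (10, 20), (10, 42)))  # False: crosses the notch
```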
Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions
NASA Astrophysics Data System (ADS)
Chen, Nan; Majda, Andrew J.
2018-02-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
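The mixture step can be caricatured in one dimension: if each ensemble member carries an analytically known conditional Gaussian, the full PDF is their equal-weight mixture, so a modest ensemble already yields a smooth, fat-tailed estimate. This toy sketch conveys only the flavour of the hybrid strategy, not the authors' algorithm; all numbers are made up.

```python
# Toy 1-D caricature of the mixture step, not the authors' algorithm: each
# ensemble member contributes an analytically known conditional Gaussian,
# and the full PDF is their equal-weight mixture.
import numpy as np

def mixture_pdf(x, means, variances):
    """Equal-weight Gaussian mixture evaluated at the points x."""
    x = np.asarray(x)[:, None]
    comps = np.exp(-(x - means) ** 2 / (2 * variances)) / np.sqrt(2 * np.pi * variances)
    return comps.mean(axis=1)

rng = np.random.default_rng(1)
L = 100                                   # small ensemble
means = rng.standard_t(df=3, size=L)      # occasional extreme members -> fat tails
variances = rng.uniform(0.2, 0.5, size=L)

x = np.linspace(-30, 30, 1201)
p = mixture_pdf(x, means, variances)
print(p.sum() * (x[1] - x[0]))  # ~1.0: the mixture integrates to one
```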
Accurate prediction of secondary metabolite gene clusters in filamentous fungi.
Andersen, Mikael R; Nielsen, Jakob B; Klitgaard, Andreas; Petersen, Lene M; Zachariasen, Mia; Hansen, Tilde J; Blicher, Lene H; Gotfredsen, Charlotte H; Larsen, Thomas O; Nielsen, Kristian F; Mortensen, Uffe H
2013-01-02
Biosynthetic pathways of secondary metabolites from fungi are currently subject to an intense effort to elucidate the genetic basis for these compounds due to their large potential within pharmaceutics and synthetic biochemistry. The preferred method is methodical gene deletions to identify supporting enzymes for key synthases one cluster at a time. In this study, we design and apply a DNA expression array for Aspergillus nidulans in combination with legacy data to form a comprehensive gene expression compendium. We apply a guilt-by-association-based analysis to predict the extent of the biosynthetic clusters for the 58 synthases active in our set of experimental conditions. A comparison with legacy data shows the method to be accurate in 13 of 16 known clusters and nearly accurate for the remaining 3 clusters. Furthermore, we apply a data clustering approach, which identifies cross-chemistry between physically separate gene clusters (superclusters), and validate this both with legacy data and experimentally by prediction and verification of a supercluster consisting of the synthase AN1242 and the prenyltransferase AN11080, as well as identification of the product compound nidulanin A. We have used A. nidulans for our method development and validation due to the wealth of available biochemical data, but the method can be applied to any fungus with a sequenced and assembled genome, thus supporting further secondary metabolite pathway elucidation in the fungal kingdom.
Lei, Huan; Yang, Xiu; Zheng, Bin; ...
2015-11-05
Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational “active space” random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in the solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
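For orientation, a bare-bones one-dimensional polynomial-chaos surrogate looks like the following; it uses an ordinary least-squares fit rather than the sparsity-promoting compressive-sensing fit of the paper, and the target property is a made-up function of a single conformational variable.

```python
# Bare-bones 1-D polynomial-chaos surrogate fitted by ordinary least squares
# (the paper uses a sparsity-promoting compressive-sensing fit instead);
# the target property f is a made-up function of a conformational variable.
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(2)
xi = rng.standard_normal(200)          # conformational random variable
f = np.sin(xi) + 0.1 * xi ** 2         # hypothetical target property

order = 6
Psi = hermevander(xi, order)           # probabilists' Hermite basis matrix
coeffs, *_ = np.linalg.lstsq(Psi, f, rcond=None)

xi_test = rng.standard_normal(50)
f_hat = hermevander(xi_test, order) @ coeffs
err = np.max(np.abs(f_hat - (np.sin(xi_test) + 0.1 * xi_test ** 2)))
print(err)  # typically small; the surrogate replaces costly re-evaluations
```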
The accuracy of ultrasound for measurement of mobile- bearing motion.
Aigner, Christian; Radl, Roman; Pechmann, Michael; Rehak, Peter; Stacher, Rudolf; Windhager, Reinhard
2004-04-01
After anterior cruciate ligament-sacrificing total knee replacement, mobile bearings sometimes have paradoxic movement, but the implications of such movement on function, wear, and implant survival are not known. To study this potential problem, accurate, reliable, and widely available inexpensive tools for in vivo mobile-bearing motion analyses are needed. We developed a method using 8-MHz ultrasound to analyze mobile-bearing motion and ascertained its accuracy, precision, and reliability compared with plain and standard digital radiographs. The anterior rim of the mobile bearing was the target for all methods. The radiographs were taken in a horizontal plane at neutral rotation and incremental external and internal rotations. Five investigators examined four positions of the mobile bearing with all three methods. The accuracy and precision were: ultrasound, 0.7 mm and 0.2 mm; digital radiographs, 0.4 mm and 0.2 mm; and plain radiographs, 0.7 mm and 0.3 mm. The interrater and intrarater reliability ranged between 0.3 to 0.4 mm and 0.1 to 0.2 mm, respectively. The difference between the methods was not significant at neutral rotation, but ultrasound was significantly more accurate at rotations of one degree or greater. Ultrasound at 8 MHz provides an accuracy and reliability suitable for evaluation of in vivo meniscal bearing motion. Whether this method or others are sufficiently accurate to detect motion leading to abnormal wear is not known.
MULTI-K: accurate classification of microarray subtypes using ensemble k-means clustering
Kim, Eun-Youn; Kim, Seon-Young; Ashlock, Daniel; Nam, Dougu
2009-01-01
Background Uncovering subtypes of disease from microarray samples has important clinical implications such as survival time and sensitivity of individual patients to specific therapies. Unsupervised clustering methods have been used to classify this type of data. However, most existing methods focus on clusters with compact shapes and do not reflect the geometric complexity of the high dimensional microarray clusters, which limits their performance. Results We present a cluster-number-based ensemble clustering algorithm, called MULTI-K, for microarray sample classification, which demonstrates remarkable accuracy. The method amalgamates multiple k-means runs by varying the number of clusters and identifies clusters that manifest the most robust co-memberships of elements. In addition to the original algorithm, we newly devised the entropy-plot to control the separation of singletons or small clusters. MULTI-K, unlike the simple k-means or other widely used methods, was able to capture clusters with complex and high-dimensional structures accurately. MULTI-K outperformed other methods including a recently developed ensemble clustering algorithm in tests with five simulated and eight real gene-expression data sets. Conclusion The geometric complexity of clusters should be taken into account for accurate classification of microarray data, and ensemble clustering applied to the number of clusters tackles the problem very well. The C++ code and the data sets tested are available from the authors. PMID:19698124
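The co-membership idea can be sketched compactly: run k-means repeatedly while varying k, accumulate how often each pair of samples is co-clustered, and read robust groups off the consensus matrix. This is a simplified paraphrase of MULTI-K (whose reference implementation is C++), with synthetic data and an arbitrary 0.9 consensus threshold.

```python
# Simplified co-membership ensemble in the spirit of MULTI-K: vary k across
# k-means runs, accumulate a pairwise consensus matrix, and keep pairs that
# are co-clustered in almost every run. Data and threshold are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=60, centers=3, random_state=0)
n = len(X)
consensus = np.zeros((n, n))

runs = [(k, seed) for k in range(2, 7) for seed in range(10)]
for k, seed in runs:
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
    consensus += labels[:, None] == labels[None, :]
consensus /= len(runs)

# Pairs co-clustered in >90% of runs form the robust cluster cores.
robust_pairs = np.argwhere((consensus > 0.9) & ~np.eye(n, dtype=bool))
print(len(robust_pairs))
```

Varying k across runs is what lets the consensus matrix capture clusters of complex shape that any single k-means run, with its compact-cluster bias, would miss.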
Lab-on-a-chip nucleic-acid analysis towards point-of-care applications
NASA Astrophysics Data System (ADS)
Kopparthy, Varun Lingaiah
Recent infectious disease outbreaks, such as Ebola in 2013, highlight the need for fast and accurate diagnostic tools to combat the global spread of disease. Detection and identification of the disease-causing viruses and bacteria at the genetic level is required for accurate diagnosis. Nucleic acid analysis systems have shown promise in identifying diseases such as HIV, anthrax, and Ebola in the past. However, conventional nucleic acid analysis systems are still time-consuming and are not suitable for point-of-care applications. Miniaturized nucleic acid systems have shown great promise for rapid analysis, but they have not been commercialized due to several factors such as footprint, complexity, portability, and power consumption. This dissertation presents the development of technologies and methods for lab-on-a-chip nucleic acid analysis towards point-of-care applications. An oscillatory-flow PCR methodology in a thermal gradient is developed which provides real-time analysis of nucleic-acid samples. Oscillating-flow PCR was performed in the microfluidic device under a thermal gradient in 40 minutes. Reverse transcription PCR (RT-PCR) was achieved in the system without an additional heating element for the reverse transcription incubation step. A novel method is developed for the simultaneous patterning and bonding of all-glass microfluidic devices in a microwave oven; glass microfluidic devices were fabricated in less than 4 minutes. Towards an integrated system for the detection of amplified products, a thermal sensing method is studied for the optimization of the sensor output. The calorimetric sensing method is characterized to identify design considerations and optimal parameters, such as placement of the sensor, steady-state response, and flow velocity, for improved performance. An understanding of these developed technologies and methods will facilitate the development of lab-on-a-chip systems for point-of-care analysis.
A two-step method for rapid characterization of electroosmotic flows in capillary electrophoresis.
Zhang, Wenjing; He, Muyi; Yuan, Tao; Xu, Wei
2017-12-01
The measurement of electroosmotic flow (EOF) is important in capillary electrophoresis (CE) experiments for performance optimization and stability improvement. Although several methods exist, there is a pressing need to accurately characterize ultra-low electroosmotic flow rates (EOF rates), such as those in the coated capillaries used in protein separations. In this work, a new method, called the two-step method, was developed to accurately and rapidly measure EOF rates in a capillary, especially the ultra-low EOF rates in coated capillaries. In this two-step method, the EOF rates were calculated by measuring the migration time difference of a neutral marker in two consecutive experiments, in which a pressure-driven flow was introduced to accelerate the migration and the DC voltage was reversed to switch the EOF direction. Uncoated capillaries were first characterized by both this two-step method and a conventional method to confirm the validity of the new method, which was then applied in the study of coated capillaries. Results show that the new method is not only faster but also more accurate. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
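Under one plausible reading of the two-step scheme (notation ours, not the paper's), a pressure-driven velocity is present in both runs while the EOF reverses sign with the voltage, so the EOF rate follows directly from the two migration times:

```latex
% v_1, v_2: apparent marker velocities of the two runs; v_p: pressure-driven
% velocity; L_d: effective capillary length; t_1, t_2: migration times.
v_1 = v_p + v_{\mathrm{eof}}, \qquad v_2 = v_p - v_{\mathrm{eof}}
\;\;\Longrightarrow\;\;
v_{\mathrm{eof}} = \frac{v_1 - v_2}{2}
  = \frac{L_d}{2}\left(\frac{1}{t_1} - \frac{1}{t_2}\right)
```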
NASA Astrophysics Data System (ADS)
Gabor, Oliviu Sugar
To increase the aerodynamic efficiency of aircraft and thereby reduce fuel consumption, a novel morphing wing concept has been developed. It consists of replacing a part of the wing upper and lower surfaces with a flexible skin whose shape can be modified using an actuation system placed inside the wing structure. Numerical studies in two and three dimensions were performed in order to determine the gains the morphing system achieves for the case of an Unmanned Aerial System and for a morphing technology demonstrator based on the wing tip of a transport aircraft. To obtain the optimal wing skin shapes as a function of the flight condition, different global optimization algorithms were implemented, such as the Genetic Algorithm and the Artificial Bee Colony Algorithm. To reduce calculation times, a hybrid method was created by coupling the population-based algorithm with a fast, gradient-based local search method. Validations were performed with commercial state-of-the-art optimization tools and demonstrated the efficiency of the proposed methods. For accurately determining the aerodynamic characteristics of the morphing wing, two new methods were developed: a nonlinear lifting line method and a nonlinear vortex lattice method. Both use strip analysis of the span-wise wing sections to account for the airfoil shape modifications induced by the flexible skin, and can provide accurate results for the wing drag coefficient. The methods do not require the generation of a complex mesh around the wing and are suitable for coupling with optimization algorithms because their computational time is several orders of magnitude smaller than that of traditional three-dimensional Computational Fluid Dynamics methods. Two-dimensional and three-dimensional optimizations of the Unmanned Aerial System wing equipped with the morphing skin were performed, with the objective of improving its performance over an extended range of flight conditions. The chordwise positions of the internal actuators, the spanwise number of actuation stations, as well as the displacement limits were established. The performance improvements obtained and the limitations of the morphing wing concept were studied. To verify the optimization results, high-fidelity Computational Fluid Dynamics simulations were also performed, giving very accurate indications of the obtained gains. For the morphing model based on an aircraft wing tip, the skin shapes were optimized in order to control laminar flow on the upper surface. An automated structured mesh generation procedure was developed and implemented. To accurately capture the shape of the skin, a precision scanning procedure was performed and its results were included in the numerical model. High-fidelity simulations were performed to determine the upper surface transition region, and the numerical results were validated using experimental wind tunnel data.
Contact Thermocouple Methodology and Evaluation for Temperature Measurement in the Laboratory
NASA Technical Reports Server (NTRS)
Brewer, Ethan J.; Pawlik, Ralph J.; Krause, David L.
2013-01-01
Laboratory testing of advanced aerospace components very often requires highly accurate temperature measurement and control devices, as well as methods to precisely analyze and predict the performance of such components. Analysis of test articles depends on accurate measurements of temperature across the specimen. Where possible, this task is accomplished using many thermocouples welded directly to the test specimen, which can produce results with great precision. However, it is known that thermocouple spot welds can initiate deleterious cracks in some materials, prohibiting the use of welded thermocouples. Such is the case for the nickel-based superalloy MarM-247, which is used in the high temperature, high pressure heater heads for the Advanced Stirling Converter component of the Advanced Stirling Radioisotope Generator space power system. To overcome this limitation, a method was developed that uses small diameter contact thermocouples to measure the temperature of heater head test articles with the same level of accuracy as welded thermocouples. This paper includes a brief introduction and a background describing the circumstances that compelled the development of the contact thermocouple measurement method. Next, the paper describes studies performed on contact thermocouple readings to determine the accuracy of results. It continues on to describe in detail the developed measurement method and the evaluation of results produced. A further study that evaluates the performance of different measurement output devices is also described. Finally, a brief conclusion and summary of results is provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Able, Charles M., E-mail: cable@wfubmc.edu; Bright, Megan; Frizzell, Bart
Purpose: Statistical process control (SPC) is a quality control method used to ensure that a process is well controlled and operates with little variation. This study determined whether SPC was a viable technique for evaluating the proper operation of a high-dose-rate (HDR) brachytherapy treatment delivery system. Methods and Materials: A surrogate prostate patient was developed using Vyse ordnance gelatin. A total of 10 metal oxide semiconductor field-effect transistors (MOSFETs) were placed from prostate base to apex. Computed tomography guidance was used to accurately position the first detector in each train at the base. The plan consisted of 12 needles with 129 dwell positions delivering a prescribed peripheral dose of 200 cGy. Sixteen accurate treatment trials were delivered as planned. Subsequently, a number of treatments were delivered with errors introduced, including wrong patient, wrong source calibration, wrong connection sequence, single needle displaced inferiorly 5 mm, and entire implant displaced 2 mm and 4 mm inferiorly. Two process behavior charts (PBC), an individual and a moving range chart, were developed for each dosimeter location. Results: There were 4 false positives resulting from 160 measurements from 16 accurately delivered treatments. For the inaccurately delivered treatments, the PBC indicated that measurements made at the periphery and apex (regions of high-dose gradient) were much more sensitive to treatment delivery errors. All errors introduced were correctly identified by either the individual or the moving range PBC in the apex region. Measurements at the urethra and base were less sensitive to errors. Conclusions: SPC is a viable method for assessing the quality of HDR treatment delivery. Further development is necessary to determine the most effective dose sampling, to ensure reproducible evaluation of treatment delivery accuracy.
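The chart limits behind individual and moving-range PBCs use standard Shewhart constants; the sketch below applies them to hypothetical dose readings for one MOSFET location and flags a delivery that breaches either limit. It illustrates the SPC mechanics only, not the study's data.

```python
# Sketch of individual (I) and moving-range (MR) process-behaviour charts
# with standard Shewhart constants; dose values are hypothetical stand-ins
# for one MOSFET location across the accurately delivered trials.
import numpy as np

doses = np.array([201, 199, 202, 198, 200, 203, 197, 201,
                  200, 199, 202, 198, 201, 200, 199, 202], float)  # cGy

mr = np.abs(np.diff(doses))            # moving ranges of consecutive trials
x_bar, mr_bar = doses.mean(), mr.mean()

# I-chart limits: X̄ ± 2.66·MR̄ ; MR-chart upper limit: 3.267·MR̄.
i_lcl, i_ucl = x_bar - 2.66 * mr_bar, x_bar + 2.66 * mr_bar
mr_ucl = 3.267 * mr_bar

def out_of_control(new_dose, new_mr):
    """Flag a delivery whose reading breaches either chart limit."""
    return not (i_lcl <= new_dose <= i_ucl) or new_mr > mr_ucl

print(f"I-chart limits: [{i_lcl:.1f}, {i_ucl:.1f}] cGy, MR UCL: {mr_ucl:.1f}")
print(out_of_control(185.0, abs(185.0 - doses[-1])))  # True: flags an error
```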
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric method?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based in hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.
A novel endoscopic fluorescent band ligation method for tumor localization.
Hyun, Jong Hee; Kim, Seok-Ki; Kim, Kwang Gi; Kim, Hong Rae; Lee, Hyun Min; Park, Sunup; Kim, Sung Chun; Choi, Yongdoo; Sohn, Dae Kyung
2016-10-01
Accurate tumor localization is essential for minimally invasive surgery. This study describes the development of a novel endoscopic fluorescent band ligation method for the rapid and accurate identification of tumor sites during surgery. The method utilized a fluorescent rubber band, made of indocyanine green (ICG) and a liquid rubber solution mixture, as well as a near-infrared fluorescence laparoscopic system with a dual light source using a high-powered light-emitting diode (LED) and a 785-nm laser diode. The fluorescent rubber bands were endoscopically placed on the mucosae of porcine stomachs and colons. During subsequent conventional laparoscopic stomach and colon surgery, the fluorescent bands were assayed using the near-infrared fluorescence laparoscopy system. The locations of the fluorescent clips were clearly identified on the fluorescence images in real time. The system was able to distinguish the two or three bands marked on the mucosal surfaces of the stomach and colon. Resection margins around the fluorescent bands were sufficient in the resected specimens obtained during stomach and colon surgery. These novel endoscopic fluorescent bands could be rapidly and accurately localized during stomach and colon surgery. Use of these bands may make possible the excision of exact target sites during minimally invasive gastrointestinal surgery.
Test techniques for model development of repetitive service energy storage capacitors
NASA Astrophysics Data System (ADS)
Thompson, M. C.; Mauldin, G. H.
1984-03-01
The performance of the Sandia perfluorocarbon family of energy storage capacitors was evaluated. The capacitors have a much lower charge noise signature creating new instrumentation performance goals. Thermal response to power loading and the importance of average and spot heating in the bulk regions require technical advancements in real time temperature measurements. Reduction and interpretation of thermal data are crucial to the accurate development of an intelligent thermal transport model. The thermal model is of prime interest in the high repetition rate, high average power applications of power conditioning capacitors. The accurate identification of device parasitic parameters has ramifications in both the average power loss mechanisms and peak current delivery. Methods to determine the parasitic characteristics and their nonlinearities and terminal effects are considered. Meaningful interpretations for model development, performance history, facility development, instrumentation, plans for the future, and present data are discussed.
NASA Astrophysics Data System (ADS)
Zhao, Huangxuan; Wang, Guangsong; Lin, Riqiang; Gong, Xiaojing; Song, Liang; Li, Tan; Wang, Wenjia; Zhang, Kunya; Qian, Xiuqing; Zhang, Haixia; Li, Lin; Liu, Zhicheng; Liu, Chengbo
2018-04-01
For the diagnosis and evaluation of ophthalmic diseases, imaging and quantitative characterization of vasculature in the iris are very important. The recently developed photoacoustic imaging, which is ultrasensitive in imaging endogenous hemoglobin molecules, provides a highly efficient label-free method for imaging blood vasculature in the iris. However, the development of advanced vascular quantification algorithms is still needed to enable accurate characterization of the underlying vasculature. We have developed a vascular information quantification algorithm by adopting a three-dimensional (3-D) Hessian matrix and applied it to process iris vasculature images obtained with a custom-built optical-resolution photoacoustic imaging system (OR-PAM). For the first time, we demonstrate in vivo 3-D vascular structures of a rat iris with a label-free imaging method and also accurately extract quantitative vascular information, such as vessel diameter, vascular density, and vascular tortuosity. Our results indicate that the developed algorithm is capable of quantifying the vasculature in 3-D photoacoustic images of the iris in vivo, thus enhancing the diagnostic capability of the OR-PAM system for vascular-related ophthalmic diseases in vivo.
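As a sketch of Hessian-based vessel extraction, the snippet below uses scikit-image's Frangi filter (a Hessian-eigenvalue vesselness measure) as a stand-in for the authors' custom 3-D Hessian algorithm; the threshold, scale range, and test volume are assumptions.

```python
# Sketch of Hessian-based vessel extraction via the Frangi vesselness
# filter, standing in for the authors' custom 3-D Hessian algorithm;
# threshold and scales are assumptions.
import numpy as np
from skimage.filters import frangi

def vessel_mask(volume, threshold=0.1):
    """volume: 3-D photoacoustic intensity array; returns a boolean vessel mask."""
    vesselness = frangi(volume, sigmas=range(1, 5), black_ridges=False)
    return vesselness > threshold

def vascular_density(mask):
    """Fraction of voxels classified as vessel."""
    return float(mask.mean())

# Hypothetical volume: a bright tube in a noisy background.
vol = np.random.default_rng(0).normal(0.0, 0.05, (32, 32, 32))
vol[14:18, 14:18, :] += 1.0
print(vascular_density(vessel_mask(vol)))
# Diameter and tortuosity would follow from the mask, e.g. tortuosity as
# centreline path length divided by straight-line end-to-end distance.
```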
Novel semi-automated kidney volume measurements in autosomal dominant polycystic kidney disease.
Muto, Satoru; Kawano, Haruna; Isotani, Shuji; Ide, Hisamitsu; Horie, Shigeo
2018-06-01
We assessed the effectiveness and convenience of a novel semi-automatic, high-speed 3-D image analysis system for measuring kidney volume (KV), SYNAPSE VINCENT® (Fuji Medical Systems, Tokyo, Japan), for autosomal dominant polycystic kidney disease (ADPKD) patients. We developed novel semi-automated KV measurement software for patients with ADPKD to be included in the imaging analysis software SYNAPSE VINCENT®. The software extracts renal regions using image recognition and measures KV (VINCENT KV). The algorithm was designed to work with the manual designation of a long axis of a kidney including cysts. After using the software to assess the predictive accuracy of the VINCENT method, we performed an external validation study and compared accurate KV and ellipsoid KV based on geometric modeling by linear regression analysis and Bland-Altman analysis. Median eGFR was 46.9 ml/min/1.73 m². Median accurate KV, Vincent KV and ellipsoid KV were 627.7 ml, 619.4 ml (IQR 431.5-947.0) and 694.0 ml (IQR 488.1-1107.4), respectively. Compared with ellipsoid KV (r = 0.9504), Vincent KV correlated strongly with accurate KV (r = 0.9968), without systematic underestimation or overestimation (ellipsoid KV: 14.2 ± 22.0%; Vincent KV: -0.6 ± 6.0%). There were no significant slice thickness-specific differences (p = 0.2980). The VINCENT method is an accurate and convenient semi-automatic method to measure KV in patients with ADPKD compared with the conventional ellipsoid method.
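The conventional ellipsoid estimate that VINCENT KV is compared against is the standard geometric formula (axis symbols ours):

```latex
% L: maximal longitudinal kidney length; W, D: orthogonal width and depth,
% typically taken from the imaging planes.
V_{\mathrm{ellipsoid}} = \frac{\pi}{6}\, L \, W \, D
```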
Recent work on material interface reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosso, S.J.; Swartz, B.K.
1997-12-31
For the last 15 years, many Eulerian codes have relied on a series of piecewise linear interface reconstruction algorithms developed by David Youngs. In a typical Youngs' method, the material interfaces were reconstructed based upon nearby cell values of volume fractions of each material. The interfaces were locally represented by linear segments in two dimensions and by pieces of planes in three dimensions. The first step in such reconstruction was to locally approximate an interface normal. In Youngs' 3D method, a local gradient of a cell-volume-fraction function was estimated and taken to be the local interface normal. A linear interface was moved perpendicular to the now known normal until the mass behind it matched the material volume fraction for the cell in question. But for distorted or nonorthogonal meshes, the gradient normal estimate didn't accurately match that of linear material interfaces. Moreover, curved material interfaces were also poorly represented. The authors will present some recent work in the computation of more accurate interface normals, without necessarily increasing stencil size. Their estimate of the normal is made using an iterative process that, given mass fractions for nearby cells of known but arbitrary variable density, converges in 3 or 4 passes in practice (and quadratically, like Newton's method, in principle). The method reproduces a linear interface in both orthogonal and nonorthogonal meshes. The local linear approximation is generally 2nd-order accurate, with a 1st-order accurate normal for curved interfaces in both two and three dimensional polyhedral meshes. Recent work demonstrating the interface reconstruction for curved surfaces will be discussed.
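For context, the classic Youngs-style normal estimate that the iterative method improves upon can be sketched in two dimensions: the interface normal in a mixed cell is taken from finite differences of the volume-fraction field. The grid and field below are hypothetical.

```python
# Sketch of a Youngs-style interface-normal estimate on an orthogonal 2-D
# grid (the paper's contribution is the iterative refinement beyond this):
# the normal is built from central differences of the volume-fraction field.
import numpy as np

def youngs_normal(vof, i, j):
    """vof: 2-D volume-fraction array; returns a unit normal at cell (i, j).

    Sign convention varies; here the normal points away from the material.
    """
    gx = (vof[i + 1, j] - vof[i - 1, j]) / 2.0  # central differences
    gy = (vof[i, j + 1] - vof[i, j - 1]) / 2.0
    n = -np.array([gx, gy])
    return n / np.linalg.norm(n)

# Hypothetical field: a straight interface on a 6x6 grid, with the volume
# fraction decreasing along the (1, 1) direction.
x, y = np.meshgrid(np.arange(6), np.arange(6), indexing="ij")
vof = np.clip(3.5 - (x + y) / 2.0, 0.0, 1.0)
print(youngs_normal(vof, 3, 3))  # ~[0.707, 0.707] for this 45-degree interface
```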
NASA Astrophysics Data System (ADS)
Sattarpanah Karganroudi, Sasan
The competitive industrial market demands that manufacturing companies provide the market with higher-quality products. The quality control department in industrial sectors verifies geometrical requirements of products with consistent tolerances. These requirements are presented in Geometric Dimensioning and Tolerancing (GD&T) standards. However, conventional measuring and dimensioning methods for manufactured parts are time-consuming and costly. Nowadays, manual and tactile measuring methods have been replaced by Computer-Aided Inspection (CAI) methods. The CAI methods apply improvements in computational calculations and 3-D data acquisition devices (scanners) to compare the scan mesh of manufactured parts with the Computer-Aided Design (CAD) model. Metrology standards, such as ASME-Y14.5 and ISO-GPS, require implementing the inspection in free-state, wherein the part is only under its weight. Non-rigid parts are exempted from the free-state inspection rule because of their significant geometrical deviation in a free-state with respect to the tolerances. Despite the developments in CAI methods, inspection of non-rigid parts still remains a serious challenge. Conventional inspection methods apply complex fixtures for non-rigid parts to retrieve the functional shape of these parts on physical fixtures; however, the fabrication and setup of these fixtures are sophisticated and expensive. The cost of fixtures is doubled since the client and manufacturing sectors require repetitive and independent inspection fixtures. To eliminate the need for costly and time-consuming inspection fixtures, fixtureless inspection methods of non-rigid parts based on CAI methods have been developed. These methods aim at distinguishing flexible deformations of parts in a free-state from defects. Fixtureless inspection methods are required to be automatic, reliable, reasonably accurate and repeatable for non-rigid parts with complex shapes. The scan model, which is acquired as point clouds, represents the shape of a part in a free-state. Afterward, the inspection of defects is performed by comparing the scan and CAD models, but these models are presented in different coordinate systems. Indeed, the scan model is presented in the measurement coordinate system whereas the CAD model is introduced in the design coordinate system. To accomplish the inspection and facilitate an accurate comparison between the models, the registration process is required to align the scan and CAD models in a common coordinate system. The registration includes a virtual compensation for the flexible deformation of the parts in a free-state. Then, the inspection is implemented as a geometrical comparison between the CAD and scan models. This thesis focuses on developing automatic and accurate fixtureless CAI methods for non-rigid parts, along with assessing the robustness of the methods. To this end, an automatic fixtureless CAI method for non-rigid parts based on filtering registration points is developed to identify and quantify defects more accurately on the surface of scan models. In our automatic fixtureless CAI method, the flexible deformation of parts in a free-state is compensated for by applying FE non-rigid registration (FENR) to deform the CAD model towards the scan mesh. The displacement boundary conditions (BCs) for FENR are determined based on the corresponding sample points, which are generated by the Generalized Numerical Inspection Fixture (GNIF) method on the CAD and scan models.
These corresponding sample points are evenly distributed on the surface of the models. The comparison between this deformed CAD model and the scan mesh is intended to evaluate and quantify the defects on the scan model. However, some sample points can be located close to or on defect areas, which results in an inaccurate estimation of defects. These sample points are automatically filtered out in our CAI method based on curvature and von Mises stress criteria. Once filtered out, the remaining sample points are used in a new FENR, which allows an accurate evaluation of defects with respect to the tolerances. The performance and robustness of all CAI methods are generally required to be assessed with respect to the actual measurements. This thesis also introduces a new validation metric for Verification and Validation (V&V) of CAI methods based on ASME recommendations. The developed V&V approach uses a nonparametric statistical hypothesis test, namely the Kolmogorov-Smirnov (K-S) test. In addition to validating the defect size, the K-S test allows a deeper evaluation based on the distance distribution of defects. The robustness of the CAI method with respect to uncertainties such as scanning noise is quantitatively assessed using the developed validation metric. Due to the compliance of non-rigid parts, a geometrically deviated part can still be assembled in the assembly-state. This thesis also presents a fixtureless CAI method for geometrically deviated (defect-presenting) non-rigid parts to evaluate the feasibility of mounting these parts in the functional assembly-state. Our developed Virtual Mounting Assembly-State Inspection (VMASI) method performs a non-rigid registration to virtually mount the scan mesh in the assembly-state. To this end, the point cloud of the scan model representing the part in a free-state is deformed to meet the assembly constraints, such as fixation positions (e.g. mounting holes). In some cases, the functional shape of a deviated part can be retrieved by applying assembly loads, limited to permissible loads, on the surface of the part. The required assembly loads are estimated through our developed Restraining Pressures Optimization (RPO), which aims at displacing the deviated scan model to achieve the tolerance for mounting holes. Therefore, the deviated scan model can be assembled if the mounting holes on the predicted functional shape of the scan model attain the tolerance range. Different industrial parts are used to evaluate the performance of the methods developed in this thesis. The automatic inspection for identifying different types of small (local) and big (global) defects on the parts results in an accurate evaluation of defects. The robustness of this inspection method is also validated with respect to different levels of scanning noise, showing promising results. Meanwhile, the VMASI method is performed on various parts with different types of defects, leading to the conclusion that in some cases the functional shape of deviated parts can be retrieved by mounting them on a virtual fixture in the assembly-state under restraining loads.
NASA Astrophysics Data System (ADS)
Yang, Zili
2017-07-01
Heart segmentation is an important auxiliary method in the diagnosis of many heart diseases, such as coronary heart disease and atrial fibrillation, and in the planning of tumor radiotherapy. Most of the existing methods for full heart segmentation treat the heart as a whole part and cannot accurately extract the bottom of the heart. In this paper, we propose a new method based on a linear gradient model to segment the whole heart from CT images automatically and accurately. Twelve cases were used to test this method; accurate segmentation results were achieved and confirmed by clinical experts. The results can provide reliable clinical support.
The accurate quantitation of proteins or peptides using Mass Spectrometry (MS) is gaining prominence in the biomedical research community as an alternative method for analyte measurement. The Clinical Proteomic Tumor Analysis Consortium (CPTAC) investigators have been at the forefront in the promotion of reproducible MS techniques, through the development and application of standardized proteomic methods for protein quantitation on biologically relevant samples.
A new method of measurement of tension on a moving magnetic tape
NASA Technical Reports Server (NTRS)
Kurtinaytis, A. K.; Lauzhinskas, Y. S.
1973-01-01
The possibility of non-contact measurement of the tension on a moving magnetic tape, assuming the tape is uniform, is discussed. A scheme for calculating the natural frequency of transverse vibrations of magnetic tape is shown. Mathematical models are developed to show the relationships of the parameters. The method is applicable to the analysis and design of accurate tape feed mechanisms.
Dale G. Brockway; Edward F. Loewenstein; Kenneth W. Outcalt
2014-01-01
Proportional basal area (Pro-B) was developed as an accurate, easy-to-use method for making uneven-aged silviculture a practical management option. Following less than 3 h of training, forest staff from a range of professional backgrounds used Pro-B in an operational-scale field study to apply single-tree selection and group selection systems in longleaf pine (Pinus...
Thermal stress analysis of reusable surface insulation for shuttle
NASA Technical Reports Server (NTRS)
Ojalvo, I. U.; Levy, A.; Austin, F.
1974-01-01
An iterative procedure for accurately determining tile stresses associated with static mechanical and thermally induced internal loads is presented. The necessary conditions for convergence of the method are derived. A user-oriented computer program based upon the present method of analysis was developed. The program is capable of analyzing multi-tiled panels and determining the associated stresses. Typical numerical results from this computer program are presented.
Rapid and efficient differentiation of Yersinia species using high-resolution melting analysis.
Souza, Roberto A; Frazão, Miliane R; Almeida, Alzira M P; Falcão, Juliana P
2015-08-01
The primary goal of clinical microbiology is the accurate identification of the causative agent of the disease. Here, we describe a method for differentiation between Yersinia species using PCR-HRMA. The results revealed species-specific melting profiles. The herein developed assay can be used as an effective method to differentiate Yersinia species. Copyright © 2015 Elsevier B.V. All rights reserved.
The aluminium content of breast tissue taken from women with breast cancer.
House, Emily; Polwart, Anthony; Darbre, Philippa; Barr, Lester; Metaxas, George; Exley, Christopher
2013-10-01
The aetiology of breast cancer is multifactorial. While there are known genetic predispositions to the disease it is probable that environmental factors are also involved. Recent research has demonstrated a regionally specific distribution of aluminium in breast tissue mastectomies while other work has suggested mechanisms whereby breast tissue aluminium might contribute towards the aetiology of breast cancer. We have looked to develop microwave digestion combined with a new form of graphite furnace atomic absorption spectrometry as a precise, accurate and reproducible method for the measurement of aluminium in breast tissue biopsies. We have used this method to test the thesis that there is a regional distribution of aluminium across the breast in women with breast cancer. Microwave digestion of whole breast tissue samples resulted in clear homogenous digests perfectly suitable for the determination of aluminium by graphite furnace atomic absorption spectrometry. The instrument detection limit for the method was 0.48 μg/L. Method blanks were used to estimate background levels of contamination of 14.80 μg/L. The mean concentration of aluminium across all tissues was 0.39 μg Al/g tissue dry wt. There were no statistically significant regionally specific differences in the content of aluminium. We have developed a robust method for the precise and accurate measurement of aluminium in human breast tissue. There are very few such data currently available in the scientific literature and they will add substantially to our understanding of any putative role of aluminium in breast cancer. While we did not observe any statistically significant differences in aluminium content across the breast it has to be emphasised that herein we measured whole breast tissue and not defatted tissue where such a distribution was previously noted. We are very confident that the method developed herein could now be used to provide accurate and reproducible data on the aluminium content in defatted tissue and oil from such tissues and thereby contribute towards our knowledge on aluminium and any role in breast cancer. Copyright © 2013 Elsevier GmbH. All rights reserved.
Kobayashi, Shinya; Ishikawa, Tatsuya; Mutoh, Tatsushi; Hikichi, Kentaro; Suzuki, Akifumi
2012-01-01
Background: Surgical placement of a ventriculoperitoneal shunt (VPS) is the main strategy to manage hydrocephalus. However, the failure rate associated with placement of ventricular catheters remains high. Methods: A hybrid operating room, equipped with a flat-panel detector digital subtraction angiography system containing C-arm cone-beam computed tomography (CB-CT) imaging, has recently been developed and utilized to assist neurosurgical procedures. We have developed a novel technique using intraoperative fluoroscopy and a C-arm CB-CT system to facilitate accurate placement of a VPS. Results: Using this novel technique, 39 consecutive ventricular catheters were placed accurately, and no ventricular catheter failures were experienced during the follow-up period. Only two patients experienced obstruction of the VPS, both of which occurred in the extracranial portion of the shunt system. Conclusion: Surgical placement of a VPS assisted by flat panel detector CT-guided real-time fluoroscopy enabled accurate placement of ventricular catheters and was associated with a decreased need for shunt revision. PMID:23226605
Swanson, Jon; Audie, Joseph
2018-01-01
A fundamental and unsolved problem in biophysical chemistry is the development of a computationally simple, physically intuitive, and generally applicable method for accurately predicting and physically explaining protein-protein binding affinities from protein-protein interaction (PPI) complex coordinates. Here, we propose that the simplification of a previously described six-term PPI scoring function to a four-term function results in a simple expression of all physically and statistically meaningful terms that can be used to accurately predict and explain binding affinities for a well-defined subset of PPIs that are characterized by (1) crystallographic coordinates, (2) rigid-body association, (3) normal interface size, hydrophobicity, and hydrophilicity, and (4) high-quality experimental binding affinity measurements. We further propose that the four-term scoring function could be regarded as a core expression for future development into a more general PPI scoring function. Our work has clear implications for PPI modeling and structure-based drug design.
A new method for wind speed forecasting based on copula theory.
Wang, Yuankun; Ma, Huiqun; Wang, Dong; Wang, Guizuo; Wu, Jichun; Bian, Jinyu; Liu, Jiufu
2018-01-01
Determining a representative wind speed is crucial in wind resource assessment, and accurate wind resource assessments are important to wind farm development. Linear regressions are usually used to obtain the representative wind speed, but the complex terrain of wind farms and the long distances between wind speed measurement sites often lead to low correlation. In this study, a copula method is used to determine the representative year's wind speed at a wind farm by modelling the interaction between the local wind farm and the meteorological station. The results show that the proposed method can not only capture the dependence between the local anemometric tower and the nearby meteorological station through Kendall's tau, but also determine the joint distribution without assuming the variables to be independent. Moreover, representative wind data can be obtained from the conditional distribution much more reasonably. We hope this study provides a scientific reference for accurate wind resource assessments. Copyright © 2017 Elsevier Inc. All rights reserved.
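A minimal sketch of the copula idea follows, using synthetic paired speeds and a Gaussian copula whose correlation is obtained from Kendall's tau; the paper's actual copula family and data are not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic paired wind speeds: local anemometric tower vs nearby met station.
station = rng.weibull(2.0, 1000) * 6.0
farm = 0.8 * station + rng.normal(scale=1.0, size=1000)

# Rank dependence via Kendall's tau, mapped to a Gaussian-copula correlation.
tau, _ = stats.kendalltau(station, farm)
rho = np.sin(np.pi * tau / 2)

# Conditional simulation: given an observed station speed, draw farm speeds
# through the copula and the empirical marginals (rank -> normal score).
u_obs = stats.percentileofscore(station, 7.5) / 100.0
z_farm = rho * stats.norm.ppf(u_obs) + np.sqrt(1 - rho**2) * rng.standard_normal(5000)
rep_speed = np.quantile(farm, stats.norm.cdf(z_farm)).mean()
print(f"tau = {tau:.2f}, conditional mean farm speed ~ {rep_speed:.2f} m/s")
```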
Imaging with Mass Spectrometry of Bacteria on the Exoskeleton of Fungus-Growing Ants.
Gemperline, Erin; Horn, Heidi A; DeLaney, Kellen; Currie, Cameron R; Li, Lingjun
2017-08-18
Mass spectrometry imaging is a powerful analytical technique for detecting and determining spatial distributions of molecules within a sample. Typically, mass spectrometry imaging is limited to the analysis of thin tissue sections taken from the middle of a sample. In this work, we present a mass spectrometry imaging method for the detection of compounds produced by bacteria on the outer surface of ant exoskeletons in response to pathogen exposure. Fungus-growing ants have a specialized mutualism with Pseudonocardia, a bacterium that lives on the ants' exoskeletons and helps protect their fungal garden food source from harmful pathogens. The developed method allows for visualization of bacteria-derived compounds on the ant exoskeleton. This method demonstrates the capability to detect compounds that are specifically localized to the bacterial patch on ant exoskeletons, shows good reproducibility across individual ants, and achieves accurate mass measurements within 5 ppm error when using a high-resolution, accurate-mass mass spectrometer.
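The 5 ppm figure corresponds to the standard accurate-mass criterion, sketched below with a hypothetical ion (the m/z values are illustrative, not from the study).

```python
def ppm_error(measured_mz: float, theoretical_mz: float) -> float:
    """Parts-per-million mass error used to judge accurate-mass matches."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

# Hypothetical candidate ion: measured m/z 609.2812 vs theoretical 609.2803.
err = ppm_error(609.2812, 609.2803)
print(f"{err:.1f} ppm -> {'accept' if abs(err) <= 5 else 'reject'} at a 5 ppm tolerance")
```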
Song, Youyi; Zhang, Ling; Chen, Siping; Ni, Dong; Lei, Baiying; Wang, Tianfu
2015-10-01
In this paper, a multiscale convolutional network (MSCN) and graph-partitioning-based method is proposed for accurate segmentation of cervical cytoplasm and nuclei. Specifically, deep learning via the MSCN is explored to extract scale-invariant features and then segment regions centered at each pixel. The coarse segmentation is refined by an automated graph-partitioning method based on the pretrained features. The texture, shape, and contextual information of the target objects are learned to localize the appearance of distinctive boundaries, which is also exploited to generate markers that split touching nuclei. For further refinement of the segmentation, a coarse-to-fine nucleus segmentation framework is developed, and the computational complexity is reduced by using superpixels instead of raw pixels. Extensive experimental results demonstrate that the proposed cervical cell segmentation method delivers promising results and outperforms existing methods.
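The sketch below is not the authors' architecture; it only illustrates the multiscale idea, assuming a shared feature extractor applied to an image pyramid with the per-scale maps fused for per-pixel classification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleNet(nn.Module):
    """Toy MSCN: shared convolutional features over an image pyramid,
    fused and classified per pixel (background / cytoplasm / nucleus)."""
    def __init__(self, scales=(1.0, 0.5, 0.25), n_classes=3):
        super().__init__()
        self.scales = scales
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.classify = nn.Conv2d(16 * len(scales), n_classes, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        maps = []
        for s in self.scales:
            xs = x if s == 1.0 else F.interpolate(
                x, scale_factor=s, mode="bilinear", align_corners=False)
            f = self.features(xs)                       # same extractor at every scale
            maps.append(F.interpolate(f, size=(h, w),   # upsample back to full size
                                      mode="bilinear", align_corners=False))
        return self.classify(torch.cat(maps, dim=1))    # per-pixel class logits

logits = MultiScaleNet()(torch.randn(1, 1, 128, 128))   # -> shape (1, 3, 128, 128)
```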
Rose, John P.; Wang, Bi-Cheng; Weiss, Manfred S.
2015-01-01
Native SAD phasing uses the anomalous scattering signal of light atoms in crystalline, native samples of macromolecules collected in single-wavelength X-ray diffraction experiments. These atoms include sodium, magnesium, phosphorus, sulfur, chlorine, potassium and calcium. Native SAD phasing is challenging and is critically dependent on the collection of accurate data. Over the past five years, advances in diffraction hardware, crystallographic software, data-collection methods and strategies, and the use of data statistics have made it routine to collect ‘highly accurate data’. Today, native SAD sits on the verge of becoming a ‘first-choice’ method for both de novo and molecular-replacement structure determination. This article focuses on advances that have caught the attention of the community over the past five years and highlights both de novo native SAD structures and recent structures that were key to methods development. PMID:26175902
Accurate Modeling Method for Cu Interconnect
NASA Astrophysics Data System (ADS)
Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko
This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for model parameter extraction and an efficient extraction flow. We extracted the model parameters for 0.15 μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameter Extraction) was completely eliminated. Moreover, we verified that the model can be applied to more advanced technologies (90 nm, 65 nm and 55 nm CMOS). Since interconnect delay variations due to these processes constitute a significant part of what has conventionally been treated as random variation, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.
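As a hedged illustration of such a model's skeleton (the bias tables and dimensions below are invented, not extracted parameters), effective width and thickness can be looked up as a function of local pattern density and used to compute wire resistance:

```python
import numpy as np

# Hypothetical bias tables: CMP and etch make the realized Cu width and
# thickness deviate from drawn dimensions as a function of pattern density.
density_pts   = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
width_bias_um = np.array([-0.010, -0.004, 0.000, 0.006, 0.012])   # assumed
thick_bias_um = np.array([0.008, 0.003, 0.000, -0.005, -0.012])   # assumed (dishing/erosion)

RHO_CU = 0.0172  # bulk copper resistivity, ohm * um

def wire_resistance(drawn_w_um, drawn_t_um, length_um, density):
    """Resistance from a density-dependent effective cross-section."""
    w_eff = drawn_w_um + np.interp(density, density_pts, width_bias_um)
    t_eff = drawn_t_um + np.interp(density, density_pts, thick_bias_um)
    return RHO_CU * length_um / (w_eff * t_eff)

print(f"{wire_resistance(0.15, 0.25, 100.0, 0.8):.1f} ohm")  # ~45 ohm for this toy wire
```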
Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.
Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C
2016-01-01
We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images and define a fitness function to measure relative image quality. We then build a polynomial interpolant of the fitness function to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. We present examples using simulated, experimental, and patient data collected with the Tissue Sensing Adaptive Radar system under development at the University of Calgary. These examples show how, using our method, accurate images can be reconstructed starting from only a broad estimate of the permittivity range.
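A minimal sketch of the interpolate-and-maximize step follows, with a toy fitness function standing in for the image-quality measure and the adaptive stochastic-collocation sampling omitted.

```python
import numpy as np

def image_fitness(eps_r: float) -> float:
    """Stand-in for the image-quality fitness; peaked near the best permittivity."""
    return -(eps_r - 9.0) ** 2 + 0.1 * np.sin(5 * eps_r)

# Sample the fitness sparsely, fit a polynomial interpolant, and take its
# maximizer over a fine grid as the effective-permittivity estimate.
samples = np.linspace(6.0, 12.0, 7)
fits = np.array([image_fitness(e) for e in samples])
coeffs = np.polynomial.polynomial.polyfit(samples, fits, deg=4)
grid = np.linspace(6.0, 12.0, 601)
best = grid[np.argmax(np.polynomial.polynomial.polyval(grid, coeffs))]
print(f"estimated effective permittivity ~ {best:.2f}")
```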
Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William
2014-01-01
Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657
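The comparison can be sketched with scikit-learn on synthetic covariates (hypothetical stand-ins such as night-lights or road density; no census data are used here):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
# Synthetic small-area covariates and a nonlinear population-density response.
X = rng.uniform(size=(500, 3))
density = np.exp(3 * X[:, 0]) * (1 + X[:, 1]) + rng.normal(scale=0.5, size=500)

for name, model in [("random forest", RandomForestRegressor(n_estimators=200, random_state=0)),
                    ("linear regression", LinearRegression())]:
    r2 = cross_val_score(model, X, density, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {r2:.2f}")
```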
An inductive method for automatic generation of referring physician prefetch rules for PACS.
Okura, Yasuhiko; Matsumura, Yasushi; Harauchi, Hajime; Sukenobu, Yoshiharu; Kou, Hiroko; Kohyama, Syunsuke; Yasuda, Norihiro; Yamamoto, Yuichiro; Inamura, Kiyonari
2002-12-01
To prefetch images in a hospital-wide picture archiving and communication system (PACS), a rule must be devised that permits accurate selection of the examinations in which a patient's images are stored. We developed an inductive method that composes prefetch rules from practical data obtained in a hospital, using a decision tree algorithm. Our method was evaluated on data acquired at Osaka University Hospital over one month, consisting of 58,617 consultation reservations, 643,797 patient examination histories, and 323,993 records of image requests in the PACS. For each consultation reservation, four parameters indicating whether the patient's images were requested were derived from the database. As a result, the sensitivity for selecting consultations in which images were requested was approximately 0.8, and the specificity for correctly excluding consultations in which images were not requested was approximately 0.7.
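A minimal sketch of rule induction with a decision tree follows; the features and labels are synthetic stand-ins for the four per-reservation parameters, not the study's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
# Hypothetical per-reservation parameters, e.g. days since last imaging,
# counts of prior image requests (stand-ins for the paper's four parameters).
X = rng.integers(0, 10, size=(5000, 4)).astype(float)
y = ((X[:, 1] < 3) | (X[:, 2] > 7)).astype(int)        # synthetic "images requested"
y = (y ^ (rng.random(5000) < 0.1)).astype(int)         # add label noise

tree = DecisionTreeClassifier(max_depth=4).fit(X[:4000], y[:4000])
tn, fp, fn, tp = confusion_matrix(y[4000:], tree.predict(X[4000:])).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```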
Aliabadi, Mohsen; Golmohammadi, Rostam; Khotanlou, Hassan; Mansoorizadeh, Muharram; Salarpour, Amir
2014-01-01
Noise prediction is considered the best method for cost-effective evaluation of preventive noise controls in industrial workrooms. One of the most important issues is the development of accurate models for analysing the complex relationships among the acoustic features affecting workroom noise levels. In this study, advanced fuzzy approaches were employed to develop relatively accurate models for predicting noise in noisy industrial workrooms. The data were collected from 60 industrial embroidery workrooms in Khorasan Province, in the east of Iran. The main acoustic and embroidery-process features that influence noise were used to develop prediction models in MATLAB. A multiple regression technique was also employed, and its results were compared with those of the fuzzy approaches. The prediction errors of all models based on fuzzy approaches were within the acceptable level (lower than 1 dB). The neuro-fuzzy model (RMSE = 0.53 dB, R2 = 0.88) slightly improved the accuracy of noise prediction compared with the generated fuzzy model, and the fuzzy approaches provided more accurate predictions than did the regression technique. The developed models based on fuzzy approaches, as useful prediction tools, give professionals the opportunity to make an optimal decision about the effectiveness of acoustic treatment scenarios in embroidery workrooms.
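As an illustration of the fuzzy-inference idea (a zero-order Takagi-Sugeno sketch with invented rule parameters, not the paper's fitted model):

```python
import numpy as np

def gauss(x, c, s):
    """Gaussian membership function centred at c with spread s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def sugeno_predict(x, rules):
    """Zero-order Takagi-Sugeno inference: firing strength of each rule is the
    product of its membership degrees; output is the weighted average."""
    w = np.array([np.prod([gauss(xi, c, s) for xi, (c, s) in zip(x, r["mfs"])])
                  for r in rules])
    out = np.array([r["out"] for r in rules])
    return float(w @ out / w.sum())

# Two illustrative rules mapping (machine count, room area in m^2) to dB(A);
# the centres, spreads and outputs are assumptions for demonstration only.
rules = [
    {"mfs": [(20, 5), (50, 15)], "out": 88.0},   # many machines, small room -> loud
    {"mfs": [(5, 5), (120, 30)], "out": 74.0},   # few machines, large room -> quieter
]
print(f"predicted noise ~ {sugeno_predict([12, 80], rules):.1f} dB(A)")
```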