Accurate pointing of tungsten welding electrodes
NASA Technical Reports Server (NTRS)
Ziegelmeier, P.
1971-01-01
Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.
PASTA: Ultra-Large Multiple Sequence Alignment for Nucleotide and Amino-Acid Sequences.
Mirarab, Siavash; Nguyen, Nam; Guo, Sheng; Wang, Li-San; Kim, Junhyong; Warnow, Tandy
2015-05-01
We introduce PASTA, a new multiple sequence alignment algorithm. PASTA uses a new technique to produce an alignment given a guide tree that enables it to be both highly scalable and very accurate. We present a study on biological and simulated data with up to 200,000 sequences, showing that PASTA produces highly accurate alignments, improving on the accuracy and scalability of the leading alignment methods (including SATé). We also show that trees estimated on PASTA alignments are highly accurate--slightly better than SATé trees, but with substantial improvements relative to other methods. Finally, PASTA is faster than SATé, highly parallelizable, and requires relatively little memory.
MRL and SuperFine+MRL: new supertree methods
2012-01-01
Background: Supertree methods combine trees on subsets of the full taxon set together to produce a tree on the entire set of taxa. Of the many supertree methods, the most popular is MRP (Matrix Representation with Parsimony), a method that operates by first encoding the input set of source trees by a large matrix (the "MRP matrix") over {0, 1, ?}, and then running maximum parsimony heuristics on the MRP matrix. Experimental studies evaluating MRP in comparison to other supertree methods have established that for large datasets, MRP generally produces trees of equal or greater accuracy than other methods, and can run on larger datasets. A recent development in supertree methods is SuperFine+MRP, a method that combines MRP with a divide-and-conquer approach, and produces more accurate trees in less time than MRP. In this paper we consider a new approach for supertree estimation, called MRL (Matrix Representation with Likelihood). MRL begins with the same MRP matrix, but then analyzes the MRP matrix using heuristics (such as RAxML) for 2-state Maximum Likelihood. Results: We compared MRP and SuperFine+MRP with MRL and SuperFine+MRL on simulated and biological datasets. We examined the MRP and MRL scores of each method on a wide range of datasets, as well as the resulting topological accuracy of the trees. Our experimental results show that MRL, coupled with a very good ML heuristic such as RAxML, produced more accurate trees than MRP, and MRL scores were more strongly correlated with topological accuracy than MRP scores. Conclusions: SuperFine+MRP, when based upon a good MP heuristic, such as TNT, produces among the best scores for both MRP and MRL, and is generally faster and more topologically accurate than other supertree methods we tested. PMID:22280525
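As a rough illustration of the MRP encoding step described above, the sketch below builds the {0, 1, ?} matrix from source trees represented simply as sets of bipartitions; the data structures, toy trees, and function name are hypothetical and not taken from the paper.

```python
# Minimal sketch of MRP matrix encoding: each bipartition (internal edge) of a
# source tree becomes one matrix column; taxa on one side of the edge get '1',
# taxa on the other side get '0', and taxa absent from that source tree get '?'.
def mrp_matrix(source_trees, all_taxa):
    """source_trees: list of (taxon_set, bipartitions), where each bipartition
    is a frozenset holding the taxa on one side of an internal edge."""
    columns = []
    for taxon_set, bipartitions in source_trees:
        for side in bipartitions:
            col = {}
            for taxon in all_taxa:
                if taxon not in taxon_set:
                    col[taxon] = "?"          # taxon missing from this source tree
                else:
                    col[taxon] = "1" if taxon in side else "0"
            columns.append(col)
    # One character string per taxon: the rows of the MRP matrix.
    return {t: "".join(col[t] for col in columns) for t in all_taxa}

# Toy example: two overlapping source trees on taxa {A, B, C, D, E}.
trees = [
    ({"A", "B", "C", "D"}, [frozenset({"A", "B"})]),   # ((A,B),(C,D))
    ({"B", "C", "D", "E"}, [frozenset({"D", "E"})]),   # ((B,C),(D,E))
]
print(mrp_matrix(trees, ["A", "B", "C", "D", "E"]))
```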
DendroBLAST: approximate phylogenetic trees in the absence of multiple sequence alignments.
Kelly, Steven; Maini, Philip K
2013-01-01
The rapidly growing availability of genome information has created considerable demand for both fast and accurate phylogenetic inference algorithms. We present a novel method called DendroBLAST for reconstructing phylogenetic dendrograms/trees from protein sequences using BLAST. This method differs from other methods by incorporating a simple model of sequence evolution to test the effect of introducing sequence changes on the reliability of the bipartitions in the inferred tree. Using realistic simulated sequence data we demonstrate that this method produces phylogenetic trees that are more accurate than other commonly-used distance based methods though not as accurate as maximum likelihood methods from good quality multiple sequence alignments. In addition to tests on simulated data, we use DendroBLAST to generate input trees for a supertree reconstruction of the phylogeny of the Archaea. This independent analysis produces an approximate phylogeny of the Archaea that has both high precision and recall when compared to previously published analysis of the same dataset using conventional methods. Taken together these results demonstrate that approximate phylogenetic trees can be produced in the absence of multiple sequence alignments, and we propose that these trees will provide a platform for improving and informing downstream bioinformatic analysis. A web implementation of the DendroBLAST method is freely available for use at http://www.dendroblast.com/.
An automated method of tuning an attitude estimator
NASA Technical Reports Server (NTRS)
Mason, Paul A. C.; Mook, D. Joseph
1995-01-01
Attitude determination is a major element of the operation and maintenance of a spacecraft. There are several existing methods of determining the attitude of a spacecraft. One of the most commonly used methods utilizes the Kalman filter to estimate the attitude of the spacecraft. Given an accurate model of a system and adequate observations, a Kalman filter can produce accurate estimates of the attitude. If the system model, filter parameters, or observations are inaccurate, the attitude estimates may be degraded. Therefore, it is advantageous to develop a method of automatically tuning the Kalman filter to produce accurate estimates. In this paper, a three-axis attitude determination Kalman filter, which uses only magnetometer measurements, is developed and tested using real data. The appropriate filter parameters are found via the Process Noise Covariance Estimator (PNCE). The PNCE provides an optimal criterion for determining the best filter parameters.
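For context on the filter being tuned, the following is a minimal sketch of a plain linear Kalman filter together with a simple innovation-based score for comparing candidate process noise covariances Q; it is an illustrative stand-in under stated assumptions, not the paper's Process Noise Covariance Estimator, and the toy data and numbers are hypothetical.

```python
import numpy as np

def kalman_filter(zs, F, H, Q, R, x0, P0):
    """Plain linear Kalman filter; returns state estimates and innovations."""
    x, P = x0.copy(), P0.copy()
    xs, innovations = [], []
    for z in zs:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        y = z - H @ x                      # innovation
        S = H @ P @ H.T + R                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ y
        P = (np.eye(len(x)) - K @ H) @ P
        xs.append(x.copy())
        innovations.append(y.copy())
    return np.array(xs), np.array(innovations)

def innovation_score(innovations):
    # A simple (hypothetical) tuning score: mean squared innovation.
    return float(np.mean(np.square(innovations)))

# Toy usage: scalar random walk observed directly; compare two Q candidates.
rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0, 0.1, 200))
zs = (truth + rng.normal(0, 0.5, 200)).reshape(-1, 1)
for q in (1e-4, 1e-2):
    _, innov = kalman_filter(zs, F=np.eye(1), H=np.eye(1),
                             Q=q * np.eye(1), R=0.25 * np.eye(1),
                             x0=np.zeros(1), P0=np.eye(1))
    print(q, innovation_score(innov))
```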
Development of an accurate portable recording peak-flow meter for the diagnosis of asthma.
Hitchings, D J; Dickinson, S A; Miller, M R; Fairfax, A J
1993-05-01
This article describes the systematic design of an electronic recording peak expiratory flow (PEF) meter to provide accurate data for the diagnosis of occupational asthma. Traditional diagnosis of asthma relies on accurate data from PEF tests performed by the patients in their own homes and places of work. Unfortunately, there are high error rates in data produced and recorded by the patient; most of these are transcription errors, and some patients falsify their records. The PEF measurement itself is not effort independent: the data produced depend on the way in which the patient performs the test. Patients are taught how to perform the test, giving maximal effort to the expiration being measured. If the measurement is performed incorrectly then errors will occur. Accurate data can be produced if an electronically recording PEF instrument is developed, thus freeing the patient from the task of recording the test data. It should also be capable of determining whether the PEF measurement has been correctly performed. A requirement specification for a recording PEF meter was produced. A commercially available electronic PEF meter was modified to provide the functions required for accurate serial recording of the measurements produced by the patients. This is now being used in three hospitals in the West Midlands for investigations into the diagnosis of occupational asthma. In investigating current methods of measuring PEF and other pulmonary quantities, a greater understanding was obtained of the limitations of current measurement methods and of the quantities being measured. (ABSTRACT TRUNCATED AT 250 WORDS)
Liu, Kevin; Warnow, Tandy J; Holder, Mark T; Nelesen, Serita M; Yu, Jiaye; Stamatakis, Alexandros P; Linder, C Randal
2012-01-01
Highly accurate estimation of phylogenetic trees for large data sets is difficult, in part because multiple sequence alignments must be accurate for phylogeny estimation methods to be accurate. Coestimation of alignments and trees has been attempted but currently only SATé estimates reasonably accurate trees and alignments for large data sets in practical time frames (Liu K., Raghavan S., Nelesen S., Linder C.R., Warnow T. 2009b. Rapid and accurate large-scale coestimation of sequence alignments and phylogenetic trees. Science. 324:1561-1564). Here, we present a modification to the original SATé algorithm that improves upon SATé (which we now call SATé-I) in terms of speed and of phylogenetic and alignment accuracy. SATé-II uses a different divide-and-conquer strategy than SATé-I and so produces smaller more closely related subsets than SATé-I; as a result, SATé-II produces more accurate alignments and trees, can analyze larger data sets, and runs more efficiently than SATé-I. Generally, SATé is a metamethod that takes an existing multiple sequence alignment method as an input parameter and boosts the quality of that alignment method. SATé-II-boosted alignment methods are significantly more accurate than their unboosted versions, and trees based upon these improved alignments are more accurate than trees based upon the original alignments. Because SATé-I used maximum likelihood (ML) methods that treat gaps as missing data to estimate trees and because we found a correlation between the quality of tree/alignment pairs and ML scores, we explored the degree to which SATé's performance depends on using ML with gaps treated as missing data to determine the best tree/alignment pair. We present two lines of evidence that using ML with gaps treated as missing data to optimize the alignment and tree produces very poor results. First, we show that the optimization problem where a set of unaligned DNA sequences is given and the output is the tree and alignment of those sequences that maximize likelihood under the Jukes-Cantor model is uninformative in the worst possible sense. For all inputs, all trees optimize the likelihood score. Second, we show that a greedy heuristic that uses GTR+Gamma ML to optimize the alignment and the tree can produce very poor alignments and trees. Therefore, the excellent performance of SATé-II and SATé-I is not because ML is used as an optimization criterion for choosing the best tree/alignment pair but rather due to the particular divide-and-conquer realignment techniques employed.
Alam, Md Ferdous; Haque, Asadul
2017-10-18
An accurate determination of particle-level fabric of granular soils from tomography data requires a maximum correct separation of particles. The popular marker-controlled watershed separation method is widely used to separate particles. However, the watershed method alone is not capable of producing the maximum separation of particles when subjected to boundary stresses leading to crushing of particles. In this paper, a new separation method, named as Monash Particle Separation Method (MPSM), has been introduced. The new method automatically determines the optimal contrast coefficient based on cluster evaluation framework to produce the maximum accurate separation outcomes. Finally, the particles which could not be separated by the optimal contrast coefficient were separated by integrating cuboid markers generated from the clustering by Gaussian mixture models into the routine watershed method. The MPSM was validated on a uniformly graded sand volume subjected to one-dimensional compression loading up to 32 MPa. It was demonstrated that the MPSM is capable of producing the best possible separation of particles required for the fabric analysis.
Young, David W
2015-11-01
Historically, hospital departments have computed the costs of individual tests or procedures using the ratio of cost to charges (RCC) method, which can produce inaccurate results. To determine a more accurate cost of a test or procedure, the activity-based costing (ABC) method must be used. Accurate cost calculations will ensure reliable information about the profitability of a hospital's DRGs.
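A minimal numeric sketch of the contrast between the two costing approaches, using entirely hypothetical figures for a single laboratory test:

```python
# Hypothetical illustration of RCC vs. activity-based costing for one lab test.
charge = 120.00                      # amount charged for the test
dept_cost_to_charge_ratio = 0.45     # department-wide ratio of cost to charges
rcc_cost = charge * dept_cost_to_charge_ratio

# ABC: trace the actual resources the test consumes (all values illustrative).
activities = {
    "technologist time (0.4 h @ $40/h)": 0.4 * 40,
    "reagents and consumables":          12.50,
    "equipment time (allocated)":         6.00,
    "overhead (allocated per test)":      9.00,
}
abc_cost = sum(activities.values())

print(f"RCC estimate: ${rcc_cost:.2f}")   # 54.00
print(f"ABC estimate: ${abc_cost:.2f}")   # 43.50
```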
Accurate thermoelastic tensor and acoustic velocities of NaCl
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marcondes, Michel L., E-mail: michel@if.usp.br; Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455; Shukla, Gaurav, E-mail: shukla@physics.umn.edu
Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.
A New Cluster Analysis-Marker-Controlled Watershed Method for Separating Particles of Granular Soils
Alam, Md Ferdous
2017-01-01
An accurate determination of particle-level fabric of granular soils from tomography data requires a maximum correct separation of particles. The popular marker-controlled watershed separation method is widely used to separate particles. However, the watershed method alone is not capable of producing the maximum separation of particles when subjected to boundary stresses leading to crushing of particles. In this paper, a new separation method, named as Monash Particle Separation Method (MPSM), has been introduced. The new method automatically determines the optimal contrast coefficient based on cluster evaluation framework to produce the maximum accurate separation outcomes. Finally, the particles which could not be separated by the optimal contrast coefficient were separated by integrating cuboid markers generated from the clustering by Gaussian mixture models into the routine watershed method. The MPSM was validated on a uniformly graded sand volume subjected to one-dimensional compression loading up to 32 MPa. It was demonstrated that the MPSM is capable of producing the best possible separation of particles required for the fabric analysis. PMID:29057823
Fast Construction of Near Parsimonious Hybridization Networks for Multiple Phylogenetic Trees.
Mirzaei, Sajad; Wu, Yufeng
2016-01-01
Hybridization networks represent plausible evolutionary histories of species that are affected by reticulate evolutionary processes. An established computational problem on hybridization networks is constructing the most parsimonious hybridization network such that each of the given phylogenetic trees (called gene trees) is "displayed" in the network. There have been several previous approaches, including an exact method and several heuristics, for this NP-hard problem. However, the exact method is only applicable to a limited range of data, and heuristic methods can be less accurate and also slow sometimes. In this paper, we develop a new algorithm for constructing near parsimonious networks for multiple binary gene trees. This method is more efficient for large numbers of gene trees than previous heuristics. This new method also produces more parsimonious results on many simulated datasets as well as a real biological dataset than a previous method. We also show that our method produces topologically more accurate networks for many datasets.
Determining Reduced Order Models for Optimal Stochastic Reduced Order Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonney, Matthew S.; Brake, Matthew R.W.
2015-08-01
The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against the other models along with the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite difference, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses Hyper-Dual numbers to determine the sensitivities, and a Meta-Model method that uses the Hyper-Dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction having the least accurate results. The models are also compared based on time requirements for the evaluation of each model, where the Meta-Model requires the least amount of time for computation by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use is dependent on the availability of the high-fidelity model and how many evaluations can be performed. Analysis of the output distribution is examined by using a large Monte-Carlo simulation along with a reduced simulation using Latin Hypercube and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to an exhaustive sampling for the majority of methods.
You, Qiushi; Li, Qingqing; Zheng, Hailing; Hu, Zhiwen; Zhou, Yang; Wang, Bing
2017-09-06
Recently, much interest has been paid to the separation of silk produced by Bombyx mori from silk produced by other species and tracing the beginnings of silk cultivation from wild silk exploitation. In this paper, significant differences between silks from Bombyx mori and other species were found by microscopy and spectroscopy, such as morphology, secondary structure, and amino acid composition. For further accurate identification, a diagnostic antibody was designed by comparing the peptide sequences of silks produced by Bombyx mori and other species. The results of the noncompetitive indirect enzyme-linked immunosorbent assay (ELISA) indicated that the antibody that showed good sensitivity and high specificity can definitely discern silk produced by Bombyx mori from silk produced by wild species. Thus, the antibody-based immunoassay has the potential to be a powerful tool for tracing the beginnings of silk cultivation. In addition, combining the sensitive, specific, and convenient ELISA technology with other conventional methods can provide more in-depth and accurate information for species identification.
Accurate modeling and evaluation of microstructures in complex materials
NASA Astrophysics Data System (ADS)
Tahmasebi, Pejman
2018-02-01
Accurate characterization of heterogeneous materials is of great importance for different fields of science and engineering. Such a goal can be achieved through imaging. Acquiring three- or two-dimensional images under different conditions is not, however, always plausible. On the other hand, accurate characterization of complex and multiphase materials requires various digital images (I) under different conditions. An ensemble method is presented that can take one single (or a set of) I(s) and stochastically produce several similar models of the given disordered material. The method is based on successively calculating a conditional probability by which the initial stochastic models are produced. Then, a graph formulation is utilized for removing unrealistic structures. A distance transform function for the Is with highly connected microstructure and long-range features is considered, which results in a new I that is more informative. Reproduction of the I is also considered through a histogram matching approach in an iterative framework. Such an iterative algorithm avoids reproduction of unrealistic structures. Furthermore, a multiscale approach, based on pyramid representation of the large Is, is presented that can produce materials with millions of pixels in a matter of seconds. Finally, the nonstationary systems—those for which the distribution of data varies spatially—are studied using two different methods. The method is tested on several complex and large examples of microstructures. The produced results are all in excellent agreement with the utilized Is and the similarities are quantified using various correlation functions.
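One building block mentioned above, histogram matching, can be illustrated with a short sketch; this is the standard quantile-mapping form of histogram matching, not the paper's iterative algorithm, and the function name is an assumption.

```python
import numpy as np

def match_histogram(source, reference):
    """Map pixel intensities of `source` so its histogram matches that of
    `reference` (rank/quantile mapping); both inputs are 2-D intensity arrays."""
    src_flat = source.ravel()
    ref_flat = reference.ravel()
    src_order = np.argsort(src_flat)
    matched = np.empty(src_flat.size, dtype=ref_flat.dtype)
    # Assign the sorted reference values to the source pixels in rank order.
    matched[src_order] = np.sort(ref_flat)[
        np.linspace(0, ref_flat.size - 1, src_flat.size).astype(int)]
    return matched.reshape(source.shape)
```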
Simple Test Functions in Meshless Local Petrov-Galerkin Methods
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.
2016-01-01
Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions but that use a simple linear test function were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. These two methods were tested on various patch test problems. Both methods passed the patch tests successfully. Then the methods were applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing efforts as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function method produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive as the method is simple, accurate, and robust.
Can Value-Added Measures of Teacher Performance Be Trusted?
ERIC Educational Resources Information Center
Guarino, Cassandra M.; Reckase, Mark D.; Wooldridge, Jeffrey M.
2015-01-01
We investigate whether commonly used value-added estimation strategies produce accurate estimates of teacher effects under a variety of scenarios. We estimate teacher effects in simulated student achievement data sets that mimic plausible types of student grouping and teacher assignment scenarios. We find that no one method accurately captures…
PASTA: Ultra-Large Multiple Sequence Alignment for Nucleotide and Amino-Acid Sequences
Mirarab, Siavash; Nguyen, Nam; Guo, Sheng; Wang, Li-San; Kim, Junhyong
2015-01-01
We introduce PASTA, a new multiple sequence alignment algorithm. PASTA uses a new technique to produce an alignment given a guide tree that enables it to be both highly scalable and very accurate. We present a study on biological and simulated data with up to 200,000 sequences, showing that PASTA produces highly accurate alignments, improving on the accuracy and scalability of the leading alignment methods (including SATé). We also show that trees estimated on PASTA alignments are highly accurate—slightly better than SATé trees, but with substantial improvements relative to other methods. Finally, PASTA is faster than SATé, highly parallelizable, and requires relatively little memory. PMID:25549288
Can Value-Added Measures of Teacher Performance Be Trusted? Working Paper #18
ERIC Educational Resources Information Center
Guarino, Cassandra M.; Reckase, Mark D.; Woolridge, Jeffrey M.
2012-01-01
We investigate whether commonly used value-added estimation strategies can produce accurate estimates of teacher effects. We estimate teacher effects in simulated student achievement data sets that mimic plausible types of student grouping and teacher assignment scenarios. No one method accurately captures true teacher effects in all scenarios,…
Finite difference and Runge-Kutta methods for solving vibration problems
NASA Astrophysics Data System (ADS)
Lintang Renganis Radityani, Scolastika; Mungkasi, Sudi
2017-11-01
The vibration of a storey building can be modelled as a system of second order ordinary differential equations. If the number of floors of a building is large, then the result is a large scale system of second order ordinary differential equations. The large scale system is difficult to solve, and if it can be solved, the solution may not be accurate. Therefore, in this paper, we seek accurate methods for solving vibration problems. We compare the performance of numerical finite difference and Runge-Kutta methods for solving large scale systems of second order ordinary differential equations. The finite difference methods include the forward and central differences. The Runge-Kutta methods include the Euler and Heun methods. Our research results show that the central finite difference and the Heun methods produce more accurate solutions than the forward finite difference and the Euler methods do.
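A minimal sketch of the comparison described above, applied to a single mass-spring oscillator (the storey-building model is a coupled system of equations of this form); the parameters, step size, and start-up choice are illustrative assumptions.

```python
import numpy as np

# Test problem: m*x'' + k*x = 0, x(0) = 1, x'(0) = 0, exact solution cos(omega*t).
m, k = 1.0, 4.0
omega = np.sqrt(k / m)
dt, n = 0.01, 1000
t = np.arange(n + 1) * dt
exact = np.cos(omega * t)

# Central finite difference on the second-order form.
x_cd = np.zeros(n + 1)
x_cd[0] = 1.0
x_cd[1] = 1.0 - 0.5 * (omega * dt) ** 2          # Taylor start-up step
for i in range(1, n):
    x_cd[i + 1] = 2 * x_cd[i] - x_cd[i - 1] - (omega * dt) ** 2 * x_cd[i]

# Euler and Heun (RK2) on the first-order system y = [x, v].
def f(y):
    return np.array([y[1], -(k / m) * y[0]])

y_eu = np.array([1.0, 0.0]); x_eu = [1.0]
y_he = np.array([1.0, 0.0]); x_he = [1.0]
for _ in range(n):
    y_eu = y_eu + dt * f(y_eu)                   # forward Euler
    k1 = f(y_he); k2 = f(y_he + dt * k1)
    y_he = y_he + 0.5 * dt * (k1 + k2)           # Heun / RK2
    x_eu.append(y_eu[0]); x_he.append(y_he[0])

for name, x in [("central diff", x_cd), ("Euler", x_eu), ("Heun", x_he)]:
    print(name, np.max(np.abs(np.array(x) - exact)))
```

In this toy run the forward Euler solution drifts in amplitude, while the central difference and Heun solutions stay close to the exact cosine, mirroring the comparison reported in the paper.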
Validation and Improvement of Reliability Methods for Air Force Building Systems
focusing primarily on HVAC systems. This research used contingency analysis to assess the performance of each model for HVAC systems at six Air Force...probabilistic model produced inflated reliability calculations for HVAC systems. In light of these findings, this research employed a stochastic method, a...Nonhomogeneous Poisson Process (NHPP), in an attempt to produce accurate HVAC system reliability calculations. This effort ultimately concluded that
Automated Tumor Volumetry Using Computer-Aided Image Segmentation
Bilello, Michel; Sadaghiani, Mohammed Salehi; Akbari, Hamed; Atthiah, Mark A.; Ali, Zarina S.; Da, Xiao; Zhan, Yiqang; O'Rourke, Donald; Grady, Sean M.; Davatzikos, Christos
2015-01-01
Rationale and Objectives: Accurate segmentation of brain tumors, and quantification of tumor volume, is important for diagnosis, monitoring, and planning therapeutic intervention. Manual segmentation is not widely used because of time constraints. Previous efforts have mainly produced methods that are tailored to a particular type of tumor or acquisition protocol and have mostly failed to produce a method that functions on different tumor types and is robust to changes in scanning parameters, resolution, and image quality, thereby limiting their clinical value. Herein, we present a semiautomatic method for tumor segmentation that is fast, accurate, and robust to a wide variation in image quality and resolution. Materials and Methods: A semiautomatic segmentation method based on the geodesic distance transform was developed and validated by using it to segment 54 brain tumors. Glioblastomas, meningiomas, and brain metastases were segmented. Qualitative validation was based on physician ratings provided by three clinical experts. Quantitative validation was based on comparing semiautomatic and manual segmentations. Results: Tumor segmentations obtained using manual and automatic methods were compared quantitatively using the Dice measure of overlap. Subjective evaluation was performed by having human experts rate the computerized segmentations on a 0–5 rating scale where 5 indicated perfect segmentation. Conclusions: The proposed method addresses a significant, unmet need in the field of neuro-oncology. Specifically, this method enables clinicians to obtain accurate and reproducible tumor volumes without the need for manual segmentation. PMID:25770633
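The Dice measure of overlap used in the quantitative validation has a compact definition; here is a short sketch with a hypothetical toy example.

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice overlap between two binary segmentation masks (boolean arrays)."""
    a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example: two 4x4 masks that mostly overlap.
manual = np.zeros((4, 4), bool); manual[1:3, 1:3] = True
auto   = np.zeros((4, 4), bool); auto[1:3, 1:4]  = True
print(dice(manual, auto))   # 0.8
```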
NASA Technical Reports Server (NTRS)
1985-01-01
An accurate method of surveying the soil was developed by NASA and the Department of Agriculture. The method involves using ground penetrating radar to produce subsurface graphs. By examining printouts from the system's recorder, scientists can determine whether a site is appropriate for building, etc.
Inference Control Mechanism for Statistical Database: Frequency-Imposed Data Distortions.
ERIC Educational Resources Information Center
Liew, Chong K.; And Others
1985-01-01
Introduces two data distortion methods (Frequency-Imposed Distortion, Frequency-Imposed Probability Distortion) and uses a Monte Carlo study to compare their performance with that of other distortion methods (Point Distortion, Probability Distortion). Indications that data generated by these two methods produce accurate statistics and protect…
What can formal methods offer to digital flight control systems design
NASA Technical Reports Server (NTRS)
Good, Donald I.
1990-01-01
Formal methods research is beginning to produce methods that will enable mathematical modeling of the physical behavior of digital hardware and software systems. The development of these methods directly supports the NASA mission of increasing the scope and effectiveness of flight system modeling capabilities. The conventional, continuous mathematics that is used extensively in modeling flight systems is not adequate for accurate modeling of digital systems. Therefore, the current practice of digital flight control system design has not had the benefits of extensive mathematical modeling which are common in other parts of flight system engineering. Formal methods research shows that by using discrete mathematics, very accurate modeling of digital systems is possible. These discrete modeling methods will bring the traditional benefits of modeling to digital hardware and software design. Sound reasoning about accurate mathematical models of flight control systems can be an important part of reducing the risk of unsafe flight control.
ERIC Educational Resources Information Center
Grasso, Stephanie M.; Peña, Elizabeth D.; Bedore, Lisa M.; Hixon, J. Gregory; Griffin, Zenzi M.
2018-01-01
Purpose: Bilinguals tend to produce cognates (e.g., "telephone" in English and "teléfono" in Spanish) more accurately than they produce noncognates ("table"/"mesa"). We tested whether the same holds for bilingual children with specific language impairment (SLI). Method: Participants included Spanish-English…
Xanthopoulou, Panagiota; Valakos, Efstratios; Youlatos, Dionisios; Nikita, Efthymia
2018-05-01
The present study tests the accuracy of commonly adopted ageing methods based on the morphology of the pubic symphysis, auricular surface and cranial sutures. These methods are examined both in their traditional form as well as in the context of transition analysis using the ADBOU software in a modern Greek documented collection consisting of 140 individuals who lived mainly in the second half of the twentieth century and come from cemeteries in the area of Athens. The auricular surface overall produced the most accurate age estimates in our material, with different methods based on this anatomical area showing varying degrees of success for different age groups. The pubic symphysis produced accurate results primarily for young adults and the same applied to cranial sutures but the latter appeared completely inappropriate for older individuals. The use of transition analysis through the ADBOU software provided less accurate results than the corresponding traditional ageing methods in our sample. Our results are in agreement with those obtained from validation studies based on material from across the world, but certain differences identified with other studies on Greek material highlight the importance of taking into account intra- and inter-population variability in age estimation. Copyright © 2018 Elsevier B.V. All rights reserved.
Evaluation of selected methods for determining streamflow during periods of ice effect
Melcher, N.B.; Walker, J.F.
1990-01-01
The methods are classified into two general categories, subjective and analytical, depending on whether individual judgement is necessary for method application. On the basis of results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods, and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used for streamflow-gaging stations where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice adjustment factor) may be appropriate for use for stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge ratio and multiple regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.
Automated tumor volumetry using computer-aided image segmentation.
Gaonkar, Bilwaj; Macyszyn, Luke; Bilello, Michel; Sadaghiani, Mohammed Salehi; Akbari, Hamed; Atthiah, Mark A; Ali, Zarina S; Da, Xiao; Zhan, Yiqang; O'Rourke, Donald; Grady, Sean M; Davatzikos, Christos
2015-05-01
Accurate segmentation of brain tumors, and quantification of tumor volume, is important for diagnosis, monitoring, and planning therapeutic intervention. Manual segmentation is not widely used because of time constraints. Previous efforts have mainly produced methods that are tailored to a particular type of tumor or acquisition protocol and have mostly failed to produce a method that functions on different tumor types and is robust to changes in scanning parameters, resolution, and image quality, thereby limiting their clinical value. Herein, we present a semiautomatic method for tumor segmentation that is fast, accurate, and robust to a wide variation in image quality and resolution. A semiautomatic segmentation method based on the geodesic distance transform was developed and validated by using it to segment 54 brain tumors. Glioblastomas, meningiomas, and brain metastases were segmented. Qualitative validation was based on physician ratings provided by three clinical experts. Quantitative validation was based on comparing semiautomatic and manual segmentations. Tumor segmentations obtained using manual and automatic methods were compared quantitatively using the Dice measure of overlap. Subjective evaluation was performed by having human experts rate the computerized segmentations on a 0-5 rating scale where 5 indicated perfect segmentation. The proposed method addresses a significant, unmet need in the field of neuro-oncology. Specifically, this method enables clinicians to obtain accurate and reproducible tumor volumes without the need for manual segmentation. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
Properties of TiNi intermetallic compound industrially produced by combustion synthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaieda, Yoshinari
Most TiNi shape memory intermetallic compounds are conventionally produced by a process of high frequency induction vacuum melting and casting. Gravity segregation occurs in a cast TiNi ingot because of the large difference in specific gravity between Ti and Ni. It is difficult to control accurately the phase transformation temperature of a TiNi shape memory intermetallic compound produced by the conventional process, because the martensitic transformation temperature shifts by 10 K for a change of 0.1% in Ni content. Homogeneous TiNi intermetallic compound is produced by an industrial process based on the combustion synthesis method, a newly developed manufacturing process. In the new process, the phase transformation temperatures of TiNi can be controlled accurately by controlling the ratio of the Ti and Ni elemental starting powders. The chemical composition, the impurities, and the phase transformation temperatures of the TiNi products industrially produced by the process are revealed. These properties are vitally important when the combustion synthesis method is applied to an industrial mass production process for producing TiNi shape memory intermetallic compounds. TiNi shape memory products are industrially and commercially produced today by the industrial process including combustion synthesis; the total production weight was 30 tons in 1994.
The Measurement of Magnetic Fields
ERIC Educational Resources Information Center
Berridge, H. J. J.
1973-01-01
Discusses five experimental methods used by senior high school students to provide an accurate calibration curve of magnet current against the magnetic flux density produced by an electromagnet. Compares the relative merits of the five methods, both as measurements and from an educational viewpoint. (JR)
Parametrization of an Orbital-Based Linear-Scaling Quantum Force Field for Noncovalent Interactions
2015-01-01
We parametrize a linear-scaling quantum mechanical force field called mDC for the accurate reproduction of nonbonded interactions. We provide a new benchmark database of accurate ab initio interactions between sulfur-containing molecules. A variety of nonbond databases are used to compare the new mDC method with other semiempirical, molecular mechanical, ab initio, and combined semiempirical quantum mechanical/molecular mechanical methods. It is shown that the molecular mechanical force field significantly and consistently reproduces the benchmark results with greater accuracy than the semiempirical models, and our mDC model produces errors half as large as those of the molecular mechanical force field. The comparisons between the methods are extended to the docking of drug candidates to the Cyclin-Dependent Kinase 2 protein receptor. We correlate the protein–ligand binding energies to their experimental inhibition constants and find that the mDC produces the best correlation. Condensed phase simulation of mDC water is performed and shown to produce O–O radial distribution functions similar to TIP4P-EW. PMID:24803856
Measuring micro-organism gas production
NASA Technical Reports Server (NTRS)
Wilkins, J. R.; Pearson, A. O.; Mills, S. M.
1973-01-01
Transducer, which senses pressure buildup, is easy to assemble and use, and rate of gas produced can be measured automatically and accurately. Method can be used in research, in clinical laboratories, and for environmental pollution studies because of its ability to detect and quantify rapidly the number of gas-producing microorganisms in water, beverages, and clinical samples.
Trinder, P.; Harper, F. E.
1962-01-01
A colorimetric technique for the determination of carboxyhaemoglobin in blood is described. Carbon monoxide released from blood in a standard Conway unit reacts with palladous chloride/arsenomolybdate solution to produce a blue colour. Using 0·5 to 2 ml. of blood, the method will estimate carboxyhaemoglobin accurately at levels from 0·1% to 100% of total haemoglobin and in the presence of other abnormal pigments. A number of methods are available for the determination of carboxyhaemoglobin; none is accurate below a concentration of 1·5 g. carboxyhaemoglobin per 100 ml. but for most clinical purposes this is not important. For forensic purposes and occasionally in clinical use, an accurate determination of carboxyhaemoglobin below 750 mg. per 100 ml. may be required and no really satisfactory method is at present available. Some time ago when it was important to know whether a person who was found dead in a burning house had died before or after the fire had started, we became interested in developing a method which would determine accurately carboxyhaemoglobin at levels of 750 mg. per 100 ml. PMID:13922505
Individual versus superensemble forecasts of seasonal influenza outbreaks in the United States.
Yamana, Teresa K; Kandula, Sasikiran; Shaman, Jeffrey
2017-11-01
Recent research has produced a number of methods for forecasting seasonal influenza outbreaks. However, differences among the predicted outcomes of competing forecast methods can limit their use in decision-making. Here, we present a method for reconciling these differences using Bayesian model averaging. We generated retrospective forecasts of peak timing, peak incidence, and total incidence for seasonal influenza outbreaks in 48 states and 95 cities using 21 distinct forecast methods, and combined these individual forecasts to create weighted-average superensemble forecasts. We compared the relative performance of these individual and superensemble forecast methods by geographic location, timing of forecast, and influenza season. We find that, overall, the superensemble forecasts are more accurate than any individual forecast method and less prone to producing a poor forecast. Furthermore, we find that these advantages increase when the superensemble weights are stratified according to the characteristics of the forecast or geographic location. These findings indicate that different competing influenza prediction systems can be combined into a single more accurate forecast product for operational delivery in real time.
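A minimal sketch of the weighted-average (superensemble) idea; the inverse-error weighting below is a simple stand-in for the Bayesian model averaging weights used in the study, and the numbers are hypothetical.

```python
import numpy as np

def superensemble(forecasts, past_errors):
    """Weighted-average ('superensemble') forecast.

    forecasts:   array (n_methods,) of current point forecasts
    past_errors: array (n_methods,) of historical errors (e.g. MSE) per method
    Inverse-error weights are used here as a simple stand-in for the Bayesian
    model averaging weights described in the paper.
    """
    w = 1.0 / (np.asarray(past_errors) + 1e-12)
    w = w / w.sum()
    return float(np.dot(w, forecasts)), w

# Toy example: three methods predicting the peak week of an outbreak.
forecasts = np.array([5.0, 7.0, 6.0])    # predicted peak week per method
past_mse  = np.array([1.0, 4.0, 2.0])    # historical error per method
combined, weights = superensemble(forecasts, past_mse)
print(combined, weights)
```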
Individual versus superensemble forecasts of seasonal influenza outbreaks in the United States
Kandula, Sasikiran; Shaman, Jeffrey
2017-01-01
Recent research has produced a number of methods for forecasting seasonal influenza outbreaks. However, differences among the predicted outcomes of competing forecast methods can limit their use in decision-making. Here, we present a method for reconciling these differences using Bayesian model averaging. We generated retrospective forecasts of peak timing, peak incidence, and total incidence for seasonal influenza outbreaks in 48 states and 95 cities using 21 distinct forecast methods, and combined these individual forecasts to create weighted-average superensemble forecasts. We compared the relative performance of these individual and superensemble forecast methods by geographic location, timing of forecast, and influenza season. We find that, overall, the superensemble forecasts are more accurate than any individual forecast method and less prone to producing a poor forecast. Furthermore, we find that these advantages increase when the superensemble weights are stratified according to the characteristics of the forecast or geographic location. These findings indicate that different competing influenza prediction systems can be combined into a single more accurate forecast product for operational delivery in real time. PMID:29107987
Procedure for the systematic orientation of digitised cranial models. Design and validation.
Bailo, M; Baena, S; Marín, J J; Arredondo, J M; Auría, J M; Sánchez, B; Tardío, E; Falcón, L
2015-12-01
Comparison of bony pieces requires that they are oriented systematically to ensure that homologous regions are compared. Few orientation methods are highly accurate; this is particularly true for methods applied to three-dimensional models obtained by surface scanning, a technique whose special features make it a powerful tool in forensic contexts. The aim of this study was to develop and evaluate a systematic, assisted orientation method for aligning three-dimensional cranial models relative to the Frankfurt Plane, which would produce accurate orientations independent of operator and anthropological expertise. The study sample comprised four crania of known age and sex. All the crania were scanned and reconstructed using an Eva Artec™ portable 3D surface scanner and, subsequently, the positions of certain characteristic landmarks were determined by three different operators using the Rhinoceros 3D surface modelling software. Intra-observer analysis showed a tendency for orientation to be more accurate when using the assisted method than when using conventional manual orientation. Inter-observer analysis showed that experienced evaluators achieved results with the assisted method that were at least as accurate as, if not more accurate than, those obtained using manual orientation, while inexperienced evaluators achieved more accurate orientation using the assisted method. The method tested is an innovative system capable of providing very precise, systematic and automatised spatial orientations of virtual cranial models relative to standardised anatomical planes independent of the operator and operator experience. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
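As a generic illustration of orienting a cranial model to the Frankfurt Plane, the sketch below builds a rotation matrix from three landmark coordinates (left and right porion, left orbitale); it is a simple construction under stated assumptions, not the assisted method validated in the paper.

```python
import numpy as np

def frankfurt_rotation(porion_l, porion_r, orbitale_l):
    """Rotation matrix mapping the plane through the three landmarks (an
    approximation of the Frankfurt Plane) onto the global XY plane, with the
    porion-porion axis aligned to X. Inputs are 3-vectors of coordinates."""
    p_l, p_r, o_l = map(np.asarray, (porion_l, porion_r, orbitale_l))
    x_axis = p_r - p_l
    x_axis = x_axis / np.linalg.norm(x_axis)
    normal = np.cross(x_axis, o_l - p_l)          # plane normal becomes new Z
    z_axis = normal / np.linalg.norm(normal)
    y_axis = np.cross(z_axis, x_axis)
    return np.vstack([x_axis, y_axis, z_axis])    # rows form the new basis

# Applying R to every vertex of the cranial mesh orients it to the plane:
# oriented = (R @ (vertices - vertices.mean(axis=0)).T).T
```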
Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.
2003-01-01
A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
Holzrichter, John F; Burnett, Greg C; Ng, Lawrence C
2013-05-21
A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
Holzrichter, John F.; Burnett, Greg C.; Ng, Lawrence C.
2007-10-16
A system and method for characterizing, synthesizing, and/or canceling out acoustic signals from inanimate sound sources is disclosed. Propagating wave electromagnetic sensors monitor excitation sources in sound producing systems, such as machines, musical instruments, and various other structures. Acoustical output from these sound producing systems is also monitored. From such information, a transfer function characterizing the sound producing system is generated. From the transfer function, acoustical output from the sound producing system may be synthesized or canceled. The methods disclosed enable accurate calculation of matched transfer functions relating specific excitations to specific acoustical outputs. Knowledge of such signals and functions can be used to effect various sound replication, sound source identification, and sound cancellation applications.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
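A minimal sketch of the barycentric-weights-by-linear-programming idea using scipy, assuming the library of previously observed states is given as a matrix of column vectors; it is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def barycentric_weights(library, query):
    """Find convex-combination (barycentric) weights w >= 0 with sum(w) = 1
    that minimise the L1 error |library @ w - query| via linear programming.

    library: (d, n) array whose columns are previously observed state vectors
    query:   (d,)  current state vector to be approximated
    """
    d, n = library.shape
    # Variables: [w (n), e (d)]; minimise sum(e) with |library @ w - query| <= e.
    c = np.concatenate([np.zeros(n), np.ones(d)])
    A_ub = np.block([[library, -np.eye(d)],
                     [-library, -np.eye(d)]])
    b_ub = np.concatenate([query, -query])
    A_eq = np.concatenate([np.ones(n), np.zeros(d)]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, None)] * (n + d)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n]

# A one-step prediction would then apply the same weights to the library's
# next-step states, in the spirit of the barycentric forecasting idea.
```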
Method for Accurately Calibrating a Spectrometer Using Broadband Light
NASA Technical Reports Server (NTRS)
Simmons, Stephen; Youngquist, Robert
2011-01-01
A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
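In the idealised two-beam case, the spectral pattern referred to above is a cosinusoidal fringe whose peaks fall at analytically known wavelengths; below is a short sketch of the predicted spectrum, assuming a flat source and a hypothetical 20 µm optical path difference.

```python
import numpy as np

# Idealised two-beam interference spectrum from an unbalanced Michelson
# interferometer with optical path difference `opd_nm` (same units as the
# wavelength). Comparing measured fringe positions against this prediction
# lets each CCD pixel's wavelength assignment be refined.
def fringe_spectrum(wavelengths_nm, opd_nm, source=1.0):
    return source * (1.0 + np.cos(2 * np.pi * opd_nm / wavelengths_nm))

wavelengths = np.linspace(400.0, 800.0, 2048)        # nominal pixel wavelengths
model = fringe_spectrum(wavelengths, opd_nm=20_000)  # 20 um path difference
# Peaks occur where opd / wavelength is an integer, giving many sharp,
# analytically known reference features across the band.
```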
Approximating high-dimensional dynamics by barycentric coordinates with linear programming.
Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to be fitted for relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, and allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the radial basis function model that is widely used. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
NASA Technical Reports Server (NTRS)
Vonderhaar, Thomas H.; Randel, David L.; Reinke, Donald L.; Stephens, Graeme L.; Ringerud, Mark A.; Combs, Cynthia L.; Greenwald, Thomas J.; Wittmeyer, Ian L.
1995-01-01
There is a well-documented requirement for a comprehensive and accurate global moisture data set to assist many important studies in atmospheric science. Currently, atmospheric water vapor measurements are made from a variety of sources including radiosondes, aircraft and surface observations, and in recent years, by various satellite instruments. Creating a global data set from a single measuring system produces results that are useful and accurate only in specific situations and/or areas. Therefore, an accurate global moisture data set has been derived from a combination of these measurement systems. Under a NASA peer-reviewed contract, STC-METSAT produced two 5-yr (1988-1992) global data sets. One is the total column (integrated) water vapor data set and the other, a global layered water vapor data set using a combination of radiosonde observations, Television and Infrared Observation Satellite (TIROS) Operational Vertical Sounder (TOVS), and Special Sensor Microwave/Imager (SSM/I) data sets. STC-METSAT also produced a companion, global, integrated liquid water data set. The complete data set (all three products) has been named NVAP, an acronym for NASA Water Vapor Project. STC-METSAT developed methods to process the data at a daily time scale and 1 x 1 deg spatial resolution.
ERIC Educational Resources Information Center
Mills, Myron L.
1988-01-01
A system developed for more efficient evaluation of graduate medical students' progress uses numerical scoring and a microcomputer database management system as an alternative to manual methods to produce accurate, objective, and meaningful summaries of resident evaluations. (Author/MSE)
A note on the accuracy of spectral method applied to nonlinear conservation laws
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang; Wong, Peter S.
1994-01-01
Fourier spectral method can achieve exponential accuracy both on the approximation level and for solving partial differential equations if the solutions are analytic. For a linear partial differential equation with a discontinuous solution, Fourier spectral method produces poor point-wise accuracy without post-processing, but still maintains exponential accuracy for all moments against analytic functions. In this note we assess the accuracy of Fourier spectral method applied to nonlinear conservation laws through a numerical case study. We find that the moments with respect to analytic functions are no longer very accurate. However the numerical solution does contain accurate information which can be extracted by a post-processing based on Gegenbauer polynomials.
A rapid enzymatic assay for high-throughput screening of adenosine-producing strains
Dong, Huina; Zu, Xin; Zheng, Ping; Zhang, Dawei
2015-01-01
Adenosine is a major local regulator of tissue function and industrially useful as a precursor for the production of medicinal nucleoside substances. High-throughput screening of adenosine overproducers is important for industrial microorganism breeding. An enzymatic assay of adenosine was developed by combining adenosine deaminase (ADA) with the indophenol method. ADA catalyzes the cleavage of adenosine to inosine and NH3; the latter can be accurately determined by the indophenol method. The assay system was optimized to deliver a good performance and could tolerate the addition of inorganic salts and many nutritional components to the assay mixtures. Adenosine could be accurately determined by this assay using 96-well microplates. Spike and recovery tests showed that this assay can accurately and reproducibly determine increases in adenosine in fermentation broth without any pretreatment to remove proteins and potentially interfering low-molecular-weight molecules. This assay was also applied to high-throughput screening for high adenosine-producing strains. The high selectivity and accuracy of the ADA assay provide rapid and high-throughput analysis of adenosine in large numbers of samples. PMID:25580842
USDA-ARS?s Scientific Manuscript database
The use of automated methods to estimate canopy cover (CC) from digital photographs has increased in recent years given its potential to produce accurate, fast and inexpensive CC measurements. Wide acceptance has been delayed because of the limitations of these methods. This work introduces a novel ...
Quantitating Organoleptic Volatile Phenols in Smoke-Exposed Vitis vinifera Berries.
Noestheden, Matthew; Thiessen, Katelyn; Dennis, Eric G; Tiet, Ben; Zandberg, Wesley F
2017-09-27
Accurate methods for quantitating volatile phenols (i.e., guaiacol, syringol, 4-ethylphenol, etc.) in smoke-exposed Vitis vinifera berries prior to fermentation are needed to predict the likelihood of perceptible smoke taint following vinification. Reported here is a complete, cross-validated analytical workflow to accurately quantitate free and glycosidically bound volatile phenols in smoke-exposed berries using liquid-liquid extraction, acid-mediated hydrolysis, and gas chromatography-tandem mass spectrometry. The reported workflow addresses critical gaps in existing methods for volatile phenols that impact quantitative accuracy, most notably the effect of injection port temperature and the variability in acid-mediated hydrolytic procedures currently used. Addressing these deficiencies will help the wine industry make accurate, informed decisions when producing wines from smoke-exposed berries.
Type- and Subtype-Specific Influenza Forecast.
Kandula, Sasikiran; Yang, Wan; Shaman, Jeffrey
2017-03-01
Prediction of the growth and decline of infectious disease incidence has advanced considerably in recent years. As these forecasts improve, their public health utility should increase, particularly as interventions are developed that make explicit use of forecast information. It is the task of the research community to increase the content and improve the accuracy of these infectious disease predictions. Presently, operational real-time forecasts of total influenza incidence are produced at the municipal and state level in the United States. These forecasts are generated using ensemble simulations depicting local influenza transmission dynamics, which have been optimized prior to forecast with observations of influenza incidence and data assimilation methods. Here, we explore whether forecasts targeted to predict influenza by type and subtype during 2003-2015 in the United States were more or less accurate than forecasts targeted to predict total influenza incidence. We found that forecasts separated by type/subtype generally produced more accurate predictions and, when summed, produced more accurate predictions of total influenza incidence. These findings indicate that monitoring influenza by type and subtype not only provides more detailed observational content but supports more accurate forecasting. More accurate forecasting can help officials better respond to and plan for current and future influenza activity. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Karell, Mara A; Langstaff, Helen K; Halazonetis, Demetrios J; Minghetti, Caterina; Frelat, Mélanie; Kranioti, Elena F
2016-09-01
The commingling of human remains often hinders forensic/physical anthropologists during the identification process, as there are limited methods to accurately sort these remains. This study investigates a new method for pair-matching, a common individualization technique, which uses digital three-dimensional models of bone: mesh-to-mesh value comparison (MVC). The MVC method digitally compares the entire three-dimensional geometry of two bones at once to produce a single value to indicate their similarity. Two different versions of this method, one manual and the other automated, were created and then tested for how well they accurately pair-matched humeri. Each version was assessed using sensitivity and specificity. The manual mesh-to-mesh value comparison method was 100 % sensitive and 100 % specific. The automated mesh-to-mesh value comparison method was 95 % sensitive and 60 % specific. Our results indicate that the mesh-to-mesh value comparison method overall is a powerful new tool for accurately pair-matching commingled skeletal elements, although the automated version still needs improvement.
Sanchez Lopez, Hector; Freschi, Fabio; Trakic, Adnan; Smith, Elliot; Herbert, Jeremy; Fuentes, Miguel; Wilson, Stephen; Liu, Limei; Repetto, Maurizio; Crozier, Stuart
2014-05-01
This article aims to present a fast, efficient and accurate multi-layer integral method (MIM) for the evaluation of complex spatiotemporal eddy currents in nonmagnetic and thin volumes of irregular geometries induced by arbitrary arrangements of gradient coils. The volume of interest is divided into a number of layers, wherein the thickness of each layer is assumed to be smaller than the skin depth and where one of the linear dimensions is much smaller than the remaining two dimensions. The diffusion equation of the current density is solved in both the time-harmonic and transient domains. The experimentally measured magnetic fields produced by the coil and the induced eddy currents as well as the corresponding time-decay constants were in close agreement with the results produced by the MIM. Relevant parameters such as power loss and force induced by the eddy currents in a split cryostat were simulated using the MIM. The proposed method is capable of accurately simulating the current diffusion process inside thin volumes, such as the magnet cryostat. The method permits a priori calculation of optimal pre-emphasis parameters. The MIM enables unified designs of gradient coil-magnet structures for an optimal mitigation of deleterious eddy current effects. Copyright © 2013 Wiley Periodicals, Inc.
Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods
Grossman, Mark W.; George, William A.
1987-01-01
A process for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. The method for doing this involves dissolving a precise amount of HgO which corresponds to a pre-determined amount of Hg desired in an electrolyte solution comprised of glacial acetic acid and H2O. The mercuric ions are then electrolytically reduced and plated onto a cathode producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg2Cl2. The method for doing this involves dissolving a precise amount of Hg2Cl2 in an electrolyte solution comprised of concentrated HCl and H2O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire producing the required, pre-determined quantity of Hg.
Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods
Grossman, M.W.; George, W.A.
1987-07-07
A process is described for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. The method for doing this involves dissolving a precise amount of HgO which corresponds to a pre-determined amount of Hg desired in an electrolyte solution comprised of glacial acetic acid and H2O. The mercuric ions are then electrolytically reduced and plated onto a cathode producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg2Cl2. The method for doing this involves dissolving a precise amount of Hg2Cl2 in an electrolyte solution comprised of concentrated HCl and H2O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire producing the required, pre-determined quantity of Hg. 1 fig.
Development of higher-order modal methods for transient thermal and structural analysis
NASA Technical Reports Server (NTRS)
Camarda, Charles J.; Haftka, Raphael T.
1989-01-01
A force-derivative method which produces higher-order modal solutions to transient problems is evaluated. These higher-order solutions converge to an accurate response using fewer degrees-of-freedom (eigenmodes) than lower-order methods such as the mode-displacement or mode-acceleration methods. Results are presented for non-proportionally damped structural problems as well as thermal problems modeled by finite elements.
Mayne, Terence P; Paskaranandavadivel, Niranchan; Erickson, Jonathan C; OGrady, Gregory; Cheng, Leo K; Angeli, Timothy R
2018-02-01
High-resolution mapping of gastrointestinal (GI) slow waves is a valuable technique for research and clinical applications. Interpretation of high-resolution GI mapping data relies on animations of slow wave propagation, but current methods remain rudimentary, pixelated electrode-activation animations. This study aimed to develop improved methods of visualizing high-resolution slow wave recordings that increase ease of interpretation. The novel method of "wavefront-orientation" interpolation was created to account for the planar movement of the slow wave wavefront, negate any need for distance calculations, remain robust in atypical wavefronts (i.e., dysrhythmias), and produce an appropriate interpolation boundary. The wavefront-orientation method determines the orthogonal wavefront direction and calculates interpolated values as the mean slow wave activation-time (AT) of the pair of linearly adjacent electrodes along that direction. Stairstep upsampling increased smoothness and clarity. Animation accuracy of 17 human high-resolution slow wave recordings (64-256 electrodes) was verified by visual comparison to the prior method, showing a clear improvement in wave smoothness that enabled more accurate interpretation of propagation, as confirmed by an assessment of clinical applicability performed by eight GI clinicians. Quantitatively, the new method produced accurate interpolation values compared to experimental data (mean difference 0.02 ± 0.05 s) and was accurate when applied solely to dysrhythmic data (0.02 ± 0.06 s), both within the error in manual AT marking (mean 0.2 s). Mean interpolation processing time was 6.0 s per wave. These novel methods provide a validated visualization platform that will improve analysis of high-resolution GI mapping in research and clinical translation.
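As we read the description, an interpolated value is the mean activation time (AT) of the two linearly adjacent electrodes along the direction orthogonal to the local wavefront. A loose sketch of that idea on a regular electrode grid is given below; the neighbour-selection rule, the gradient-based direction estimate, and all names are our own assumptions, not the authors' implementation.

```python
import numpy as np

def wavefront_orientation_fill(at, i, j):
    """Fill a missing activation time (AT) at electrode grid position (i, j).

    at: 2-D array of ATs in seconds, with np.nan marking missing electrodes.
    The local AT gradient approximates the propagation direction (orthogonal to
    the wavefront); the two linearly adjacent electrodes along that direction
    are averaged, loosely following the description in the abstract.
    """
    filled = np.nan_to_num(at, nan=float(np.nanmean(at)))
    gy, gx = np.gradient(filled)                      # d(AT)/d(row), d(AT)/d(col)
    angle = np.arctan2(gy[i, j], gx[i, j])
    di = int(round(float(np.sin(angle))))             # snap direction to the grid
    dj = int(round(float(np.cos(angle))))
    neighbours = []
    for sign in (+1, -1):
        ni, nj = i + sign * di, j + sign * dj
        if 0 <= ni < at.shape[0] and 0 <= nj < at.shape[1] and np.isfinite(at[ni, nj]):
            neighbours.append(at[ni, nj])
    return float(np.mean(neighbours)) if neighbours else float("nan")
```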
Introducing GAMER: A fast and accurate method for ray-tracing galaxies using procedural noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groeneboom, N. E.; Dahle, H., E-mail: nicolaag@astro.uio.no
2014-03-10
We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.
Introducing GAMER: A Fast and Accurate Method for Ray-tracing Galaxies Using Procedural Noise
NASA Astrophysics Data System (ADS)
Groeneboom, N. E.; Dahle, H.
2014-03-01
We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.
Method for non-invasive detection of ocular melanoma
Lambrecht, Richard M.; Packer, Samuel
1984-01-01
An apparatus and method are described for diagnosing ocular cancer that is both non-invasive and accurate. The apparatus comprises two radiation detectors positioned before each of the patient's eyes, which measure the radiation level produced in each eye after the administration of a tumor-localizing radiopharmaceutical such as gallium-67.
Method for non-invasive detection of ocular melanoma
Lambrecht, R.M.; Packer, S.
1984-10-30
An apparatus and method is disclosed for diagnosing ocular cancer that is both non-invasive and accurate. The apparatus comprises two radiation detectors positioned before each of the patient's eyes which will measure the radiation level produced in each eye after the administration of a tumor-localizing radiopharmaceutical such as gallium-67. 2 figs.
USDA-ARS's Scientific Manuscript database
Stable hydrogen isotope methodology is used in nutrition studies to measure growth, breast milk intake, and energy requirement. Isotope ratio MS is the best instrumentation to measure the stable hydrogen isotope ratios in physiological fluids. Conventional methods to convert physiological fluids to ...
Output-Adaptive Tetrahedral Cut-Cell Validation for Sonic Boom Prediction
NASA Technical Reports Server (NTRS)
Park, Michael A.; Darmofal, David L.
2008-01-01
A cut-cell approach to Computational Fluid Dynamics (CFD) that utilizes the median dual of a tetrahedral background grid is described. The discrete adjoint is also calculated, which permits adaptation based on improving the calculation of a specified output (off-body pressure signature) in supersonic inviscid flow. These predicted signatures are compared to wind tunnel measurements on and off the configuration centerline 10 body lengths below the model to validate the method for sonic boom prediction. Accurate mid-field sonic boom pressure signatures are calculated with the Euler equations without the use of hybrid grid or signature propagation methods. Highly-refined, shock-aligned anisotropic grids were produced by this method from coarse isotropic grids created without prior knowledge of shock locations. A heuristic reconstruction limiter provided stable flow and adjoint solution schemes while producing similar signatures to Barth-Jespersen and Venkatakrishnan limiters. The use of cut-cells with an output-based adaptive scheme completely automated this accurate prediction capability after a triangular mesh is generated for the cut surface. This automation drastically reduces the manual intervention required by existing methods.
Comparison of salivary collection and processing methods for quantitative HHV-8 detection.
Speicher, D J; Johnson, N W
2014-10-01
Saliva is a proven diagnostic fluid for the qualitative detection of infectious agents, but the accuracy of viral load determinations is unknown. Stabilising fluids impede nucleic acid degradation, compared with collection onto ice and then freezing, and we have shown that the DNA Genotek P-021 prototype kit (P-021) can produce high-quality DNA after 14 months of storage at room temperature. Here we evaluate the quantitative capability of 10 collection/processing methods. Unstimulated whole mouth fluid was spiked with a mixture of HHV-8 cloned constructs, 10-fold serial dilutions were produced, and samples were extracted and then examined with quantitative PCR (qPCR). Calibration curves were compared by linear regression and qPCR dynamics. All methods extracted with commercial spin columns produced linear calibration curves with large dynamic range and gave accurate viral loads. Ethanol precipitation of the P-021 does not produce a linear standard curve, and virus is lost in the cell pellet. DNA extractions from the P-021 using commercial spin columns produced linear standard curves with wide dynamic range and excellent limit of detection. When extracted with spin columns, the P-021 enables accurate viral loads down to 23 copies μl⁻¹ DNA. The quantitative and long-term storage capability of this system makes it ideal for study of salivary DNA viruses in resource-poor settings. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
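Viral-load quantitation from a serial dilution of cloned constructs is normally read off a Ct-versus-log10(copies) standard curve. The short sketch below, using invented Ct values rather than the paper's data, shows the linear fit and back-calculation; an ideally efficient assay gives a slope near -3.32.

```python
import numpy as np

# Hypothetical 10-fold dilution series of a cloned construct (copies per reaction)
copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2, 1e1])
ct = np.array([17.1, 20.5, 23.8, 27.2, 30.6, 34.0])   # invented Ct values

slope, intercept = np.polyfit(np.log10(copies), ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0                # ~1.0 means 100% efficient

def copies_from_ct(ct_sample):
    """Back-calculate copy number for an unknown sample from its Ct."""
    return 10 ** ((ct_sample - intercept) / slope)

print(f"slope={slope:.2f}, efficiency={efficiency:.2%}")
print(f"estimated copies: {copies_from_ct(25.0):.0f}")
```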
A segmentation editing framework based on shape change statistics
NASA Astrophysics Data System (ADS)
Mostapha, Mahmoud; Vicory, Jared; Styner, Martin; Pizer, Stephen
2017-02-01
Segmentation is a key task in medical image analysis because its accuracy significantly affects successive steps. Automatic segmentation methods often produce inadequate segmentations, which require the user to manually edit the produced segmentation slice by slice. Because editing is time-consuming, an editing tool that enables the user to produce accurate segmentations by only drawing a sparse set of contours is needed. This paper describes such a framework as applied to a single object. Constrained by the additional information provided by the manually segmented contours, the proposed framework utilizes object shape statistics to transform the failed automatic segmentation to a more accurate version. Instead of modeling the object shape, the proposed framework utilizes shape change statistics that were generated to capture the object deformation from the failed automatic segmentation to its corresponding correct segmentation. An optimization procedure was used to minimize an energy function that consists of two terms, an external contour match term and an internal shape change regularity term. The high accuracy of the proposed segmentation editing approach was confirmed by testing it on a simulated data set based on 10 in-vivo infant magnetic resonance brain data sets using four similarity metrics. Segmentation results indicated that our method can provide efficient and adequately accurate segmentations (Dice segmentation accuracy increase of 10%), with very sparse contours (only 10%), which is promising for greatly decreasing the work expected of the user.
An object tracking method based on guided filter for night fusion image
NASA Astrophysics Data System (ADS)
Qian, Xiaoyan; Wang, Yuedong; Han, Lei
2016-01-01
Online object tracking is a challenging problem as it entails learning an effective model to account for appearance change caused by intrinsic and extrinsic factors. In this paper, we propose a novel online object tracking method based on the guided image filter for accurate and robust night fusion image tracking. Firstly, frame difference is applied to produce the coarse target, which helps to generate observation models. Constrained by these models and the local source image, the guided filter generates a sufficient and accurate foreground target. Then accurate boundaries of the target can be extracted from the detection results. Finally, timely updating of the observation models helps to avoid tracking drift. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods.
Tyramine and phenylethylamine production among lactic acid bacteria isolated from wine.
Landete, José María; Pardo, Isabel; Ferrer, Sergi
2007-04-20
The ability of wine lactic acid bacteria to produce tyramine and phenylethylamine was investigated by biochemical and genetic methods. An easy and accurate plate medium was developed to detect tyramine-producer strains, and a specific PCR assay that detects the presence of tdc gene was employed. All strains possessing the tdc gene were shown to produce tyramine and phenylethylamine. Wines containing high quantities of tyramine and phenylethylamine were found to contain Lactobacillus brevis or Lactobacillus hilgardii. The main tyramine producer was L. brevis. The ability to produce tyramine was absent or infrequent in the rest of the analysed wine species.
Myocardial strains from 3D displacement encoded magnetic resonance imaging
2012-01-01
Background The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), makes detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and to resolve transmural strain variations. Conclusions Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strain values agree with previously reported myocardial strains in normal human hearts. PMID:22533791
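For readers who want the mechanics of the pipeline, the sketch below fits a local polynomial model to displacement samples (first order only, for brevity; the paper allows a chosen polynomial order) and converts the fitted displacement gradient into the Green-Lagrange strain tensor. The sample points and field are synthetic; only the general recipe follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 3))          # material positions in a local patch
true_grad = np.array([[0.02, 0.01, 0.00],          # synthetic, spatially constant du/dX
                      [0.00, -0.03, 0.01],
                      [0.01, 0.00, 0.02]])
U = X @ true_grad.T + 0.001 * rng.standard_normal((200, 3))  # noisy displacements

# First-order polynomial model u_i(X) = a_i + G_i . X, fitted by least squares.
A = np.hstack([np.ones((len(X), 1)), X])            # design matrix [1, x, y, z]
coef, *_ = np.linalg.lstsq(A, U, rcond=None)        # row 0: offsets, rows 1-3: gradient
G = coef[1:].T                                      # displacement gradient du_i/dX_j

F = np.eye(3) + G                                   # deformation gradient
E = 0.5 * (F.T @ F - np.eye(3))                     # Green-Lagrange strain tensor
print(np.round(E, 4))
```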
NASA Astrophysics Data System (ADS)
Peng, Yahui; Jiang, Yulei; Soylu, Fatma N.; Tomek, Mark; Sensakovic, William; Oto, Aytekin
2012-02-01
Quantitative analysis of multi-parametric magnetic resonance (MR) images of the prostate, including T2-weighted (T2w) and diffusion-weighted (DW) images, requires accurate image registration. We compared two registration methods between T2w and DW images. We collected pre-operative MR images of 124 prostate cancer patients (68 patients scanned with a GE scanner and 56 with Philips scanners). A landmark-based rigid registration was done based on six prostate landmarks in both T2w and DW images identified by a radiologist. Independently, a researcher manually registered the same images. A radiologist visually evaluated the registration results by using a 5-point ordinal scale of 1 (worst) to 5 (best). The Wilcoxon signed-rank test was used to determine whether the radiologist's ratings of the results of the two registration methods were significantly different. Results demonstrated that both methods were accurate: the average ratings were 4.2, 3.3, and 3.8 for GE, Philips, and all images, respectively, for the landmark-based method; and 4.6, 3.7, and 4.2, respectively, for the manual method. The manual registration results were more accurate than the landmark-based registration results (p < 0.0001 for GE, Philips, and all images). Therefore, the manual method produces more accurate registration between T2w and DW images than the landmark-based method.
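The comparison of paired ordinal ratings described here is a textbook use of the Wilcoxon signed-rank test; a minimal sketch with invented 5-point ratings (not the study's data) is shown below using SciPy.

```python
from scipy.stats import wilcoxon

# Invented paired ratings (1 = worst, 5 = best) for the same cases
landmark_ratings = [4, 3, 5, 4, 3, 4, 2, 5, 3, 3, 4, 3]
manual_ratings   = [5, 4, 5, 4, 4, 5, 3, 5, 4, 5, 5, 4]

# Paired, non-parametric comparison of the two registration methods.
stat, p_value = wilcoxon(manual_ratings, landmark_ratings)
print(f"W={stat}, p={p_value:.4f}")
```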
Using Vision Metrology System for Quality Control in Automotive Industries
NASA Astrophysics Data System (ADS)
Mostofi, N.; Samadzadegan, F.; Roohy, Sh.; Nozari, M.
2012-07-01
The need for more accurate measurements at different stages of industrial applications, such as design, production and installation, is the main reason industry has been encouraged to adopt industrial photogrammetry (Vision Metrology Systems). Given the main advantages of photogrammetric methods, such as greater economy, a high level of automation, the capability of non-contact measurement, more flexibility and high accuracy, this approach competes well with other traditional industrial methods. For industries that make objects from a main reference model without having any mathematical model of it, the main problem for producers is evaluation of the production line. This problem becomes more complicated when both the reference and the product exist only as physical objects, so they can be compared only by direct measurement. In such cases, producers build fixtures that fit the reference with limited accuracy; in practical reports the available precision is sometimes no better than millimetres. We used a non-metric high-resolution digital camera for this investigation, and the case study examined in this paper is an automobile chassis. In this research, a stable photogrammetric network was designed for measuring the industrial object (both reference and product), and the differences between the reference and product objects were then obtained using bundle adjustment and self-calibration methods. These differences help the producer improve the production work flow and deliver more accurate products. The results of this research demonstrate the high potential of the proposed method in industrial fields. The presented results show the high efficiency and reliability of this method in terms of RMSE; the RMSE achieved for this case study is smaller than 200 microns, which demonstrates the high capability of the implemented approach.
A. M. S. Smith; N. A. Drake; M. J. Wooster; A. T. Hudak; Z. A. Holden; C. J. Gibbons
2007-01-01
Accurate production of regional burned area maps are necessary to reduce uncertainty in emission estimates from African savannah fires. Numerous methods have been developed that map burned and unburned surfaces. These methods are typically applied to coarse spatial resolution (1 km) data to produce regional estimates of the area burned, while higher spatial resolution...
Accurate Finite Difference Algorithms
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1996-01-01
Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
MIVOC method with temperature control
NASA Astrophysics Data System (ADS)
Takasugi, W.; Wakaisami, M.; Sasaki, N.; Sakuma, T.; Yamamoto, M.; Kitagawa, A.; Muramatsu, M.
2010-02-01
The Heavy Ion Medical Accelerator in Chiba at the National Institute of Radiological Sciences has been used for cancer therapy, physics, and biology experiments since 1994. Its ion sources produce carbon ions for cancer therapy. They also produce various ions (H+-Xe21+) for physics and biology experiments. Most ion species are produced from gases by an 18 GHz electron cyclotron resonance ion source. However, some ion species are difficult to produce from stable and safe gases. Such ion species are produced by the sputtering method. However, it is necessary to reduce the material consumption rate as much as possible in the case of rare and expensive stable isotopes. We have selected the "metal ions from volatile compounds" (MIVOC) method as a means to solve this problem. We tested a variety of compounds. Since each compound has a suitable temperature to obtain the optimum vapor pressure, we have developed an accurate temperature control system. We have produced ions such as 58Fe9+, Co9+, Mg5+, Ti10+, Si5+, and Ge12+ with the temperature control.
Efficacy of curtailment announcements as a predictor of lumber supply
Henry Spelter
2001-01-01
A practical method for tracking the effect of curtailment announcements on lumber supply is described and tested. Combining announcements of closures and curtailments with mill capacities enables the creation of accurate forward-looking assessments of lumber supply 1 to 2 months into the future. For three American and Canadian lumber- producing regions, the method...
USDA-ARS's Scientific Manuscript database
Increased use of humic substances in agriculture has generated intense interest among producers, consumers, and regulators for an accurate and reliable method for quantification of humic (HA) and fulvic acids (FA) in raw ores and products. Here we present a thoroughly validated method, the Humic Pro...
Waterman, Kenneth C; Swanson, Jon T; Lippold, Blake L
2014-10-01
Three competing mathematical fitting models (a point-by-point estimation method, a linear fit method, and an isoconversion method) of chemical stability (related substance growth) when using high temperature data to predict room temperature shelf-life were employed in a detailed comparison. In each case, complex degradant formation behavior was analyzed by both exponential and linear forms of the Arrhenius equation. A hypothetical reaction was used where a drug (A) degrades to a primary degradant (B), which in turn degrades to a secondary degradation product (C). Calculated data with the fitting models were compared with the projected room-temperature shelf-lives of B and C, using one to four time points (in addition to the origin) for each of three accelerated temperatures. Isoconversion methods were found to provide more accurate estimates of shelf-life at ambient conditions. Of the methods for estimating isoconversion, bracketing the specification limit at each condition produced the best estimates and was considerably more accurate than when extrapolation was required. Good estimates of isoconversion produced similar shelf-life estimates fitting either linear or nonlinear forms of the Arrhenius equation, whereas poor isoconversion estimates favored one method or the other depending on which condition was most in error. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
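As a concrete illustration of the isoconversion idea, the sketch below takes hypothetical times-to-specification-limit at three accelerated temperatures, fits the Arrhenius equation in its linearized form, and extrapolates a room-temperature shelf-life. The numbers are invented; only the general procedure mirrors the one compared in the paper.

```python
import numpy as np

R = 8.314  # J/(mol K)

# Hypothetical isoconversion times (days) to reach the degradant specification limit
temps_C = np.array([50.0, 60.0, 70.0])
t_iso_days = np.array([180.0, 60.0, 22.0])

# Arrhenius in linear form: ln(k) = ln(A) - Ea/(R*T), taking k as 1/t_iso.
inv_T = 1.0 / (temps_C + 273.15)
ln_k = np.log(1.0 / t_iso_days)
slope, intercept = np.polyfit(inv_T, ln_k, 1)

Ea = -slope * R                                  # apparent activation energy (J/mol)
t_shelf_25C = 1.0 / np.exp(intercept + slope / 298.15)
print(f"Ea ~ {Ea / 1000:.1f} kJ/mol, projected 25 C shelf-life ~ {t_shelf_25C:.0f} days")
```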
Spelleken, E; Crowe, S B; Sutherland, B; Challens, C; Kairn, T
2018-03-01
Gafchromic EBT3 film is widely used for patient-specific quality assurance of complex treatment plans. Film dosimetry techniques commonly involve the use of transmission scanning to produce TIFF files, which are analysed using a non-linear calibration relationship between the dose and red channel net optical density (netOD). Numerous film calibration techniques featured in the literature have not been independently verified or evaluated. A range of previously published film dosimetry techniques were re-evaluated, to identify whether these methods produce better results than the commonly-used non-linear, netOD method. EBT3 film was irradiated at calibration doses between 0 and 4000 cGy and 25 pieces of film were irradiated at 200 cGy to evaluate uniformity. The film was scanned using two different scanners: the Epson Perfection V800 and the Epson Expression 10000XL. Calibration curves, uncertainty in the fit of the curve, overall uncertainty and uniformity were calculated following the methods described by the different calibration techniques. It was found that protocols based on a conventional film dosimetry technique produced results that were accurate and uniform to within 1%, while some of the unconventional techniques produced much higher uncertainties (> 25% for some techniques). Some of the uncommon methods produced reliable results when irradiated to the standard treatment doses (< 400 cGy); however, none could be recommended as an efficient or accurate replacement for a common film analysis technique which uses transmission scanning, red colour channel analysis, netOD and a non-linear calibration curve for measuring doses up to 4000 cGy when using EBT3 film.
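For context, a common EBT3 workflow of the kind the authors call conventional converts red-channel pixel values to net optical density and fits a non-linear dose calibration. The sketch below uses one frequently cited functional form, dose = a*netOD + b*netOD^n, with invented pixel values; it is not the authors' specific protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def net_od(pv_exposed, pv_unexposed):
    """Red-channel net optical density from transmission-scan pixel values."""
    return np.log10(pv_unexposed / pv_exposed)

def dose_model(netod, a, b, n):
    return a * netod + b * netod ** n

# Invented calibration data: delivered dose (cGy) and mean red-channel pixel values
dose_cGy = np.array([0, 50, 100, 200, 400, 800, 1600, 3200])
pv = np.array([44000, 41500, 39500, 36300, 31500, 25500, 19300, 13900], float)

netod = net_od(pv, pv[0])
params, _ = curve_fit(dose_model, netod[1:], dose_cGy[1:], p0=[1000.0, 4000.0, 2.5])

# Convert a measured film region to dose with the fitted calibration.
print(dose_model(net_od(np.array([30000.0]), pv[0]), *params))
```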
NASA Astrophysics Data System (ADS)
Herrington, Jason S.; Hays, Michael D.
2012-08-01
There is high demand for accurate and reliable airborne carbonyl measurement methods due to the human and environmental health impacts of carbonyls and their effects on atmospheric chemistry. Standardized 2,4-dinitrophenylhydrazine (DNPH)-based sampling methods are frequently applied for measuring gaseous carbonyls in the atmospheric environment. However, there are multiple short-comings associated with these methods that detract from an accurate understanding of carbonyl-related exposure, health effects, and atmospheric chemistry. The purpose of this brief technical communication is to highlight these method challenges and their influence on national ambient monitoring networks, and to provide a logical path forward for accurate carbonyl measurement. This manuscript focuses on three specific carbonyl compounds of high toxicological interest—formaldehyde, acetaldehyde, and acrolein. Further method testing and development, the revision of standardized methods, and the plausibility of introducing novel technology for these carbonyls are considered elements of the path forward. The consolidation of this information is important because it seems clear that carbonyl data produced utilizing DNPH-based methods are being reported without acknowledgment of the method short-comings or how to best address them.
A Method of Generating DEM from DSM Based on Airborne InSAR Data
NASA Astrophysics Data System (ADS)
Lu, W.; Zhang, J.; Xue, G.; Wang, C.
2018-04-01
Traditional terrestrial survey methods for acquiring a DEM cannot meet the requirement of acquiring large quantities of data in real time, whereas a DSM can be obtained quickly by dual-antenna synthetic aperture radar interferometry, and generating the DEM from that DSM is faster and more accurate. It is therefore important to derive the DEM from the DSM based on airborne InSAR data. This paper presents a method for accurately generating a DEM from a DSM. Two steps are applied to acquire an accurate DEM. First, when the DSM is generated by interferometry, unavoidable factors such as layover and shadow produce gross errors that affect data accuracy, so an adaptive threshold segmentation method is adopted to remove the gross errors, with the threshold selected according to the interferometric coherence. Second, the DEM is generated by the progressive triangulated irregular network densification filtering algorithm. Finally, experimental results are compared with existing high-precision DEM results. The results show that this method can effectively filter out buildings, vegetation and other objects to obtain a high-precision DEM.
High-quality animation of 2D steady vector fields.
Lefer, Wilfrid; Jobard, Bruno; Leduc, Claire
2004-01-01
Simulators for dynamic systems are now widely used in various application areas and raise the need for effective and accurate flow visualization techniques. Animation allows us to depict direction, orientation, and velocity of a vector field accurately. This paper extends a former proposal for a new approach to produce perfectly cyclic and variable-speed animations for 2D steady vector fields (see [1] and [2]). A complete animation of an arbitrary number of frames is encoded in a single image. The animation can be played using the color table animation technique, which is very effective even on low-end workstations. A cyclic set of textures can be produced as well and then encoded in a common animation format or used for texture mapping on 3D objects. As compared to other approaches, the method presented in this paper produces smoother animations and is more effective, both in memory requirements to store the animation, and in computation time.
Diazo processing of LANDSAT imagery: A low-cost instructional technique
NASA Technical Reports Server (NTRS)
Lusch, D. P.
1981-01-01
Diazo processing of LANDSAT imagery is a relatively simple and cost-effective method of producing enhanced renditions of the visual LANDSAT products. This technique is capable of producing a variety of image enhancements which have value in a teaching laboratory environment. Additionally, with the appropriate equipment, applications research which relies on accurate and repeatable results is possible. Exposure and development equipment options, diazo materials, and enhancement routines are discussed.
NASA Astrophysics Data System (ADS)
Niebuhr, Cole
2018-04-01
Papers published in the astronomical community, particularly in the field of double star research, often contain plots that display the positions of the component stars relative to each other on a Cartesian coordinate plane. Due to the complexities of plotting a three-dimensional orbit into a two-dimensional image, it is often difficult to include an accurate reproduction of the orbit for comparison purposes. Methods to circumvent this obstacle do exist; however, many of these protocols result in low-quality blurred images or require specific and often expensive software. Here, a method is reported using Microsoft Paint and Microsoft Excel to produce high-quality images with an accurate reproduction of a partial orbit.
An image-space parallel convolution filtering algorithm based on shadow map
NASA Astrophysics Data System (ADS)
Li, Hua; Yang, Huamin; Zhao, Jianping
2017-07-01
Shadow mapping is commonly used in real-time rendering. In this paper, we present an accurate and efficient method for generating soft shadows from planar area lights. First, the method generates a depth map from the light's view and analyzes the depth-discontinuity areas as well as shadow boundaries. Then these areas are encoded as binary values in a texture map called the binary light-visibility map, and a GPU-based parallel convolution filtering algorithm is applied to smooth out the boundaries with a box filter. Experiments show that our algorithm is an effective shadow-map-based method that produces perceptually accurate soft shadows in real time with more detail at shadow boundaries compared with previous work.
Application of Response Surface Methods To Determine Conditions for Optimal Genomic Prediction
Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.
2017-01-01
An epistatic genetic architecture can have a significant impact on prediction accuracies of genomic prediction (GP) methods. Machine learning methods predict traits comprised of epistatic genetic architectures more accurately than statistical methods based on additive mixed linear models. The differences between these types of GP methods suggest a diagnostic for revealing genetic architectures underlying traits of interest. In addition to genetic architecture, the performance of GP methods may be influenced by the sample size of the training population, the number of QTL, and the proportion of phenotypic variability due to genotypic variability (heritability). Possible values for these factors and the number of combinations of the factor levels that influence the performance of GP methods can be large. Thus, efficient methods for identifying combinations of factor levels that produce the most accurate GPs are needed. Herein, we employ response surface methods (RSMs) to find the experimental conditions that produce the most accurate GPs. We illustrate RSM with an example of simulated doubled haploid populations and identify the combination of factors that maximizes the difference between prediction accuracies of best linear unbiased prediction (BLUP) and support vector machine (SVM) GP methods. The greatest impact on the response is due to the genetic architecture of the population, heritability of the trait, and the sample size. When epistasis is responsible for all of the genotypic variance and heritability is equal to one and the sample size of the training population is large, the advantage of using the SVM method vs. the BLUP method is greatest. However, except for values close to the maximum, most of the response surface shows little difference between the methods. We also determined that the conditions resulting in the greatest prediction accuracy for BLUP occurred when genetic architecture consists solely of additive effects, and heritability is equal to one. PMID:28720710
ICE-COLA: fast simulations for weak lensing observables
NASA Astrophysics Data System (ADS)
Izard, Albert; Fosalba, Pablo; Crocce, Martin
2018-01-01
Approximate methods to full N-body simulations provide a fast and accurate solution to the development of mock catalogues for the modelling of galaxy clustering observables. In this paper we extend ICE-COLA, based on an optimized implementation of the approximate COLA method, to produce weak lensing maps and halo catalogues in the light-cone using an integrated and self-consistent approach. We show that despite the approximate dynamics, the catalogues thus produced enable an accurate modelling of weak lensing observables one decade beyond the characteristic scale where the growth becomes non-linear. In particular, we compare ICE-COLA to the MICE Grand Challenge N-body simulation for some fiducial cases representative of upcoming surveys and find that, for sources at redshift z = 1, their convergence power spectra agree to within 1 per cent up to high multipoles (i.e. of order 1000). The corresponding shear two point functions, ξ+ and ξ-, yield similar accuracy down to 2 and 20 arcmin respectively, while tangential shear around a z = 0.5 lens sample is accurate down to 4 arcmin. We show that such accuracy is stable against an increased angular resolution of the weak lensing maps. Hence, this opens the possibility of using approximate methods for the joint modelling of galaxy clustering and weak lensing observables and their covariance in ongoing and future galaxy surveys.
Development of a Global Multilayered Cloud Retrieval System
NASA Technical Reports Server (NTRS)
Huang, J.; Minnis, P.; Lin, B.; Yi, Y.; Ayers, J. K.; Khaiyer, M. M.; Arduini, R.; Fan, T.-F
2004-01-01
A more rigorous multilayered cloud retrieval system (MCRS) has been developed to improve the determination of high cloud properties in multilayered clouds. The MCRS attempts a more realistic interpretation of the radiance field than earlier methods because it explicitly resolves the radiative transfer that would produce the observed radiances. A two-layer cloud model was used to simulate multilayered cloud radiative characteristics. Despite the use of a simplified two-layer cloud reflectance parameterization, the MCRS clearly produced a more accurate retrieval of ice water path than simple differencing techniques used in the past. More satellite data and ground observations have to be used to test the MCRS. The MCRS methods are quite appropriate for interpreting the radiances when the high cloud has a relatively large optical depth (τ_I > 2). For thinner ice clouds, a more accurate retrieval might be possible using infrared methods. Selection of an ice cloud retrieval and a variety of other issues must be explored before a complete global application of this technique can be implemented. Nevertheless, the initial results look promising.
A two dimensional power spectral estimate for some nonstationary processes. M.S. Thesis
NASA Technical Reports Server (NTRS)
Smith, Gregory L.
1989-01-01
A two-dimensional estimate for the power spectral density of a nonstationary process is being developed. The estimate will be applied to helicopter noise data, which are clearly nonstationary. The acoustic pressure from the isolated main rotor and isolated tail rotor is known to be periodically correlated (PC), and the combined noise from the main and tail rotors is assumed to be correlation autoregressive (CAR). The results of this nonstationary analysis will be compared with the current method of assuming that the data are stationary and analyzing them as such. Another method of analysis is to introduce a random phase shift into the data, as shown by Papoulis, to produce a time history which can then be accurately modeled as stationary. This method will also be investigated for the helicopter data. A method used to determine the period of a PC process when the period is not known is discussed. The period of a PC process must be known in order to produce an accurate spectral representation for the process. The spectral estimate is developed. The bias and variability of the estimate are also discussed. Finally, the current method for analyzing nonstationary data is compared to that of using a two-dimensional spectral representation. In addition, the method of phase shifting the data is examined.
NMR spectroscopy for assessing lipid oxidation
USDA-ARS's Scientific Manuscript database
Although lipid oxidation involves a variety of chemical reactions that produce numerous substances, most traditional methods for assessing lipid oxidation measure only one kind of oxidation product. For this reason, in general, one indicator of oxidation is not enough to accurately describe the oxidati...
Mahajan, Richi V; Saran, Saurabh; Saxena, Rajendra K; Srivastava, Ayush K
2013-04-01
l-Asparaginase-producing microbes are conventionally screened on phenol red l-asparagine-containing plates. However, sometimes the contrast of the zone obtained (between yellow and pink) is not very sharp and distinct. In the present investigation, an improved method for screening of the microorganisms producing extracellular l-asparaginase is reported wherein bromothymol blue (BTB) is incorporated as pH indicator in l-asparagine-containing medium instead of phenol red. Plates containing BTB at acidic pH are yellow and turn dark blue at alkaline pH. Thus, a dense dark blue zone is formed around microbial colonies producing l-asparaginase, differentiating between enzyme producers and non-producers. The present method is more sensitive and accurate than the conventional method for screening of both fungi and bacteria producing extracellular l-asparaginase. Furthermore, BTB gives a transient green colour at neutral pH (7.0) and dark blue colour at higher pH 8.0-9.0, indicating the potency of the microorganism for l-asparaginase production. © 2013 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.
Accommodating Uncertainty in Prior Distributions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Picard, Richard Roy; Vander Wiel, Scott Alan
2017-01-19
A fundamental premise of Bayesian methodology is that a priori information is accurately summarized by a single, precisely defined prior distribution. In many cases, especially involving informative priors, this premise is false, and the (mis)application of Bayes methods produces posterior quantities whose apparent precisions are highly misleading. We examine the implications of uncertainty in prior distributions, and present graphical methods for dealing with them.
Digital Signal Processing Methods for Ultrasonic Echoes.
Sinding, Kyle; Drapaca, Corina; Tittmann, Bernhard
2016-04-28
Digital signal processing has become an important component of data analysis needed in industrial applications. In particular, for ultrasonic thickness measurements the signal to noise ratio plays a major role in the accurate calculation of the arrival time. For this application a band pass filter is not sufficient since the noise level cannot be significantly decreased such that a reliable thickness measurement can be performed. This paper demonstrates the abilities of two regularization methods - total variation and Tikhonov - to filter acoustic and ultrasonic signals. Both of these methods are compared to a frequency based filtering for digitally produced signals as well as signals produced by ultrasonic transducers. This paper demonstrates the ability of the total variation and Tikhonov filters to accurately recover signals from noisy acoustic signals faster than a band pass filter. Furthermore, the total variation filter has been shown to reduce the noise of a signal significantly for signals with clear ultrasonic echoes. Signal to noise ratios have been increased over 400% by using a simple parameter optimization. While frequency based filtering is efficient for specific applications, this paper shows that the reduction of noise in ultrasonic systems can be much more efficient with regularization methods.
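To make the Tikhonov idea concrete, the sketch below smooths a noisy synthetic echo by solving the regularized least-squares problem min_x ||x - y||^2 + lambda*||Dx||^2 with a first-difference operator D; the signal and the value of lambda are invented for illustration and do not reproduce the paper's filters.

```python
import numpy as np

def tikhonov_denoise(y, lam):
    """Solve (I + lam * D^T D) x = y, where D is the first-difference operator."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # (n-1, n) first-difference matrix
    A = np.eye(n) + lam * D.T @ D
    return np.linalg.solve(A, y)

# Synthetic ultrasonic echo: a windowed tone burst plus white noise.
t = np.linspace(0.0, 1.0, 800)
clean = np.exp(-((t - 0.5) / 0.05) ** 2) * np.sin(2 * np.pi * 40 * t)
noisy = clean + 0.3 * np.random.default_rng(1).standard_normal(t.size)

denoised = tikhonov_denoise(noisy, lam=2.0)
print(f"error std before: {np.std(noisy - clean):.3f}, after: {np.std(denoised - clean):.3f}")
```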
[Calculating the stark broadening of welding arc spectra by Fourier transform method].
Pan, Cheng-Gang; Hua, Xue-Ming; Zhang, Wang; Li, Fang; Xiao, Xiao
2012-07-01
Using the Stark width of the plasma spectrum is the most effective and accurate method for calculating the electron density of a plasma. However, it is difficult to separate the Stark width from the composite line profile produced by several broadening mechanisms. In the present paper, the Fourier transform was used to separate the Lorentzian profile from the observed spectrum and thus obtain an accurate Stark width, and we calculated the electron density distribution of the TIG welding arc plasma. This method does not require accurate measurement of the arc temperature or of the instrumental broadening of the plasma spectrum, and it can reject noisy data. The results show that, on the axis, the electron density of the TIG welding arc decreases with increasing distance from the tungsten electrode, ranging from 1.21 × 10^17 cm^-3 to 1.58 × 10^17 cm^-3; in the radial direction, the electron density decreases with increasing distance from the axis, and near the tungsten zone the largest electron density is off axis.
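The separation exploits the fact that the Fourier transform of a Lorentzian of half-width gamma decays as exp(-2*pi*gamma*|t|), so the Stark (Lorentzian) width can be read from the log-slope of the transformed line profile once the other broadening contributions are divided out. The toy sketch below demonstrates the principle on a pure synthetic Lorentzian; it is our illustration, not the authors' processing chain.

```python
import numpy as np

# Synthetic Lorentzian line profile centred in the window.
gamma = 0.05                              # half-width at half-maximum (nm)
x = np.linspace(-2.0, 2.0, 4096)          # wavelength offset (nm)
profile = (gamma / np.pi) / (x ** 2 + gamma ** 2)

# Fourier transform of the profile; for a Lorentzian |F(t)| ~ exp(-2*pi*gamma*|t|).
F = np.abs(np.fft.rfft(profile))
t = np.fft.rfftfreq(x.size, d=x[1] - x[0])    # conjugate variable to wavelength

# Fit the log-magnitude slope over the initial, cleanly exponential part of the decay.
sel = slice(1, 80)
slope = np.polyfit(t[sel], np.log(F[sel] / F[0]), 1)[0]
gamma_est = -slope / (2.0 * np.pi)
print(f"recovered HWHM ~ {gamma_est:.4f} nm (true {gamma})")
```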
Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.
Huynh, Linh; Tagkopoulos, Ilias
2015-08-21
In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
Estimating health state utility values for comorbid health conditions using SF-6D data.
Ara, Roberta; Brazier, John
2011-01-01
When health state utility values for comorbid health conditions are not available, data from cohorts with single conditions are used to estimate scores. The methods used can produce very different results and there is currently no consensus on which is the most appropriate approach. The objective of the current study was to compare the accuracy of five different methods within the same dataset. Data collected during five Welsh Health Surveys were subgrouped by health status. Mean short-form 6 dimension (SF-6D) scores for cohorts with a specific health condition were used to estimate mean SF-6D scores for cohorts with comorbid conditions using the additive, multiplicative, and minimum methods, the adjusted decrement estimator (ADE), and a linear regression model. The mean SF-6D for subgroups with comorbid health conditions ranged from 0.4648 to 0.6068. The linear model produced the most accurate scores for the comorbid health conditions with 88% of values accurate to within the minimum important difference for the SF-6D. The additive and minimum methods underestimated or overestimated the actual SF-6D scores respectively. The multiplicative and ADE methods both underestimated the majority of scores. However, both methods performed better when estimating scores smaller than 0.50. Although the range in actual health state utility values (HSUVs) was relatively small, our data covered the lower end of the index and the majority of previous research has involved actual HSUVs at the upper end of possible ranges. Although the linear model gave the most accurate results in our data, additional research is required to validate our findings. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
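Three of the five estimators have simple closed forms; a minimal sketch of the additive, multiplicative, and minimum estimators is given below (the ADE and the regression model need fitted parameters from the survey data, so they are omitted). The example utilities are invented.

```python
def additive(u1, u2):
    """Subtract both single-condition decrements from full health (1.0)."""
    return 1.0 - ((1.0 - u1) + (1.0 - u2))

def multiplicative(u1, u2):
    """Apply the second condition's utility as a proportional decrement."""
    return u1 * u2

def minimum(u1, u2):
    """Assume the worse condition alone determines the comorbid utility."""
    return min(u1, u2)

# Invented single-condition SF-6D scores
u_condition_a, u_condition_b = 0.72, 0.61
for method in (additive, multiplicative, minimum):
    print(method.__name__, round(method(u_condition_a, u_condition_b), 3))
```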
Accuracy Evaluation of the Unified P-Value from Combining Correlated P-Values
Alves, Gelio; Yu, Yi-Kuo
2014-01-01
Meta-analysis methods that combine p-values into a single unified p-value are frequently employed to improve confidence in hypothesis testing. An assumption made by most meta-analysis methods is that the p-values to be combined are independent, which may not always be true. To investigate the accuracy of the unified p-value from combining correlated p-values, we have evaluated a family of statistical methods that combine independent, weighted independent, correlated, and weighted correlated p-values. Statistical accuracy evaluation by combining simulated correlated p-values showed that correlation among p-values can have a significant effect on the accuracy of the combined p-value obtained. Among the statistical methods evaluated, those that weight p-values compute more accurate combined p-values than those that do not. Also, statistical methods that utilize the correlation information have the best performance, producing significantly more accurate combined p-values. In our study we have demonstrated that statistical methods that combine p-values based on the assumption of independence can produce inaccurate p-values when combining correlated p-values, even when the p-values are only weakly correlated. Therefore, to prevent drawing false conclusions during hypothesis testing, our study advises caution when interpreting the p-value obtained from combining p-values of unknown correlation. However, when the correlation information is available, the weighting-capable statistical method, first introduced by Brown and recently modified by Hou, seems to perform the best amongst the methods investigated. PMID:24663491
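For orientation, the sketch below contrasts Fisher's independent combination with a Brown-style scaled chi-square adjustment when a covariance matrix for the -2 ln p statistics is available; the covariance values are invented, and the polynomial approximation Brown and Hou use to obtain that covariance from pairwise correlations is not reproduced here.

```python
import numpy as np
from scipy.stats import chi2

def fisher_combine(p_values):
    """Fisher's method: assumes the p-values are independent."""
    x = -2.0 * np.sum(np.log(p_values))
    return chi2.sf(x, df=2 * len(p_values))

def brown_combine(p_values, cov):
    """Brown-style correction: cov is the covariance matrix of the -2*ln(p) terms."""
    k = len(p_values)
    x = -2.0 * np.sum(np.log(p_values))
    mean, var = 2.0 * k, float(np.sum(cov))
    c, f = var / (2.0 * mean), 2.0 * mean ** 2 / var   # scale and effective df
    return chi2.sf(x / c, df=f)

p = np.array([0.02, 0.04, 0.03])
cov = np.array([[4.0, 1.5, 1.0],        # invented covariances; diagonal is 4 by theory
                [1.5, 4.0, 1.2],
                [1.0, 1.2, 4.0]])
print(fisher_combine(p), brown_combine(p, cov))
```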
METHODS TO CLASSIFY ENVIRONMENTAL SAMPLES BASED ON MOLD ANALYSES BY QPCR
Quantitative PCR (QPCR) analysis of molds in indoor environmental samples produces highly accurate speciation and enumeration data. In a number of studies, eighty of the most common or potentially problematic indoor molds were identified and quantified in dust samples from homes...
TECHNICAL CHALLENGES ASSOCIATED WITH ASSESSING THE IN VITRO PULMONARY TOXICITY OF CARBON NANOTUBES
Nanotechnology continues to produce a large number of diverse engineered nanomaterials (NMs) with novel physicochemical properties for a variety of applications. Test methods that accurately assess/predict the toxicity of NMs are critically needed and it is unclear whether curren...
Energy Consumption Forecasting Using Semantic-Based Genetic Programming with Local Search Optimizer.
Castelli, Mauro; Trujillo, Leonardo; Vanneschi, Leonardo
2015-01-01
Energy consumption forecasting (ECF) is an important policy issue in today's economies. An accurate ECF has great benefits for electric utilities, and both negative and positive errors lead to increased operating costs. The paper proposes a semantic-based genetic programming framework to address the ECF problem. In particular, we propose a system that finds (quasi-)perfect solutions with high probability and that generates models able to produce near-optimal predictions on unseen data as well. The framework blends a recently developed version of genetic programming that integrates semantic genetic operators with a local search method. The main idea in combining semantic genetic programming and a local searcher is to couple the exploration ability of the former with the exploitation ability of the latter. Experimental results confirm the suitability of the proposed method in predicting the energy consumption. In particular, the system produces a lower error with respect to the existing state-of-the-art techniques used on the same dataset. More importantly, this case study has shown that including a local searcher in the geometric semantic genetic programming system can speed up the search process and can result in fitter models that are able to produce accurate forecasts on unseen data as well.
A k-space method for acoustic propagation using coupled first-order equations in three dimensions.
Tillett, Jason C; Daoud, Mohammad I; Lacefield, James C; Waag, Robert C
2009-09-01
A previously described two-dimensional k-space method for large-scale calculation of acoustic wave propagation in tissues is extended to three dimensions. The three-dimensional method contains all of the two-dimensional method features that allow accurate and stable calculation of propagation. These features are spectral calculation of spatial derivatives, temporal correction that produces exact propagation in a homogeneous medium, staggered spatial and temporal grids, and a perfectly matched boundary layer. Spectral evaluation of spatial derivatives is accomplished using a fast Fourier transform in three dimensions. This computational bottleneck requires all-to-all communication; execution time in a parallel implementation is therefore sensitive to node interconnect latency and bandwidth. Accuracy of the three-dimensional method is evaluated through comparisons with exact solutions for media having spherical inhomogeneities. Large-scale calculations in three dimensions were performed by distributing the nearly 50 variables per voxel that are used to implement the method over a cluster of computers. Two computer clusters used to evaluate method accuracy are compared. Comparisons of k-space calculations with exact methods including absorption highlight the need to model accurately the medium dispersion relationships, especially in large-scale media. Accurately modeled media allow the k-space method to calculate acoustic propagation in tissues over hundreds of wavelengths.
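The "spectral calculation of spatial derivatives" at the heart of the method can be illustrated in one dimension: multiply the FFT of the field by i*k and invert. The sketch below checks the idea against an analytic derivative on a periodic domain; it is a didactic fragment, not the propagation code.

```python
import numpy as np

n, L = 256, 2.0 * np.pi
x = np.arange(n) * (L / n)                      # periodic grid
u = np.sin(3 * x) + 0.5 * np.cos(7 * x)
du_exact = 3 * np.cos(3 * x) - 3.5 * np.sin(7 * x)

# Spectral derivative: differentiate by multiplying the FFT by i*k.
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)    # angular wavenumbers
du_spectral = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

print(f"max error: {np.max(np.abs(du_spectral - du_exact)):.2e}")
```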
Evaluation of selected methods for determining streamflow during periods of ice effect
Melcher, Norwood B.; Walker, J.F.
1992-01-01
Seventeen methods for estimating ice-affected streamflow are evaluated for potential use with the U.S. Geological Survey streamflow-gaging station network. The methods evaluated were identified by written responses from U.S. Geological Survey field offices and by a comprehensive literature search. The methods selected and techniques used for applying the methods are described in this report. The methods are evaluated by comparing estimated results with data collected at three streamflow-gaging stations in Iowa during the winter of 1987-88. Discharge measurements were obtained at 1- to 5-day intervals during the ice-affected periods at the three stations to define an accurate baseline record. Discharge records were compiled for each method based on data available, assuming a 6-week field schedule. The methods are classified into two general categories, subjective and analytical, depending on whether individual judgment is necessary for method application. On the basis of results of the evaluation for the three Iowa stations, two of the subjective methods (discharge ratio and hydrographic-and-climatic comparison) were more accurate than the other subjective methods and approximately as accurate as the best analytical method. Three of the analytical methods (index velocity, adjusted rating curve, and uniform flow) could potentially be used at streamflow-gaging stations, where the need for accurate ice-affected discharge estimates justifies the expense of collecting additional field data. One analytical method (ice-adjustment factor) may be appropriate for use at stations with extremely stable stage-discharge ratings and measuring sections. Further research is needed to refine the analytical methods. The discharge-ratio and multiple-regression methods produce estimates of streamflow for varying ice conditions using information obtained from the existing U.S. Geological Survey streamflow-gaging network.
Liu, Hong; Wang, Jie; Xu, Xiangyang; Song, Enmin; Wang, Qian; Jin, Renchao; Hung, Chih-Cheng; Fei, Baowei
2014-11-01
A robust and accurate center-frequency (CF) estimation (RACE) algorithm is proposed in this study for improving the performance of the local sine-wave modeling (SinMod) method, an effective motion estimation method for tagged cardiac magnetic resonance (MR) images. The RACE algorithm automatically and efficiently produces an appropriate CF estimate for the SinMod method, even when the specified tagging parameters are unknown, by combining two key techniques: (1) the well-known mean-shift algorithm, which provides accurate and rapid CF estimation; and (2) an original two-direction-combination strategy, which further enhances the accuracy and robustness of CF estimation. Several other available CF estimation algorithms are included for comparison, and validation approaches that work on real data without ground truth are specially designed. Experimental results on in vivo human cardiac data demonstrate the importance of accurate CF estimation for SinMod and validate the effectiveness of RACE in improving the motion estimation performance of SinMod. Copyright © 2014 Elsevier Inc. All rights reserved.
Using structure to explore the sequence alignment space of remote homologs.
Kuziemko, Andrew; Honig, Barry; Petrey, Donald
2011-10-01
Protein structure modeling by homology requires an accurate sequence alignment between the query protein and its structural template. However, sequence alignment methods based on dynamic programming (DP) are typically unable to generate accurate alignments for remote sequence homologs, thus limiting the applicability of modeling methods. A central problem is that the alignment that is "optimal" in terms of the DP score does not necessarily correspond to the alignment that produces the most accurate structural model. That is, the correct alignment based on structural superposition will generally have a lower score than the optimal alignment obtained from sequence. Variations of the DP algorithm have been developed that generate alternative alignments that are "suboptimal" in terms of the DP score, but these still encounter difficulties in detecting the correct structural alignment. We present here a new alternative sequence alignment method that relies heavily on the structure of the template. By initially aligning the query sequence to individual fragments in secondary structure elements and combining high-scoring fragments that pass basic tests for "modelability", we can generate accurate alignments within a small ensemble. Our results suggest that the set of sequences that can currently be modeled by homology can be greatly extended.
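For reference, the sketch below shows a minimal Needleman-Wunsch global alignment score, the dynamic-programming scheme whose "optimal" score the abstract argues need not coincide with the structurally correct alignment; the scoring values are illustrative only.

```python
# Minimal global-alignment DP (score only; traceback omitted for brevity).
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]   # the DP-optimal score

print(needleman_wunsch("HEAGAWGHEE", "PAWHEAE"))
```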
Comparison of Methods for Estimating Evapotranspiration using Remote Sensing Data
NASA Astrophysics Data System (ADS)
Beamer, J. P.; Morton, C.; Huntington, J. L.; Pohll, G.
2010-12-01
Estimating the annual evapotranspiration (ET) in arid and semi-arid environments is important for managing water resources. In this study we use remote sensing methods to estimate ET from different areas located in western and eastern Nevada. Surface energy balance (SEB) and vegetation indices (VI) are two common methods for estimating ET using satellite data. The purpose of this study is to compare these methods for estimating annual ET and highlight strengths and weaknesses in both methods. The SEB approach used is based on the Mapping Evapotranspiration at high Resolution with Internalized Calibration (METRIC) model, which estimates ET as a residual of the energy balance. METRIC has been shown to produce accurate results in agricultural and riparian settings. The VI approach used is based on statistical relationships between annual ET and various VIs. VI approaches have also been shown to produce fairly accurate estimates of ET for various vegetation types; however, spatial variations in potential ET and precipitation amount are generally ignored, leading to restrictions in their application. In this work we develop a VI approach that considers the study area potential ET and precipitation amount and compare this approach to METRIC and flux tower estimates of annual ET for several arid phreatophyte shrub and irrigated agriculture settings.
Folta, James A.; Montcalm, Claude; Walton, Christopher
2003-01-01
A method and system for producing a thin film with highly uniform (or highly accurate custom graded) thickness on a flat or graded substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source with controlled (and generally, time-varying) velocity. In preferred embodiments, the method includes the steps of measuring the source flux distribution (using a test piece that is held stationary while exposed to the source), calculating a set of predicted film thickness profiles, each film thickness profile assuming the measured flux distribution and a different one of a set of sweep velocity modulation recipes, and determining from the predicted film thickness profiles a sweep velocity modulation recipe which is adequate to achieve a predetermined thickness profile. Aspects of the invention include a practical method of accurately measuring source flux distribution, and a computer-implemented method employing a graphical user interface to facilitate convenient selection of an optimal or nearly optimal sweep velocity modulation recipe to achieve a desired thickness profile on a substrate. Preferably, the computer implements an algorithm in which many sweep velocity function parameters (for example, the speed at which each substrate spins about its center as it sweeps across the source) can be varied or set to zero.
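A hedged one-dimensional sketch of the prediction step described above (not the patented system): deposited thickness is the measured flux profile integrated over the dwell time implied by a hypothetical sweep-velocity recipe; all numbers below are invented.

```python
import numpy as np

def predict_thickness(flux_profile, positions, velocities, substrate_x, dstep):
    """flux_profile(x): measured flux vs. offset from the source centre."""
    thickness = np.zeros_like(substrate_x)
    for pos, vel in zip(positions, velocities):
        dwell = dstep / vel                                  # slower sweep -> longer exposure
        thickness += flux_profile(substrate_x - pos) * dwell
    return thickness

# usage: Gaussian flux, substrate swept with an assumed two-speed recipe
flux = lambda x: np.exp(-(x / 0.05) ** 2)
xs = np.linspace(-0.1, 0.1, 201)
sweep_pos = np.linspace(-0.2, 0.2, 81)
sweep_vel = np.where(np.abs(sweep_pos) < 0.05, 0.02, 0.01)   # m/s, illustrative only
profile = predict_thickness(flux, sweep_pos, sweep_vel, xs, dstep=0.005)
```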
Mu, Yang; Yang, Hou-Yun; Wang, Ya-Zhou; He, Chuan-Shu; Zhao, Quan-Bao; Wang, Yi; Yu, Han-Qing
2014-01-01
Fermentative hydrogen production from wastes has many advantages compared to various chemical methods. Methodology for characterizing the hydrogen-producing activity of anaerobic mixed cultures is essential for monitoring reactor operation in fermentative hydrogen production; however, such standardized methodologies are lacking. In the present study, a new index, i.e., the maximum specific hydrogen-producing activity (SHAm) of anaerobic mixed cultures, was proposed, and consequently a reliable and simple method, named the SHAm test, was developed to determine it. Furthermore, the influences of various parameters on the SHAm value determination of anaerobic mixed cultures were evaluated. Additionally, this SHAm assay was tested for different types of substrates and bacterial inocula. Our results demonstrate that this novel SHAm assay was a rapid, accurate and simple methodology for determining the hydrogen-producing activity of anaerobic mixed cultures. Thus, application of this approach is beneficial to establishing a stable anaerobic hydrogen-producing system. PMID:24912488
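The paper's exact protocol is not reproduced here; the sketch below shows one plausible way such an index could be computed, taking the steepest sustained slope of a cumulative hydrogen curve normalised by biomass (all numbers are invented).

```python
import numpy as np

def max_specific_activity(time_h, cum_h2_ml, biomass_g_vss, window=3):
    """Maximum smoothed hydrogen production rate per unit biomass (mL H2 / (g VSS . h))."""
    rates = np.gradient(cum_h2_ml, time_h)                           # mL H2 per hour
    smoothed = np.convolve(rates, np.ones(window) / window, mode="valid")
    return smoothed.max() / biomass_g_vss

t = np.array([0, 2, 4, 6, 8, 10, 12], dtype=float)
h2 = np.array([0, 5, 18, 40, 62, 70, 72], dtype=float)
print(max_specific_activity(t, h2, biomass_g_vss=0.5))
```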
Hajiloo, Mohsen; Sapkota, Yadav; Mackey, John R; Robson, Paula; Greiner, Russell; Damaraju, Sambasivarao
2013-02-22
Population stratification is a systematic difference in allele frequencies between subpopulations. This can lead to spurious association findings in the case-control genome wide association studies (GWASs) used to identify single nucleotide polymorphisms (SNPs) associated with disease-linked phenotypes. Methods such as self-declared ancestry, ancestry informative markers, genomic control, structured association, and principal component analysis are used to assess and correct population stratification but each has limitations. We provide an alternative technique to address population stratification. We propose a novel machine learning method, ETHNOPRED, which uses the genotype and ethnicity data from the HapMap project to learn ensembles of disjoint decision trees, capable of accurately predicting an individual's continental and sub-continental ancestry. To predict an individual's continental ancestry, ETHNOPRED produced an ensemble of 3 decision trees involving a total of 10 SNPs, with 10-fold cross validation accuracy of 100% using the HapMap II dataset. We extended this model to involve 29 disjoint decision trees over 149 SNPs, and showed that this ensemble has an accuracy of ≥ 99.9%, even if some of those 149 SNP values were missing. On an independent dataset, predominantly of Caucasian origin, our continental classifier showed 96.8% accuracy and improved genomic control's λ from 1.22 to 1.11. We next used the HapMap III dataset to learn classifiers to distinguish European subpopulations (North-Western vs. Southern), East Asian subpopulations (Chinese vs. Japanese), African subpopulations (Eastern vs. Western), North American subpopulations (European vs. Chinese vs. African vs. Mexican vs. Indian), and Kenyan subpopulations (Luhya vs. Maasai). In these cases, ETHNOPRED produced ensembles of 3, 39, 21, 11, and 25 disjoint decision trees, respectively, involving 31, 502, 526, 242, and 271 SNPs, with 10-fold cross validation accuracy of 86.5% ± 2.4%, 95.6% ± 3.9%, 95.6% ± 2.1%, 98.3% ± 2.0%, and 95.9% ± 1.5%. However, ETHNOPRED was unable to produce a classifier that can accurately distinguish Chinese in Beijing vs. Chinese in Denver. ETHNOPRED is a novel technique for producing classifiers that can identify an individual's continental and sub-continental heritage, based on a small number of SNPs. We show that its learned classifiers are simple, cost-efficient, accurate, transparent, flexible, fast, applicable to large scale GWASs, and robust to missing values.
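A toy sketch of the ensemble idea on synthetic genotypes (not ETHNOPRED itself): small decision trees trained on disjoint SNP blocks vote on ancestry, so a prediction can still be made when some SNPs are missing.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_samples, n_snps = 200, 30
X = rng.integers(0, 3, size=(n_samples, n_snps))       # genotypes coded 0/1/2 (synthetic)
y = rng.integers(0, 2, size=n_samples)                 # toy ancestry labels

snp_blocks = np.array_split(np.arange(n_snps), 3)      # disjoint SNP subsets
trees = [DecisionTreeClassifier(max_depth=3).fit(X[:, b], y) for b in snp_blocks]

def predict_ancestry(x, available=None):
    """Majority vote over trees whose SNP block is fully available."""
    votes = [t.predict(x[b].reshape(1, -1))[0]
             for t, b in zip(trees, snp_blocks)
             if available is None or set(b) <= set(available)]
    return int(np.round(np.mean(votes)))

print(predict_ancestry(X[0]))
```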
EFFECTS OF GREEN MACROALGAE ON CLASSIFICATION OF SEAGRASS IN SIDE SCAN SONAR IMAGERY
High resolution maps of seagrass beds are useful for monitoring estuarine condition, managing fish habitats, and modeling estuarine processes. Side scan sonar (SSS) is one method for producing spatially accurate seagrass maps, although it has not been used widely. Our team rece...
Ngo, Tuan Anh; Lu, Zhi; Carneiro, Gustavo
2017-01-01
We introduce a new methodology that combines deep learning and level set for the automated segmentation of the left ventricle of the heart from cardiac cine magnetic resonance (MR) data. This combination is relevant for segmentation problems, where the visual object of interest presents large shape and appearance variations, but the annotated training set is small, which is the case for various medical image analysis applications, including the one considered in this paper. In particular, level set methods are based on shape and appearance terms that use small training sets, but present limitations for modelling the visual object variations. Deep learning methods can model such variations using relatively small amounts of annotated training data, but they often need to be regularised to produce good generalisation. Therefore, the combination of these methods brings together the advantages of both approaches, producing a methodology that needs small training sets and produces accurate segmentation results. We test our methodology on the MICCAI 2009 left ventricle segmentation challenge database (containing 15 sequences for training, 15 for validation and 15 for testing), where our approach achieves the most accurate results in the semi-automated problem and state-of-the-art results for the fully automated challenge. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
Sobolewski, Marissa; Allen, Joshua L.; Morris-Schaffer, Keith; Klocke, Carolyn; Conrad, Katherine; Cory-Slechta, Deborah A.
2017-01-01
Prenatal stress and nutrition are well-known to alter a broad range of physiological systems, notably metabolic, endocrine and neurobehavioral function. Commonly used methods for oral administration of xenobiotics can, by acting as a stressor or altering normal nutrition intake, alter these physiological systems as well. Taken together, oral administration methods may unintentionally introduce confounding physiological effects that can mask or enhance toxicity of xenobiotics, particularly if they share biological targets. Consequently, it should be preferable to develop alternative methods without these potential confounds. The aim of this study was to determine the suitability of mealworms as an alternative treat-based method to deliver xenobiotics via the orogastric route. Accurate oral administration is contingent on motivation and preference; mice reliably preferred mealworms over wafer cookie treats. Further, ingestion of wafer cookies significantly increased mouse blood glucose levels, whereas unaltered mealworms produced no such change. Mealworms functioned effectively to orally administer glucose, as glucose-spiked mealworms produced a rise in blood glucose equivalent to the ingestion of the wafer cookie. Mealworms did not interfere with the physiological function of orally administered d-amphetamine, as both mealworm and oral gavage administered d-amphetamine showed similar alterations in locomotor behavior (mice did not fully consume d-amphetamine-dosed cookies and thus could not be compared). Collectively, the findings indicate that mealworms are a preferred and readily consumed treat, which importantly mimics environmentally relevant nutritional intake, and mealworms per se do not alter glucose metabolic pathways. Additionally, mealworms accurately delivered xenobiotics into blood circulation and did not interfere with the physiological function of administered xenobiotics. Thus, mealworm-based oral administration may be a preferable and accurate route of xenobiotic administration that eliminates physiological alterations associated with other methods of delivery. PMID:27094606
Method to produce American Thoracic Society flow-time waveforms using a mechanical pump.
Hankinson, J L; Reynolds, J S; Das, M K; Viola, J O
1997-03-01
The American Thoracic Society (ATS) recently adopted a new set of 26 standard flow-time waveforms for use in testing both diagnostic and monitoring devices. Some of these waveforms have a higher frequency content than present in the ATS-24 standard volume-time waveforms, which, when produced by a mechanical pump, may result in a pump flow output that is less than the desired flow due to gas compression losses within the pump. To investigate the effects of gas compression, a mechanical pump was used to generate the necessary flows to test mini-Wright and Assess peak expiratory flow (PEF) meters. Flow output from the pump was measured by two different independent methods, a pneumotachometer and a method based on piston displacement and pressure measured within the pump. Measuring output flow based on piston displacement and pressure has been validated using a pneumotachometer and mini-Wright PEF meter, and found to accurately measure pump output. This method introduces less resistance (lower back-pressure) and dead space volume than using a pneumotachometer in series with the meter under test. Pump output flow was found to be lower than the desired flow both with the mini-Wright and Assess meters (for waveform No. 26, PEFs 7.1 and 10.9% lower, respectively). To compensate for losses due to gas compression, we have developed a method of deriving new input waveforms, which, when used to drive a commercially available mechanical pump, accurately and reliably produces the 26 ATS flow-time waveforms, even those with the fastest rise-times.
Resolved Star Formation in Galaxies Using Slitless Spectroscopy
NASA Astrophysics Data System (ADS)
Pirzkal, Norbert; Finkelstein, Steven L.; Larson, Rebecca L.; Malhotra, Sangeeta; Rhoads, James E.; Ryan, Russell E.; Tilvi, Vithal; FIGS Team
2018-06-01
The ability to spatially resolve individual star-formation regions in distant galaxies and simultaneously extract their physical properties via emission lines is a critical step forward in studying the evolution of galaxies. While efficient, deep slitless spectroscopic observations offer a blurry view of the summed properties of galaxies. We present our studies of resolved star formation over a wide range of redshifts, including high redshift Ly-a sources. The unique capabilities of the WFC3 IR Grism and our two-dimensional emission line method (EM2D) allow us to accurately identify the specific spatial origin of emission lines in galaxies, thus creating a spatial map of star-formation sites in any given galaxy. This method requires the use of multiple position angles on the sky to accurately derive both the location and the observed wavelengths of these emission lines. This has the added benefit of producing better-defined redshifts for these sources. Building on our success in applying the EM2D method towards galaxies with [OII], [OIII], and Ha emission lines, we have also applied EM2D to high redshift (z>6) Ly-a emitting galaxies. We are also able to produce accurate 2D emission line maps (MAP2D) of the Ly-a emission in WFC3 IR grism observations, looking for evidence that a significant amount of resonant scattering is taking place in high redshift galaxies such as in a newly identified z=7.5 Faint Infrared Galaxy Survey (FIGS) Ly-a galaxy.
Critical Path Method Networks and Their Use in Claims Analysis.
1984-01-01
produced will only be as good as the time invested and the knowledge of the scheduler. A schedule which is based on faulty logic or which contains... fundamentals of putting a schedule together but also how the construction process functions so that the delays can be accurately inserted. When
Microseismic imaging using Geometric-mean Reverse-Time Migration in Hydraulic Fracturing Monitoring
NASA Astrophysics Data System (ADS)
Yin, J.; Ng, R.; Nakata, N.
2017-12-01
Unconventional oil and gas exploration techniques such as hydraulic fracturing are associated with microseismic events related to the generation and development of fractures. For example, hydraulic fracturing, which is widely used in southern Oklahoma, produces earthquakes greater than magnitude 2.0. Accurately locating these events, and determining their mechanisms, provides important information on local stress conditions, fracture distribution, hazard assessment, and economic impact. Accurate source locations are also important for separating fracking-induced from wastewater-disposal-induced seismicity. Here, we implement a wavefield-based imaging method called Geometric-mean Reverse-Time Migration (GmRTM), which takes advantage of accurate microseismic location based on wavefield back projection. We apply GmRTM to microseismic data collected during hydraulic fracturing to image microseismic source locations and, potentially, fractures. Assuming an accurate velocity model, GmRTM can improve the spatial resolution of source locations compared to HypoDD or P/S travel-time based methods. We will discuss the results from GmRTM and HypoDD using this field dataset and synthetic data.
Beck, H J; Birch, G F
2013-06-01
Stormwater contaminant loading estimates using event mean concentration (EMC), rainfall/runoff relationship calculations and computer modelling (Model of Urban Stormwater Infrastructure Conceptualisation--MUSIC) demonstrated high variability in common methods of water quality assessment. Predictions of metal, nutrient and total suspended solid loadings for three highly urbanised catchments in Sydney estuary, Australia, varied greatly within and amongst methods tested. EMC and rainfall/runoff relationship calculations produced similar estimates (within 1 SD) in a statistically significant number of trials; however, considerable variability within estimates (∼50 and ∼25 % relative standard deviation, respectively) questions the reliability of these methods. Likewise, upper and lower default inputs in a commonly used loading model (MUSIC) produced an extensive range of loading estimates (3.8-8.3 times above and 2.6-4.1 times below typical default inputs, respectively). Default and calibrated MUSIC simulations produced loading estimates that agreed with EMC and rainfall/runoff calculations in some trials (4-10 from 18); however, they were not frequent enough to statistically infer that these methods produced the same results. Great variance within and amongst mean annual loads estimated by common methods of water quality assessment has important ramifications for water quality managers requiring accurate estimates of the quantities and nature of contaminants requiring treatment.
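For orientation, a minimal sketch of the EMC and rainfall/runoff loading arithmetic compared above; the coefficients are illustrative, not the study's values.

```python
def annual_load_kg(emc_mg_per_l, annual_rain_mm, area_ha, runoff_coeff):
    """Annual load = event mean concentration x runoff volume from a simple rainfall/runoff relation."""
    runoff_m3 = annual_rain_mm / 1000.0 * area_ha * 10_000 * runoff_coeff   # m^3 per year
    return emc_mg_per_l * runoff_m3 * 1000.0 / 1e6                           # mg/L * L -> kg per year

# usage: total suspended solids for a hypothetical 50 ha urban catchment
print(annual_load_kg(emc_mg_per_l=150.0, annual_rain_mm=1200.0,
                     area_ha=50.0, runoff_coeff=0.6))
```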
Predicting recreational water quality advisories: A comparison of statistical methods
Brooks, Wesley R.; Corsi, Steven R.; Fienen, Michael N.; Carvin, Rebecca B.
2016-01-01
Epidemiological studies indicate that fecal indicator bacteria (FIB) in beach water are associated with illnesses among people having contact with the water. In order to mitigate public health impacts, many beaches are posted with an advisory when the concentration of FIB exceeds a beach action value. The most commonly used method of measuring FIB concentration takes 18–24 h before returning a result. In order to avoid the 24 h lag, it has become common to "nowcast" the FIB concentration using statistical regressions on environmental surrogate variables. Most commonly, nowcast models are estimated using ordinary least squares regression, but other regression methods from the statistical and machine learning literature are sometimes used. This study compares 14 regression methods across 7 Wisconsin beaches to identify which consistently produces the most accurate predictions. A random forest model is identified as the most accurate, followed by multiple regression fit using the adaptive LASSO.
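A hedged sketch of the nowcasting setup on synthetic data (not the Wisconsin dataset): log-scale FIB concentration is regressed on environmental surrogates, comparing ordinary least squares with a random forest, and an advisory is flagged when the prediction exceeds a hypothetical beach action value.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 3))                       # e.g. turbidity, rainfall, wave height (synthetic)
y = 1.5 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.5, size=n)   # log-scale FIB concentration

train, test = slice(0, 200), slice(200, n)
ols = LinearRegression().fit(X[train], y[train])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[train], y[train])

beach_action_value = 1.0                          # hypothetical threshold on the log scale
advisories = rf.predict(X[test]) > beach_action_value
print(ols.score(X[test], y[test]), rf.score(X[test], y[test]), advisories.sum())
```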
Stoichiometric determination of moisture in edible oils by Mid-FTIR spectroscopy.
van de Voort, F R; Tavassoli-Kafrani, M H; Curtis, J M
2016-04-28
A simple and accurate method for the determination of moisture in edible oils by differential FTIR spectroscopy has been devised based on the stoichiometric reaction of the moisture in oil with toluenesulfonyl isocyanate (TSI) to produce CO2. Calibration standards were devised by gravimetrically spiking dry dioxane with water, followed by the addition of neat TSI and examination of the differential spectra relative to the dry dioxane. In the method, CO2 peak area changes are measured at 2335 cm(-1) and were shown to be related to the amount of moisture added, with any CO2 inherent to residual moisture in the dry dioxane ratioed out. CO2 volatility issues were determined to be minimal, with the overall SD of dioxane calibrations being ∼18 ppm over a range of 0-1000 ppm. Gravimetrically blended dry and water-saturated oils analysed in a similar manner produced linear CO2 responses with SD's of <15 ppm on average. One set of dry-wet blends was analysed in duplicate by FTIR and by two independent laboratories using coulometric Karl Fischer (KF) procedures. All 3 methods produced highly linear moisture relationships with SD's of 7, 16 and 28 ppm, respectively, over a range of 200-1500 ppm. Although the absolute moisture values obtained by each method did not exactly coincide, each tracked the expected moisture changes proportionately. The FTIRTSI-H2O method provides a simple and accurate instrumental means of determining moisture in oils rivaling the accuracy and specificity of standard KF procedures and has the potential to be automated. It could also be applied to other hydrophobic matrices and possibly evolve into a more generalized method, if combined with polar aprotic solvent extraction. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Cui, Sheng; Jin, Shang; Xia, Wenjuan; Ke, Changjian; Liu, Deming
2015-11-01
Symbol rate identification (SRI) based on asynchronous delayed sampling is accurate, cost-effective and robust to impairments. For on-off keying (OOK) signals the symbol rate can be derived from the periodicity of the second-order autocorrelation function (ACF2) of the delay tap samples. However, it is found that when this method is applied to advanced modulation format signals with auxiliary amplitude modulation (AAM), incorrect results may be produced because AAM has a significant impact on ACF2 periodicity, which makes the symbol period harder, or even impossible, to identify correctly. In this paper it is demonstrated that for these signals the first-order autocorrelation function (ACF1) has stronger periodicity and can be used in place of ACF2 to produce more accurate and robust results. Utilizing the characteristics of the ACFs, an improved SRI method is proposed that accommodates both OOK and advanced modulation format signals in a transparent manner. Furthermore, it is proposed that by minimizing the peak-to-average power ratio (PAPR) of the delay tap samples with an additional tunable dispersion compensator (TDC), the limited dispersion tolerance can be expanded to the desired values.
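A toy sketch of the autocorrelation step on an RZ-shaped OOK waveform (the paper works with asynchronous delay-tap samples, which are not reproduced here): the lag of the first ACF peak beyond the zero-lag lobe gives the symbol period.

```python
import numpy as np

def symbol_rate_from_acf(intensity, fs, min_lag):
    """Locate the first ACF peak past the zero-lag lobe and convert it to a symbol rate."""
    acf = np.correlate(intensity, intensity, mode="full")[len(intensity) - 1:]
    lag = min_lag + np.argmax(acf[min_lag:3 * min_lag])     # search just beyond the main lobe
    return fs / lag

fs, sps = 80e9, 8                                            # sample rate, samples per symbol
bits = np.random.default_rng(2).integers(0, 2, 500)
pulse = np.r_[np.ones(sps // 2), np.zeros(sps // 2)]         # half-width (RZ-like) pulse
signal = (bits[:, None] * pulse).ravel().astype(float)
print(symbol_rate_from_acf(signal, fs, min_lag=sps // 2 + 1) / 1e9, "GBd")   # ~10 GBd
```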
Ray tracing the Wigner distribution function for optical simulations
NASA Astrophysics Data System (ADS)
Mout, Marco; Wick, Michael; Bociort, Florian; Petschulat, Joerg; Urbach, Paul
2018-01-01
We study a simulation method that uses the Wigner distribution function to incorporate wave optical effects in an established framework based on geometrical optics, i.e., a ray tracing engine. We use the method to calculate point spread functions and show that it is accurate for paraxial systems but produces unphysical results in the presence of aberrations. The cause of these anomalies is explained using an analytical model.
NASA Technical Reports Server (NTRS)
Buntine, Wray
1994-01-01
IND computer program introduces Bayesian and Markov/maximum-likelihood (MML) methods and more-sophisticated methods of searching in growing trees. Produces more-accurate class-probability estimates important in applications like diagnosis. Provides range of features and styles with convenience for casual user, fine-tuning for advanced user or for those interested in research. Consists of four basic kinds of routines: data-manipulation, tree-generation, tree-testing, and tree-display. Written in C language.
High speed inviscid compressible flow by the finite element method
NASA Technical Reports Server (NTRS)
Zienkiewicz, O. C.; Loehner, R.; Morgan, K.
1984-01-01
The finite element method and an explicit time stepping algorithm which is based on Taylor-Galerkin schemes with an appropriate artificial viscosity is combined with an automatic mesh refinement process which is designed to produce accurate steady state solutions to problems of inviscid compressible flow in two dimensions. The results of two test problems are included which demonstrate the excellent performance characteristics of the proposed procedures.
Hacke, Uwe G; Venturas, Martin D; MacKinnon, Evan D; Jacobsen, Anna L; Sperry, John S; Pratt, R Brandon
2015-01-01
The standard centrifuge method has been frequently used to measure vulnerability to xylem cavitation. This method has recently been questioned. It was hypothesized that open vessels lead to exponential vulnerability curves, which were thought to be indicative of measurement artifact. We tested this hypothesis in stems of olive (Olea europea) because its long vessels were recently claimed to produce a centrifuge artifact. We evaluated three predictions that followed from the open vessel artifact hypothesis: shorter stems, with more open vessels, would be more vulnerable than longer stems; standard centrifuge-based curves would be more vulnerable than dehydration-based curves; and open vessels would cause an exponential shape of centrifuge-based curves. Experimental evidence did not support these predictions. Centrifuge curves did not vary when the proportion of open vessels was altered. Centrifuge and dehydration curves were similar. At highly negative xylem pressure, centrifuge-based curves slightly overestimated vulnerability compared to the dehydration curve. This divergence was eliminated by centrifuging each stem only once. The standard centrifuge method produced accurate curves of samples containing open vessels, supporting the validity of this technique and confirming its utility in understanding plant hydraulics. Seven recommendations for avoiding artefacts and standardizing vulnerability curve methodology are provided. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.
Multiplex coherent raman spectroscopy detector and method
NASA Technical Reports Server (NTRS)
Joyner, Candace C. (Inventor); Patrick, Sheena T. (Inventor); Chen, Peter (Inventor); Guyer, Dean R. (Inventor)
2004-01-01
A multiplex coherent Raman spectrometer (10) and spectroscopy method rapidly detects and identifies individual components of a chemical mixture separated by a separation technique, such as gas chromatography. The spectrometer (10) and method accurately identify a variety of compounds because they produce the entire gas phase vibrational Raman spectrum of the unknown gas. This is accomplished by tilting a Raman cell (20) to produce a high-intensity, backward-stimulated, coherent Raman beam of 683 nm, which drives a degenerate optical parametric oscillator (28) to produce a broadband beam of 1100-1700 nm covering a range of more than 3000 wavenumber. This broadband beam is combined with a narrowband beam of 532 nm having a bandwidth of 0.003 wavenumbers and focused into a heated windowless cell (38) that receives gases separated by a gas chromatograph (40). The Raman radiation scattered from these gases is filtered and sent to a monochromator (50) with multichannel detection.
Kleijn, Roelco J.; van Winden, Wouter A.; Ras, Cor; van Gulik, Walter M.; Schipper, Dick; Heijnen, Joseph J.
2006-01-01
In this study we developed a new method for accurately determining the pentose phosphate pathway (PPP) split ratio, an important metabolic parameter in the primary metabolism of a cell. This method is based on simultaneous feeding of unlabeled glucose and trace amounts of [U-13C]gluconate, followed by measurement of the mass isotopomers of the intracellular metabolites surrounding the 6-phosphogluconate node. The gluconate tracer method was used with a penicillin G-producing chemostat culture of the filamentous fungus Penicillium chrysogenum. For comparison, a 13C-labeling-based metabolic flux analysis (MFA) was performed for glycolysis and the PPP of P. chrysogenum. For the first time mass isotopomer measurements of 13C-labeled primary metabolites are reported for P. chrysogenum and used for a 13C-based MFA. Estimation of the PPP split ratio of P. chrysogenum at a growth rate of 0.02 h−1 yielded comparable values for the gluconate tracer method and the 13C-based MFA method, 51.8% and 51.1%, respectively. A sensitivity analysis of the estimated PPP split ratios showed that the 95% confidence interval was almost threefold smaller for the gluconate tracer method than for the 13C-based MFA method (40.0 to 63.5% and 46.0 to 56.5%, respectively). From these results we concluded that the gluconate tracer method permits accurate determination of the PPP split ratio but provides no information about the remaining cellular metabolism, while the 13C-based MFA method permits estimation of multiple fluxes but provides a less accurate estimate of the PPP split ratio. PMID:16820467
An evaluation of methods for estimating decadal stream loads
NASA Astrophysics Data System (ADS)
Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.
2016-11-01
Effective management of water resources requires accurate information on the mass, or load, of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance, which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen - lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicates that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.
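For reference, a sketch of Beale's ratio estimator in its standard textbook form (the study's implementation may differ in stratification and other details):

```python
import numpy as np

def beale_ratio_load(sample_conc_mg_l, sample_flow_m3s, all_flow_m3s):
    """Bias-corrected ratio estimator of total load (kg) over the full flow record."""
    l = sample_conc_mg_l * sample_flow_m3s * 86.4         # sampled daily loads, kg/day
    q = sample_flow_m3s
    n = len(l)
    ml, mq = l.mean(), q.mean()
    s_lq = np.sum((l - ml) * (q - mq)) / (n - 1)
    s_qq = np.sum((q - mq) ** 2) / (n - 1)
    ratio = (ml / mq) * (1 + s_lq / (n * ml * mq)) / (1 + s_qq / (n * mq ** 2))
    return ratio * all_flow_m3s.mean() * len(all_flow_m3s)

# usage: a year of synthetic daily flows sampled roughly twice a month
rng = np.random.default_rng(3)
flows = rng.lognormal(2.0, 0.6, 365)
idx = rng.choice(365, 24, replace=False)
print(beale_ratio_load(rng.lognormal(1.0, 0.3, 24), flows[idx], flows))
```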
Zhang, Guodong; Brown, Eric W.; González-Escalona, Narjol
2011-01-01
Contamination of foods, especially produce, with Salmonella spp. is a major concern for public health. Several methods are available for the detection of Salmonella in produce, but their relative efficiency for detecting Salmonella in commonly consumed vegetables, often associated with outbreaks of food poisoning, needs to be confirmed. In this study, the effectiveness of three molecular methods for detection of Salmonella in six produce matrices was evaluated and compared to the FDA microbiological detection method. Samples of cilantro (coriander leaves), lettuce, parsley, spinach, tomato, and jalapeno pepper were inoculated with Salmonella serovars at two different levels (10^5 and <10^1 CFU/25 g of produce). The inoculated produce was assayed by the FDA Salmonella culture method (Bacteriological Analytical Manual) and by three molecular methods: quantitative real-time PCR (qPCR), quantitative reverse transcriptase real-time PCR (RT-qPCR), and loop-mediated isothermal amplification (LAMP). Comparable results were obtained by these four methods, which all detected as little as 2 CFU of Salmonella cells/25 g of produce. All control samples (not inoculated) were negative by the four methods. RT-qPCR detects only live Salmonella cells, obviating the danger of false-positive results from nonviable cells. False negatives (inhibition of either qPCR or RT-qPCR) were avoided by the use of either a DNA or an RNA amplification internal control (IAC). Compared to the conventional culture method, the qPCR, RT-qPCR, and LAMP assays allowed faster and equally accurate detection of Salmonella spp. in six high-risk produce commodities. PMID:21803916
3D Lunar Terrain Reconstruction from Apollo Images
NASA Technical Reports Server (NTRS)
Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Aleksandr V.
2009-01-01
Generating accurate three dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.
Self Calibrated Wireless Distributed Environmental Sensory Networks
Fishbain, Barak; Moreno-Centeno, Erick
2016-01-01
Recent advances in sensory and communication technologies have made Wireless Distributed Environmental Sensory Networks (WDESN) technically and economically feasible. WDESNs present an unprecedented tool for studying many environmental processes in a new way. However, the WDESNs' calibration process is a major obstacle to their becoming common practice. Here, we present a new, robust and efficient method for aggregating measurements acquired by an uncalibrated WDESN and producing accurate estimates of the observed environmental variable's true levels, rendering the network self-calibrated. The suggested method is novel both in group decision-making and in environmental sensing, offering a valuable tool for distributed environmental monitoring data aggregation. Applying the method to an extensive real-life air-pollution dataset showed markedly more accurate results than the common practice and the state-of-the-art. PMID:27098279
NASA Astrophysics Data System (ADS)
Jin, Tao; Chen, Yiyang; Flesch, Rodolfo C. C.
2017-11-01
Harmonics pose a great threat to the safe and economical operation of power grids. Therefore, it is critical to detect harmonic parameters accurately in order to design harmonic compensation equipment. The fast Fourier transform (FFT) is widely used for electrical power harmonic analysis. However, the barrier effect produced by the algorithm itself and the spectrum leakage caused by asynchronous sampling often degrade the accuracy of harmonic analysis. This paper examines a new approach to harmonic analysis based on deriving correction formulas for frequency, phase angle, and amplitude, utilizing the Nuttall-Kaiser window double-spectrum-line interpolation method, which overcomes the shortcomings of traditional FFT harmonic calculations. The proposed approach is verified numerically and experimentally to be accurate and reliable.
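The paper's Nuttall-Kaiser correction formulas are not reproduced here; the sketch below illustrates the same two-spectrum-line interpolation idea with a Hann window and Grandke's classical frequency correction.

```python
import numpy as np

def interpolated_frequency(signal, fs):
    """Estimate a tone frequency from the two largest adjacent spectral lines (Hann window)."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal * np.hanning(n)))
    k = np.argmax(spec[1:-1]) + 1                      # highest spectral line
    k2 = k + 1 if spec[k + 1] >= spec[k - 1] else k - 1
    alpha = spec[k2] / spec[k]                         # second-largest / largest line
    delta = (2 * alpha - 1) / (alpha + 1)              # fractional bin offset for a Hann window
    return (k + delta) * fs / n if k2 > k else (k - delta) * fs / n

# usage: a 50.3 Hz tone sampled asynchronously (not an integer number of cycles)
fs = 1000.0
t = np.arange(2048) / fs
print(interpolated_frequency(np.sin(2 * np.pi * 50.3 * t), fs))   # ~50.3
```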
Methods and techniques for measuring gas emissions from agricultural and animal feeding operations.
Hu, Enzhu; Babcock, Esther L; Bialkowski, Stephen E; Jones, Scott B; Tuller, Markus
2014-01-01
Emissions of gases from agricultural and animal feeding operations contribute to climate change, produce odors, degrade sensitive ecosystems, and pose a threat to public health. The complexity of processes and environmental variables affecting these emissions complicate accurate and reliable quantification of gas fluxes and production rates. Although a plethora of measurement technologies exist, each method has its limitations that exacerbate accurate quantification of gas fluxes. Despite a growing interest in gas emission measurements, only a few available technologies include real-time, continuous monitoring capabilities. Commonly applied state-of-the-art measurement frameworks and technologies were critically examined and discussed, and recommendations for future research to address real-time monitoring requirements for forthcoming regulation and management needs are provided.
Sommerlot, Andrew R; Pouyan Nejadhashemi, A; Woznicki, Sean A; Prohaska, Michael D
2013-10-15
Non-point source pollution from agricultural lands is a significant contributor to sediment pollution in United States lakes and streams. Therefore, quantifying the impact of individual field management strategies at the watershed scale provides valuable information to watershed managers and conservation agencies to enhance decision-making. In this study, four methods employing some of the most cited models in field and watershed scale analysis were compared to find a practical yet accurate method for evaluating field management strategies at the watershed outlet. The models used in this study include a field-scale model (the Revised Universal Soil Loss Equation 2 - RUSLE2), a spatially explicit overland sediment delivery model (SEDMOD), and a watershed-scale model (the Soil and Water Assessment Tool - SWAT). These models were used to develop four modeling strategies (methods) for the River Raisin watershed: Method 1) predefined field-scale subbasin and reach layers were used in the SWAT model; Method 2) a subbasin-scale sediment delivery ratio was employed; Method 3) results obtained from the field-scale RUSLE2 model were incorporated as point source inputs to the SWAT watershed model; and Method 4) a hybrid solution combining analyses from the RUSLE2, SEDMOD, and SWAT models. Method 4 was selected as the most accurate among the studied methods. In addition, the effectiveness of six best management practices (BMPs) in terms of water quality improvement and associated cost was assessed. Economic analysis was performed using Method 4, and producer-requested prices for BMPs were compared with prices defined by the Environmental Quality Incentives Program (EQIP). On a per unit area basis, producers requested higher prices than EQIP in four out of six BMP categories. Meanwhile, the true cost of sediment reduction at the field and watershed scales was greater than EQIP in five of six BMP categories according to producer-requested prices. Copyright © 2013 Elsevier Ltd. All rights reserved.
An efficient approach to BAC based assembly of complex genomes.
Visendi, Paul; Berkman, Paul J; Hayashi, Satomi; Golicz, Agnieszka A; Bayer, Philipp E; Ruperao, Pradeep; Hurgobin, Bhavna; Montenegro, Juan; Chan, Chon-Kit Kenneth; Staňková, Helena; Batley, Jacqueline; Šimková, Hana; Doležel, Jaroslav; Edwards, David
2016-01-01
There has been an exponential growth in the number of genome sequencing projects since the introduction of next generation DNA sequencing technologies. Genome projects have increasingly involved assembly of whole genome data which produces inferior assemblies compared to traditional Sanger sequencing of genomic fragments cloned into bacterial artificial chromosomes (BACs). While whole genome shotgun sequencing using next generation sequencing (NGS) is relatively fast and inexpensive, this method is extremely challenging for highly complex genomes, where polyploidy or high repeat content confounds accurate assembly, or where a highly accurate 'gold' reference is required. Several attempts have been made to improve genome sequencing approaches by incorporating NGS methods, to variable success. We present the application of a novel BAC sequencing approach which combines indexed pools of BACs, Illumina paired read sequencing, a sequence assembler specifically designed for complex BAC assembly, and a custom bioinformatics pipeline. We demonstrate this method by sequencing and assembling BAC cloned fragments from bread wheat and sugarcane genomes. We demonstrate that our assembly approach is accurate, robust, cost effective and scalable, with applications for complete genome sequencing in large and complex genomes.
A close-range photogrammetric technique for mapping neotectonic features in trenches
Fairer, G.M.; Whitney, J.W.; Coe, J.A.
1989-01-01
Close-range photogrammetric techniques and newly available computerized plotting equipment were used to map exploratory trench walls that expose Quaternary faults in the vicinity of Yucca Mountain, Nevada. Small-scale structural, lithologic, and stratigraphic features can be rapidly mapped by the photogrammetric method. This method is more accurate and significantly more rapid than conventional trench-mapping methods, and the analytical plotter is capable of producing cartographic definition of high resolution when detailed trench maps are necessary. -from Authors
ERIC Educational Resources Information Center
Sriwantaneeyakul, Suttawan
2018-01-01
Translation ability requires many language skills to produce an accurate and complete text; however, one important skill, critical reading in the research, has been neglected. This research, therefore, employed the explanatory sequential mixed method to investigate the differences in Thai-English translation ability between students with a high…
Differential radioactivity monitor for non-invasive detection of ocular melanoma
Lambrecht, R.M.; Packer, S.
1982-09-23
There is described an apparatus and method for diagnosing ocular cancer, both non-invasive and accurate, which comprises two radiation detectors, one positioned before each of the patient's eyes, to measure the radiation level produced in each eye after the administration of a tumor-localizing radiopharmaceutical such as gallium-67.
Application of a rising plate meter to estimate forage yield on dairy farms in Pennsylvania
USDA-ARS?s Scientific Manuscript database
Accurately assessing pasture forage yield is necessary for producers who want to budget feed expenses and make informed pasture management decisions. Clipping and weighing forage from a known area is a direct method to measure pasture forage yield, however it is time consuming. The rising plate mete...
Advanced Space Propulsion System Flowfield Modeling
NASA Technical Reports Server (NTRS)
Smith, Sheldon
1998-01-01
Solar thermal upper stage propulsion systems currently under development utilize small low chamber pressure/high area ratio nozzles. Consequently, the resulting flow in the nozzle is highly viscous, with the boundary layer flow comprising a significant fraction of the total nozzle flow area. Conventional uncoupled flow methods, which treat the nozzle boundary layer and inviscid flowfield separately and combine the two calculations via the influence of the boundary-layer displacement thickness on the inviscid flowfield, are not accurate enough to adequately treat highly viscous nozzles. Navier-Stokes models such as VNAP2 can treat these flowfields but cannot perform a vacuum plume expansion for applications where the exhaust plume produces induced environments on adjacent structures. This study builds upon recently developed artificial intelligence methods and user interface methodologies to couple the VNAP2 model for treating viscous nozzle flowfields with a vacuum plume flowfield model (RAMP2) that is currently a part of the Plume Environment Prediction (PEP) Model. This study integrated the VNAP2 code into the PEP model to produce an accurate, practical and user-friendly tool for calculating highly viscous nozzle and exhaust plume flowfields.
Bonicelli, Andrea; Xhemali, Bledar; Kranioti, Elena F.
2017-01-01
Age estimation remains one of the most challenging tasks in forensic practice when establishing a biological profile of unknown skeletonised remains. Morphological methods based on developmental markers of bones can provide accurate age estimates at a young age, but become highly unreliable for ages over 35 when all developmental markers disappear. This study explores the biomechanical properties of bone tissue and matrix, which continue to change with age even after skeletal maturity, and their potential value for age estimation. As a proof of concept we investigated the relationship of 28 variables at the macroscopic and microscopic level in rib autopsy samples from 24 individuals. Stepwise regression analysis produced a number of equations, one of which, with seven variables, showed an R2 = 0.949, a mean residual error of 2.13 yrs ± 0.4 (SD), and a maximum residual error of 2.88 yrs. For forensic purposes, using only bench-top machines in tests that can be carried out within 36 hrs, a set of just 3 variables produced an equation with an R2 = 0.902, a mean residual error of 3.38 yrs ± 2.6 (SD), and a maximum observed residual error of 9.26 yrs. This method outstrips all existing age-at-death methods based on ribs, thus providing a novel, accurate, lab-based tool in the forensic investigation of human remains. The present application is optimised for fresh (uncompromised by taphonomic conditions) remains, but the potential of the principle and method is vast once the trends of the biomechanical variables are established for other environmental conditions and circumstances. PMID:28520764
A Modeling Approach to Global Land Surface Monitoring with Low Resolution Satellite Imaging
NASA Technical Reports Server (NTRS)
Hlavka, Christine A.; Dungan, Jennifer; Livingston, Gerry P.; Gore, Warren J. (Technical Monitor)
1998-01-01
The effects of changing land use/land cover on global climate and ecosystems due to greenhouse gas emissions and changing energy and nutrient exchange rates are being addressed by federal programs such as NASA's Mission to Planet Earth (MTPE) and by international efforts such as the International Geosphere-Biosphere Program (IGBP). The quantification of these effects depends on accurate estimates of the global extent of critical land cover types such as fire scars in tropical savannas and ponds in Arctic tundra. To address the requirement for accurate areal estimates, methods for producing regional to global maps with satellite imagery are being developed. The only practical way to produce maps over large regions of the globe is with data of coarse spatial resolution, such as Advanced Very High Resolution Radiometer (AVHRR) weather satellite imagery at 1.1 km resolution or European Remote-Sensing Satellite (ERS) radar imagery at 100 m resolution. The accuracy of pixel counts as areal estimates is in doubt, especially for highly fragmented cover types such as fire scars and ponds. Efforts to improve areal estimates from coarse resolution maps have involved regression of apparent area from coarse data versus that from fine resolution in sample areas, but it has proven difficult to acquire sufficient fine scale data to develop the regression. A method for computing accurate estimates from coarse resolution maps using little or no fine data is therefore needed.
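A minimal sketch of the regression-correction idea mentioned above, with invented areas standing in for sample-site measurements:

```python
import numpy as np

# Apparent area from coarse-resolution maps is regressed against reference area
# from fine-resolution data at sample sites, then used to correct coarse estimates elsewhere.
coarse_area = np.array([12.0, 30.0, 55.0, 80.0])   # km^2, apparent area at sample sites
fine_area = np.array([20.0, 44.0, 76.0, 108.0])    # km^2, fine-resolution reference at the same sites
slope, intercept = np.polyfit(coarse_area, fine_area, 1)
corrected = slope * 65.0 + intercept               # correct a new coarse-map estimate of 65 km^2
print(corrected)
```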
Ligon, D A; Gillespie, J B; Pellegrino, P
2000-08-20
The feasibility of using a generalized stochastic inversion methodology to estimate aerosol size distributions accurately by use of spectral extinction, backscatter data, or both is examined. The stochastic method used, inverse Monte Carlo (IMC), is verified with both simulated and experimental data from aerosols composed of spherical dielectrics with a known refractive index. Various levels of noise are superimposed on the data such that the effect of noise on the stability and results of inversion can be determined. Computational results show that the application of the IMC technique to inversion of spectral extinction or backscatter data or both can produce good estimates of aerosol size distributions. Specifically, for inversions for which both spectral extinction and backscatter data are used, the IMC technique was extremely accurate in determining particle size distributions well outside the wavelength range. Also, the IMC inversion results proved to be stable and accurate even when the data had significant noise, with a signal-to-noise ratio of 3.
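The inversion itself can be caricatured in a few lines. The sketch below is a toy stochastic inversion in the spirit of inverse Monte Carlo, not the authors' implementation: the forward model is a synthetic kernel matrix rather than Mie extinction efficiencies, and a candidate size distribution is kept whenever a random perturbation reduces the misfit to noisy spectral extinction data.

```python
# Toy stochastic inversion sketch: recover a binned size distribution n from
# simulated spectral extinction y = K @ n + noise by randomly perturbing one
# bin at a time and keeping changes that reduce the misfit. K is synthetic.
import numpy as np

rng = np.random.default_rng(1)
nbins, nwave = 20, 10
K = rng.uniform(0.5, 2.0, size=(nwave, nbins))          # synthetic extinction kernel
true_n = np.exp(-0.5 * ((np.arange(nbins) - 8) / 3.0) ** 2)
y_noisy = (K @ true_n) * (1 + rng.normal(scale=1 / 3, size=nwave))   # SNR ~ 3

est = np.full(nbins, true_n.mean())                     # flat initial guess
best = np.sum((K @ est - y_noisy) ** 2)
for _ in range(100_000):
    trial = est.copy()
    i = rng.integers(nbins)
    trial[i] = max(0.0, trial[i] + rng.normal(scale=0.05))
    cost = np.sum((K @ trial - y_noisy) ** 2)
    if cost < best:
        est, best = trial, cost

print("relative error of retrieved distribution:",
      np.linalg.norm(est - true_n) / np.linalg.norm(true_n))
```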
An investigation of new methods for estimating parameter sensitivities
NASA Technical Reports Server (NTRS)
Beltracchi, Todd J.; Gabriele, Gary A.
1989-01-01
The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivities.
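For contrast with the RQP-based estimator, a brute-force baseline is easy to state: re-solve the optimization at perturbed parameter values and central-difference the optima. The sketch below does exactly that for a made-up two-term quadratic objective whose optimum is x* = (p/2 + 1)/2, so the exact sensitivity is 0.25; it is illustrative only and is not the method of the abstract.

```python
# Brute-force sensitivity baseline (not the RQP-based estimator above): estimate
# dx*/dp by re-solving the optimization at p +/- h and central differencing.
from scipy.optimize import minimize

def x_star(p):
    # Made-up objective with analytic optimum x* = (p/2 + 1)/2.
    obj = lambda x: (x[0] - p / 2.0) ** 2 + (x[0] - 1.0) ** 2
    return minimize(obj, x0=[0.0], method="BFGS", options={"gtol": 1e-10}).x[0]

p0, h = 1.0, 1e-3
sens = (x_star(p0 + h) - x_star(p0 - h)) / (2.0 * h)
print("estimated dx*/dp =", sens)   # should be close to the analytic value 0.25
```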
Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar
2015-01-01
For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single-frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. The limits of agreement produced from the Bland-Altman analysis likewise indicated that the performance of the single-frequency approach (Sun's prediction equations) at the population level was close to the performance of both BIS methods; however, when comparing the Mean Absolute Percentage Error values between the single-frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both the population and individual level, the magnitude of the improvement was small. Such a slight improvement in the accuracy of BIS methods is suggested to be insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing over-fluidic status on dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications. PMID:26137489
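The two agreement statistics named above are simple to compute. Below is a sketch on synthetic paired total-body-water values (a reference method versus a bioimpedance prediction); the numbers are invented and only illustrate the Bland-Altman limits of agreement and the mean absolute percentage error.

```python
# Sketch of the agreement statistics mentioned above on synthetic paired TBW data.
import numpy as np

rng = np.random.default_rng(2)
reference = rng.normal(40, 6, size=60)                  # "true" TBW, litres (synthetic)
predicted = reference + rng.normal(0.3, 1.5, size=60)   # method under test

diff = predicted - reference
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))  # Bland-Altman limits
mape = np.mean(np.abs(diff) / reference) * 100                          # mean absolute percentage error

print(f"bias = {bias:.2f} L, limits of agreement = ({loa[0]:.2f}, {loa[1]:.2f}) L")
print(f"MAPE = {mape:.1f} %")
```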
Kuroda, Yukihiro; Saito, Madoka
2010-03-01
An in vitro method to predict the phospholipidosis-inducing potential of cationic amphiphilic drugs (CADs) was developed using biochemical and physicochemical assays. The following parameters were applied to principal component analysis, together with the physicochemical parameters pK(a) and clogP: the dissociation constant of CADs from phospholipid, inhibition of enzymatic phospholipid degradation, and metabolic stability of CADs. In the score plot, phospholipidosis-inducing drugs (amiodarone, propranolol, imipramine, chloroquine) were plotted locally, forming the subspace for positive CADs, while non-inducing drugs (chlorpromazine, chloramphenicol, disopyramide, lidocaine) were scattered outside the subspace, allowing a clear discrimination between the two classes of CADs. CADs that often produce false results with conventional physicochemical or cell-based assay methods were accurately classified by our method. Basic and lipophilic disopyramide could be accurately predicted as a nonphospholipidogenic drug. Moreover, chlorpromazine, which is often falsely predicted as a phospholipidosis-inducing drug by in vitro methods, could be accurately determined. Because this method uses the pharmacokinetic parameters pK(a), clogP, and metabolic stability, which are usually obtained in the early stages of drug development, it newly requires only two parameters: binding to phospholipid and inhibition of the lipid-degrading enzyme. Therefore, this method provides a cost-effective approach to predicting the phospholipidosis-inducing potential of a drug. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
Browning, Brian L.; Browning, Sharon R.
2009-01-01
We present methods for imputing data for ungenotyped markers and for inferring haplotype phase in large data sets of unrelated individuals and parent-offspring trios. Our methods make use of known haplotype phase when it is available, and our methods are computationally efficient so that the full information in large reference panels with thousands of individuals is utilized. We demonstrate that substantial gains in imputation accuracy accrue with increasingly large reference panel sizes, particularly when imputing low-frequency variants, and that unphased reference panels can provide highly accurate genotype imputation. We place our methodology in a unified framework that enables the simultaneous use of unphased and phased data from trios and unrelated individuals in a single analysis. For unrelated individuals, our imputation methods produce well-calibrated posterior genotype probabilities and highly accurate allele-frequency estimates. For trios, our haplotype-inference method is four orders of magnitude faster than the gold-standard PHASE program and has excellent accuracy. Our methods enable genotype imputation to be performed with unphased trio or unrelated reference panels, thus accounting for haplotype-phase uncertainty in the reference panel. We present a useful measure of imputation accuracy, allelic R2, and show that this measure can be estimated accurately from posterior genotype probabilities. Our methods are implemented in version 3.0 of the BEAGLE software package. PMID:19200528
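A common way to express the allelic R2 accuracy measure mentioned above is the squared correlation between true allele dosages and the dosages expected from posterior genotype probabilities. The sketch below computes it on synthetic genotypes; it is one standard formulation and not necessarily the exact estimator implemented in BEAGLE.

```python
# Sketch of an allelic-R^2 style accuracy measure on synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n = 1000
true_dosage = rng.binomial(2, 0.2, size=n)              # true genotypes (0/1/2) at one SNP

# Fake posterior genotype probabilities, mostly concentrated on the truth.
post = np.full((n, 3), 0.05)
post[np.arange(n), true_dosage] = 0.9
expected_dosage = post @ np.array([0.0, 1.0, 2.0])      # posterior expected allele dosage

r = np.corrcoef(true_dosage, expected_dosage)[0, 1]
print("allelic R^2 ~", r ** 2)
```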
Chen, Kevin T; Izquierdo-Garcia, David; Poynton, Clare B; Chonde, Daniel B; Catana, Ciprian
2017-03-01
To propose an MR-based method for generating continuous-valued head attenuation maps and to assess its accuracy and reproducibility. Demonstrating that novel MR-based photon attenuation correction methods are both accurate and reproducible is essential prior to using them routinely in research and clinical studies on integrated PET/MR scanners. Continuous-valued linear attenuation coefficient maps ("μ-maps") were generated by combining atlases that provided the prior probability of voxel positions belonging to a certain tissue class (air, soft tissue, or bone) and an MR intensity-based likelihood classifier to produce posterior probability maps of tissue classes. These probabilities were used as weights to generate the μ-maps. The accuracy of this probabilistic atlas-based continuous-valued μ-map ("PAC-map") generation method was assessed by calculating the voxel-wise absolute relative change (RC) between the MR-based and scaled CT-based attenuation-corrected PET images. To assess reproducibility, we performed pair-wise comparisons of the RC values obtained from the PET images reconstructed using the μ-maps generated from the data acquired at three time points. The proposed method produced continuous-valued μ-maps that qualitatively reflected the variable anatomy in patients with brain tumor and agreed well with the scaled CT-based μ-maps. The absolute RC comparing the resulting PET volumes was 1.76 ± 2.33 %, quantitatively demonstrating that the method is accurate. Additionally, we also showed that the method is highly reproducible, the mean RC value for the PET images reconstructed using the μ-maps obtained at the three visits being 0.65 ± 0.95 %. Accurate and highly reproducible continuous-valued head μ-maps can be generated from MR data using a probabilistic atlas-based approach.
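The voxel-wise absolute relative change (RC) metric used above to compare the MR-based and CT-based attenuation-corrected PET images is straightforward to compute; the sketch below uses random arrays as stand-ins for the two reconstructed volumes.

```python
# Sketch of the voxel-wise absolute relative change (RC) metric, in percent.
import numpy as np

rng = np.random.default_rng(4)
pet_ct = rng.uniform(1.0, 10.0, size=(64, 64, 32))          # reference reconstruction (stand-in)
pet_mr = pet_ct * (1 + rng.normal(scale=0.02, size=pet_ct.shape))

rc = np.abs(pet_mr - pet_ct) / pet_ct * 100                 # voxel-wise absolute RC
print(f"absolute RC = {rc.mean():.2f} % +/- {rc.std():.2f} %")
```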
Comparisons of discrete and integrative sampling accuracy in estimating pulsed aquatic exposures.
Morrison, Shane A; Luttbeg, Barney; Belden, Jason B
2016-11-01
Most current-use pesticides have short half-lives in the water column and thus the most relevant exposure scenarios for many aquatic organisms are pulsed exposures. Quantifying exposure using discrete water samples may not be accurate as few studies are able to sample frequently enough to accurately determine time-weighted average (TWA) concentrations of short aquatic exposures. Integrative sampling methods that continuously sample freely dissolved contaminants over time intervals (such as integrative passive samplers) have been demonstrated to be a promising measurement technique. We conducted several modeling scenarios to test the assumption that integrative methods may require far fewer samples for accurate estimation of peak 96-h TWA concentrations. We compared the accuracies of discrete point samples and integrative samples while varying sampling frequencies over a range of contaminant water half-lives (t50 = 0.5, 2, and 8 d). Differences in the predictive accuracy of discrete point samples and integrative samples were greatest at low sampling frequencies. For example, when the half-life was 0.5 d, discrete point samples required 7 sampling events to ensure median values > 50% and no sampling events reporting highly inaccurate results (defined as < 10% of the true 96-h TWA). Across all water half-lives investigated, integrative sampling only required two samples to prevent highly inaccurate results and measurements resulting in median values > 50% of the true concentration. Regardless, the need for integrative sampling diminished as water half-life increased. For an 8-d water half-life, two discrete samples produced accurate estimates and median values greater than those obtained for two integrative samples. Overall, integrative methods are the more accurate option for monitoring contaminants with short water half-lives due to reduced frequency of extreme values, especially with uncertainties around the timing of pulsed events. However, the acceptability of discrete sampling methods for providing accurate concentration measurements increases with increasing aquatic half-lives. Copyright © 2016 Elsevier Ltd. All rights reserved.
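A toy version of such a modeling scenario can be run in a few lines. The sketch below, which is not the authors' model, decays a pulse exponentially for each of the three water half-lives and compares the true 96-h TWA with the mean of two discrete grab samples and with an idealized integrative sampler that averages continuously.

```python
# Toy pulsed-exposure comparison: discrete grab samples vs. an ideal integrative sampler.
import numpy as np

def simulate(half_life_d, n_discrete, horizon_d=4.0, c0=100.0):
    k = np.log(2) / half_life_d
    t = np.linspace(0.0, horizon_d, 10_000)
    c = c0 * np.exp(-k * t)
    true_twa = c.mean()                                   # uniform grid, so mean ~ integral average
    grab_times = np.linspace(0.0, horizon_d, n_discrete)  # evenly spaced grab samples
    discrete = np.mean(c0 * np.exp(-k * grab_times))
    integrative = true_twa                                # ideal integrative sampler
    return true_twa, discrete, integrative

for hl in (0.5, 2.0, 8.0):                                # water half-lives from the study
    true_twa, disc, integ = simulate(hl, n_discrete=2)
    print(f"t50 = {hl} d: true TWA = {true_twa:.1f}, two grab samples = {disc:.1f}, "
          f"integrative = {integ:.1f}")
```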
2013-01-01
Background Population stratification is a systematic difference in allele frequencies between subpopulations. This can lead to spurious association findings in the case–control genome wide association studies (GWASs) used to identify single nucleotide polymorphisms (SNPs) associated with disease-linked phenotypes. Methods such as self-declared ancestry, ancestry informative markers, genomic control, structured association, and principal component analysis are used to assess and correct population stratification but each has limitations. We provide an alternative technique to address population stratification. Results We propose a novel machine learning method, ETHNOPRED, which uses the genotype and ethnicity data from the HapMap project to learn ensembles of disjoint decision trees, capable of accurately predicting an individual’s continental and sub-continental ancestry. To predict an individual’s continental ancestry, ETHNOPRED produced an ensemble of 3 decision trees involving a total of 10 SNPs, with 10-fold cross validation accuracy of 100% using HapMap II dataset. We extended this model to involve 29 disjoint decision trees over 149 SNPs, and showed that this ensemble has an accuracy of ≥ 99.9%, even if some of those 149 SNP values were missing. On an independent dataset, predominantly of Caucasian origin, our continental classifier showed 96.8% accuracy and improved genomic control’s λ from 1.22 to 1.11. We next used the HapMap III dataset to learn classifiers to distinguish European subpopulations (North-Western vs. Southern), East Asian subpopulations (Chinese vs. Japanese), African subpopulations (Eastern vs. Western), North American subpopulations (European vs. Chinese vs. African vs. Mexican vs. Indian), and Kenyan subpopulations (Luhya vs. Maasai). In these cases, ETHNOPRED produced ensembles of 3, 39, 21, 11, and 25 disjoint decision trees, respectively involving 31, 502, 526, 242 and 271 SNPs, with 10-fold cross validation accuracy of 86.5% ± 2.4%, 95.6% ± 3.9%, 95.6% ± 2.1%, 98.3% ± 2.0%, and 95.9% ± 1.5%. However, ETHNOPRED was unable to produce a classifier that can accurately distinguish Chinese in Beijing vs. Chinese in Denver. Conclusions ETHNOPRED is a novel technique for producing classifiers that can identify an individual’s continental and sub-continental heritage, based on a small number of SNPs. We show that its learned classifiers are simple, cost-efficient, accurate, transparent, flexible, fast, applicable to large scale GWASs, and robust to missing values. PMID:23432980
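A minimal sketch of the ensemble idea (several small decision trees, each restricted to a disjoint subset of SNPs, voting on ancestry) is shown below using scikit-learn on synthetic genotypes; the real ETHNOPRED trees are learned from HapMap data and yield the SNP subsets and accuracies quoted above.

```python
# Sketch of an ensemble of disjoint decision trees voting on ancestry (synthetic data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
n, n_snps = 300, 30
labels = rng.integers(0, 2, size=n)                          # two ancestry groups
freqs = np.where(labels[:, None] == 1, 0.7, 0.3)             # group-specific allele frequency
genotypes = rng.binomial(2, freqs, size=(n, n_snps))         # 0/1/2 genotype matrix

snp_blocks = np.array_split(np.arange(n_snps), 3)            # disjoint SNP subsets
trees = [(block, DecisionTreeClassifier(max_depth=3).fit(genotypes[:, block], labels))
         for block in snp_blocks]

votes = np.array([tree.predict(genotypes[:, block]) for block, tree in trees])
majority = (votes.mean(axis=0) > 0.5).astype(int)            # majority vote across trees
print("training accuracy of the 3-tree ensemble:", (majority == labels).mean())
```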
Billi, Fabrizio; Benya, Paul; Kavanaugh, Aaron; Adams, John; Ebramzadeh, Edward; McKellop, Harry
2012-02-01
Numerous studies indicate that highly crosslinked polyethylenes reduce the wear debris volume generated by hip arthroplasty acetabular liners. This, in turn, requires new methods to isolate and characterize the resulting particles. We describe a method for extracting polyethylene wear particles from bovine serum typically used in wear tests and for characterizing their size, distribution, and morphology. Serum proteins were completely digested using an optimized enzymatic digestion method that prevented the loss of the smallest particles and minimized their clumping. Density-gradient ultracentrifugation was designed to remove contaminants and recover the particles without filtration, depositing them directly onto a silicon wafer. This provided uniform distribution of the particles and high contrast against the background, facilitating accurate, automated, morphometric image analysis. The accuracy and precision of the new protocol were assessed by recovering and characterizing particles from wear tests of three types of polyethylene acetabular cups (no crosslinking and 5 Mrads and 7.5 Mrads of gamma irradiation crosslinking). The new method demonstrated important differences in the particle size distributions and morphologic parameters among the three types of polyethylene that could not be detected using prior isolation methods. The new protocol overcomes a number of limitations, such as loss of nanometer-sized particles and artifactual clumping, among others. The analysis of polyethylene wear particles produced in joint simulator wear tests of prosthetic joints is a key tool to identify the wear mechanisms that produce the particles and to predict and evaluate their effects on periprosthetic tissues.
Imaging with Mass Spectrometry of Bacteria on the Exoskeleton of Fungus-Growing Ants.
Gemperline, Erin; Horn, Heidi A; DeLaney, Kellen; Currie, Cameron R; Li, Lingjun
2017-08-18
Mass spectrometry imaging is a powerful analytical technique for detecting and determining spatial distributions of molecules within a sample. Typically, mass spectrometry imaging is limited to the analysis of thin tissue sections taken from the middle of a sample. In this work, we present a mass spectrometry imaging method for the detection of compounds produced by bacteria on the outside surface of ant exoskeletons in response to pathogen exposure. Fungus-growing ants have a specialized mutualism with Pseudonocardia, a bacterium that lives on the ants' exoskeletons and helps protect their fungal garden food source from harmful pathogens. The developed method allows for visualization of bacterial-derived compounds on the ant exoskeleton. This method demonstrates the capability to detect compounds that are specifically localized to the bacterial patch on ant exoskeletons, shows good reproducibility across individual ants, and achieves accurate mass measurements within 5 ppm error when using a high-resolution, accurate-mass mass spectrometer.
Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William
2014-01-01
Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies.
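The model comparison described above can be sketched with scikit-learn: a Random Forest versus an ordinary linear regression, scored by cross-validated error on synthetic covariates standing in for the census and environmental predictors used in the study.

```python
# Sketch comparing a tree-based model with a linear model for density prediction.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n = 400
X = rng.uniform(size=(n, 4))              # e.g. land cover, elevation, climate, night lights (hypothetical)
density = np.exp(3 * X[:, 0]) + 10 * (X[:, 1] > 0.5) + rng.normal(scale=1.0, size=n)

for name, model in [("random forest", RandomForestRegressor(n_estimators=200, random_state=0)),
                    ("linear regression", LinearRegression())]:
    score = cross_val_score(model, X, density, cv=5,
                            scoring="neg_mean_absolute_error").mean()
    print(f"{name}: CV mean absolute error = {-score:.2f}")
```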
Image feature based GPS trace filtering for road network generation and road segmentation
Yuan, Jiangye; Cheriyadat, Anil M.
2015-10-19
We propose a new method to infer road networks from GPS trace data and accurately segment road regions in high-resolution aerial images. Unlike previous efforts that rely on GPS traces alone, we exploit image features to infer road networks from noisy trace data. The inferred road network is used to guide road segmentation. We show that the number of image segments spanned by the traces and the trace orientation validated with image features are important attributes for identifying GPS traces on road regions. Based on filtered traces, we construct road networks and integrate them with image features to segment road regions. Lastly, our experiments show that the proposed method produces more accurate road networks than the leading method that uses GPS traces alone, and also achieves high accuracy in segmenting road regions even with very noisy GPS data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Webb, R.A.
The need to have accurate petroleum measurement is obvious. Petroleum measurement is the basis of commerce between oil producers, royalty owners, oil transporters, refiners, marketers, the Department of Revenue, and the motoring public. Furthermore, petroleum measurements are often used to detect operational problems or unwanted releases in pipelines, tanks, marine vessels, underground storage tanks, etc. Therefore, consistent, accurate petroleum measurement is an essential part of any operation. While there are several methods and different types of equipment used to perform petroleum measurement, the basic process stays the same. The basic measurement process is the act of comparing an unknown quantity to a known quantity in order to establish its magnitude. The process can be seen in a variety of forms, such as measuring for a first-down in a football game, weighing meat and produce at the grocery, or the use of an automobile odometer.
NASA Technical Reports Server (NTRS)
Jarosch, H. S.
1982-01-01
A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.
Measuring the rate of spread of chaparral prescribed fires in northern California
S. L. Stephens; D. R. Weise; D. L. Fry; R. J. Keiffer; J. Dawson; E. Koo; J. Potts; P. J. Pagni
2008-01-01
Prescribed fire is a common method used to produce desired ecological effects in chaparral by mimicking the natural role of fire. Since prescribed fires are usually conducted in moderate fuel and weather conditions, models that accurately predict fire behavior and effects under these scenarios are important for management. In this study, explosive audio devices and...
USDA-ARS?s Scientific Manuscript database
This study investigated the ability of near-infrared spectroscopy (NIRS) to predict acrylamide content in French-fried potato. Potato flour spiked with acrylamide (50-8000 µg/kg) was used to determine if acrylamide could be accurately predicted in a potato matrix. French fries produced with various ...
Validation of a spectrophotometric procedure for determining nitrate in water samples
USDA-ARS?s Scientific Manuscript database
A single-reagent spectrophotometric procedure using vanadium (III) chloride (VCl3) was found to provide accurate and robust measurement of low levels of nitrate (lNO3-N) in agricultural runoff. Results of the VCl3 method produced data that correlated well (r=0.86; p<0.001) with NO3-N concentrations ...
Using the Nudge and Shove Methods to Adjust Item Difficulty Values.
Royal, Kenneth D
2015-01-01
In any examination, it is important that a sufficient mix of items with varying degrees of difficulty be present to produce desirable psychometric properties and increase instructors' ability to make appropriate and accurate inferences about what a student knows and/or can do. The purpose of this "teaching tip" is to demonstrate how examination items can be affected by the quality of distractors, and to present a simple method for adjusting items to meet difficulty specifications.
NASA Astrophysics Data System (ADS)
Harris, C. T.; Haw, D. W.; Handler, W. B.; Chronik, B. A.
2013-06-01
The time-varying magnetic fields created by the gradient coils in magnetic resonance imaging can produce negative effects on image quality and the system itself. Additionally, they can be a limiting factor to the introduction of non-MR devices such as cardiac pacemakers, orthopedic implants, and surgical robotics. The ability to model the induced currents produced by the switching gradient fields is key to developing methods for reducing these unwanted interactions. In this work, a framework for the calculation of induced currents on conducting surface geometries is summarized. This procedure is then compared to two separate experiments: (1) the analysis of the decay of currents induced upon a conducting cylinder by an insert gradient set within a head-only 7 T MR scanner; and (2) analysis of the heat deposited into a small conductor by a uniform switching magnetic field at multiple frequencies and two distinct conductor thicknesses. The method was shown to allow the accurate modeling of the induced time-varying field decay in the first case, and was able to provide accurate estimation of the rise in temperature in the second experiment to within 30% when the skin depth was greater than or equal to the thickness of the conductor.
Contact Thermocouple Methodology and Evaluation for Temperature Measurement in the Laboratory
NASA Technical Reports Server (NTRS)
Brewer, Ethan J.; Pawlik, Ralph J.; Krause, David L.
2013-01-01
Laboratory testing of advanced aerospace components very often requires highly accurate temperature measurement and control devices, as well as methods to precisely analyze and predict the performance of such components. Analysis of test articles depends on accurate measurements of temperature across the specimen. Where possible, this task is accomplished using many thermocouples welded directly to the test specimen, which can produce results with great precision. However, it is known that thermocouple spot welds can initiate deleterious cracks in some materials, prohibiting the use of welded thermocouples. Such is the case for the nickel-based superalloy MarM-247, which is used in the high temperature, high pressure heater heads for the Advanced Stirling Converter component of the Advanced Stirling Radioisotope Generator space power system. To overcome this limitation, a method was developed that uses small diameter contact thermocouples to measure the temperature of heater head test articles with the same level of accuracy as welded thermocouples. This paper includes a brief introduction and a background describing the circumstances that compelled the development of the contact thermocouple measurement method. Next, the paper describes studies performed on contact thermocouple readings to determine the accuracy of results. It continues on to describe in detail the developed measurement method and the evaluation of results produced. A further study that evaluates the performance of different measurement output devices is also described. Finally, a brief conclusion and summary of results is provided.
Nguyen, Huy Truong; Min, Jung-Eun; Long, Nguyen Phuoc; Thanh, Ma Chi; Le, Thi Hong Van; Lee, Jeongmi; Park, Jeong Hill; Kwon, Sung Won
2017-08-05
Agarwood, the resinous heartwood produced by some Aquilaria species such as Aquilaria crassna, Aquilaria malaccensis and Aquilaria sinensis, has been traditionally and widely used in medicine, incenses and especially perfumes. However, up to now, the authentication of agarwood has been largely based on morphological characteristics, a method which is prone to errors and lacks reproducibility. Hence, in this study, we applied metabolomics and a genetic approach to the authentication of two common agarwood chips, those produced by Aquilaria crassna and Aquilaria malaccensis. Primary metabolites, secondary metabolites and DNA markers of agarwood were authenticated by 1H NMR metabolomics, GC-MS metabolomics and DNA-based techniques, respectively. The results indicated that agarwood chips could be classified accurately by all the methods illustrated in this study. Additionally, the pros and cons of each method are also discussed. To the best of our knowledge, our research is the first study detailing all the differences in the primary and secondary metabolites, as well as the DNA markers, between the agarwood produced by these two species. Copyright © 2017 Elsevier B.V. All rights reserved.
Razban, Behrooz; Nelson, Kristina Y; McMartin, Dena W; Cullimore, D Roy; Wall, Michelle; Wang, Dunling
2012-01-01
An analytical method to produce profiles of bacterial biomass fatty acid methyl esters (FAME) was developed employing rapid agitation followed by static incubation (RASI) using selective media of wastewater microbial communities. The results were compiled to produce a unique library for comparison and performance analysis at a Wastewater Treatment Plant (WWTP). A total of 146 samples from the aerated WWTP, comprising 73 samples each of secondary and tertiary effluent, were analyzed. For comparison purposes, all samples were evaluated via a similarity index (SI), with secondary effluents producing an SI of 0.88 with 2.7% variation and tertiary samples producing an SI of 0.86 with 5.0% variation. The results also highlighted significant differences between the fatty acid profiles of the tertiary and secondary effluents, indicating considerable shifts in the bacterial community profile between these treatment phases. The WWTP performance results using this method were highly replicable and reproducible, indicating that the protocol has potential as a performance-monitoring tool for aerated WWTPs. The results quickly and accurately reflect shifts in dominant bacterial communities that result when process operations and performance change.
Shin, Jeong-Sook; Peng, Lei; Kang, Kyungsu; Choi, Yongsoo
2016-09-09
Direct analysis of prostaglandin-E2 (PGE2) and -D2 (PGD2) produced from a RAW264.7 cell-based reaction was performed by liquid chromatography high-resolution mass spectrometry (LC-HRMS), which was coupled online with turbulent flow chromatography (TFC). The capability of this method to accurately measure PG levels in cell reaction medium containing cytokines or proteins as a reaction byproduct was cross-validated by two conventional methods. The two methods, an LC-HRMS method after liquid-liquid extraction (LLE) of the sample and a commercial PGE2 enzyme-linked immunosorbent assay (ELISA), showed PGE2 and/or PGD2 levels almost similar to those obtained by TFC LC-HRMS over the reaction time after LPS stimulation. After the cross-validation, significant analytical throughput, allowing simultaneous screening and potency evaluation of 80 natural products, including 60 phytochemicals and 20 natural product extracts, for the inhibition of the PGD2 produced in the cell-based inflammatory reaction, was achieved using the TFC LC-HRMS method developed. Among the 60 phytochemicals screened, licochalcone A and formononetin inhibited PGD2 production the most, with IC50 values of 126 and 151 nM, respectively. As reference compounds, indomethacin and diclofenac were used, with measured IC50 values of 0.64 and 0.21 nM, respectively. This method also identified a butanol extract of Akebia quinata Decne (AQ) stem as a promising natural product for PGD2 inhibition. Direct and accurate analysis of PGs in the inflammatory cell reaction using the TFC LC-HRMS method developed enables the high-throughput screening and potency evaluation of as many as 320 samples in less than 48 h without changing a TFC column. Copyright © 2016 Elsevier B.V. All rights reserved.
Application of reverse engineering in the production of individual dental abutments.
NASA Astrophysics Data System (ADS)
Yunusov, A. V.; Kashapov, R. N.; Kashapov, L. N.; Statsenko, E. O.
2017-09-01
The purpose of this research is to develop a method of manufacturing individual dental abutments for a variety of dental implants. The industrial X-ray microtomography system Phoenix V|tome|X S 240 was used to create a highly accurate model of the dental abutment. The abutment was scanned and the model optimized. A milling program for the individual abutment with a standard conical hexagon neck was produced for the five-axis milling machine imes-icore 450i, using titanium and zirconium oxide as materials.
Improving real-time efficiency of case-based reasoning for medical diagnosis.
Park, Yoon-Joo
2014-01-01
Conventional case-based reasoning (CBR) does not perform efficiently on high-volume datasets because of case-retrieval time. Some previous studies overcame this problem by clustering a case base into several small groups and retrieving neighbors within the group corresponding to a target case. However, this approach generally produces less accurate predictions than conventional CBR. This paper suggests a new case-based reasoning method called the Clustering-Merging CBR (CM-CBR), which produces a level of predictive performance similar to conventional CBR while spending significantly less computational cost.
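The general cluster-then-retrieve idea (not the specific Clustering-Merging CBR algorithm) can be sketched as follows: partition the case base with k-means, then retrieve nearest neighbors only within the target case's cluster, which cuts retrieval cost relative to scanning the whole case base.

```python
# Sketch of cluster-then-retrieve case retrieval on a synthetic case base.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(7)
cases = rng.normal(size=(5000, 8))                    # stored cases (synthetic feature vectors)
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(cases)

def retrieve(target, k=5):
    cluster = km.predict(target.reshape(1, -1))[0]    # cluster of the target case
    members = cases[km.labels_ == cluster]            # only search within that cluster
    nn = NearestNeighbors(n_neighbors=min(k, len(members))).fit(members)
    _, idx = nn.kneighbors(target.reshape(1, -1))
    return members[idx[0]]                            # k most similar cases in the cluster

print(retrieve(rng.normal(size=8)).shape)
```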
An effective hair detection algorithm for dermoscopic melanoma images of skin lesions
NASA Astrophysics Data System (ADS)
Chakraborti, Damayanti; Kaur, Ravneet; Umbaugh, Scott; LeAnder, Robert
2016-09-01
Dermoscopic images are obtained using the method of skin surface microscopy. Pigmented skin lesions are evaluated in terms of texture features such as color and structure. Artifacts, such as hairs, bubbles, black frames, ruler-marks, etc., create obstacles that prevent accurate detection of skin lesions by both clinicians and computer-aided diagnosis systems. In this article, we propose a new algorithm for the automated detection of hairs, using an adaptive Canny edge-detection method, followed by morphological filtering and an arithmetic addition operation. The algorithm was applied to 50 dermoscopic melanoma images. In order to ascertain this method's relative detection accuracy, it was compared to the Razmjooy hair-detection method [1], using segmentation error (SE), true detection rate (TDR) and false positioning rate (FPR). The new method produced 6.57% SE, 96.28% TDR and 3.47% FPR, compared to 15.751% SE, 86.29% TDR and 11.74% FPR produced by the Razmjooy method [1]. Because of the 7.27-9.99% improvement in those parameters, we conclude that the new algorithm produces much better results for detecting thick, thin, dark and light hairs. The new method proposed here also shows an appreciable difference in the rate of detecting bubbles.
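A minimal sketch of the pipeline named above (adaptive Canny edge detection, then morphological filtering, then an arithmetic addition to overlay the hair mask) can be written with OpenCV. The thresholds and kernel size are illustrative, not the tuned values from the paper, and "lesion.png" is a placeholder file name.

```python
# Sketch of a Canny + morphology + addition hair-detection pipeline (illustrative parameters).
import cv2
import numpy as np

img = cv2.imread("lesion.png", cv2.IMREAD_GRAYSCALE)   # placeholder dermoscopic image

# A simple "adaptive" choice of Canny thresholds from the median intensity.
med = np.median(img)
edges = cv2.Canny(img, int(0.66 * med), int(1.33 * med))

# Morphological closing joins edge fragments along thin, elongated hairs.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
hair_mask = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

# Arithmetic addition overlays the detected hairs on the original image for inspection.
overlay = cv2.add(img, hair_mask)
cv2.imwrite("hair_mask.png", hair_mask)
cv2.imwrite("overlay.png", overlay)
```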
A scanning PIV method for fine-scale turbulence measurements
NASA Astrophysics Data System (ADS)
Lawson, John M.; Dawson, James R.
2014-12-01
A hybrid technique is presented that combines scanning PIV with tomographic reconstruction to make spatially and temporally resolved measurements of the fine-scale motions in turbulent flows. The technique uses one or two high-speed cameras to record particle images as a laser sheet is rapidly traversed across a measurement volume. This is combined with a fast method for tomographic reconstruction of the particle field for use in conjunction with PIV cross-correlation. The method was tested numerically using DNS data and with experiments in a large mixing tank that produces axisymmetric homogeneous turbulence. A parametric investigation identifies the important parameters for a scanning PIV set-up and provides guidance to the interested experimentalist in achieving the best accuracy. Optimal sheet spacings and thicknesses are reported, and it was found that accurate results could be obtained at quite low scanning speeds. The two-camera method is the most robust to noise, permitting accurate measurements of the velocity gradients and direct determination of the dissipation rate.
Examination of a Rotorcraft Noise Prediction Method and Comparison to Flight Test Data
NASA Technical Reports Server (NTRS)
Boyd, D. Douglas, Jr.; Greenwood, Eric; Watts, Michael E.; Lopes, Leonard V.
2017-01-01
With a view that rotorcraft noise should be included in the preliminary design process, a relatively fast noise prediction method is examined in this paper. A comprehensive rotorcraft analysis is combined with a noise prediction method to compute several noise metrics of interest. These predictions are compared to flight test data. Results show that inclusion of only the main rotor noise will produce results that severely underpredict integrated metrics of interest. Inclusion of the tail rotor frequency content is essential for accurately predicting these integrated noise metrics.
Doll, Charles G.; Wright, Cherylyn W.; Morley, Shannon M.; ...
2017-02-01
In this paper, a modified version of the Direct LSC method to correct for quenching effect was investigated for the determination of bio-originated fuel content in fuel samples produced from multiple biological starting materials. The modified method was found to be accurate in determining the percent bio-originated fuel to within 5% of the actual value for samples with quenching effects ≤43%. Finally, analysis of highly quenched samples was possible when diluted with the exception of one sample with a 100% quenching effect.
NASA Astrophysics Data System (ADS)
Warger, William C., II; Newmark, Judith A.; Zhao, Bing; Warner, Carol M.; DiMarzio, Charles A.
2006-02-01
Present imaging techniques used in in vitro fertilization (IVF) clinics are unable to produce accurate cell counts in developing embryos past the eight-cell stage. We have developed a method that has produced accurate cell counts in live mouse embryos ranging from 13-25 cells by combining Differential Interference Contrast (DIC) and Optical Quadrature Microscopy. Optical Quadrature Microscopy is an interferometric imaging modality that measures the amplitude and phase of the signal beam that travels through the embryo. The phase is transformed into an image of optical path length difference, which is used to determine the maximum optical path length deviation of a single cell. DIC microscopy gives distinct cell boundaries for cells within the focal plane when other cells do not lie in the path to the objective. Fitting an ellipse to the boundary of a single cell in the DIC image and combining it with the maximum optical path length deviation of a single cell creates an ellipsoidal model cell of optical path length deviation. Subtracting the model cell from the Optical Quadrature image will either show the optical path length deviation of the culture medium or reveal another cell underneath. Once all the boundaries are used in the DIC image, the subtracted Optical Quadrature image is analyzed to determine the cell boundaries of the remaining cells. The final cell count is produced when no more cells can be subtracted. We have produced exact cell counts on 5 samples, which have been validated by Epi-Fluorescence images of Hoechst stained nuclei.
A Multimodal Deep Log-Based User Experience (UX) Platform for UX Evaluation.
Hussain, Jamil; Khan, Wajahat Ali; Hur, Taeho; Bilal, Hafiz Syed Muhammad; Bang, Jaehun; Hassan, Anees Ul; Afzal, Muhammad; Lee, Sungyoung
2018-05-18
The user experience (UX) is an emerging field in user research and design, and the development of UX evaluation methods presents a challenge for both researchers and practitioners. Different UX evaluation methods have been developed to extract accurate UX data. Among UX evaluation methods, the mixed-method approach of triangulation has gained importance. It provides more accurate and precise information about the user while interacting with the product. However, this approach requires skilled UX researchers and developers to integrate multiple devices, synchronize them, analyze the data, and ultimately produce an informed decision. In this paper, a method and system for measuring the overall UX over time using a triangulation method are proposed. The proposed platform incorporates observational and physiological measurements in addition to traditional ones. The platform reduces the subjective bias and validates the user's perceptions, which are measured by different sensors through objectification of the subjective nature of the user in the UX assessment. The platform additionally offers plug-and-play support for different devices and powerful analytics for obtaining insight on the UX in terms of multiple participants.
Leacock, William B.; Eby, Lisa A.; Stanford, Jack A.
2016-01-01
Accurately estimating population sizes is often a critical component of fisheries research and management. Although there is a growing appreciation of the importance of small-scale salmon population dynamics to the stability of salmon stock-complexes, our understanding of these populations is constrained by a lack of efficient and cost-effective monitoring tools for streams. Weirs are expensive, labor-intensive, and can disrupt natural fish movements. While conventional video systems avoid some of these shortcomings, they are expensive and require excessive amounts of labor to review footage for data collection. Here, we present a novel method for quantifying salmon in small streams (<15 m wide, <1 m deep) that uses both time-lapse photography and video in a model-based double sampling scheme. This method produces an escapement estimate nearly as accurate as a video-only approach, but with substantially less labor, money, and effort. It requires servicing only every 14 days, detects salmon 24 h/day, is inexpensive, and produces escapement estimates with confidence intervals. In addition to escapement estimation, we present a method for estimating in-stream salmon abundance across time, data needed by researchers interested in predator-prey interactions or nutrient subsidies. We combined daily salmon passage estimates with stream-specific estimates of daily mortality developed using previously published data. To demonstrate proof of concept for these methods, we present results from two streams in southwest Kodiak Island, Alaska, in which high densities of sockeye salmon spawn. PMID:27326378
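A model-based double-sampling estimate of the kind described above can be sketched in a few lines: time-lapse photo counts are available every day, video-based counts (treated here as accurate) only on a subsample of days, a regression calibrates photos to video, and calibrated daily counts are summed into an escapement estimate. All numbers below are synthetic; this is not the authors' model.

```python
# Sketch of regression-calibrated double sampling for escapement estimation (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
days = 60
true_daily = rng.poisson(80, size=days)                     # fish passing per day
photo = np.maximum(0, (0.7 * true_daily + rng.normal(scale=5, size=days)).round())
video_days = rng.choice(days, size=15, replace=False)       # subsample of days with video review

calib = LinearRegression().fit(photo[video_days].reshape(-1, 1), true_daily[video_days])
escapement = calib.predict(photo.reshape(-1, 1)).sum()
print(f"estimated escapement = {escapement:.0f}, true = {true_daily.sum()}")
```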
Approaches on calibration of bolometer and establishment of bolometer calibration device
NASA Astrophysics Data System (ADS)
Xia, Ming; Gao, Jianqiang; Ye, Jun'an; Xia, Junwen; Yin, Dejin; Li, Tiecheng; Zhang, Dong
2015-10-01
Bolometers are mainly used for measuring thermal radiation in public places, occupational hygiene, heating and ventilation, and building energy conservation. The working principle of a bolometer is that, under exposure to thermal radiation, the temperature of the detector's black absorbing layer rises as radiation is absorbed, producing a thermoelectric electromotive force. The white reflective layer of the detector does not absorb thermal radiation, so its thermoelectric electromotive force is almost zero. Comparing the electromotive forces produced by the black absorbing layer and the white reflective layer eliminates the influence of the electric potential produced by changes in the background temperature of the substrate. After the electromotive force produced by thermal radiation is processed by the signal-processing unit, the reading is shown on the indication display unit. The measurement unit of thermal radiation intensity is usually W/m2 or kW/m2. Its accurate and reliable value has important significance for high-temperature operation and for the grading and management of labor safety and hygiene. The bolometer calibration device is mainly composed of an absolute radiometer, a reference light source, and an electric measuring instrument. The absolute radiometer is a self-calibrating radiometer: its working principle is to use electrical power, which can be measured accurately, in place of radiation power so that the radiant power is measured absolutely. The absolute radiometer is the standard apparatus of the laser low-power standard device, so measurement traceability is guaranteed. Using a comparison calibration method, the absolute radiometer and the bolometer alternately measure the reference light source at the same position, which yields the correction factor for the irradiance indication. This paper mainly describes the design and calibration method of the bolometer calibration device. The uncertainty of the calibration result is also evaluated.
Portable method of measuring gaseous acetone concentrations.
Worrall, Adam D; Bernstein, Jonathan A; Angelopoulos, Anastasios P
2013-08-15
Measurement of acetone in human breath samples has been previously shown to provide significant non-invasive diagnostic insight into the control of a patient's diabetic condition. In patients with diabetes mellitus, the body produces excess amounts of ketones such as acetone, which are then exhaled during respiration. Using various breath analysis methods has allowed for the accurate determination of acetone concentrations in exhaled breath. However, many of these methods require instrumentation and pre-concentration steps not suitable for point-of-care use. We have found that by immobilizing resorcinol reagent into a perfluorosulfonic acid polymer membrane, a controlled organic synthesis reaction occurs with acetone in a dry carrier gas. The immobilized, highly selective product of this reaction (a flavan) is found to produce a visible-spectrum color change that allows acetone concentrations to be measured at sub-ppm levels. Here we demonstrate how this approach can be used to produce a portable optical sensing device for real-time, non-invasive acetone analysis. Copyright © 2013 Elsevier B.V. All rights reserved.
Hansen, Bjoern Oest; Meyer, Etienne H; Ferrari, Camilla; Vaid, Neha; Movahedi, Sara; Vandepoele, Klaas; Nikoloski, Zoran; Mutwil, Marek
2018-03-01
Recent advances in gene function prediction rely on ensemble approaches that integrate results from multiple inference methods to produce superior predictions. Yet, these developments remain largely unexplored in plants. We have explored and compared two methods to integrate 10 gene co-function networks for Arabidopsis thaliana and demonstrate how the integration of these networks produces more accurate gene function predictions for a larger fraction of genes with unknown function. These predictions were used to identify genes involved in mitochondrial complex I formation, and for five of them, we confirmed the predictions experimentally. The ensemble predictions are provided as a user-friendly online database, EnsembleNet. The methods presented here demonstrate that ensemble gene function prediction is a powerful method to boost prediction performance, whereas the EnsembleNet database provides a cutting-edge community tool to guide experimentalists. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.
Geometrically complex 3D-printed phantoms for diffuse optical imaging.
Dempsey, Laura A; Persad, Melissa; Powell, Samuel; Chitnis, Danial; Hebden, Jeremy C
2017-03-01
Tissue-equivalent phantoms that mimic the optical properties of human and animal tissues are commonly used in diffuse optical imaging research to characterize instrumentation or evaluate an image reconstruction method. Although many recipes have been produced for generating solid phantoms with specified absorption and transport scattering coefficients at visible and near-infrared wavelengths, the construction methods are generally time-consuming and are unable to create complex geometries. We present a method of generating phantoms using a standard 3D printer. A simple recipe was devised which enables printed phantoms to be produced with precisely known optical properties. To illustrate the capability of the method, we describe the creation of an anatomically accurate, tissue-equivalent premature infant head optical phantom with a hollow brain space based on MRI atlas data. A diffuse optical image of the phantom is acquired when a high contrast target is inserted into the hollow space filled with an aqueous scattering solution.
Sequencing Cyclic Peptides by Multistage Mass Spectrometry
Mohimani, Hosein; Yang, Yu-Liang; Liu, Wei-Ting; Hsieh, Pei-Wen; Dorrestein, Pieter C.; Pevzner, Pavel A.
2012-01-01
Some of the most effective antibiotics (e.g., Vancomycin and Daptomycin) are cyclic peptides produced by non-ribosomal biosynthetic pathways. While hundreds of biomedically important cyclic peptides have been sequenced, the computational techniques for sequencing cyclic peptides are still in their infancy. Previous methods for sequencing peptide antibiotics and other cyclic peptides are based on Nuclear Magnetic Resonance spectroscopy, and require large amounts (milligrams) of purified material that, for most compounds, are not possible to obtain. Recently, development of mass spectrometry-based methods has provided some hope for accurate sequencing of cyclic peptides using picograms of materials. In this paper we develop a method for sequencing of cyclic peptides by multistage mass spectrometry, and show its advantages over single-stage mass spectrometry. The method is tested on known and new cyclic peptides from Bacillus brevis, Dianthus superbus and Streptomyces griseus, as well as a new family of cyclic peptides produced by marine bacteria. PMID:21751357
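One reason cyclic peptides are harder to sequence than linear ones is that a cyclic peptide of n residues can ring-open at any of its n bonds, so its theoretical fragment masses are the masses of every run of consecutive residues around the ring. The sketch below enumerates those masses for an arbitrary example peptide using standard monoisotopic residue masses; it illustrates the combinatorics only and is not the authors' algorithm.

```python
# Enumerate theoretical fragment masses of a cyclic peptide (all consecutive runs around the ring).
RESIDUE_MASS = {"G": 57.02146, "A": 71.03711, "V": 99.06841,
                "L": 113.08406, "F": 147.06841, "P": 97.05276}   # monoisotopic residue masses

def cyclic_fragment_masses(seq):
    n = len(seq)
    masses = set()
    for start in range(n):                 # every ring-opening position
        for length in range(1, n):         # every run of consecutive residues
            run = [seq[(start + i) % n] for i in range(length)]
            masses.add(round(sum(RESIDUE_MASS[r] for r in run), 4))
    return sorted(masses)

print(cyclic_fragment_masses("GAVLFP")[:10])   # arbitrary example peptide
```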
Vessel extraction in retinal images using automatic thresholding and Gabor Wavelet.
Ali, Aziah; Hussain, Aini; Wan Zaki, Wan Mimi Diyana
2017-07-01
Retinal image analysis has been widely used for early detection and diagnosis of multiple systemic diseases. Accurate vessel extraction in retinal images is a crucial step towards a fully automated diagnosis system. This work presents an efficient unsupervised method for extracting blood vessels from retinal images by combining the existing Gabor Wavelet (GW) method with automatic thresholding. The green channel image is extracted from the color retinal image and used to produce a Gabor feature image using GW. Both the green channel image and the Gabor feature image undergo a vessel-enhancement step in order to highlight blood vessels. Next, the two vessel-enhanced images are transformed to binary images using automatic thresholding before being combined to produce the final vessel output. Combining the images results in a significant improvement in blood vessel extraction performance compared to using either image individually. The effectiveness of the proposed method was proven via comparative analysis with existing methods, validated using a publicly available database, DRIVE.
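A minimal sketch of the combination idea (a Gabor-filtered response and the inverted green channel, each binarized with Otsu's automatic threshold and then OR-ed together) is shown below with OpenCV. The kernel parameters are illustrative, not the tuned values of the paper, and "retina.png" is a placeholder file name; the vessel-enhancement step here is reduced to a simple intensity inversion.

```python
# Sketch of Gabor filtering + automatic (Otsu) thresholding + combination for vessel extraction.
import cv2
import numpy as np

green = cv2.imread("retina.png")[:, :, 1]                  # green channel of an RGB fundus image
enhanced = cv2.bitwise_not(green)                          # vessels appear bright after inversion

# Maximum response over a small bank of oriented Gabor kernels.
gabor = np.zeros(enhanced.shape, dtype=np.float32)
for theta in np.arange(0, np.pi, np.pi / 8):
    kern = cv2.getGaborKernel((15, 15), sigma=3, theta=theta, lambd=8, gamma=0.5, psi=0)
    gabor = np.maximum(gabor, cv2.filter2D(enhanced, cv2.CV_32F, kern))
gabor = cv2.normalize(gabor, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

_, bin_green = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
_, bin_gabor = cv2.threshold(gabor, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
vessels = cv2.bitwise_or(bin_green, bin_gabor)             # combined vessel map
cv2.imwrite("vessels.png", vessels)
```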
Detection of ESBL among ampc producing enterobacteriaceae using inhibitor-based method
Bakthavatchalu, Sasirekha; Shakthivel, Uma; Mishra, Tannu
2013-01-01
Introduction The occurrence of multiple β-lactamases among bacteria not only limits the therapeutic options but also poses a challenge. A study using boronic acid (BA), an AmpC enzyme inhibitor, was designed to detect the combined expression of AmpC β-lactamases and extended-spectrum β-lactamases (ESBLs) in bacterial isolates; further, different phenotypic methods were compared for detecting ESBL and AmpC production. Methods A total of 259 clinical isolates of Enterobacteriaceae were isolated and screened for ESBL production by (i) the CLSI double-disk diffusion method, (ii) the cefepime-clavulanic acid method, and (iii) the boronic disk potentiation method. AmpC production was detected using cefoxitin alone and in combination with boronic acid, and confirmation was done by three-dimensional disk methods. Isolates were also subjected to detailed antibiotic susceptibility testing. Results Among the 259 isolates, 20.46% were coproducers of ESBL and AmpC, 26.45% were ESBL producers, and 5.40% were AmpC producers. All of the 53 AmpC and ESBL coproducers were accurately detected by the boronic acid disk potentiation method. Conclusion The BA disk test using Clinical and Laboratory Standards Institute methodology is a simple and very efficient method that accurately detects the isolates that harbor both AmpCs and ESBLs. PMID:23504148
Using high hydraulic conductivity nodes to simulate seepage lakes
Anderson, Mary P.; Hunt, Randall J.; Krohelski, James T.; Chung, Kuopo
2002-01-01
In a typical ground water flow model, lakes are represented by specified head nodes requiring that lake levels be known a priori. To remove this limitation, previous researchers assigned high hydraulic conductivity (K) values to nodes that represent a lake, under the assumption that the simulated head at the nodes in the high-K zone accurately reflects lake level. The solution should also produce a constant water level across the lake. We developed a model of a simple hypothetical ground water/lake system to test whether solutions using high-K lake nodes are sensitive to the value of K selected to represent the lake. Results show that the larger the contrast between the K of the aquifer and the K of the lake nodes, the smaller the error tolerance required for the solution to converge. For our test problem, a contrast of three orders of magnitude produced a head difference across the lake of 0.005 m under a regional gradient of the order of 10⁻³ m/m, while a contrast of four orders of magnitude produced a head difference of 0.001 m. The high-K method was then used to simulate lake levels in Pretty Lake, Wisconsin. Results for both the hypothetical system and the application to Pretty Lake compared favorably with results using a lake package developed for MODFLOW (Merritt and Konikow 2000). While our results demonstrate that the high-K method accurately simulates lake levels, this method has more cumbersome postprocessing and longer run times than the same problem simulated using the lake package.
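The following one-dimensional finite-difference sketch, a simplified stand-in for the MODFLOW models used in the study, illustrates how assigning lake nodes a hydraulic conductivity several orders of magnitude above the aquifer produces a nearly flat simulated head across the lake. The grid size, boundary heads, and lake extent are illustrative assumptions.

# Illustrative 1-D steady-state finite-difference sketch (not the authors' model): a high-K
# zone embedded in an aquifer yields a nearly constant head across the "lake" nodes.
import numpy as np

def steady_heads(K, h_left, h_right):
    n = len(K)
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0                       # fixed-head boundaries
    b[0], b[-1] = h_left, h_right
    for i in range(1, n - 1):
        kw = 2 * K[i - 1] * K[i] / (K[i - 1] + K[i])   # harmonic-mean conductance, west face
        ke = 2 * K[i] * K[i + 1] / (K[i] + K[i + 1])   # east face
        A[i, i - 1], A[i, i], A[i, i + 1] = kw, -(kw + ke), ke
    return np.linalg.solve(A, b)

K = np.ones(100); K[40:60] = 1e3        # lake nodes three orders of magnitude more conductive
h = steady_heads(K, 10.0, 9.9)          # regional gradient of roughly 1e-3 m/m
print(np.ptp(h[40:60]))                 # head difference across the "lake" is very small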
eulerAPE: Drawing Area-Proportional 3-Venn Diagrams Using Ellipses
Micallef, Luana; Rodgers, Peter
2014-01-01
Venn diagrams with three curves are used extensively in various medical and scientific disciplines to visualize relationships between data sets and facilitate data analysis. The area of the regions formed by the overlapping curves is often directly proportional to the cardinality of the depicted set relation or any other related quantitative data. Drawing these diagrams manually is difficult and current automatic drawing methods do not always produce appropriate diagrams. Most methods depict the data sets as circles, as they perceptually pop out as complete distinct objects due to their smoothness and regularity. However, circles cannot draw accurate diagrams for most 3-set data and so the generated diagrams often have misleading region areas. Other methods use polygons to draw accurate diagrams. However, polygons are non-smooth and non-symmetric, so the curves are not easily distinguishable and the diagrams are difficult to comprehend. Ellipses are more flexible than circles and are similarly smooth, but none of the current automatic drawing methods use ellipses. We present eulerAPE as the first method and software that uses ellipses for automatically drawing accurate area-proportional Venn diagrams for 3-set data. We describe the drawing method adopted by eulerAPE and we discuss our evaluation of the effectiveness of eulerAPE and ellipses for drawing random 3-set data. We compare eulerAPE and various other methods that are currently available and we discuss differences between their generated diagrams in terms of accuracy and ease of understanding for real world data. PMID:25032825
Multispectral photography for earth resources
NASA Technical Reports Server (NTRS)
Wenderoth, S.; Yost, E.; Kalia, R.; Anderson, R.
1972-01-01
A guide for producing accurate multispectral results for earth resource applications is presented along with theoretical and analytical concepts of color and multispectral photography. Topics discussed include: capabilities and limitations of color and color infrared films; image color measurements; methods of relating ground phenomena to film density and color measurement; sensitometry; considerations in the selection of multispectral cameras and components; and mission planning.
NASA Technical Reports Server (NTRS)
Lent, P. C. (Principal Investigator)
1976-01-01
The author has identified the following significant results. Winter and summer moose range maps of three selected areas were produced (1:63,360 scale). The analytic approach is very similar to modified clustering. Preliminary results indicate that this method is not only more accurate but considerably less expensive than supervised classification techniques.
ERIC Educational Resources Information Center
Sundara, Megha; Demuth, Katherine; Kuhl, Patricia K.
2011-01-01
Purpose: Two-year-olds produce third person singular "-s" more accurately on verbs in sentence-final position as compared with verbs in sentence-medial position. This study was designed to determine whether these sentence-position effects can be explained by perceptual factors. Method: For this purpose, the authors compared 22- and 27-month-olds'…
Simulation of Electric Propulsion Thrusters (Preprint)
2011-02-07
activity concerns the plumes produced by electric thrusters. Detailed information on the plumes is required for safe integration of the thruster...ground-based laboratory facilities. Device modelling also plays an important role in plume simulations by providing accurate boundary conditions at...methods used to model the flow of gas and plasma through electric propulsion devices. Discussion of the numerical analysis of other aspects of
An efficient intensity-based ready-to-use X-ray image stitcher.
Wang, Junchen; Zhang, Xiaohui; Sun, Zhen; Yuan, Fuzhen
2018-06-14
The limited field of view of the X-ray image intensifier makes it difficult to cover a large target area with a single X-ray image. X-ray image stitching techniques have been proposed to produce a panoramic X-ray image. This paper presents an efficient intensity-based X-ray image stitcher, which does not rely on accurate C-arm motion control or auxiliary devices and hence is ready to use in the clinic. The stitcher consumes sequentially captured X-ray images with overlap areas and automatically produces a panoramic image. The gradient information for optimization of image alignment is obtained using a back-propagation scheme so that it is convenient to adopt various image warping models. The proposed stitcher has the following advantages over existing methods: (1) no additional hardware modification or auxiliary markers are needed; (2) it is more robust than feature-based approaches; (3) arbitrary warping models and shapes of the region of interest are supported; (4) seamless stitching is achieved using multi-band blending. Experiments have been performed to confirm the effectiveness of the proposed method. The proposed X-ray image stitcher is efficient, accurate and ready to use in the clinic. Copyright © 2018 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Karimi, F. S.; Saviz, S.; Ghoranneviss, M.; Salem, M. K.; Aghamir, F. M.
The circuit parameters are investigated in a Mather-type plasma focus device. The experiments are performed in the SABALAN-I plasma focus facility (2 kJ, 20 kV, 10 μF). A 12-turn Rogowski coil is built and used to measure the time derivative of the discharge current (dI/dt). A high pressure test is performed in this work as an alternative technique to the short circuit test for determining the machine circuit parameters and the calibration factor of the Rogowski coil. The operating parameters are calculated by two methods, and the results show that the relative errors of the parameters determined by method I are very low in comparison to method II; thus method I produces more accurate results than method II. The high pressure test assumes that there is no plasma motion, so that the circuit parameters may be estimated using R-L-C theory given that C0 is known. However, for a plasma focus, even at the highest permissible pressure there is significant plasma motion, so the estimated circuit parameters are not accurate. Therefore the Lee Model code is used in short circuit mode to generate a computed current trace for fitting to the current waveform integrated from the current derivative signal taken with the Rogowski coil. Hence, the plasma dynamics is accounted for in the estimation and the static bank parameters are determined accurately.
Gray, John R.; Gartner, Jeffrey W.
2010-01-01
Traditional methods for characterizing selected properties of suspended sediments in rivers are being augmented and in some cases replaced by cost-effective surrogate instruments and methods that produce a temporally dense time series of quantifiably accurate data for use primarily in sediment-flux computations. Turbidity is the most common such surrogate technology, and the first to be sanctioned by the U.S. Geological Survey for use in producing data used in concert with water-discharge data to compute sediment concentrations and fluxes for storage in the National Water Information System. Other technologies, including laser-diffraction, digital photo-optic, acoustic-attenuation and backscatter, and pressure-difference techniques are being evaluated for producing reliable sediment concentration and, in some cases, particle-size distribution data. Each technology addresses a niche for sediment monitoring. Their performances range from compelling to disappointing. Some of these technologies have the potential to revolutionize fluvial-sediment data collection, analysis, and availability.
Numerical methods for the stochastic Landau-Lifshitz Navier-Stokes equations.
Bell, John B; Garcia, Alejandro L; Williams, Sarah A
2007-07-01
The Landau-Lifshitz Navier-Stokes (LLNS) equations incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. This paper examines explicit Eulerian discretizations of the full LLNS equations. Several computational fluid dynamics approaches are considered (including MacCormack's two-step Lax-Wendroff scheme and the piecewise parabolic method) and are found to give good results for the variance of momentum fluctuations. However, neither of these schemes accurately reproduces the fluctuations in energy or density. We introduce a conservative centered scheme with a third-order Runge-Kutta temporal integrator that does accurately produce fluctuations in density, energy, and momentum. A variety of numerical tests, including the random walk of a standing shock wave, are considered and results from the stochastic LLNS solver are compared with theory, when available, and with molecular simulations using a direct simulation Monte Carlo algorithm.
Identification of an Efficient Gene Expression Panel for Glioblastoma Classification
Zelaya, Ivette; Laks, Dan R.; Zhao, Yining; Kawaguchi, Riki; Gao, Fuying; Kornblum, Harley I.; Coppola, Giovanni
2016-01-01
We present here a novel genetic algorithm-based random forest (GARF) modeling technique that enables a reduction in the complexity of large gene disease signatures to highly accurate, greatly simplified gene panels. When applied to 803 glioblastoma multiforme samples, this method allowed the 840-gene Verhaak et al. gene panel (the standard in the field) to be reduced to a 48-gene classifier, while retaining 90.91% classification accuracy, and outperforming the best available alternative methods. Additionally, using this approach we produced a 32-gene panel which allows for better consistency between RNA-seq and microarray-based classifications, improving cross-platform classification retention from 69.67% to 86.07%. A webpage producing these classifications is available at http://simplegbm.semel.ucla.edu. PMID:27855170
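A hedged sketch of the general idea, a genetic algorithm searching for a small gene panel whose random-forest classification accuracy remains high, is given below. It is not the published GARF code; the population size, number of generations, mutation rate, and cross-validation scheme are illustrative assumptions.

# Hedged sketch of a GA-wrapped random forest in the spirit of GARF (not the authors' code).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def ga_panel(X, y, panel_size=48, pop_size=20, generations=30, seed=0):
    rng = np.random.default_rng(seed)
    n_genes = X.shape[1]

    def fitness(genes):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        return cross_val_score(clf, X[:, genes], y, cv=3).mean()

    population = [rng.choice(n_genes, panel_size, replace=False) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]                      # keep the fittest half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.choice(len(parents), 2, replace=False)
            pool = np.union1d(parents[a], parents[b])          # crossover: sample from both panels
            child = rng.choice(pool, panel_size, replace=False)
            if rng.random() < 0.3:                             # mutation: swap in a random gene
                new_gene = rng.integers(n_genes)
                if new_gene not in child:
                    child[rng.integers(panel_size)] = new_gene
            children.append(child)
        population = parents + children
    return max(population, key=fitness)                        # best gene panel found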
Additively Manufactured Metals in Oxygen Systems Project
NASA Technical Reports Server (NTRS)
Tylka, Jonathan
2015-01-01
Metals produced by additive manufacturing methods, such as Powder Bed Fusion Technology, are now mature enough to be considered for qualification in human spaceflight oxygen systems. The mechanical properties of metals produced through AM processes are being systematically studied. However, it is unknown whether AM metals in oxygen applications may present an increased risk of flammability or ignition as compared to wrought metals of the same metallurgical composition due to increased porosity. Per NASA-STD-6001B materials to be used in oxygen system applications shall be based on flammability and combustion test data, followed by a flammability assessment. Without systematic flammability and ignition testing in oxygen there is no credible method for NASA to accurately evaluate the risk of using AM metals in oxygen systems.
Robust Mosaicking of Stereo Digital Elevation Models from the Ames Stereo Pipeline
NASA Technical Reports Server (NTRS)
Kim, Tae Min; Moratto, Zachary M.; Nefian, Ara Victor
2010-01-01
A robust estimation method is proposed to combine multiple observations and create consistent, accurate, dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce higher-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data than is currently possible. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. However, the DEMs currently produced by the ASP often contain errors and inconsistencies due to image noise, shadows, etc. The proposed method addresses this problem by making use of multiple observations and by considering their goodness of fit to improve both the accuracy and robustness of the estimate. The stepwise regression method is applied to estimate the relaxed weight of each observation.
Chattopadhyay, Sudip; Chaudhuri, Rajat K; Freed, Karl F
2011-04-28
The improved virtual orbital-complete active space configuration interaction (IVO-CASCI) method enables an economical and reasonably accurate treatment of static correlation in systems with significant multireference character, even when using a moderate basis set. This IVO-CASCI method supplants the computationally more demanding complete active space self-consistent field (CASSCF) method by producing comparable accuracy with diminished computational effort because the IVO-CASCI approach does not require additional iterations beyond an initial SCF calculation, nor does it encounter convergence difficulties or multiple solutions that may be found in CASSCF calculations. Our IVO-CASCI analytical gradient approach is applied to compute the equilibrium geometry for the ground and lowest excited state(s) of the theoretically very challenging 2,6-pyridyne, 1,2,3-tridehydrobenzene and 1,3,5-tridehydrobenzene anionic systems for which experiments are lacking, accurate quantum calculations are almost completely absent, and commonly used calculations based on single reference configurations fail to provide reasonable results. Hence, the computational complexity provides an excellent test for the efficacy of multireference methods. The present work clearly illustrates that the IVO-CASCI analytical gradient method provides a good description of the complicated electronic quasi-degeneracies during the geometry optimization process for the radicaloid anions. The IVO-CASCI treatment produces almost identical geometries as the CASSCF calculations (performed for this study) at a fraction of the computational labor. Adiabatic energy gaps to low lying excited states likewise emerge from the IVO-CASCI and CASSCF methods as very similar. We also provide harmonic vibrational frequencies to demonstrate the stability of the computed geometries.
Application of Nearly Linear Solvers to Electric Power System Computation
NASA Astrophysics Data System (ADS)
Grant, Lisa L.
To meet the future needs of the electric power system, improvements need to be made in the areas of power system algorithms, simulation, and modeling, specifically to achieve a time frame that is useful to industry. If power system time-domain simulations could run in real-time, then system operators would have situational awareness to implement online control and avoid cascading failures, significantly improving power system reliability. Several power system applications rely on the solution of a very large linear system. As the demands on power systems continue to grow, there is a greater computational complexity involved in solving these large linear systems within reasonable time. This project expands on the current work in fast linear solvers, developed for solving symmetric and diagonally dominant linear systems, in order to produce power system specific methods that can be solved in nearly-linear run times. The work explores a new theoretical method that is based on ideas in graph theory and combinatorics. The technique builds a chain of progressively smaller approximate systems with preconditioners based on the system's low stretch spanning tree. The method is compared to traditional linear solvers and shown to reduce the time and iterations required for an accurate solution, especially as the system size increases. A simulation validation is performed, comparing the solution capabilities of the chain method to LU factorization, which is the standard linear solver for power flow. The chain method was successfully demonstrated to produce accurate solutions for power flow simulation on a number of IEEE test cases, and a discussion on how to further improve the method's speed and accuracy is included.
Robust finger vein ROI localization based on flexible segmentation.
Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun
2013-10-24
Finger veins have been proved to be an effective biometric for personal identification in the recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition, and so degrade the performance of finger vein identification system. To improve this problem, in this paper, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correct calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database, MMCBNU_6000, to verify the robustness of the proposed method. The proposed method shows the segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.
Reflection full-waveform inversion using a modified phase misfit function
NASA Astrophysics Data System (ADS)
Cui, Chao; Huang, Jian-Ping; Li, Zhen-Chun; Liao, Wen-Yuan; Guan, Zhe
2017-09-01
Reflection full-waveform inversion (RFWI) updates the low- and high-wavenumber components, and yields more accurate initial models compared with conventional full-waveform inversion (FWI). However, there is strong nonlinearity in conventional RFWI because of the lack of low-frequency data and the complexity of the amplitude. The separation of phase and amplitude information makes RFWI more linear. Traditional phase-calculation methods face severe phase wrapping. To solve this problem, we propose a modified phase-calculation method that uses the phase-envelope data to obtain the pseudo phase information. Then, we establish a pseudophase-information-based objective function for RFWI, with the corresponding source and gradient terms. Numerical tests verify that the proposed calculation method using the phase-envelope data guarantees the stability and accuracy of the phase information and the convergence of the objective function. The application on a portion of the Sigsbee2A model and comparison with inversion results of the improved RFWI and conventional FWI methods verify that the pseudophase-based RFWI produces a highly accurate and efficient velocity model. Moreover, the proposed method is robust to noise and high frequency.
Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W
2013-08-01
Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.
A picture's worth a thousand words: a food-selection observational method.
Carins, Julia E; Rundle-Thiele, Sharyn R; Parkinson, Joy E
2016-05-04
Issue addressed: Methods are needed to accurately measure and describe behaviour so that social marketers and other behaviour change researchers can gain consumer insights before designing behaviour change strategies and so, in time, they can measure the impact of strategies or interventions when implemented. This paper describes a photographic method developed to meet these needs. Methods: Direct observation and photographic methods were developed and used to capture food-selection behaviour and examine those selections according to their healthfulness. Four meals (two lunches and two dinners) were observed at a workplace buffet-style cafeteria over a 1-week period. The healthfulness of individual meals was assessed using a classification scheme developed for the present study and based on the Australian Dietary Guidelines. Results: Approximately 27% of meals (n = 168) were photographed. Agreement was high between raters classifying dishes using the scheme, as well as between researchers when coding photographs. The subset of photographs was representative of patterns observed in the entire dining room. Diners chose main dishes in line with the proportions presented, but in opposition to the proportions presented for side dishes. Conclusions: The present study developed a rigorous observational method to investigate food choice behaviour. The comprehensive food classification scheme produced consistent classifications of foods. The photographic data collection method was found to be robust and accurate. Combining the two observation methods allows researchers and/or practitioners to accurately measure and interpret food selections. Consumer insights gained suggest that, in this setting, increasing the availability of green (healthful) offerings for main dishes would assist in improving healthfulness, whereas other strategies (e.g. promotion) may be needed for side dishes. So what?: Visual observation methods that accurately measure and interpret food-selection behaviour provide both insight for those developing healthy eating interventions and a means to evaluate the effect of implemented interventions on food selection.
Coarse-Graining Polymer Field Theory for Fast and Accurate Simulations of Directed Self-Assembly
NASA Astrophysics Data System (ADS)
Liu, Jimmy; Delaney, Kris; Fredrickson, Glenn
To design effective manufacturing processes using polymer directed self-assembly (DSA), the semiconductor industry benefits greatly from having a complete picture of stable and defective polymer configurations. Field-theoretic simulations are an effective way to study these configurations and predict defect populations. Self-consistent field theory (SCFT) is a particularly successful theory for studies of DSA. Although other models exist that are faster to simulate, these models are phenomenological or derived through asymptotic approximations, often leading to a loss of accuracy relative to SCFT. In this study, we employ our recently-developed method to produce an accurate coarse-grained field theory for diblock copolymers. The method uses a force- and stress-matching strategy to map output from SCFT simulations into parameters for an optimized phase field model. This optimized phase field model is just as fast as existing phenomenological phase field models, but makes more accurate predictions of polymer self-assembly, both in bulk and in confined systems. We study the performance of this model under various conditions, including its predictions of domain spacing, morphology and defect formation energies.
A temperature match based optimization method for daily load prediction considering DLC effect
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Z.
This paper presents a unique optimization method for short term load forecasting. The new method is based on the optimal template temperature match between the future and past temperatures. The optimal error reduction technique is a new concept introduced in this paper. Two case studies show that for hourly load forecasting, this method can yield results as good as the rather complicated Box-Jenkins Transfer Function method, and better than the Box-Jenkins method; for peak load prediction, this method is comparable in accuracy to the neural network method with back propagation, and can produce more accurate results than the multi-linear regression method. The DLC effect on system load is also considered in this method.
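The core template-match idea can be illustrated with a short Python sketch: pick the historical day whose hourly temperature profile is closest to the forecast day and reuse its load shape as the prediction. This is a simplified stand-in for the paper's optimization and error-reduction technique, with hypothetical array shapes.

# Minimal sketch of the temperature-match idea (not the paper's optimization method).
import numpy as np

def temperature_match_forecast(hist_temps, hist_loads, forecast_temp):
    # hist_temps, hist_loads: arrays of shape (days, 24); forecast_temp: shape (24,)
    errors = np.abs(hist_temps - forecast_temp).sum(axis=1)   # hourly temperature mismatch per day
    best = np.argmin(errors)                                  # best-matching historical day
    return hist_loads[best]                                   # use its load profile as the forecast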
Tian, Kai; Chen, Xiaowei; Luan, Binquan; Singh, Prashant; Yang, Zhiyu; Gates, Kent S; Lin, Mengshi; Mustapha, Azlin; Gu, Li-Qun
2018-05-22
Accurate and rapid detection of single-nucleotide polymorphism (SNP) in pathogenic mutants is crucial for many fields such as food safety regulation and disease diagnostics. Current detection methods involve laborious sample preparations and expensive characterizations. Here, we investigated a single locked nucleic acid (LNA) approach, facilitated by a nanopore single-molecule sensor, to accurately determine SNPs for detection of Shiga toxin producing Escherichia coli (STEC) serotype O157:H7, and cancer-derived EGFR L858R and KRAS G12D driver mutations. Current LNA applications require incorporation and optimization of multiple LNA nucleotides. But we found that in the nanopore system, a single LNA introduced in the probe is sufficient to enhance the SNP discrimination capability by over 10-fold, allowing accurate detection of the pathogenic mutant DNA mixed in a large amount of the wild-type DNA. Importantly, the molecular mechanistic study suggests that such a significant improvement is due to the effect of the single-LNA that both stabilizes the fully matched base-pair and destabilizes the mismatched base-pair. This sensitive method, with a simplified, low cost, easy-to-operate LNA design, could be generalized for various applications that need rapid and accurate identification of single-nucleotide variations.
Lung vessel segmentation in CT images using graph-cuts
NASA Astrophysics Data System (ADS)
Zhai, Zhiwei; Staring, Marius; Stoel, Berend C.
2016-03-01
Accurate lung vessel segmentation is an important operation for lung CT analysis. Filters that are based on analyzing the eigenvalues of the Hessian matrix are popular for pulmonary vessel enhancement. However, due to their low response at vessel bifurcations and vessel boundaries, extracting lung vessels by thresholding the vesselness is not sufficiently accurate. Some methods turn to graph-cuts for more accurate segmentation, as it incorporates neighbourhood information. In this work, we propose a new graph-cuts cost function combining appearance and shape, where CT intensity represents appearance and vesselness from a Hessian-based filter represents shape. Due to the amount of voxels in high resolution CT scans, the memory requirement and time consumption for building a graph structure is very high. In order to make the graph representation computationally tractable, those voxels that are considered clearly background are removed from the graph nodes, using a threshold on the vesselness map. The graph structure is then established based on the remaining voxel nodes, source/sink nodes and the neighbourhood relationship of the remaining voxels. Vessels are segmented by minimizing the energy cost function with the graph-cuts optimization framework. We optimized the parameters used in the graph-cuts cost function and evaluated the proposed method with two manually labeled sub-volumes. For independent evaluation, we used 20 CT scans of the VESSEL12 challenge. The evaluation results of the sub-volume data show that the proposed method produced a more accurate vessel segmentation compared to the previous methods, with F1 score 0.76 and 0.69. In the VESSEL12 data-set, our method obtained a competitive performance with an area under the ROC curve of 0.975, especially among the binary submissions.
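A minimal sketch of the cost construction described above is shown below: appearance comes from CT intensity, shape from the Hessian-based vesselness map, and voxels below a vesselness threshold are dropped before a max-flow solver is applied. The functional forms, weight, and threshold are illustrative assumptions, not the paper's exact energy terms.

# Hedged sketch of building unary graph-cut costs from intensity (appearance) and
# vesselness (shape); the pruned voxels and costs would be passed to a max-flow solver.
import numpy as np

def unary_costs(intensity, vesselness, w_app=0.5, background_cut=0.01):
    keep = vesselness > background_cut                       # prune clearly background voxels
    # negative-log style terms; assumes intensity and vesselness rescaled to [0, 1]
    cost_fg = (w_app * -np.log(np.clip(intensity, 1e-6, 1.0))
               + (1 - w_app) * -np.log(np.clip(vesselness, 1e-6, 1.0)))
    cost_bg = (w_app * -np.log(np.clip(1 - intensity, 1e-6, 1.0))
               + (1 - w_app) * -np.log(np.clip(1 - vesselness, 1e-6, 1.0)))
    return keep, cost_fg, cost_bg                            # inputs to a min-cut/max-flow step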
Baker, Jannah; White, Nicole; Mengersen, Kerrie
2014-11-20
Spatial analysis is increasingly important for identifying modifiable geographic risk factors for disease. However, spatial health data from surveys are often incomplete, ranging from missing data for only a few variables, to missing data for many variables. For spatial analyses of health outcomes, selection of an appropriate imputation method is critical in order to produce the most accurate inferences. We present a cross-validation approach to select between three imputation methods for health survey data with correlated lifestyle covariates, using as a case study, type II diabetes mellitus (DM II) risk across 71 Queensland Local Government Areas (LGAs). We compare the accuracy of mean imputation to imputation using multivariate normal and conditional autoregressive prior distributions. Choice of imputation method depends upon the application and is not necessarily the most complex method. Mean imputation was selected as the most accurate method in this application. Selecting an appropriate imputation method for health survey data, after accounting for spatial correlation and correlation between covariates, allows more complete analysis of geographic risk factors for disease with more confidence in the results to inform public policy decision-making.
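The cross-validation selection idea can be sketched as follows: hide a fraction of the observed covariate values, impute them with each candidate method, and score each method by its reconstruction error. This schematic illustration uses mean imputation only; the multivariate normal and conditional autoregressive models used in the study are not reproduced here.

# Minimal sketch of cross-validated imputation scoring (not the authors' Bayesian models).
import numpy as np

def cv_impute_error(X, impute_fn, frac=0.1, seed=0):
    rng = np.random.default_rng(seed)
    X = X.astype(float)
    idx = np.argwhere(~np.isnan(X))                          # positions of observed values
    hide = idx[rng.choice(len(idx), int(frac * len(idx)), replace=False)]
    X_masked = X.copy()
    X_masked[hide[:, 0], hide[:, 1]] = np.nan                # hold out observed values
    X_imputed = impute_fn(X_masked)
    diff = X_imputed[hide[:, 0], hide[:, 1]] - X[hide[:, 0], hide[:, 1]]
    return np.nanmean(diff ** 2)                             # lower is better

def mean_impute(X):
    col_means = np.nanmean(X, axis=0)                        # column-wise mean imputation
    return np.where(np.isnan(X), col_means, X)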
Cerebellar ataxia: abnormal control of interaction torques across multiple joints.
Bastian, A J; Martin, T A; Keating, J G; Thach, W T
1996-07-01
1. We studied seven subjects with cerebellar lesions and seven control subjects as they made reaching movements in the sagittal plane to a target directly in front of them. Reaches were made under three different conditions: 1) "slow-accurate," 2) "fast-accurate," and 3) "fast as possible." All subjects were videotaped moving in a sagittal plane with markers on the index finger, wrist, elbow, and shoulder. Marker positions were digitized and then used to calculate joint angles. For each of the shoulder, elbow and wrist joints, inverse dynamics equations based on a three-segment limb model were used to estimate the net torque (sum of components) and each of the component torques. The component torques consisted of the torque due to gravity, the dynamic interaction torques induced passively by the movement of the adjacent joint, and the torque produced by the muscles and passive tissue elements (sometimes called "residual" torque). 2. A kinematic analysis of the movement trajectory and the change in joint angles showed that the reaches of subjects with cerebellar lesions were abnormal compared with reaches of control subjects. In both the slow-accurate and fast-accurate conditions the cerebellar subjects made abnormally curved wrist paths; the curvature was greater in the slow-accurate condition. During the slow-accurate condition, cerebellar subjects showed target undershoot and tended to move one joint at a time (decomposition). During the fast-accurate reaches, the cerebellar subjects showed target overshoot. Additionally, in the fast-accurate condition, cerebellar subjects moved the joints at abnormal rates relative to one another, but the movements were less decomposed. Only three subjects were tested in the fast as possible condition; this condition was analyzed only to determine maximal reaching speeds of subjects with cerebellar lesions. Cerebellar subjects moved more slowly than controls in all three conditions. 3. A kinetic analysis of torques generated at each joint during the slow-accurate reaches and the fast-accurate reaches revealed that subjects with cerebellar lesions produced very different torque profiles compared with control subjects. In the slow-accurate condition, the cerebellar subjects produced abnormal elbow muscle torques that prevented the normal elbow extension early in the reach. In the fast-accurate condition, the cerebellar subjects produced inappropriate levels of shoulder muscle torque and also produced elbow muscle torques that did not vary appropriately with the dynamic interaction torques that occurred at the elbow. Lack of appropriate muscle torque resulted in excessive contributions of the dynamic interaction torque during the fast-accurate reaches. 4. The inability to produce muscle torques that predict, accommodate, and compensate for the dynamic interaction torques appears to be an important cause of the classic kinematic deficits shown by cerebellar subjects during attempted reaching. These kinematic deficits include incoordination of the shoulder and the elbow joints, a curved trajectory, and overshoot. In the fast-accurate condition, cerebellar subjects often made inappropriate muscle torques relative to the dynamic interaction torques. Because of this, interaction torques often determined the pattern of incoordination of the elbow and shoulder that produced the curved trajectory and target overshoot.
In the slow-accurate condition, we reason that the cerebellar subjects may use a decomposition strategy so as to simplify the movement and not have to control both joints simultaneously. From these results, we suggest that a major role of the cerebellum is in generating muscle torques at a joint that will predict the interaction torques being generated by other moving joints and compensate for them as they occur.
Optimal application of Morrison's iterative noise removal for deconvolution. Appendices
NASA Technical Reports Server (NTRS)
Ioup, George E.; Ioup, Juliette W.
1987-01-01
Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Filters of various lengths are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots are produced of error versus filter length, and from these plots the most accurate filter lengths are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian distributed noise is added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
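The inverse-filter construction described above can be sketched in a few lines: the filter is the inverse discrete Fourier transform of the reciprocal of the transform of the system response, which is then convolved with the data. The clipping of near-zero spectral values is an added numerical safeguard, not part of the original description.

# Hedged sketch of the inverse-filter deconvolution step (not the authors' code).
import numpy as np

def inverse_filter(response, length):
    H = np.fft.fft(response, n=length)          # transform of the system response
    H[np.abs(H) < 1e-12] = 1e-12                # guard against division by (near) zero
    return np.real(np.fft.ifft(1.0 / H))        # inverse DFT of the reciprocal

def deconvolve(data, response, filter_length):
    g = inverse_filter(response, filter_length)
    return np.convolve(data, g, mode="same")    # convolve data with the inverse filter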
NASA Astrophysics Data System (ADS)
Van Gordon, M.; Van Gordon, S.; Min, A.; Sullivan, J.; Weiner, Z.; Tappan, G. G.
2017-12-01
Using support vector machine (SVM) learning and high-accuracy hand-classified maps, we have developed a publicly available land cover classification tool for the West African Sahel. Our classifier produces high-resolution and regionally calibrated land cover maps for the Sahel, representing a significant contribution to the data available for this region. Global land cover products are unreliable for the Sahel, and accurate land cover data for the region are sparse. To address this gap, the U.S. Geological Survey and the Regional Center for Agriculture, Hydrology and Meteorology (AGRHYMET) in Niger produced high-quality land cover maps for the region via hand-classification of Landsat images. This method produces highly accurate maps, but the time and labor required constrain the spatial and temporal resolution of the data products. By using these hand-classified maps alongside SVM techniques, we successfully increase the resolution of the land cover maps by 1-2 orders of magnitude, from 2km-decadal resolution to 30m-annual resolution. These high-resolution regionally calibrated land cover datasets, along with the classifier we developed to produce them, lay the foundation for major advances in studies of land surface processes in the region. These datasets will provide more accurate inputs for food security modeling, hydrologic modeling, analyses of land cover change and climate change adaptation efforts. The land cover classification tool we have developed will be publicly available for use in creating additional West Africa land cover datasets with future remote sensing data and can be adapted for use in other parts of the world.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moridis, G.
1992-03-01
The Laplace Transform Boundary Element (LTBE) method is a recently introduced numerical method, and has been used for the solution of diffusion-type PDEs. It completely eliminates the time dependency of the problem and the need for time discretization, yielding solutions numerical in space and semi-analytical in time. In LTBE, solutions are obtained in the Laplace space, and are then inverted numerically to yield the solution in time. The Stehfest and the DeHoog formulations of LTBE, based on two different inversion algorithms, are investigated. Both formulations produce comparable, extremely accurate solutions.
The effect of sampling techniques used in the multiconfigurational Ehrenfest method
NASA Astrophysics Data System (ADS)
Symonds, C.; Kattirtzi, J. A.; Shalashilin, D. V.
2018-05-01
In this paper, we compare and contrast basis set sampling techniques recently developed for use in the ab initio multiple cloning method, a direct dynamics extension to the multiconfigurational Ehrenfest approach, used recently for the quantum simulation of ultrafast photochemistry. We demonstrate that simultaneous use of basis set cloning and basis function trains can produce results which are converged to the exact quantum result. To demonstrate this, we employ these sampling methods in simulations of quantum dynamics in the spin boson model with a broad range of parameters and compare the results to accurate benchmarks.
Evaluation of Piloted Inputs for Onboard Frequency Response Estimation
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Martos, Borja
2013-01-01
Frequency response estimation results are presented using piloted inputs and a real-time estimation method recently developed for multisine inputs. A nonlinear simulation of the F-16 and a Piper Saratoga research aircraft were subjected to different piloted test inputs while the short period stabilator/elevator to pitch rate frequency response was estimated. Results show that the method can produce accurate results using wide-band piloted inputs instead of multisines. A new metric is introduced for evaluating which data points to include in the analysis and recommendations are provided for applying this method with piloted inputs.
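For orientation, a conventional spectral-ratio estimate of a frequency response from recorded input and output time histories is sketched below; it is not the real-time method of the paper, and the windowing choice and channel names (elevator input, pitch-rate output) are illustrative assumptions.

# Hedged sketch of a spectral-ratio frequency response estimate (not the paper's algorithm).
import numpy as np

def frequency_response(u, y, dt):
    # u: input time history (e.g. elevator), y: output (e.g. pitch rate), dt: sample period
    U = np.fft.rfft(u * np.hanning(len(u)))
    Y = np.fft.rfft(y * np.hanning(len(y)))
    freqs = np.fft.rfftfreq(len(u), dt)
    H = Y / U                                    # only meaningful where the input has energy
    return freqs, 20 * np.log10(np.abs(H)), np.degrees(np.angle(H))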
Doll, Charles G; Wright, Cherylyn W; Morley, Shannon M; Wright, Bob W
2017-04-01
A modified version of the Direct LSC method to correct for quenching effect was investigated for the determination of bio-originated fuel content in fuel samples produced from multiple biological starting materials. The modified method was found to be accurate in determining the percent bio-originated fuel to within 5% of the actual value for samples with quenching effects ≤43%. Analysis of highly quenched samples was possible when diluted with the exception of one sample with a 100% quenching effect. Copyright © 2017. Published by Elsevier Ltd.
Optical Breast Shape Capture and Finite Element Mesh Generation for Electrical Impedance Tomography
Forsyth, J.; Borsic, A.; Halter, R.J.; Hartov, A.; Paulsen, K.D.
2011-01-01
X-Ray mammography is the standard for breast cancer screening. The development of alternative imaging modalities is desirable because mammograms expose patients to ionizing radiation. Electrical Impedance Tomography (EIT) may be used to determine tissue conductivity, a property which is an indicator of cancer presence. EIT is also a low-cost imaging solution and does not involve ionizing radiation. In breast EIT, impedance measurements are made using electrodes placed on the surface of the patient’s breast. The complex conductivity of the volume of the breast is estimated by a reconstruction algorithm. EIT reconstruction is a severely ill-posed inverse problem. As a result, noisy instrumentation and incorrect modelling of the electrodes and domain shape produce significant image artefacts. In this paper, we propose a method that has the potential to reduce these errors by accurately modelling the patient breast shape. A 3D hand-held optical scanner is used to acquire the breast geometry and electrode positions. We develop methods for processing the data from the scanner and producing volume meshes accurately matching the breast surface and electrode locations, which can be used for image reconstruction. We demonstrate this method for a plaster breast phantom and a human subject. Using this approach will allow patient-specific finite element meshes to be generated, which has the potential to improve the clinical value of EIT for breast cancer diagnosis. PMID:21646711
NASA Astrophysics Data System (ADS)
2017-11-01
To deal with these problems investigators usually rely on a calibration method that makes use of a substance with an accurately known set of interatomic distances. The procedure consists of carrying out a diffraction experiment on the chosen calibrating substance, determining the value of the distances with use of the nominal (meter) value of the voltage, and then correcting the nominal voltage by an amount that produces the distances in the calibration substance. Examples of gases that have been used for calibration are carbon dioxide, carbon tetrachloride, carbon disulfide, and benzene; solids such as zinc oxide smoke (powder) deposited on a screen or slit have also been used. The question implied by the use of any standard molecule is, how accurate are the interatomic distance values assigned to the standard? For example, a solid calibrant is subject to heating by the electron beam, possibly producing unknown changes in the lattice constants, and polyatomic gaseous molecules require corrections for vibrational averaging ("shrinkage") effects that are uncertain at best. It has lately been necessary for us to investigate this matter in connection with on-going studies of several molecules in which size is the most important issue. These studies indicated that our usual method for retrieval of data captured on film needed improvement. The following is an account of these two issues - the accuracy of the distances assigned to the chosen standard molecule, and the improvements in our methods of retrieving the scattered intensity data.
Radiographic localization of unerupted mandibular anterior teeth.
Jacobs, S G
2000-10-01
The parallax method and the use of 2 radiographs taken at right angles to each other are the 2 methods generally used to accurately localize teeth. For the parallax method, the combination of a rotational panoramic radiograph with an occlusal radiograph is recommended. This combination involves a vertical x-ray tube shift. Three case reports are presented that illustrate: (1) how this combination can accurately localize unerupted mandibular anterior teeth, (2) how a deceptive appearance of the labiolingual position of the unerupted tooth can be produced in an occlusal radiograph, (3) how increasing the vertical angle of the tube for the occlusal radiograph makes the tube shift easier to discern, (4) why occlusal radiographs are preferable to periapical radiographs for tube shifts, and (5) how localization can also be carried out with 2 radiographs at right angles to each other, one of which is an occlusal radiograph taken with the x-ray tube directed along the long axis of the reference tooth.
Evaluation of algorithms for geological thermal-inertia mapping
NASA Technical Reports Server (NTRS)
Miller, S. H.; Watson, K.
1977-01-01
The errors incurred in producing a thermal inertia map are of three general types: measurement, analysis, and model simplification. To emphasize the geophysical relevance of these errors, they were expressed in terms of uncertainty in thermal inertia and compared with the thermal inertia values of geologic materials. Thus the applications and practical limitations of the technique were illustrated. All errors were calculated using the parameter values appropriate to a site at the Raft River, Id. Although these error values serve to illustrate the magnitudes that can be expected from the three general types of errors, extrapolation to other sites should be done using parameter values particular to the area. Three surface temperature algorithms were evaluated: linear Fourier series, finite difference, and Laplace transform. In terms of resulting errors in thermal inertia, the Laplace transform method is the most accurate (260 TIU), the forward finite difference method is intermediate (300 TIU), and the linear Fourier series method the least accurate (460 TIU).
NASA Astrophysics Data System (ADS)
Salleh, S. A.; Rahman, A. S. A. Abd; Othman, A. N.; Mohd, W. M. N. Wan
2018-02-01
As different approaches produce different results, it is crucial to determine which methods are accurate in order to analyze the event. The aim of this research is to compare the Rank Reciprocal (MCDM) and Artificial Neural Network (ANN) analysis techniques in determining zones susceptible to landslide hazard. The study is based on data obtained from various sources such as the local authority, Dewan Bandaraya Kuala Lumpur (DBKL), Jabatan Kerja Raya (JKR), and other agencies. The data were analysed and processed using ArcGIS. The results were compared by quantifying the risk ranking and area differential, and were also compared with the zonation map classified by DBKL. The results suggest that the ANN method gives better accuracy than MCDM, with an accuracy assessment 18.18% higher than that of the MCDM approach. This indicates that ANN provides more reliable results, probably due to its ability to learn from the environment, thus producing more realistic and accurate results.
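For reference, rank reciprocal weighting on the MCDM side can be written in a few lines: a criterion ranked r receives weight 1/r, normalized by the sum of reciprocals over all criteria. The criteria named in the usage comment are hypothetical examples, not necessarily those used in the study.

# Minimal sketch of rank-reciprocal weighting for a landslide susceptibility score.
def rank_reciprocal_weights(n_criteria):
    recip = [1.0 / r for r in range(1, n_criteria + 1)]
    total = sum(recip)
    return [w / total for w in recip]            # weights sum to 1; highest-ranked criterion weighted most

weights = rank_reciprocal_weights(5)             # e.g. slope, geology, land use, rainfall, distance to road
score = sum(w * s for w, s in zip(weights, [4, 3, 2, 5, 1]))   # weighted susceptibility score for one cell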
NASA Astrophysics Data System (ADS)
Rokni Deilmai, B.; Ahmad, B. Bin; Zabihi, H.
2014-06-01
Mapping is essential for the analysis of land use and land cover, which influence many environmental processes and properties. For the creation of land cover maps, it is important to minimize error, because errors will propagate into later analyses based on these maps. The reliability of land cover maps derived from remotely sensed data depends on an accurate classification. In this study, we analyzed multispectral data using two different classifiers, the Maximum Likelihood Classifier (MLC) and the Support Vector Machine (SVM). To this end, Landsat Thematic Mapper data and identical field-based training sample datasets in Johor, Malaysia, were used for each classification method, resulting in five land cover classes: forest, oil palm, urban area, water, and rubber. The classification results indicate that SVM was more accurate than MLC. With a demonstrated capability to produce reliable land cover results, the SVM method should be especially useful for land cover classification.
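A hedged sketch of such a comparison is given below, using scikit-learn with a quadratic discriminant classifier as a stand-in for the per-class Gaussian maximum likelihood classifier; it illustrates the workflow rather than reproducing the study's actual processing chain or data.

# Illustrative sketch: train SVM and a Gaussian (MLC-like) classifier on the same training
# pixels and compare their accuracies on a held-out sample (not the study's workflow).
from sklearn.svm import SVC
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def compare_classifiers(pixels, labels):
    X_tr, X_te, y_tr, y_te = train_test_split(pixels, labels, test_size=0.3, random_state=0)
    svm = SVC(kernel="rbf").fit(X_tr, y_tr)
    mlc = QuadraticDiscriminantAnalysis().fit(X_tr, y_tr)   # per-class Gaussian, analogous to MLC
    return (accuracy_score(y_te, svm.predict(X_te)),
            accuracy_score(y_te, mlc.predict(X_te)))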
Problems with heterogeneous and non-isotropic media or distorted grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hyman, J.; Shashkov, M.; Steinberg, S.
1996-08-01
This paper defines discretizations of the divergence and flux operators that produce symmetric, positive-definite, and accurate approximations to steady-state diffusion problems. Because discontinuous material properties and highly distorted grids are allowed, the flux operator, rather than the gradient, is used as a fundamental operator to be discretized. The resulting finite-difference scheme is similar to those obtained from the mixed finite-element method.
A multiscale forecasting method for power plant fleet management
NASA Astrophysics Data System (ADS)
Chen, Hongmei
In recent years the electric power industry has been challenged by a high level of uncertainty and volatility brought on by deregulation and globalization. A power producer must minimize the life cycle cost while meeting stringent safety and regulatory requirements and fulfilling customer demand for high reliability. Therefore, to achieve true system excellence, a more sophisticated system-level decision-making process, supported by a more accurate forecasting system, has been created to manage diverse and often widely dispersed generation units as a single, easily scaled and deployed fleet and to fully utilize the critical assets of a power producer. The process takes into account the time horizon for each of the major decision actions taken in a power plant and develops methods for information sharing between them. These decisions are highly interrelated and no optimal operation can be achieved without sharing information in the overall process. The process includes a forecasting system to provide information for planning under uncertainty. A new forecasting method is proposed, which utilizes a synergy of several modeling techniques properly combined at different time-scales of the forecasting objects. It can not only take advantage of the abundant historical data but also take into account the impact of pertinent driving forces from the external business environment to achieve more accurate forecasting results. Then block bootstrap is utilized to measure the bias in the estimate of the expected life cycle cost which will actually be needed to drive the business for a power plant in the long run. Finally, scenario analysis is used to provide a composite picture of future developments for decision making or strategic planning. The decision-making process is applied to a typical power producer chosen to represent challenging customer demand during high-demand periods. The process enhances system excellence by providing more accurate market information, evaluating the impact of the external business environment, and considering cross-scale interactions between decision actions. Along with this process, system operation strategies, maintenance schedules, and capacity expansion plans that guide the operation of the power plant are optimally identified, and the total life cycle costs are estimated.
Surveyor assay to diagnose persistent Müllerian duct syndrome in Miniature Schnauzers.
Kim, Young June; Kwon, Hyuk Jin; Byun, Hyuk Soo; Yeom, Donguk; Choi, Jea-Hong; Kim, Joong-Hyun; Shim, Hosup
2017-12-31
Persistent Müllerian duct syndrome (PMDS) is a pseudohermaphroditism in males characterized by the presence of Müllerian duct derivatives. As PMDS dogs often lack clinical symptoms, a molecular diagnosis is essential to identify the syndrome in these animals. In this study, a new molecular method using DNA mismatch-specific Surveyor nuclease was developed. The Surveyor nuclease assay identified the AMHR2 mutation that produced PMDS in a Miniature Schnauzer as accurately as that obtained by using the conventional method based on restriction digestion. As an alternative to the current molecular diagnostic method, the new method may result in increased accuracy when detecting PMDS.
Specialized CFD Grid Generation Methods for Near-Field Sonic Boom Prediction
NASA Technical Reports Server (NTRS)
Park, Michael A.; Campbell, Richard L.; Elmiligui, Alaa; Cliff, Susan E.; Nayani, Sudheer N.
2014-01-01
Ongoing interest in analysis and design of low sonic boom supersonic transports requires accurate and efficient Computational Fluid Dynamics (CFD) tools. Specialized grid generation techniques are employed to predict near-field acoustic signatures of these configurations. A fundamental examination of grid properties is performed including grid alignment with flow characteristics and element type. The issues affecting the robustness of cylindrical surface extrusion are illustrated. This study will compare three methods in the extrusion family of grid generation methods that produce grids aligned with the freestream Mach angle. These methods are applied to configurations from the First AIAA Sonic Boom Prediction Workshop.
Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm
Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong
2016-01-01
In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis. PMID:27959895
Calculation of Thermally-Induced Displacements in Spherically Domed Ion Engine Grids
NASA Technical Reports Server (NTRS)
Soulas, George C.
2006-01-01
An analytical method for predicting the thermally-induced normal and tangential displacements of spherically domed ion optics grids under an axisymmetric thermal loading is presented. A fixed edge support that could be thermally expanded is used for this analysis. Equations for the displacements both normal and tangential to the surface of the spherical shell are derived. A simplified equation for the displacement at the center of the spherical dome is also derived. The effects of plate perforation on displacements and stresses are determined by modeling the perforated plate as an equivalent solid plate with modified, or effective, material properties. Analytical model results are compared to the results from a finite element model. For the solid shell, comparisons showed that the analytical model produces results that closely match the finite element model results. The simplified equation for the normal displacement of the spherical dome center is also found to accurately predict this displacement. For the perforated shells, the analytical solution and simplified equation produce accurate results for materials with low thermal expansion coefficients.
Comparison of GEOS-5 AGCM planetary boundary layer depths computed with various definitions
NASA Astrophysics Data System (ADS)
McGrath-Spangler, E. L.; Molod, A.
2014-07-01
Accurate models of planetary boundary layer (PBL) processes are important for forecasting weather and climate. The present study compares seven methods of calculating PBL depth in the GEOS-5 atmospheric general circulation model (AGCM) over land. These methods depend on the eddy diffusion coefficients, bulk and local Richardson numbers, and the turbulent kinetic energy. The computed PBL depths are aggregated to the Köppen-Geiger climate classes, and some limited comparisons are made using radiosonde profiles. Most methods produce similar midday PBL depths, although in the warm, moist climate classes the bulk Richardson number method gives midday results that are lower than those given by the eddy diffusion coefficient methods. Additional analysis revealed that methods sensitive to turbulence driven by radiative cooling produce greater PBL depths, this effect being most significant during the evening transition. Nocturnal PBLs based on Richardson number methods are generally shallower than eddy diffusion coefficient based estimates. The bulk Richardson number estimate is recommended as the PBL height to inform the choice of the turbulent length scale, based on the similarity to other methods during the day, and the improved nighttime behavior.
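For readers unfamiliar with the bulk Richardson number diagnostic, here is a minimal sketch of the general approach (not the GEOS-5 implementation): the bulk Richardson number is computed between the surface and each model level, and the PBL top is taken as the first height where it crosses a critical value, commonly around 0.25. The profile values and the critical threshold below are illustrative assumptions.

```python
import numpy as np

G = 9.81  # m s^-2

def pbl_depth_bulk_richardson(z, theta_v, u, v, ri_crit=0.25):
    """Estimate PBL depth from profiles of height z (m), virtual potential
    temperature theta_v (K), and wind components u, v (m/s).

    The bulk Richardson number is computed between the lowest level and each
    level; the PBL top is the first height where Ri_b crosses ri_crit, found by
    linear interpolation between bracketing levels.
    """
    dtheta = theta_v - theta_v[0]
    shear2 = np.maximum((u - u[0])**2 + (v - v[0])**2, 1e-6)  # avoid division by zero
    ri_b = G * (z - z[0]) * dtheta / (theta_v[0] * shear2)
    above = np.where(ri_b > ri_crit)[0]
    if len(above) == 0:
        return z[-1]                                          # no crossing found
    k = above[0]
    if k == 0:
        return z[0]
    frac = (ri_crit - ri_b[k - 1]) / (ri_b[k] - ri_b[k - 1])  # interpolate in Ri_b
    return z[k - 1] + frac * (z[k] - z[k - 1])

# Hypothetical midday profile
z = np.array([10., 100., 300., 600., 900., 1200., 1500.])
theta_v = np.array([302., 301.5, 301.6, 301.8, 302.0, 304.5, 307.0])
u = np.array([2., 4., 5., 6., 6.5, 8., 10.])
v = np.zeros_like(u)
print(f"PBL depth ~ {pbl_depth_bulk_richardson(z, theta_v, u, v):.0f} m")
```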
Noise Hampers Children’s Expressive Word Learning
Riley, Kristine Grohne; McGregor, Karla K.
2013-01-01
Purpose To determine the effects of noise and speech style on word learning in typically developing school-age children. Method Thirty-one participants ages 9;0 (years; months) to 10;11 attempted to learn 2 sets of 8 novel words and their referents. They heard all of the words 13 times each within meaningful narrative discourse. Signal-to-noise ratio (noise vs. quiet) and speech style (plain vs. clear) were manipulated such that half of the children heard the new words in broadband white noise and half heard them in quiet; within those conditions, each child heard one set of words produced in a plain speech style and another set in a clear speech style. Results Children who were trained in quiet learned to produce the word forms more accurately than those who were trained in noise. Clear speech resulted in more accurate word form productions than plain speech, whether the children had learned in noise or quiet. Learning from clear speech in noise and plain speech in quiet produced comparable results. Conclusion Noise limits expressive vocabulary growth in children, reducing the quality of word form representation in the lexicon. Clear speech input can aid expressive vocabulary growth in children, even in noisy environments. PMID:22411494
Mattucci, Stephen F E; Cronin, Duane S
2015-01-01
Experimental testing on cervical spine ligaments provides important data for advanced numerical modeling and injury prediction; however, accurate characterization of individual ligament response and determination of average mechanical properties for specific ligaments has not been adequately addressed in the literature. Existing methods are limited by a number of arbitrary choices made during the curve fits that often misrepresent the characteristic shape response of the ligaments, which is important for incorporation into numerical models to produce a biofidelic response. A method was developed to represent the mechanical properties of individual ligaments using a piece-wise curve fit with first derivative continuity between adjacent regions. The method was applied to published data for cervical spine ligaments and preserved the shape response (toe, linear, and traumatic regions) up to failure, for strain rates of 0.5 s⁻¹, 20 s⁻¹, and 150-250 s⁻¹, to determine the average force-displacement curves. Individual ligament coefficients of determination were 0.989 to 1.000, demonstrating excellent fit. This study produced a novel method in which a set of experimental ligament material property data exhibiting scatter was fit using a characteristic curve approach with a toe, linear, and traumatic region, as often observed in ligaments and tendons, and could be applied to other biological material data with a similar characteristic shape. The resultant average cervical spine ligament curves provide an accurate representation of the raw test data and the expected material property effects corresponding to varying deformation rates. Copyright © 2014 Elsevier Ltd. All rights reserved.
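A minimal sketch of a piecewise fit with first-derivative continuity, assuming a quadratic toe region joined to a linear region (the transition point and functional form are illustrative, not the authors' exact formulation):

```python
import numpy as np
from scipy.optimize import curve_fit

def toe_linear(d, a, d_t):
    """Piecewise force-displacement curve with C1 continuity.

    Toe region (d <= d_t): F = a*d**2 (zero force and slope at the origin).
    Linear region (d > d_t): F = k*(d - d_t) + F_t, with slope k = 2*a*d_t and
    offset F_t = a*d_t**2 so value and first derivative match at d_t.
    """
    F_t = a * d_t**2
    k = 2.0 * a * d_t
    return np.where(d <= d_t, a * d**2, k * (d - d_t) + F_t)

# Hypothetical displacement-force data with scatter (mm, N)
rng = np.random.default_rng(1)
d = np.linspace(0.0, 4.0, 40)
f_meas = toe_linear(d, a=8.0, d_t=1.5) + rng.normal(0.0, 2.0, d.size)

popt, _ = curve_fit(toe_linear, d, f_meas, p0=[5.0, 1.0])
print(f"fitted toe coefficient a = {popt[0]:.2f} N/mm^2, transition d_t = {popt[1]:.2f} mm")
```

Because the linear region's slope and offset are derived from the toe coefficient and the transition point, the value and the first derivative match automatically at the junction.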
A preliminary ferritic-martensitic stainless steel constitution diagram
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balmforth, M.C.; Lippold, J.C.
1998-01-01
This paper describes preliminary research to develop a constitution diagram that will more accurately predict the microstructure of ferritic and martensitic stainless steel weld deposits. A button melting technique was used to produce a wide range of compositions using mixtures of conventional ferritic and martensitic stainless steels, including types 403, 409, 410, 430, 439 and 444. These samples were prepared metallographically, and the vol-% ferrite and martensite was determined quantitatively. In addition, the hardness and ferrite number (FN) were measured. Using this data, a preliminary constitution diagram is proposed that provides a more accurate method for predicting the microstructures of arc welds in ferritic and martensitic stainless steels.
Woodson, Kristina E; Sable, Craig A; Cross, Russell R; Pearson, Gail D; Martin, Gerard R
2004-11-01
Live transmission of echocardiograms over integrated services digital network lines is accurate and has led to improvements in the delivery of pediatric cardiology care. Permanent archiving of the live studies has not previously been reported. Specific obstacles to permanent storage of telemedicine files have included the ability to produce accurate images without a significant increase in storage requirements. We evaluated the accuracy of Moving Picture Experts Group (MPEG) digitization of incoming video streams and assessed the storage requirements of these files for infants in a real-time pediatric tele-echocardiography program. All major cardiac diagnoses were correctly diagnosed by review of MPEG images. MPEG file size ranged from 11.1 to 182 MB (56.5 +/- 29.9 MB). MPEG digitization during live neonatal telemedicine is accurate and provides an efficient method for storage. This modality has acceptable storage requirements; file sizes are comparable to other digital modalities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, D.; Levine, S.L.; Luoma, J.
1992-01-01
The Three Mile Island unit 1 core reloads have been designed using fast but accurate scoping codes, PSUI-LEOPARD and ADMARC. PSUI-LEOPARD has been normalized to EPRI-CPM2 results and used to calculate the two-group constants, whereas ADMARC is a modern two-dimensional, two-group diffusion theory nodal code. Problems in accuracy were encountered for cycles 8 and higher as the core lifetime was increased beyond 500 effective full-power days. This is because the heavier loaded cores in both ²³⁵U and ¹⁰B have harder neutron spectra, which produces a change in the transport effect in the baffle reflector region, and the burnable poison (BP) simulations were not accurate enough for the cores containing the increased amount of ¹⁰B required in the BP rods. In the authors' study, a technique has been developed to take into account the change in the transport effect in the baffle region by modifying the fast neutron diffusion coefficient as a function of cycle length and core exposure or burnup. A more accurate BP simulation method is also developed, using integral transport theory and CPM2 data, to calculate the BP contribution to the equivalent fuel assembly (supercell) two-group constants. The net result is that the accuracy of the scoping codes is as good as that produced by CASMO/SIMULATE or CPM2/SIMULATE when comparing with measured data.
Microstructural Characterization and Modeling of SLM Superalloy 718
NASA Technical Reports Server (NTRS)
Smith, Tim M.; Sudbrack, Chantal K.; Bonacuse, Pete; Rogers, Richard
2017-01-01
Superalloy 718 is an excellent candidate for selective laser melting (SLM) fabrication due to a combination of excellent mechanical properties and workability. Predicting and validating the microstructure of SLM-fabricated Superalloy 718 after potential post heat-treatment paths is an important step towards producing components comparable to those made using conventional methods. At present, obtaining accurate volume fraction and size measurements of gamma-double-prime, gamma-prime and delta precipitates has been challenging due to their size, low volume fractions, and similar chemistries. A technique combining high resolution distortion corrected SEM imaging with x-ray energy dispersive spectroscopy has been developed to accurately and independently measure the size and volume fractions of the three precipitates. These results were further validated using x-ray diffraction and phase extraction methods and compared to the precipitation kinetics predicted by PANDAT and JMatPro. Discrepancies are discussed in context of materials properties, model assumptions, sampling, and experimental errors.
Remote sensing techniques for prediction of watershed runoff
NASA Technical Reports Server (NTRS)
Blanchard, B. J.
1975-01-01
Hydrologic parameters of watersheds for use in mathematical models and as design criteria for flood detention structures are sometimes difficult to quantify using conventional measuring systems. The advent of remote sensing devices developed in the past decade offers the possibility that watershed characteristics such as vegetative cover, soils, soil moisture, etc., may be quantified rapidly and economically. Experiments with visible and near infrared data from the LANDSAT-1 multispectral scanner indicate that a simple technique for calibration of runoff equation coefficients is feasible. The technique was tested on 10 watersheds in the Chickasha area, and test results show more accurate runoff coefficients were obtained than with conventional methods. The technique worked equally well using a dry fall scene. The runoff equation coefficients were then predicted for 22 subwatersheds with flood detention structures. Predicted values were again more accurate than coefficients produced by conventional methods.
Accurate mask-based spatially regularized correlation filter for visual tracking
NASA Astrophysics Data System (ADS)
Gu, Xiaodong; Xu, Xinping
2017-01-01
Recently, discriminative correlation filter (DCF)-based trackers have achieved extremely successful results in many competitions and benchmarks. These methods utilize a periodic assumption of the training samples to efficiently learn a classifier. However, this assumption produces unwanted boundary effects, which severely degrade the tracking performance. Correlation filters with limited boundaries and spatially regularized DCFs were proposed to reduce boundary effects. However, these methods used a fixed mask or a predesigned weight function, respectively, which is unsuitable for large appearance variation. We propose an accurate mask-based spatially regularized correlation filter for visual tracking. Our augmented objective can reduce the boundary effect even under large appearance variation. In our algorithm, the masking matrix is converted into a regularization function that acts on the correlation filter in the frequency domain, which makes the algorithm converge quickly. Our online tracking algorithm performs favorably against state-of-the-art trackers on the OTB-2015 Benchmark in terms of efficiency, accuracy, and robustness.
Tripp, Jennifer A; McCullagh, James S O; Hedges, Robert E M
2006-01-01
Analysis of stable and radioactive isotopes from bone collagen provides useful information to archaeologists about the origin and age of bone artifacts. Isolation and analysis of single amino acids from the proteins can provide additional and more accurate information by removing contamination and separating a bulk isotope signal into its constituent parts. In this paper, we report a new method for the separation and isolation of underivatized amino acids from bone collagen, and their analysis by isotope ratio MS and accelerator MS. RP chromatography is used to separate the amino acids with nonpolar side chains, followed by an ion pair separation to isolate the remaining amino acids. The method produces single amino acids with little or no contamination from the separation process and allows for the measurement of accurate stable isotope ratios and pure samples for radiocarbon dating.
Accurate, Streamlined Analysis of mRNA Translation by Sucrose Gradient Fractionation
Aboulhouda, Soufiane; Di Santo, Rachael; Therizols, Gabriel; Weinberg, David
2017-01-01
The efficiency with which proteins are produced from mRNA molecules can vary widely across transcripts, cell types, and cellular states. Methods that accurately assay the translational efficiency of mRNAs are critical to gaining a mechanistic understanding of post-transcriptional gene regulation. One way to measure translational efficiency is to determine the number of ribosomes associated with an mRNA molecule, normalized to the length of the coding sequence. The primary method for this analysis of individual mRNAs is sucrose gradient fractionation, which physically separates mRNAs based on the number of bound ribosomes. Here, we describe a streamlined protocol for accurate analysis of mRNA association with ribosomes. Compared to previous protocols, our method incorporates internal controls and improved buffer conditions that together reduce artifacts caused by non-specific mRNA–ribosome interactions. Moreover, our direct-from-fraction qRT-PCR protocol eliminates the need for RNA purification from gradient fractions, which greatly reduces the amount of hands-on time required and facilitates parallel analysis of multiple conditions or gene targets. Additionally, no phenol waste is generated during the procedure. We initially developed the protocol to investigate the translationally repressed state of the HAC1 mRNA in S. cerevisiae, but we also detail adapted procedures for mammalian cell lines and tissues. PMID:29170751
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saenz, D; Stathakis, S; Kirby, N
Purpose: Deformable image registration (DIR) has widespread uses in radiotherapy for applications such as dose accumulation studies, multi-modality image fusion, and organ segmentation. The quality assurance (QA) of such algorithms, however, remains largely unimplemented. This work aims to determine how detailed a physical phantom needs to be to accurately perform QA of a DIR algorithm. Methods: Virtual prostate and head-and-neck phantoms, made from patient images, were used for this study. Both sets consist of an undeformed and deformed image pair. The images were processed to create additional image pairs with one through five homogeneous tissue levels using Otsu’s method. Realistic noise was then added to each image. The DIR algorithms from MIM and Velocity (Deformable Multipass) were applied to the original phantom images and the processed ones. The resulting deformations were then compared to the known warping. A higher number of tissue levels creates more contrast in an image and enables DIR algorithms to produce more accurate results. For this reason, error (distance between predicted and known deformation) is utilized as a metric to evaluate how many levels are required for a phantom to be a realistic patient proxy. Results: For the prostate image pairs, the mean error decreased from 1–2 tissue levels and remained constant for 3+ levels. The mean error reduction was 39% and 26% for Velocity and MIM respectively. For head and neck, mean error fell similarly through 2 levels and flattened with total reduction of 16% and 49% for Velocity and MIM. For Velocity, 3+ levels produced comparable accuracy as the actual patient images, whereas MIM showed further accuracy improvement. Conclusion: The number of tissue levels needed to produce an accurate patient proxy depends on the algorithm. For Velocity, three levels were enough, whereas five was still insufficient for MIM.
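As a hedged sketch of the phantom-simplification step described in the Methods (homogeneous tissue levels via Otsu's method plus added noise), the snippet below quantizes an image to n levels with multi-Otsu thresholding from scikit-image and adds Gaussian noise. The synthetic image, level count, and noise amplitude are illustrative assumptions.

```python
import numpy as np
from skimage.filters import threshold_multiotsu

def quantize_to_tissue_levels(image, n_levels, noise_sigma=0.0, rng=None):
    """Reduce an image to n homogeneous tissue levels via multi-Otsu thresholding,
    replace each class with its mean intensity, then add Gaussian noise."""
    rng = np.random.default_rng(rng)
    thresholds = threshold_multiotsu(image, classes=n_levels)
    labels = np.digitize(image, bins=thresholds)           # class index 0 .. n_levels-1
    out = np.zeros_like(image, dtype=float)
    for c in range(n_levels):
        out[labels == c] = image[labels == c].mean()        # homogeneous level value
    if noise_sigma > 0:
        out += rng.normal(0.0, noise_sigma, image.shape)    # realistic noise
    return out

# Illustrative use on a synthetic CT-like image with three intensity regions
img = np.zeros((64, 64))
img[16:48, 16:48] = 800.0
img[24:40, 24:40] = 1500.0
img += np.random.default_rng(0).normal(0.0, 30.0, img.shape)
simplified = quantize_to_tissue_levels(img, n_levels=3, noise_sigma=20.0, rng=0)
```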
Elastic-Waveform Inversion with Compressive Sensing for Sparse Seismic Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Youzuo; Huang, Lianjie
2015-01-28
Accurate velocity models of compressional- and shear-waves are essential for geothermal reservoir characterization and microseismic imaging. Elastic-waveform inversion of multi-component seismic data can provide high-resolution inversion results of subsurface geophysical properties. However, the method requires seismic data acquired using dense source and receiver arrays. In practice, seismic sources and/or geophones are often sparsely distributed on the surface and/or in a borehole, such as 3D vertical seismic profiling (VSP) surveys. We develop a novel elastic-waveform inversion method with compressive sensing for inversion of sparse seismic data. We employ an alternating-minimization algorithm to solve the optimization problem of our new waveform inversion method. We validate our new method using synthetic VSP data for a geophysical model built using geologic features found at the Raft River enhanced-geothermal-system (EGS) field. We apply our method to synthetic VSP data with a sparse source array and compare the results with those obtained with a dense source array. Our numerical results demonstrate that the velocity models produced with our new method using a sparse source array are almost as accurate as those obtained using a dense source array.
NASA Astrophysics Data System (ADS)
Nguyen, Dang Van; Li, Jing-Rebecca; Grebenkov, Denis; Le Bihan, Denis
2014-04-01
The complex transverse water proton magnetization subject to diffusion-encoding magnetic field gradient pulses in a heterogeneous medium can be modeled by the multiple compartment Bloch-Torrey partial differential equation (PDE). In addition, steady-state Laplace PDEs can be formulated to produce the homogenized diffusion tensor that describes the diffusion characteristics of the medium in the long time limit. In spatial domains that model biological tissues at the cellular level, these two types of PDEs have to be completed with permeability conditions on the cellular interfaces. To solve these PDEs, we implemented a finite element method that allows jumps in the solution at the cell interfaces by using double nodes. Using a transformation of the Bloch-Torrey PDE, we reduced oscillations in the searched-for solution and simplified the implementation of the boundary conditions. The spatial discretization was then coupled to the adaptive explicit Runge-Kutta-Chebyshev time-stepping method. Our proposed method is second order accurate in space and second order accurate in time. We implemented this method on the FEniCS C++ platform and show time and spatial convergence results. Finally, this method is applied to study some relevant questions in diffusion MRI.
A Peroxidase-linked Spectrophotometric Assay for the Detection of Monoamine Oxidase Inhibitors
Zhi, Kangkang; Yang, Zhongduo; Sheng, Jie; Shu, Zongmei; Shi, Yin
2016-01-01
To develop a new, more accurate spectrophotometric method for detecting monoamine oxidase inhibitors from plant extracts, a series of amine substrates were selected and their ability to be oxidized by monoamine oxidase was evaluated by an HPLC method, and a new substrate was used to develop a peroxidase-linked spectrophotometric assay. 4-(Trifluoromethyl)benzylamine (11) proved to be an excellent substrate for the peroxidase-linked spectrophotometric assay. Therefore, a new peroxidase-linked spectrophotometric assay was set up. The principle of the method is that MAO converts 11 into aldehyde, ammonia and hydrogen peroxide. In the presence of peroxidase, the hydrogen peroxide oxidizes 4-aminoantipyrine into oxidised 4-aminoantipyrine, which can condense with vanillic acid to give a red quinoneimine dye. The production of the quinoneimine dye is detected at 490 nm by a microplate reader. The ΔOD value between the blank group and blank negative control group in this new method is twice that in Holt’s method, which enables the procedure to be more accurate and avoids the production of false positive results. The new method will be helpful for researchers screening monoamine oxidase inhibitors from deep-colored plant extracts. PMID:27610153
Greenhouse, Bryan; Myrick, Alissa; Dokomajilar, Christian; Woo, Jonathan M.; Carlson, Elaine J.; Rosenthal, Philip J.; Dorsey, Grant
2006-01-01
Genotyping methods for Plasmodium falciparum drug efficacy trials have not been standardized and may fail to accurately distinguish recrudescence from new infection, especially in high transmission areas where polyclonal infections are common. We developed a simple method for genotyping using previously identified microsatellites and capillary electrophoresis, validated this method using mixtures of laboratory clones, and applied the method to field samples. Two microsatellite markers produced accurate results for single-clone but not polyclonal samples. Four other microsatellite markers were as sensitive as, and more specific than, commonly used genotyping techniques based on merozoite surface proteins 1 and 2. When applied to samples from 15 patients in Burkina Faso with recurrent parasitemia after treatment with sulphadoxine-pyrimethamine, the addition of these four microsatellite markers to msp1 and msp2 genotyping resulted in a reclassification of outcomes that strengthened the association between dhfr 59R, an anti-folate resistance mutation, and recrudescence (P = 0.31 versus P = 0.03). Four microsatellite markers performed well on polyclonal samples and may provide a valuable addition to genotyping for clinical drug efficacy studies in high transmission areas. PMID:17123974
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohd, Shukri; Holford, Karen M.; Pullin, Rhys
2014-02-12
Source location is an important feature of acoustic emission (AE) damage monitoring in nuclear piping. The ability to accurately locate sources can assist in source characterisation and early warning of failure. This paper describes the development of a novel AE source location technique termed 'Wavelet Transform analysis and Modal Location (WTML)' based on Lamb wave theory and time-frequency analysis that can be used for global monitoring of plate-like steel structures. Source location was performed on a steel pipe 1500 mm long and 220 mm in outer diameter with a nominal thickness of 5 mm under a planar location test setup using H-N sources. The accuracy of the new technique was compared with other AE source location methods such as the time of arrival (TOA) technique and DeltaT location. The results of the study show that the WTML method produces more accurate location results compared with TOA and triple point filtering location methods. The accuracy of the WTML approach is comparable with the DeltaT location method but requires no initial acoustic calibration of the structure.
Muñoz, Mario A; Smith-Miles, Kate A
2017-01-01
This article presents a method for the objective assessment of an algorithm's strengths and weaknesses. Instead of examining the performance of only one or more algorithms on a benchmark set, or generating custom problems that maximize the performance difference between two algorithms, our method quantifies both the nature of the test instances and the algorithm performance. Our aim is to gather information about possible phase transitions in performance, that is, the points in which a small change in problem structure produces algorithm failure. The method is based on the accurate estimation and characterization of the algorithm footprints, that is, the regions of instance space in which good or exceptional performance is expected from an algorithm. A footprint can be estimated for each algorithm and for the overall portfolio. Therefore, we select a set of features to generate a common instance space, which we validate by constructing a sufficiently accurate prediction model. We characterize the footprints by their area and density. Our method identifies complementary performance between algorithms, quantifies the common features of hard problems, and locates regions where a phase transition may lie.
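As a hedged two-dimensional sketch of the footprint idea (the area and density of the instance-space region where an algorithm performs well), the snippet below approximates a footprint by the convex hull of the "good" instances. The instance features and the performance criterion are illustrative assumptions; the published method uses additional machinery such as a validated prediction model.

```python
import numpy as np
from scipy.spatial import ConvexHull

def footprint_area_density(features_2d, good_mask):
    """Area and density of an algorithm footprint in a 2D instance space.

    The footprint is approximated by the convex hull of the instances on which
    the algorithm performed well; density is good instances per unit area.
    """
    pts = features_2d[good_mask]
    if len(pts) < 3:
        return 0.0, 0.0
    hull = ConvexHull(pts)
    area = hull.volume                # in 2D, ConvexHull.volume is the enclosed area
    return area, len(pts) / area

# Hypothetical instance space: 200 instances, 2 features, "good" where a baseline was beaten
rng = np.random.default_rng(2)
X = rng.uniform(0.0, 1.0, size=(200, 2))
good = X[:, 0] + 0.5 * X[:, 1] < 0.9          # illustrative performance criterion
area, density = footprint_area_density(X, good)
print(f"footprint area = {area:.3f}, density = {density:.1f} instances per unit area")
```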
Bulik, Catharine C.; Fauntleroy, Kathy A.; Jenkins, Stephen G.; Abuali, Mayssa; LaBombardi, Vincent J.; Nicolau, David P.; Kuti, Joseph L.
2010-01-01
We describe the levels of agreement between broth microdilution, Etest, Vitek 2, Sensititre, and MicroScan methods to accurately define the meropenem MIC and categorical interpretation of susceptibility against carbapenemase-producing Klebsiella pneumoniae (KPC). A total of 46 clinical K. pneumoniae isolates with KPC genotypes, all modified Hodge test and blaKPC positive, collected from two hospitals in NY were included. Results obtained by each method were compared with those from broth microdilution (the reference method), and agreement was assessed based on MICs and Clinical Laboratory Standards Institute (CLSI) interpretative criteria using 2010 susceptibility breakpoints. Based on broth microdilution, 0%, 2.2%, and 97.8% of the KPC isolates were classified as susceptible, intermediate, and resistant to meropenem, respectively. Results from MicroScan demonstrated the most agreement with those from broth microdilution, with 95.6% agreement based on the MIC and 2.2% classified as minor errors, and no major or very major errors. Etest demonstrated 82.6% agreement with broth microdilution MICs, a very major error rate of 2.2%, and a minor error rate of 2.2%. Vitek 2 MIC agreement was 30.4%, with a 23.9% very major error rate and a 39.1% minor error rate. Sensititre demonstrated MIC agreement for 26.1% of isolates, with a 3% very major error rate and a 26.1% minor error rate. Application of FDA breakpoints had little effect on minor error rates but increased very major error rates to 58.7% for Vitek 2 and Sensititre. Meropenem MIC results and categorical interpretations for carbapenemase-producing K. pneumoniae differ by methodology. Confirmation of testing results is encouraged when an accurate MIC is required for antibiotic dosing optimization. PMID:20484603
Student beats the teacher: deep neural networks for lateral ventricles segmentation in brain MR
NASA Astrophysics Data System (ADS)
Ghafoorian, Mohsen; Teuwen, Jonas; Manniesing, Rashindra; Leeuw, Frank-Erik d.; van Ginneken, Bram; Karssemeijer, Nico; Platel, Bram
2018-03-01
Ventricular volume and its progression are known to be linked to several brain diseases such as dementia and schizophrenia. Therefore accurate measurement of ventricle volume is vital for longitudinal studies on these disorders, making automated ventricle segmentation algorithms desirable. In the past few years, deep neural networks have been shown to outperform the classical models in many imaging domains. However, the success of deep networks is dependent on manually labeled data sets, which are expensive to acquire, especially for higher dimensional data in the medical domain. In this work, we show that deep neural networks can be trained on much cheaper-to-acquire pseudo-labels (e.g., generated by other automated, less accurate methods) and still produce more accurate segmentations compared to the quality of the labels. To show this, we use noisy segmentation labels generated by a conventional region growing algorithm to train a deep network for lateral ventricle segmentation. Then on a large manually annotated test set, we show that the network significantly outperforms the conventional region growing algorithm which was used to produce the training labels for the network. Our experiments report a Dice Similarity Coefficient (DSC) of 0.874 for the trained network compared to 0.754 for the conventional region growing algorithm (p < 0.001).
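A minimal sketch of the Dice Similarity Coefficient used to score the segmentations (the array names and toy masks are illustrative):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy example: two overlapping square masks
a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True
b = np.zeros((10, 10), dtype=bool); b[3:8, 3:8] = True
print(f"DSC = {dice_coefficient(a, b):.3f}")
```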
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fernandez-Serra, Maria Victoria
2016-09-12
The research objective of this proposal is the computational modeling of the metal-electrolyte interface purely from first principles. The accurate calculation of the electrostatic potential at electrically biased metal-electrolyte interfaces is a current challenge for periodic “ab-initio” simulations. It is also an essential requisite for predicting the correspondence between the macroscopic voltage and the microscopic interfacial charge distribution in electrochemical fuel cells. This interfacial charge distribution is the result of the chemical bonding between solute and metal atoms, and therefore cannot be accurately calculated with the use of semi-empirical classical force fields. The project aims to study in detail the structure and dynamics of aqueous electrolytes at metallic interfaces taking into account the effect of the electrode potential. Another side of the project is to produce an accurate method to simulate the water/metal interface. While both experimental and theoretical surface scientists have made a lot of progress on the understanding and characterization of both atomistic structures and reactions at the solid/vacuum interface, the theoretical description of electrochemical interfaces is still lagging behind. A reason for this is that a complete and accurate first principles description of both the liquid and the metal interfaces is still computationally too expensive and complex, since their characteristics are governed by the explicit atomic and electronic structure built at the interface as a response to environmental conditions. This project will characterize in detail how different theoretical levels of modeling describe the metal/water interface. In particular the role of van der Waals interactions will be carefully analyzed and prescriptions to perform accurate simulations will be produced.
Viscoacoustic anisotropic full waveform inversion
NASA Astrophysics Data System (ADS)
Qu, Yingming; Li, Zhenchun; Huang, Jianping; Li, Jinli
2017-01-01
A viscoacoustic vertical transverse isotropic (VTI) quasi-differential wave equation, which accounts for both the viscosity and anisotropy of the media, is proposed for wavefield simulation in this study. The finite difference method is used to solve the equations, for which the attenuation terms are solved in the wavenumber domain and all remaining terms in the time-space domain. To stabilize the adjoint wavefield, robust regularization operators are applied to the wave equation to eliminate the high-frequency component of the numerical noise produced during the backward propagation of the viscoacoustic wavefield. Based on these strategies, we derive the corresponding gradient formula and implement a viscoacoustic VTI full waveform inversion (FWI). Numerical tests verify that our proposed viscoacoustic VTI FWI can produce accurate and stable inversion results for viscoacoustic VTI data sets. In addition, we test our method's sensitivity to velocity, Q, and anisotropic parameters. Our results show that the sensitivity to velocity is much higher than that to Q and anisotropic parameters. As such, our proposed method can produce acceptable inversion results as long as the Q and anisotropic parameters are within predefined thresholds.
Joint Center Estimation Using Single-Frame Optimization: Part 1: Numerical Simulation.
Frick, Eric; Rahmatalla, Salam
2018-04-04
The biomechanical models used to refine and stabilize motion capture processes are almost invariably driven by joint center estimates, and any errors in joint center calculation carry over and can be compounded when calculating joint kinematics. Unfortunately, accurate determination of joint centers is a complex task, primarily due to measurements being contaminated by soft-tissue artifact (STA). This paper proposes a novel approach to joint center estimation implemented via sequential application of single-frame optimization (SFO). First, the method minimizes the variance of individual time frames’ joint center estimations via the developed variance minimization method to obtain accurate overall initial conditions. These initial conditions are used to stabilize an optimization-based linearization of human motion that determines a time-varying joint center estimation. In this manner, the complex and nonlinear behavior of human motion contaminated by STA can be captured as a continuous series of unique rigid-body realizations without requiring a complex analytical model to describe the behavior of STA. This article intends to offer proof of concept, and the presented method must be further developed before it can be reasonably applied to human motion. Numerical simulations were introduced to verify and substantiate the efficacy of the proposed methodology. When directly compared with a state-of-the-art inertial method, SFO reduced the error due to soft-tissue artifact in all cases by more than 45%. Instead of producing a single vector value to describe the joint center location during a motion capture trial as existing methods often do, the proposed method produced time-varying solutions that were highly correlated ( r > 0.82) with the true, time-varying joint center solution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Newsom, Rob
2016-03-01
In March and April of 2015, the ARM Doppler lidar that was formerly operated at the Tropical Western Pacific site in Darwin, Australia (S/N 0710-08) was deployed to the Boulder Atmospheric Observatory (BAO) for the eXperimental Planetary boundary-layer Instrument Assessment (XPIA) field campaign. The goal of the XPIA field campaign was to investigate methods of using multiple Doppler lidars to obtain high-resolution three-dimensional measurements of winds and turbulence in the atmospheric boundary layer, and to characterize the uncertainties in these measurements. The ARM Doppler lidar was one of many Doppler lidar systems that participated in this study. During XPIA the 300-m tower at the BAO site was instrumented with well-calibrated sonic anemometers at six levels. These sonic anemometers provided highly accurate reference measurements against which the lidars could be compared. Thus, the deployment of the ARM Doppler lidar during XPIA offered a rare opportunity for the ARM program to characterize the uncertainties in their lidar wind measurements. Results of the lidar-tower comparison indicate that the lidar wind speed measurements are essentially unbiased (~1 cm s⁻¹), with a random error of approximately 50 cm s⁻¹. Two methods of uncertainty estimation were tested. The first method was found to produce uncertainties that were too low. The second method produced estimates that were more accurate and better indicators of data quality. As of December 2015, the first method is being used by the ARM Doppler lidar wind value-added product (VAP). One outcome of this work will be to update this VAP to use the second method for uncertainty estimation.
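As a hedged sketch of the comparison statistics implied above (bias as the mean lidar-minus-sonic difference and random error as the standard deviation of the differences), with purely illustrative collocated wind-speed samples:

```python
import numpy as np

def bias_and_random_error(lidar_ws, sonic_ws):
    """Bias (mean difference) and random error (standard deviation of differences)
    between collocated lidar and sonic-anemometer wind speed samples (m/s)."""
    diff = np.asarray(lidar_ws) - np.asarray(sonic_ws)
    return diff.mean(), diff.std(ddof=1)

# Hypothetical collocated averaging-period wind speeds
rng = np.random.default_rng(4)
sonic = rng.uniform(2.0, 12.0, 500)
lidar = sonic + 0.01 + rng.normal(0.0, 0.5, 500)   # ~1 cm/s bias, ~0.5 m/s random error
bias, rand_err = bias_and_random_error(lidar, sonic)
print(f"bias = {bias * 100:.1f} cm/s, random error = {rand_err * 100:.0f} cm/s")
```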
Mathew, B; Schmitz, A; Muñoz-Descalzo, S; Ansari, N; Pampaloni, F; Stelzer, E H K; Fischer, S C
2015-06-08
Due to the large amount of data produced by advanced microscopy, automated image analysis is crucial in modern biology. Most applications require reliable cell nuclei segmentation. However, in many biological specimens cell nuclei are densely packed and appear to touch one another in the images. Therefore, a major difficulty of three-dimensional cell nuclei segmentation is the decomposition of cell nuclei that apparently touch each other. Current methods are highly adapted to a certain biological specimen or a specific microscope. They do not ensure similarly accurate segmentation performance, i.e. their robustness for different datasets is not guaranteed. Hence, these methods require elaborate adjustments to each dataset. We present an advanced three-dimensional cell nuclei segmentation algorithm that is accurate and robust. Our approach combines local adaptive pre-processing with decomposition based on Lines-of-Sight (LoS) to separate apparently touching cell nuclei into approximately convex parts. We demonstrate the superior performance of our algorithm using data from different specimens recorded with different microscopes. The three-dimensional images were recorded with confocal and light sheet-based fluorescence microscopes. The specimens are an early mouse embryo and two different cellular spheroids. We compared the segmentation accuracy of our algorithm with ground truth data for the test images and results from state-of-the-art methods. The analysis shows that our method is accurate throughout all test datasets (mean F-measure: 91%) whereas the other methods each failed for at least one dataset (F-measure≤69%). Furthermore, nuclei volume measurements are improved for LoS decomposition. The state-of-the-art methods required laborious adjustments of parameter values to achieve these results. Our LoS algorithm did not require parameter value adjustments. The accurate performance was achieved with one fixed set of parameter values. We developed a novel and fully automated three-dimensional cell nuclei segmentation method incorporating LoS decomposition. LoS are easily accessible features that ensure correct splitting of apparently touching cell nuclei independent of their shape, size or intensity. Our method showed superior performance compared to state-of-the-art methods, performing accurately for a variety of test images. Hence, our LoS approach can be readily applied to quantitative evaluation in drug testing, developmental and cell biology.
Neural Network Design on the SRC-6 Reconfigurable Computer
2006-12-01
fingerprint identification. In this field, automatic identification methods are used to save time, especially for the purpose of fingerprint matching in...grid widths and lengths and therefore was useful in producing an accurate canvas with which to create sample training images. The added benefit of...tools available free of charge and readily accessible on the computer, it was simple to design bitmap data files visually on a canvas and then
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Yang; Xiao, Jianyuan; Zhang, Ruili
Hamiltonian time integrators for the Vlasov-Maxwell equations are developed by a Hamiltonian splitting technique. The Hamiltonian functional is split into five parts, which produces five exactly solvable subsystems. Each subsystem is a Hamiltonian system equipped with the Morrison-Marsden-Weinstein Poisson bracket. Compositions of the exact solutions provide Poisson structure preserving/Hamiltonian methods of arbitrarily high order for the Vlasov-Maxwell equations. They are then accurate and conservative over long times because of their Poisson-preserving nature.
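To illustrate the splitting idea in a hedged, toy setting (a separable Hamiltonian H = T(p) + V(q), not the five-part Vlasov-Maxwell split), the sketch below composes the exact sub-flows symmetrically (Strang composition), which yields a second-order, structure-preserving integrator whose energy error stays bounded over long times.

```python
import numpy as np

def flow_T(q, p, dt):
    """Exact flow of H_T = p^2/2: drift in position, momentum unchanged."""
    return q + dt * p, p

def flow_V(q, p, dt, grad_V):
    """Exact flow of H_V = V(q): kick in momentum, position unchanged."""
    return q, p - dt * grad_V(q)

def strang_step(q, p, dt, grad_V):
    """Symmetric (Strang) composition of the exact sub-flows: second-order accurate."""
    q, p = flow_V(q, p, 0.5 * dt, grad_V)
    q, p = flow_T(q, p, dt)
    q, p = flow_V(q, p, 0.5 * dt, grad_V)
    return q, p

# Harmonic oscillator, V(q) = q^2/2: the energy stays near its initial value
grad_V = lambda q: q
q, p = 1.0, 0.0
for _ in range(100000):
    q, p = strang_step(q, p, 0.05, grad_V)
print(f"energy after 100000 steps: {0.5 * (p**2 + q**2):.6f} (exact: 0.5)")
```

Higher-order, structure-preserving schemes follow by further symmetric compositions of the same exactly solvable pieces, which is the construction principle the abstract describes for the five-part split.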
Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Goodrich, John W.; Dyson, Rodger W.
1999-01-01
The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems are being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that have resulted from this work. A review of computational aeroacoustics has recently been given by Lele.
Zeng, Xiaozheng; McGough, Robert J.
2009-01-01
The angular spectrum approach is evaluated for the simulation of focused ultrasound fields produced by large thermal therapy arrays. For an input pressure or normal particle velocity distribution in a plane, the angular spectrum approach rapidly computes the output pressure field in a three dimensional volume. To determine the optimal combination of simulation parameters for angular spectrum calculations, the effect of the size, location, and the numerical accuracy of the input plane on the computed output pressure is evaluated. Simulation results demonstrate that angular spectrum calculations performed with an input pressure plane are more accurate than calculations with an input velocity plane. Results also indicate that when the input pressure plane is slightly larger than the array aperture and is located approximately one wavelength from the array, angular spectrum simulations have very small numerical errors for two dimensional planar arrays. Furthermore, the root mean squared error from angular spectrum simulations asymptotically approaches a nonzero lower limit as the error in the input plane decreases. Overall, the angular spectrum approach is an accurate and robust method for thermal therapy simulations of large ultrasound phased arrays when the input pressure plane is computed with the fast nearfield method and an optimal combination of input parameters. PMID:19425640
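A minimal sketch of angular spectrum propagation from an input pressure plane to a parallel output plane (the grid size, frequency, sound speed, and square source are illustrative assumptions; the paper's simulations compute the input plane with the fast nearfield method):

```python
import numpy as np

def angular_spectrum_propagate(p0, dx, z, f, c=1500.0):
    """Propagate a sampled input pressure plane p0 (complex, N x N, spacing dx)
    a distance z using the angular spectrum approach.

    The field is decomposed into plane waves with a 2D FFT, each component is
    multiplied by its propagation phase exp(i*kz*z), and the result is inverse
    transformed. Evanescent components (kx^2 + ky^2 > k^2) decay exponentially.
    """
    k = 2.0 * np.pi * f / c
    n = p0.shape[0]
    kx = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    KX, KY = np.meshgrid(kx, kx, indexing="ij")
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))  # imaginary kz handles evanescent waves
    H = np.exp(1j * kz * z)
    return np.fft.ifft2(np.fft.fft2(p0) * H)

# Illustrative use: propagate a uniformly vibrating square source plane 5 cm at 1 MHz
n, dx, f = 256, 0.2e-3, 1.0e6
p0 = np.zeros((n, n), dtype=complex)
p0[n//2 - 32:n//2 + 32, n//2 - 32:n//2 + 32] = 1.0
p_z = angular_spectrum_propagate(p0, dx, z=0.05, f=f)
```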
TU-AB-BRA-02: An Efficient Atlas-Based Synthetic CT Generation Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, X
2016-06-15
Purpose: A major obstacle for MR-only radiotherapy is the need to generate an accurate synthetic CT (sCT) from MR image(s) of a patient for the purposes of dose calculation and DRR generation. We propose here an accurate and efficient atlas-based sCT generation method, which has a computation speed largely independent of the number of atlases used. Methods: Atlas-based sCT generation requires a set of atlases with co-registered CT and MR images. Unlike existing methods that align each atlas to the new patient independently, we first create an average atlas and pre-align every atlas to the average atlas space. When a new patient arrives, we compute only one deformable image registration to align the patient MR image to the average atlas, which indirectly aligns the patient to all pre-aligned atlases. A patch-based non-local weighted fusion is performed in the average atlas space to generate the sCT for the patient, which is then warped back to the original patient space. We further adapt a PatchMatch algorithm that can quickly find top matches between patches of the patient image and all atlas images, which makes the patch fusion step also independent of the number of atlases used. Results: Nineteen brain tumour patients with both CT and T1-weighted MR images are used as testing data and a leave-one-out validation is performed. Each sCT generated is compared against the original CT image of the same patient on a voxel-by-voxel basis. The proposed method produces a mean absolute error (MAE) of 98.6±26.9 HU overall. The accuracy is comparable with a conventional implementation scheme, but the computation time is reduced from over an hour to four minutes. Conclusion: An average atlas space patch fusion approach can produce highly accurate sCT estimations very efficiently. Further validation on dose computation accuracy and using a larger patient cohort is warranted. The author is a full time employee of Elekta, Inc.
NASA Astrophysics Data System (ADS)
Kern, Sara E.; Lin, Lora A.; Fricke, Frederick L.
2014-08-01
U.S. food imports have been increasing steadily for decades, intensifying the need for a rapid and sensitive screening technique. A method has been developed that uses foam disks to sample the surface of incoming produce. This work provides complementary information to the extensive amount of published pesticide fragmentation data collected using LCMS systems (Sack et al. Journal of Agricultural and Food Chemistry, 59, 6383-6411, 2011; Mol et al. Analytical and Bioanalytical Chemistry, 403, 2891-2908, 2012). The disks are directly analyzed using transmission-mode direct analysis in real time (DART) ambient pressure desorption ionization coupled to a high resolution accurate mass-mass spectrometer (HRAM-MS). In order to provide more certainty in the identification of the pesticides detected, a library of accurate mass fragments and isotopes of the protonated parent molecular ion (the [M+H]+) has been developed. The HRAM-MS is equipped with a quadrupole mass filter, providing the capability of "data-dependent" fragmentation, as opposed to "all-ion" fragmentation (where all of the ions enter a collision chamber and are fragmented at once). A temperature gradient for the DART helium stream and multiple collision energies were employed to detect and fragment 164 pesticides of varying chemical classes, sizes, and polarities. The accurate mass information of precursor ([M+H]+ ion) and fragment ions is essential in correctly identifying chemical contaminants on the surface of imported produce. Additionally, the inclusion of isotopes of the [M+H]+ in the database adds another metric to the confirmation process. The fragmentation data were collected using a Q-Exactive mass spectrometer and were added to a database used to process data collected with an Exactive mass spectrometer, an instrument that is more readily available for this screening application. The commodities investigated range from smooth-skinned produce such as apples to rougher surfaces like broccoli. The minimal sample preparation and absence of chromatography have shortened the analysis time to about 15 min per sample, and the simplicity and robustness of the technique make it ideal for rapid screening.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ulvestad, A.; Menickelly, M.; Wild, S. M.
Defects such as dislocations impact materials properties and their response during external stimuli. Imaging these defects in their native operating conditions to establish the structure-function relationship and, ultimately, to improve performance via defect engineering has remained a considerable challenge for both electron-based and x-ray-based imaging techniques. While Bragg coherent x-ray diffractive imaging (BCDI) is successful in many cases, nuances in identifying the dislocations have left manual identification as the preferred method. Derivative-based methods are also used, but they can be inaccurate and are computationally inefficient. Here we demonstrate a derivative-free method that is both more accurate and more computationally efficient than either derivative- or human-based methods for identifying 3D dislocation lines in nanocrystal images produced by BCDI. We formulate the problem as a min-max optimization problem and show exceptional accuracy for experimental images. We demonstrate a 227x speedup for a typical experimental dataset with higher accuracy over current methods. We discuss the possibility of using this algorithm as part of a sparsity-based phase retrieval process. We also provide MATLAB code for use by other researchers.
NASA Astrophysics Data System (ADS)
Laaidi, M.
The pollen of anemogamous plants is responsible for half of all allergic diseases, corresponding to a prevalence of 10% in the French population. Poaceæ produce the first allergenic pollen almost everywhere. The work described in this article aimed to validate forecast methods for the use of physicians and allergic people, who need accurate and early information on the first appearance of pollen in the air. The methods were based on meteorological parameters, mainly temperature. Four volumetric Hirst traps were used from 1995 to 1998, situated in two departments of Burgundy. Two of the methods tested proved to be of particular interest: the sum of the temperatures and the sum of Q10 values, an agrometeorological coefficient integrating temperature. A multiple regression, using maximum temperature and rainfall, was also performed, but it gave slightly less accurate results. A χ2-test was then used to compare the accuracy of the three methods. It was found that the date of onset of the pollen season could be predicted early enough to be useful in medical practice. Results were verified in 1999, and the research must be continued to obtain better statistical validity.
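The temperature-sum forecast can be illustrated in a few lines of code: daily maximum temperatures above a base value are accumulated from a fixed start date, and the pollen season is predicted to begin when the running sum crosses a calibrated threshold. The start day, base temperature, and threshold below are hypothetical placeholders, not the values fitted in the Burgundy study.

```python
import numpy as np

def forecast_pollen_onset(daily_tmax, start_doy=32, base_temp=0.0, threshold=265.0):
    """Predict the day-of-year on which the pollen season starts.

    daily_tmax : array of daily maximum temperatures (deg C), indexed by day of year.
    start_doy  : day from which temperatures are accumulated (hypothetical).
    base_temp  : only the part of Tmax above this base contributes (hypothetical).
    threshold  : cumulative sum that triggers the predicted onset (hypothetical).
    """
    total = 0.0
    for doy in range(start_doy, len(daily_tmax)):
        total += max(daily_tmax[doy] - base_temp, 0.0)
        if total >= threshold:
            return doy
    return None  # threshold never reached

# Example with synthetic temperatures: a gentle warming trend through spring.
tmax = 5.0 + 0.12 * np.arange(365) + 3.0 * np.sin(np.arange(365) / 20.0)
print(forecast_pollen_onset(tmax))
```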
NASA Astrophysics Data System (ADS)
Ulvestad, A.; Menickelly, M.; Wild, S. M.
2018-01-01
Defects such as dislocations impact materials properties and their response during external stimuli. Imaging these defects in their native operating conditions to establish the structure-function relationship and, ultimately, to improve performance via defect engineering has remained a considerable challenge for both electron-based and x-ray-based imaging techniques. While Bragg coherent x-ray diffractive imaging (BCDI) is successful in many cases, nuances in identifying the dislocations have left manual identification as the preferred method. Derivative-based methods are also used, but they can be inaccurate and are computationally inefficient. Here we demonstrate a derivative-free method that is both more accurate and more computationally efficient than either derivative- or human-based methods for identifying 3D dislocation lines in nanocrystal images produced by BCDI. We formulate the problem as a min-max optimization problem and show exceptional accuracy for experimental images. We demonstrate a 227x speedup for a typical experimental dataset with higher accuracy over current methods. We discuss the possibility of using this algorithm as part of a sparsity-based phase retrieval process. We also provide MATLAB code for use by other researchers.
Deformable registration of CT and cone-beam CT with local intensity matching.
Park, Seyoun; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon
2017-02-07
Cone-beam CT (CBCT) is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (plane of source-detector rotation) is complete, off-mid-planes undergo different information deficiency and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice locations, and therefore impedes accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. The correction-registration steps are repeated in an alternating way until the result image converges. We integrate the intensity matching into three different deformable registration methods, B-spline, demons, and optical flow, that are widely used for CT-CBCT registration. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty-five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate the registration performance. Our method produced overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26 → 2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently, is more accurate than existing algorithms, and is computationally efficient.
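A rough sketch of the slice-by-slice intensity correction is given below, assuming the CT and CBCT volumes are already resampled onto the same grid; in the method described above this step alternates with deformable registration, which is omitted here. The quantile-mapping implementation is a generic stand-in for local histogram matching, not the authors' code.

```python
import numpy as np

def match_slice_histogram(source, reference):
    """Map intensities of a 2D slice so its histogram matches a reference slice."""
    src = source.ravel()
    ref = np.sort(reference.ravel())
    order = np.argsort(src)
    matched = np.empty(src.shape, dtype=float)
    # Assign the k-th smallest source pixel the reference value at the same quantile.
    matched[order] = np.interp(
        np.linspace(0, 1, src.size),        # source quantile positions
        np.linspace(0, 1, ref.size), ref)   # reference quantile values
    return matched.reshape(source.shape)

def correct_cbct_intensities(cbct, ct):
    """Slice-by-slice intensity correction of a CBCT volume against a planning CT."""
    corrected = np.empty(cbct.shape, dtype=float)
    for k in range(cbct.shape[0]):          # iterate over axial slices
        corrected[k] = match_slice_histogram(cbct[k], ct[k])
    return corrected
```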
Deformable registration of CT and cone-beam CT with local intensity matching
NASA Astrophysics Data System (ADS)
Park, Seyoun; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon
2017-02-01
Cone-beam CT (CBCT) is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (plane of source-detector rotation) is complete, off-mid-planes undergo different information deficiency and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice locations, and therefore impedes accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. The correction-registration steps are repeated in an alternating way until the result image converges. We integrate the intensity matching into three different deformable registration methods, B-spline, demons, and optical flow, that are widely used for CT-CBCT registration. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty-five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate the registration performance. Our method produced overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26 → 2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently, is more accurate than existing algorithms, and is computationally efficient.
Identification of oocyte progenitor cells in the zebrafish ovary.
Draper, Bruce W
2012-01-01
Zebrafish breed year round and females are capable of producing thousands of eggs during their lifetime. This amazing fecundity is due to the fact that the adult ovary contains premeiotic oocyte progenitor cells, called oogonia, which produce a continuous supply of new oocytes throughout adult life. Oocyte progenitor cells can be easily identified based on their expression of Vasa and their characteristic nuclear morphology. Thus, the zebrafish ovary provides a unique and powerful system to study the genetic regulation of oocyte production in a vertebrate animal. A simple and accurate method is presented here for identifying oocyte progenitor cells in the zebrafish ovary using whole-mount confocal immunofluorescence.
NASA Technical Reports Server (NTRS)
Richardson, A. J.; Escobar, D. E.; Gausman, H. W.; Everitt, J. H. (Principal Investigator)
1982-01-01
The accuracy of an atmospheric correction method that depends on clear water bodies to infer solar and atmospheric parameters for radiative transfer equations was assessed by measuring the reflectance signature of four prominent south Texas rangeland plants with the LANDSAT satellite multispectral scanner (MSS) and a ground-based spectroradiometer. The rangeland plant reflectances produced by the two sensors were correlated with no significant deviation of the slope from unity or of the intercept from zero. These results indicated that the atmospheric correction produced LANDSAT MSS estimates of rangeland plant reflectances that are as accurate as those from the ground-based spectroradiometer.
Calibration uncertainty for Advanced LIGO's first and second observing runs
NASA Astrophysics Data System (ADS)
Cahillane, Craig; Betzwieser, Joe; Brown, Duncan A.; Goetz, Evan; Hall, Evan D.; Izumi, Kiwamu; Kandhasamy, Shivaraj; Karki, Sudarshan; Kissel, Jeff S.; Mendell, Greg; Savage, Richard L.; Tuyenbayev, Darkhan; Urban, Alex; Viets, Aaron; Wade, Madeline; Weinstein, Alan J.
2017-11-01
Calibration of the Advanced LIGO detectors is the quantification of the detectors' response to gravitational waves. Gravitational waves incident on the detectors cause phase shifts in the interferometer laser light which are read out as intensity fluctuations at the detector output. Understanding this detector response to gravitational waves is crucial to producing accurate and precise gravitational wave strain data. Estimates of binary black hole and neutron star parameters and tests of general relativity require well-calibrated data, as miscalibrations will lead to biased results. We describe the method of producing calibration uncertainty estimates for both LIGO detectors in the first and second observing runs.
An accurate, fast, and scalable solver for high-frequency wave propagation
NASA Astrophysics Data System (ADS)
Zepeda-Núñez, L.; Taus, M.; Hewett, R.; Demanet, L.
2017-12-01
In many science and engineering applications, solving time-harmonic high-frequency wave propagation problems quickly and accurately is of paramount importance. For example, in geophysics, particularly in oil exploration, such problems can be the forward problem in an iterative process for solving the inverse problem of subsurface inversion. It is important to solve these wave propagation problems accurately in order to efficiently obtain meaningful solutions of the inverse problems: low order forward modeling can hinder convergence. Additionally, due to the volume of data and the iterative nature of most optimization algorithms, the forward problem must be solved many times. Therefore, a fast solver is necessary to make solving the inverse problem feasible. For time-harmonic high-frequency wave propagation, obtaining both speed and accuracy is historically challenging. Recently, there have been many advances in the development of fast solvers for such problems, including methods which have linear complexity with respect to the number of degrees of freedom. While most methods scale optimally only in the context of low-order discretizations and smooth wave speed distributions, the method of polarized traces has been shown to retain optimal scaling for high-order discretizations, such as hybridizable discontinuous Galerkin methods, and for highly heterogeneous (and even discontinuous) wave speeds. The resulting fast and accurate solver is consequently highly attractive for geophysical applications. To date, this method relies on a layered domain decomposition together with a preconditioner applied in a sweeping fashion, which has limited straightforward parallelization. In this work, we introduce a new version of the method of polarized traces which reveals more parallel structure than previous versions while preserving all of its other advantages. We achieve this by further decomposing each layer and applying the preconditioner to these new components separately and in parallel. We demonstrate that this produces an even more effective and parallelizable preconditioner for a single right-hand side. As before, additional speed can be gained by pipelining several right-hand sides.
Guillerme, Thomas; Cooper, Natalie
2016-05-01
Analyses of living and fossil taxa are crucial for understanding biodiversity through time. The total evidence method allows living and fossil taxa to be combined in phylogenies, using molecular data for living taxa and morphological data for living and fossil taxa. With this method, substantial overlap of coded anatomical characters among living and fossil taxa is vital for accurately inferring topology. However, although molecular data for living species are widely available, scientists generating morphological data mainly focus on fossils. Therefore, there are fewer coded anatomical characters in living taxa, even in well-studied groups such as mammals. We investigated the number of coded anatomical characters available in phylogenetic matrices for living mammals and how these were phylogenetically distributed across orders. Eleven of 28 mammalian orders have available characters for fewer than 25% of species; this has implications for the accurate placement of fossils, although the issue is less pronounced at higher taxonomic levels. In most orders, species with available characters are randomly distributed across the phylogeny, which may reduce the impact of the problem. We suggest that increased morphological data collection efforts for living taxa are needed to produce accurate total evidence phylogenies. © 2016 The Authors.
Barrett, Christian L.; Cho, Byung-Kwan
2011-01-01
Immuno-precipitation of protein–DNA complexes followed by microarray hybridization is a powerful and cost-effective technology for discovering protein–DNA binding events at the genome scale. It is still an unresolved challenge to comprehensively, accurately and sensitively extract binding event information from the produced data. We have developed a novel strategy composed of an information-preserving signal-smoothing procedure, higher order derivative analysis and application of the principle of maximum entropy to address this challenge. Importantly, our method does not require any input parameters to be specified by the user. Using genome-scale binding data of two Escherichia coli global transcription regulators for which a relatively large number of experimentally supported sites are known, we show that ∼90% of known sites were resolved to within four probes, or ∼88 bp. Over half of the sites were resolved to within two probes, or ∼38 bp. Furthermore, we demonstrate that our strategy delivers significant quantitative and qualitative performance gains over available methods. Such accurate and sensitive binding site resolution has important consequences for accurately reconstructing transcriptional regulatory networks, for motif discovery, for furthering our understanding of local and non-local factors in protein–DNA interactions and for extending the usefulness horizon of the ChIP-chip platform. PMID:21051353
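As a small illustration of the signal-smoothing and derivative-analysis ingredients described above, the sketch below smooths a 1D probe-intensity track with a Savitzky-Golay filter and reports positions where the first derivative crosses zero at sufficient height. The window length, polynomial order, and height threshold are illustrative values, and the maximum-entropy step of the published strategy is not reproduced here.

```python
import numpy as np
from scipy.signal import savgol_filter

def candidate_peaks(signal, window=11, polyorder=3, min_height=2.0):
    """Locate candidate peak centres from a smoothed ChIP-chip-like intensity track."""
    smooth = savgol_filter(signal, window_length=window, polyorder=polyorder)
    deriv = savgol_filter(signal, window_length=window, polyorder=polyorder, deriv=1)
    peaks = []
    for i in range(1, len(deriv)):
        # the first derivative crosses zero from + to - at a local maximum
        if deriv[i - 1] > 0 >= deriv[i] and smooth[i] >= min_height:
            peaks.append(i)
    return peaks, smooth

# Synthetic track: two enriched regions on a noisy baseline.
x = np.arange(500)
track = (np.exp(-(x - 150) ** 2 / 200) + 0.7 * np.exp(-(x - 380) ** 2 / 300)) * 5
track += np.random.default_rng(0).normal(0, 0.3, x.size)
print(candidate_peaks(track)[0])
```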
Murray, Shauna A.; Wiese, Maria; Stüken, Anke; Brett, Steve; Kellmann, Ralf; Hallegraeff, Gustaaf; Neilan, Brett A.
2011-01-01
The recent identification of genes involved in the production of the potent neurotoxin and keystone metabolite saxitoxin (STX) in marine eukaryotic phytoplankton has allowed us for the first time to develop molecular genetic methods to investigate the chemical ecology of harmful algal blooms in situ. We present a novel method for detecting and quantifying the potential for STX production in marine environmental samples. Our assay detects a domain of the gene sxtA that encodes a unique enzyme putatively involved in the sxt pathway in marine dinoflagellates, sxtA4. A product of the correct size was recovered from nine strains of four species of STX-producing Alexandrium and Gymnodinium catenatum and was not detected in the non-STX-producing Alexandrium species, other dinoflagellate cultures, or an environmental sample that did not contain known STX-producing species. However, sxtA4 was also detected in the non-STX-producing strain of Alexandrium tamarense, Tasmanian ribotype. We investigated the copy number of sxtA4 in three strains of Alexandrium catenella and found it to be relatively constant among strains. Using our novel method, we detected and quantified sxtA4 in three environmental blooms of Alexandrium catenella that led to STX uptake in oysters. We conclude that this method shows promise as an accurate, fast, and cost-effective means of quantifying the potential for STX production in marine samples and will be useful for biological oceanographic research and harmful algal bloom monitoring. PMID:21841034
Singlet oxygen detection in biological systems: Uses and limitations.
Koh, Eugene; Fluhr, Robert
2016-07-02
The study of singlet oxygen in biological systems is challenging in many ways. Singlet oxygen is a relatively unstable, ephemeral molecule, and its high reactivity with many biomolecules makes it difficult to quantify accurately. Several methods have been developed to study this elusive molecule, but most studies thus far have focused on those conditions that produce relatively large amounts of singlet oxygen. However, more sensitive methods are required as one begins to explore the levels of singlet oxygen involved in signaling and regulatory processes. Here we discuss the various methods used in the study of singlet oxygen, and outline their uses and limitations.
Novel method to sample very high power CO2 lasers: II Continuing Studies
NASA Astrophysics Data System (ADS)
Eric, John; Seibert, Daniel B., II; Green, Lawrence I.
2005-04-01
For the past 28 years, the Laser Hardened Materials Evaluation Laboratory (LHMEL) at the Wright-Patterson Air Force Base, OH, has worked with CO2 lasers capable of producing continuous output power of up to 150 kW. These lasers are used in a number of advanced materials processing applications that require accurate spatial energy measurements of the laser. Conventional non-electronic methods are not satisfactory for determining the spatial energy profile. This paper describes continuing efforts in qualifying the new method, in which a continuous, real-time electronic spatial energy profile can be obtained for very high power (VHP) CO2 lasers.
Development of Control System for Hydrolysis Crystallization Process
NASA Astrophysics Data System (ADS)
Wan, Feng; Shi, Xiao-Ming; Feng, Fang-Fang
2016-05-01
The sulfate method for producing titanium dioxide is commonly used in China, but the crystallization time is determined manually, which leads to large errors and is harmful to the operators. In this paper, a new method for determining the crystallization time is proposed. The method adopts a red laser as the light source, a silicon photocell as the reflected-light receiving component, and optical fiber as the light transmission element; a differential algorithm in the software determines the crystallization time. The experimental results show that the method can determine the crystallization point automatically and accurately, can replace manual labor and protect the health of workers, and can be fully applied in practice.
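The differential algorithm is described only at a high level, so the sketch below shows one plausible reading: the photocell signal is smoothed with a moving average and the crystallization point is flagged where its first difference exceeds a threshold. The smoothing window and threshold are hypothetical values used purely for illustration.

```python
import numpy as np

def detect_crystallization_point(signal, window=25, threshold=0.05):
    """Return the sample index at which the smoothed photocell signal first changes rapidly.

    signal    : 1D array of reflected-light intensity readings over time.
    window    : moving-average window used to suppress noise (hypothetical).
    threshold : minimum absolute first difference that counts as onset (hypothetical).
    """
    kernel = np.ones(window) / window
    smooth = np.convolve(signal, kernel, mode="same")
    diff = np.abs(np.diff(smooth))
    candidates = np.nonzero(diff > threshold)[0]
    return int(candidates[0]) if candidates.size else None
```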
Numerical optimization methods for controlled systems with parameters
NASA Astrophysics Data System (ADS)
Tyatyushkin, A. I.
2017-10-01
First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In unconstrained-parameter problems, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method based on a second-order functional increment formula. Next, a general optimal control problem with state constraints and parameters involved on the right-hand sides of the controlled system and in the initial conditions is considered. This complicated problem is reduced to a mathematical programming one, followed by the search for optimal parameter values and control functions by applying a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.
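To illustrate the contrast between the first- and second-order building blocks mentioned above, the toy example below minimizes a generic parameter objective with SciPy's conjugate-gradient and Newton-CG routines; it is not the multimethod control algorithm of the paper, only the two optimizers applied to a Rosenbrock-type function.

```python
import numpy as np
from scipy.optimize import minimize

def objective(p):
    # toy stand-in for a functional of the control parameters
    return (1 - p[0]) ** 2 + 100 * (p[1] - p[0] ** 2) ** 2

def gradient(p):
    return np.array([
        -2 * (1 - p[0]) - 400 * p[0] * (p[1] - p[0] ** 2),
        200 * (p[1] - p[0] ** 2),
    ])

p0 = np.array([-1.2, 1.0])
first_order = minimize(objective, p0, jac=gradient, method="CG")
second_order = minimize(objective, p0, jac=gradient, method="Newton-CG")
print(first_order.x, first_order.nit)    # conjugate gradient
print(second_order.x, second_order.nit)  # Newton-type, typically fewer iterations
```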
Velásquez, A V; da Silva, G G; Sousa, D O; Oliveira, C A; Martins, C M M R; Dos Santos, P P M; Balieiro, J C C; Rennó, F P; Fukushima, R S
2018-04-18
Feed intake assessment is a valuable tool for herd management decisions. The use of markers, either internal or external, is currently the most used technique for estimating feed intake in production animals. The experiment used 10 multiparous Holstein cows fed a corn silage-based diet with a 55:45 forage-to-concentrate ratio. The average fecal recovery (FR) of TiO2 was higher than the FR of Cr2O3, and both FRs were greater than unity. With internal markers, acetyl bromide lignin and cutin FRs were lower than unity, and the average FR for indigestible neutral detergent fiber (iNDF) and indigestible acid detergent fiber (iADF) was 1.5. The FR was unaffected by the fecal sampling procedure and appears to be an intrinsic property of each molecule and how it interacts with digesta. Of the two external markers, only Cr2O3 produced accurate fecal output (FO) estimates, and the same was true for dry matter digestibility (DMD) when iNDF and iADF were used. Estimates for DMD and FO were affected by sampling procedure; 72-h bulk sampling [sub-sample from total feces collection (TFC)] consistently produced accurate results. The grab sampling procedures (sub-samples taken at specific times during the day) were accurate when using either of the indigestible fibers (iNDF or iADF) to estimate DMD. However, grab sampling procedures can only be recommended when concomitant TFC is performed on at least one animal per treatment to determine FR. Under these conditions, Cr2O3 is a suitable marker for estimating FO, and iNDF and iADF are adequate for estimating DMD. Moreover, the Cr2O3 + iADF marker pair produces accurate dry matter intake estimates and deserves further attention in ruminant nutrition studies. The method of dosing the external markers is extremely important and greatly affects and determines results. Whichever the method, it must allow the animals to display normal feeding behavior and not affect performance. The grab sampling procedures can replace TFC (once FR is established), which may open new possibilities for pasture-based or collectively housed animals. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
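The marker arithmetic behind these comparisons follows standard relationships: fecal output equals the external-marker dose divided by its fecal concentration (adjusted for fecal recovery), digestibility follows from the ratio of internal-marker concentrations in feed and feces, and intake is output divided by the indigestible fraction. The numbers in the sketch below are hypothetical and only demonstrate the calculation, not data from this experiment.

```python
def fecal_output_kg(dose_g_per_d, fecal_conc_g_per_kg, fecal_recovery=1.0):
    """Fecal DM output (kg/d) from an external marker such as Cr2O3."""
    return dose_g_per_d / (fecal_conc_g_per_kg * fecal_recovery)

def dm_digestibility(feed_marker_g_per_kg, fecal_marker_g_per_kg):
    """Apparent DM digestibility from an internal marker such as iNDF or iADF."""
    return 1.0 - feed_marker_g_per_kg / fecal_marker_g_per_kg

# Hypothetical values for illustration only.
fo = fecal_output_kg(dose_g_per_d=10.0, fecal_conc_g_per_kg=1.6, fecal_recovery=1.0)
dmd = dm_digestibility(feed_marker_g_per_kg=60.0, fecal_marker_g_per_kg=180.0)
dmi = fo / (1.0 - dmd)   # dry matter intake implied by output and digestibility
print(round(fo, 2), round(dmd, 2), round(dmi, 2))
```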
Residual gravimetric method to measure nebulizer output.
Vecellio None, Laurent; Grimbert, Daniel; Bordenave, Joelle; Benoit, Guy; Furet, Yves; Fauroux, Brigitte; Boissinot, Eric; De Monte, Michele; Lemarié, Etienne; Diot, Patrice
2004-01-01
The aim of this study was to assess a residual gravimetric method based on weighing dry filters to measure the aerosol output of nebulizers. This residual gravimetric method was compared to assay methods based on spectrophotometric measurement of terbutaline (Bricanyl, Astra Zeneca, France), high-performance liquid chromatography (HPLC) measurement of tobramycin (Tobi, Chiron, U.S.A.), and electrochemical measurements of NaF (as defined by the European standard). Two breath-enhanced jet nebulizers, one standard jet nebulizer, and one ultrasonic nebulizer were tested. Output measured by the residual gravimetric method was calculated by weighing the filters before and after aerosol collection and filter drying, corrected by the proportion of drug contained in the total solute mass. Output measured by the electrochemical, spectrophotometric, and HPLC methods was determined by assaying the drug extracted from the filter. The results demonstrated a strong correlation between the residual gravimetric method (x axis) and assay methods (y axis) in terms of drug mass output (y = 1.00x - 0.02, r2 = 0.99, n = 27). We conclude that a residual gravimetric method based on dry filters, when validated for a particular agent, is an accurate way of measuring aerosol output.
Dickinson, J.E.; James, S.C.; Mehl, S.; Hill, M.C.; Leake, S.A.; Zyvoloski, G.A.; Faunt, C.C.; Eddebbarh, A.-A.
2007-01-01
A flexible, robust method for linking parent (regional-scale) and child (local-scale) grids of locally refined models that use different numerical methods is developed based on a new, iterative ghost-node method. Tests are presented for two-dimensional and three-dimensional pumped systems that are homogeneous or that have simple heterogeneity. The parent and child grids are simulated using the block-centered finite-difference MODFLOW and control-volume finite-element FEHM models, respectively. The models are solved iteratively through head-dependent (child model) and specified-flow (parent model) boundary conditions. Boundary conditions for models with nonmatching grids or zones of different hydraulic conductivity are derived and tested against heads and flows from analytical or globally-refined models. Results indicate that for homogeneous two- and three-dimensional models with matched grids (integer number of child cells per parent cell), the new method is nearly as accurate as the coupling of two MODFLOW models using the shared-node method and, surprisingly, errors are slightly lower for nonmatching grids (noninteger number of child cells per parent cell). For heterogeneous three-dimensional systems, this paper compares two methods for each of the two sets of boundary conditions: external heads at head-dependent boundary conditions for the child model are calculated using bilinear interpolation or a Darcy-weighted interpolation; specified-flow boundary conditions for the parent model are calculated using model-grid or hydrogeologic-unit hydraulic conductivities. Results suggest that significantly more accurate heads and flows are produced when both Darcy-weighted interpolation and hydrogeologic-unit hydraulic conductivities are used, while the other methods produce larger errors at the boundary between the regional and local models. The tests suggest that, if posed correctly, the ghost-node method performs well. Additional testing is needed for highly heterogeneous systems. © 2007 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minelli, Annalisa, E-mail: Annalisa.Minelli@univ-brest.fr; Marchesini, Ivan, E-mail: Ivan.Marchesini@irpi.cnr.it; Taylor, Faith E., E-mail: Faith.Taylor@kcl.ac.uk
Although there are clear economic and environmental incentives for producing energy from solar and wind power, there can be local opposition to their installation due to their impact upon the landscape. To date, no international guidelines exist to guide quantitative visual impact assessment of these facilities, making the planning process somewhat subjective. In this paper we demonstrate the development of a method and an Open Source GIS tool to quantitatively assess the visual impact of these facilities using line-of-sight techniques. The methods here build upon previous studies by (i) more accurately representing the shape of energy-producing facilities, (ii) taking into account the distortion of the perceived shape and size of facilities caused by the location of the observer, (iii) calculating the possible obscuring of facilities caused by terrain morphology and (iv) allowing the combination of various facilities to more accurately represent the landscape. The tool has been applied to real and synthetic case studies and compared to recently published results from other models, and demonstrates an improvement in accuracy of the calculated visual impact of facilities. The tool is named r.wind.sun and is freely available from GRASS GIS AddOns. - Highlights: • We develop a tool to quantify wind turbine and photovoltaic panel visual impact. • The tool is freely available to download and edit as a module of GRASS GIS. • The tool takes into account visual distortion of the shape and size of objects. • The accuracy of calculation of visual impact is improved over previous methods.
Self-similar slip distributions on irregular shaped faults
NASA Astrophysics Data System (ADS)
Herrero, A.; Murphy, S.
2018-06-01
We propose a strategy to place a self-similar slip distribution on a complex fault surface that is represented by an unstructured mesh. This is achieved with a strategy based on the composite source model, in which a hierarchical set of asperities is placed on the fault, each with its own slip function that depends on the distance from the asperity centre. Central to this technique is the efficient, accurate computation of distance between two points on the fault surface. This is known as the geodetic distance problem. We propose a method to compute the distance across complex non-planar surfaces based on a corollary of the Huygens' principle. The difference between this method and other sample-based algorithms that precede it is the use of a curved front at the local level to calculate the distance. This technique produces a highly accurate computation of the distance, as the curvature of the front is linked to the distance from the source. Our local scheme is based on a sequence of two trilaterations, producing a robust algorithm which is highly precise. We test the strategy on a planar surface in order to assess its ability to keep the self-similarity properties of a slip distribution. We also present a synthetic self-similar slip distribution on a real slab topography for a M8.5 event. This method for computing distance may be extended to the estimation of first arrival times in both complex 3D surfaces and 3D volumes.
Imputation for multisource data with comparison and assessment techniques
Casleton, Emily Michele; Osthus, David Allen; Van Buren, Kendra Lu
2017-12-27
Missing data are a prevalent issue in analyses involving data collection. The problem of missing data is exacerbated for multisource analysis, where data from multiple sensors are combined to arrive at a single conclusion. In this scenario, missing data are more likely to occur and can lead to discarding a large amount of the data collected; however, the information from observed sensors can be leveraged to estimate those values not observed. We propose two methods for imputation of multisource data, both of which take advantage of potential correlation between data from different sensors, through ridge regression and a state-space model. These methods, as well as the common median imputation, are applied to data collected from a variety of sensors monitoring an experimental facility. Performance of imputation methods is compared with the mean absolute deviation; however, rather than using this metric to solely rank the methods, we also propose an approach to identify significant differences. Imputation techniques will also be assessed by their ability to produce appropriate confidence intervals, through coverage and length, around the imputed values. Finally, performance of imputed datasets is compared with a marginalized dataset through a weighted k-means clustering. In general, we found that imputation through a dynamic linear model tended to be the most accurate and to produce the most precise confidence intervals, and that imputing the missing values and down-weighting them with respect to observed values in the analysis led to the most accurate performance.
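One of the two proposed approaches is imputation through ridge regression across correlated sensors; the sketch below gives a schematic reading of that idea on a (time x sensor) array, regressing each gappy sensor on the others and predicting its missing rows. It is not the authors' code, and the state-space (dynamic linear model) variant is not shown.

```python
import numpy as np
from sklearn.linear_model import Ridge

def ridge_impute(data, alpha=1.0):
    """Fill missing entries (NaN) of a (time x sensor) array using ridge regression.

    Each sensor with gaps is regressed on the other sensors using rows where it is
    observed, then its missing rows are predicted from those same covariate sensors.
    """
    filled = data.copy()
    n_sensors = data.shape[1]
    for j in range(n_sensors):
        miss = np.isnan(data[:, j])
        if not miss.any():
            continue
        others = [k for k in range(n_sensors) if k != j]
        covars_ok = ~np.isnan(data[:, others]).any(axis=1)
        train = ~miss & covars_ok      # rows usable for fitting
        predict = miss & covars_ok     # rows whose value can be predicted
        if train.any() and predict.any():
            model = Ridge(alpha=alpha).fit(data[np.ix_(train, others)], data[train, j])
            filled[np.where(predict)[0], j] = model.predict(data[np.ix_(predict, others)])
    return filled
```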
Imputation for multisource data with comparison and assessment techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casleton, Emily Michele; Osthus, David Allen; Van Buren, Kendra Lu
Missing data are a prevalent issue in analyses involving data collection. The problem of missing data is exacerbated for multisource analysis, where data from multiple sensors are combined to arrive at a single conclusion. In this scenario, missing data are more likely to occur and can lead to discarding a large amount of the data collected; however, the information from observed sensors can be leveraged to estimate those values not observed. We propose two methods for imputation of multisource data, both of which take advantage of potential correlation between data from different sensors, through ridge regression and a state-space model. These methods, as well as the common median imputation, are applied to data collected from a variety of sensors monitoring an experimental facility. Performance of imputation methods is compared with the mean absolute deviation; however, rather than using this metric to solely rank the methods, we also propose an approach to identify significant differences. Imputation techniques will also be assessed by their ability to produce appropriate confidence intervals, through coverage and length, around the imputed values. Finally, performance of imputed datasets is compared with a marginalized dataset through a weighted k-means clustering. In general, we found that imputation through a dynamic linear model tended to be the most accurate and to produce the most precise confidence intervals, and that imputing the missing values and down-weighting them with respect to observed values in the analysis led to the most accurate performance.
Rapid lysostaphin test to differentiate Staphylococcus and Micrococcus species.
Geary, C; Stevens, M
1986-01-01
A rapid, simple lysostaphin lysis susceptibility test to differentiate the genera Staphylococcus and Micrococcus was evaluated. Of 181 strains from culture collections, 95 of 95 Staphylococcus strains were lysed, and 79 of 79 Micrococcus strains were not lysed. The seven Planococcus strains were resistant. Clinical isolates (890) were tested with lysostaphin and for the ability to produce acid from glycerol in the presence of erythromycin. Overall agreement between the methods was 99.2%. All clinical Micrococcus strains (43) were resistant to lysostaphin, and all clinical Staphylococcus strains (847) were susceptible. Seven of the Staphylococcus strains did not produce acid from glycerol in the presence of erythromycin. This lysostaphin test provides results in 2 h. It is easier to perform than previously described lysostaphin lysis methods. It is also more rapid and accurate than the glycerol-erythromycin test. PMID:3519667
Seo, Miyeong; Kim, Byungjoo; Baek, Song-Yee
2015-07-01
Patulin, a mycotoxin produced by several molds in fruits, has been frequently detected in apple products. Therefore, regulatory bodies have established recommended maximum permitted patulin concentrations for each type of apple product. Although several analytical methods have been adopted to determine patulin in food, quality control of patulin analysis is not easy, as reliable certified reference materials (CRMs) are not available. In this study, as a part of a project for developing CRMs for patulin analysis, we developed isotope dilution liquid chromatography-tandem mass spectrometry (ID-LC/MS/MS) as a higher-order reference method for the accurate value-assignment of CRMs. (13)C7-patulin was used as internal standard. Samples were extracted with ethyl acetate to improve recovery. For further sample cleanup with solid-phase extraction (SPE), the HLB SPE cartridge was chosen after comparing with several other types of SPE cartridges. High-performance liquid chromatography was performed on a multimode column for proper retention and separation of highly polar and water-soluble patulin from sample interferences. Sample extracts were analyzed by LC/MS/MS with electrospray ionization in negative ion mode with selected reaction monitoring of patulin and (13)C7-patulin at m/z 153→m/z 109 and m/z 160→m/z 115, respectively. The validity of the method was tested by measuring gravimetrically fortified samples of various apple products. In addition, the repeatability and the reproducibility of the method were tested to evaluate the performance of the method. The method was shown to provide accurate measurements in the 3-40 μg/kg range with a relative expanded uncertainty of around 1%.
The Production and Study of Antiprotons and Cold Antihydrogen
2006-12-01
Proceedings, 730, 3-12 (2004). Publications for 2005: "Atoms Made Entirely of Antimatter: Two Methods Produce Slow Antihydrogen" (Review Paper), G. Gabrielse... 'stunning' scientific accomplishment of creating antimatter, according to Provost Steven Hyman. "As the head of an international team of physicists at CERN... lower than previously realized," Hyman said. These techniques allow for extremely accurate measurements of the properties of matter and antimatter.
Predicting Carbonate Species Ionic Conductivity in Alkaline Anion Exchange Membranes
2012-06-01
This method has been used previously with both PEM and AEM fuel cells and demonstrated its ability to accurately predict ionic conductivity [2,9,24... water. In an AMFC, the mobile species is a hydroxide ion (OH-) and in a PEM fuel cell, the proton is solvated with a water molecule forming... membrane synthesis techniques have produced polymer electrolyte membranes that are capable of transporting anions in alkaline membrane fuel cells.
Ghassabi Kondalaji, Samaneh; Khakinejad, Mahdiar; Tafreshian, Amirmahdi; J Valentine, Stephen
2017-05-01
Collision cross-section (CCS) measurements with a linear drift tube have been utilized to study the gas-phase conformers of a model peptide (acetyl-PAAAAKAAAAKAAAAKAAAAK). Extensive molecular dynamics (MD) simulations have been conducted to derive an advanced protocol for the generation of a comprehensive pool of in-silico structures; both higher energy and more thermodynamically stable structures are included to provide an unbiased sampling of conformational space. MD simulations at 300 K are applied to the in-silico structures to more accurately describe the gas-phase transport properties of the ion conformers including their dynamics. Different methods used previously for trajectory method (TM) CCS calculation employing the Mobcal software [1] are evaluated. A new method for accurate CCS calculation is proposed based on clustering and data mining techniques. CCS values are calculated for all in-silico structures, and those with matching CCS values are chosen as candidate structures. With this approach, more than 300 candidate structures with significant structural variation are produced; although no final gas-phase structure is proposed here, in a second installment of this work, gas-phase hydrogen deuterium exchange data will be utilized as a second criterion to select among these structures as well as to propose relative populations for these ion conformers. Here the need to increase conformer diversity and accurate CCS calculation is demonstrated and the advanced methods are discussed.
NASA Astrophysics Data System (ADS)
Ghassabi Kondalaji, Samaneh; Khakinejad, Mahdiar; Tafreshian, Amirmahdi; J. Valentine, Stephen
2017-05-01
Collision cross-section (CCS) measurements with a linear drift tube have been utilized to study the gas-phase conformers of a model peptide (acetyl-PAAAAKAAAAKAAAAKAAAAK). Extensive molecular dynamics (MD) simulations have been conducted to derive an advanced protocol for the generation of a comprehensive pool of in-silico structures; both higher energy and more thermodynamically stable structures are included to provide an unbiased sampling of conformational space. MD simulations at 300 K are applied to the in-silico structures to more accurately describe the gas-phase transport properties of the ion conformers including their dynamics. Different methods used previously for trajectory method (TM) CCS calculation employing the Mobcal software [1] are evaluated. A new method for accurate CCS calculation is proposed based on clustering and data mining techniques. CCS values are calculated for all in-silico structures, and those with matching CCS values are chosen as candidate structures. With this approach, more than 300 candidate structures with significant structural variation are produced; although no final gas-phase structure is proposed here, in a second installment of this work, gas-phase hydrogen deuterium exchange data will be utilized as a second criterion to select among these structures as well as to propose relative populations for these ion conformers. Here the need to increase conformer diversity and accurate CCS calculation is demonstrated and the advanced methods are discussed.
Integration of Point Clouds Dataset from Different Sensors
NASA Astrophysics Data System (ADS)
Abdullah, C. K. A. F. Che Ku; Baharuddin, N. Z. S.; Ariff, M. F. M.; Majid, Z.; Lau, C. L.; Yusoff, A. R.; Idris, K. M.; Aspuri, A.
2017-02-01
Laser scanner technology has become an option for collecting data nowadays. It comprises Airborne Laser Scanning (ALS) and Terrestrial Laser Scanning (TLS). An ALS such as the Phoenix AL3-32 can provide accurate information from the rooftop viewpoint, while a TLS such as the Leica C10 can provide complete data for the building facade. However, if both are integrated, more accurate data can be produced. The focus of this study is to integrate both types of data acquisition, ALS and TLS, and to determine the accuracy of the data obtained. The final results are used to generate three-dimensional (3D) building models. The scope of this study is the data acquisition of the UTM Eco-home through laser scanning methods: ALS scanning of the roof and TLS scanning of the building facade. Both devices are used to ensure that no part of the building is left unscanned. In the data integration process, the two datasets are registered using selected points on man-made features that are clearly visible, in Cyclone 7.3 software. The accuracy of the integrated data is determined through an accuracy assessment carried out using the man-made registration points. The integration result achieves an accuracy below 0.04 m. The integrated data are then used to generate a 3D model of the UTM Eco-home building using SketchUp software. In conclusion, integrating ALS and TLS data acquisition produces accurate integrated data that can be used to generate a 3D model of the UTM Eco-home. For visualization purposes, the generated 3D building model is prepared at Level of Detail 3 (LOD3), as recommended by the City Geography Markup Language (CityGML).
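Point-based registration from manually picked corresponding points, as used here on man-made features, can be sketched with the classic SVD (Kabsch) solution for a rigid rotation and translation; this is a generic illustration rather than the Cyclone 7.3 workflow of the study.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t such that R @ src_i + t ≈ dst_i.

    src, dst : (N, 3) arrays of corresponding points picked on man-made features.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def registration_rmse(src, dst, R, t):
    """Root-mean-square residual of the fitted transform, e.g. to check the ~0.04 m level."""
    residual = dst - (src @ R.T + t)
    return float(np.sqrt((residual ** 2).sum(axis=1).mean()))
```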
Lin, Chen-Yen; Halabi, Susan
2017-01-01
We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for the Cox’s proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description for the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method can better approximate the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer. PMID:29326496
Lin, Chen-Yen; Halabi, Susan
2017-01-01
We propose a minimand perturbation method to derive the confidence regions for the regularized estimators for the Cox's proportional hazards model. Although the regularized estimation procedure produces a more stable point estimate, it remains challenging to provide an interval estimator or an analytic variance estimator for the associated point estimate. Based on the sandwich formula, the current variance estimator provides a simple approximation, but its finite sample performance is not entirely satisfactory. Besides, the sandwich formula can only provide variance estimates for the non-zero coefficients. In this article, we present a generic description for the perturbation method and then introduce a computation algorithm using the adaptive least absolute shrinkage and selection operator (LASSO) penalty. Through simulation studies, we demonstrate that our method can better approximate the limiting distribution of the adaptive LASSO estimator and produces more accurate inference compared with the sandwich formula. The simulation results also indicate the possibility of extending the applications to the adaptive elastic-net penalty. We further demonstrate our method using data from a phase III clinical trial in prostate cancer.
The Analysis Performance Method Naive Bayes and SSVM Determine Pattern Groups of Disease
NASA Astrophysics Data System (ADS)
Sitanggang, Rianto; Tulus; Situmorang, Zakarias
2017-12-01
Information is a very important element of daily needs, but obtaining precise and accurate information is not easy; this research can help decision makers by providing a comparison. We apply data mining techniques to analyze the performance of the naïve Bayes method and the Smooth Support Vector Machine (SSVM) algorithm in grouping diseases. The patterns of disease frequently suffered by people in a group can be detected from the collection of information contained in medical records. Medical records contain patients' disease information coded according to the WHO standard. The processing of medical record data to find patterns of diseases that frequently occur in the community uses the attributes address, sex, type of disease, and age. The subsequent analysis groups these four attributes. From the results of the research conducted on the fever and diabetes mellitus dataset, the naïve Bayes method produces an average accuracy of 99% and the SSVM method produces an average accuracy of 93%.
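A schematic version of the comparison is sketched below, assuming the medical-record attributes (address, sex, disease type, age) have been encoded numerically. scikit-learn's GaussianNB stands in for naïve Bayes and a standard RBF SVC for the smooth SVM variant, so the accuracies printed on this toy data will not reproduce the 99% and 93% reported above.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# toy stand-in for encoded medical-record attributes and a disease-group label
X = rng.normal(size=(500, 4))             # address code, sex, disease code, age
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

nb_acc = cross_val_score(GaussianNB(), X, y, cv=5).mean()
svm_acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
print(f"naive Bayes accuracy: {nb_acc:.2f}, SVM accuracy: {svm_acc:.2f}")
```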
Tensor-based Dictionary Learning for Spectral CT Reconstruction
Zhang, Yanbo; Wang, Ge
2016-01-01
Spectral computed tomography (CT) produces an energy-discriminative attenuation map of an object, extending a conventional image volume with a spectral dimension. In spectral CT, an image can be sparsely represented in each of multiple energy channels, and the channel images are highly correlated among energy channels. Based on these characteristics, we propose a tensor-based dictionary learning method for spectral CT reconstruction. In our method, tensor patches are extracted from an image tensor, which is reconstructed using the filtered backprojection (FBP), to form a training dataset. With the Candecomp/Parafac decomposition, a tensor-based dictionary is trained, in which each atom is a rank-one tensor. Then, the trained dictionary is used to sparsely represent image tensor patches during an iterative reconstruction process, and the alternating minimization scheme is adapted for optimization. The effectiveness of our proposed method is validated with both numerically simulated and real preclinical mouse datasets. The results demonstrate that the proposed tensor-based method generally produces superior image quality, and leads to more accurate material decomposition than the currently popular methods. PMID:27541628
Surface-from-gradients without discrete integrability enforcement: A Gaussian kernel approach.
Ng, Heung-Sun; Wu, Tai-Pang; Tang, Chi-Keung
2010-11-01
Representative surface reconstruction algorithms taking a gradient field as input enforce the integrability constraint in a discrete manner. While enforcing integrability allows the subsequent integration to produce surface heights, existing algorithms have one or more of the following disadvantages: they can only handle dense per-pixel gradient fields, smooth out sharp features in a partially integrable field, or produce severe surface distortion in the results. In this paper, we present a method which does not enforce discrete integrability and reconstructs a 3D continuous surface from a gradient or a height field, or a combination of both, which can be dense or sparse. The key to our approach is the use of kernel basis functions, which transfer the continuous surface reconstruction problem into high-dimensional space, where a closed-form solution exists. By using the Gaussian kernel, we can derive a straightforward implementation which is able to produce results better than traditional techniques. In general, an important advantage of our kernel-based method is that it does not suffer from discretization and finite approximation, both of which lead to surface distortion, which is typical of Fourier or wavelet bases widely adopted by previous representative approaches. We perform comparisons with classical and recent methods on benchmark as well as challenging data sets to demonstrate that our method produces accurate surface reconstruction that preserves salient and sharp features. The source code and executable of the system are available for downloading.
Bilbao, Aivett; Zhang, Ying; Varesio, Emmanuel; Luban, Jeremy; Strambio-De-Castillia, Caterina; Lisacek, Frédérique; Hopfgartner, Gérard
2016-01-01
Data-independent acquisition LC-MS/MS techniques complement supervised methods for peptide quantification. However, due to the wide precursor isolation windows, these techniques are prone to interference at the fragment ion level, which in turn is detrimental for accurate quantification. The “non-outlier fragment ion” (NOFI) ranking algorithm has been developed to assign low priority to fragment ions affected by interference. By using the optimal subset of high-priority fragment ions, these interfered fragment ions are effectively excluded from quantification. NOFI represents each fragment ion as a vector of four dimensions related to chromatographic and MS fragmentation attributes and applies multivariate outlier detection techniques. Benchmarking conducted on a well-defined quantitative dataset (i.e. the SWATH Gold Standard) indicates that NOFI on average is able to accurately quantify 11-25% more peptides than the commonly used Top-N library intensity ranking method. The sum of the area of the Top3-5 NOFIs produces similar coefficients of variation as compared to the library intensity method but with more accurate quantification results. On a biologically relevant human dendritic cell digest dataset, NOFI properly assigns low priority ranks to 85% of annotated interferences, resulting in sensitivity values between 0.92 and 0.80 against 0.76 for the Spectronaut interference detection algorithm. PMID:26412574
A rapid analytical method for predicting the oxygen demand of wastewater.
Fogelman, Shoshana; Zhao, Huijun; Blumenstein, Michael
2006-11-01
In this study, an investigation was undertaken to determine whether the predictive accuracy of an indirect, multiwavelength spectroscopic technique for rapidly determining oxygen demand (OD) values is affected by the use of unfiltered and turbid samples, as well as by the use of absorbance values measured below 200 nm. A rapid OD technique was developed that uses UV-Vis spectroscopy and artificial neural networks (ANNs) to indirectly determine chemical oxygen demand (COD) levels. The most accurate results were obtained when the spectral range of 190-350 nm was provided as data input to the ANN and when unfiltered samples with turbidity below 150 NTU were used; under these conditions, correlations above 0.90 were obtained against the standard COD method. This indicates that samples can be measured directly without the additional need for preprocessing by filtering. Samples with turbidity values higher than 150 NTU were found to produce poor correlations with the standard COD method, which made them unsuitable for accurate, real-time, on-line monitoring of OD levels.
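The ANN mapping from multiwavelength absorbance to COD can be sketched with a small multilayer perceptron. The spectra below are synthetic stand-ins for 190-350 nm scans, and the network size, scaling, and train/test split are illustrative choices rather than those of the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 200, 161        # ~190-350 nm at 1 nm steps
cod = rng.uniform(20, 500, n_samples)      # synthetic "true" COD values (mg/L)
# synthetic absorbance spectra loosely scaled by COD, plus measurement noise
base = np.exp(-np.linspace(0, 3, n_wavelengths))
spectra = cod[:, None] * base[None, :] / 500 + rng.normal(0, 0.01, (n_samples, n_wavelengths))

X_train, X_test, y_train, y_test = train_test_split(spectra, cod, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0))
model.fit(X_train, y_train)
print("R^2 on held-out samples:", round(model.score(X_test, y_test), 3))
```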
Chambert, Thierry A.; Waddle, J. Hardin; Miller, David A.W.; Walls, Susan; Nichols, James D.
2018-01-01
The development and use of automated species-detection technologies, such as acoustic recorders, for monitoring wildlife are rapidly expanding. Automated classification algorithms provide a cost- and time-effective means to process information-rich data, but often at the cost of additional detection errors. Appropriate methods are necessary to analyse such data while dealing with the different types of detection errors. We developed a hierarchical modelling framework for estimating species occupancy from automated species-detection data. We explore design and optimization of data post-processing procedures to account for detection errors and generate accurate estimates. Our proposed method accounts for both imperfect detection and false positive errors and utilizes information about both occurrence and abundance of detections to improve estimation. Using simulations, we show that our method provides much more accurate estimates than models ignoring the abundance of detections. The same findings are reached when we apply the methods to two real datasets on North American frogs surveyed with acoustic recorders. When false positives occur, estimator accuracy can be improved when a subset of detections produced by the classification algorithm is post-validated by a human observer. We use simulations to investigate the relationship between accuracy and effort spent on post-validation, and found that very accurate occupancy estimates can be obtained with as little as 1% of data being validated. Automated monitoring of wildlife provides opportunity and challenges. Our methods for analysing automated species-detection data help to meet key challenges unique to these data and will prove useful for many wildlife monitoring programs.
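The core of a single-season occupancy model with false positives can be written in a few lines: a site is occupied with probability psi, detections arise with probability p11 at occupied sites and p10 at unoccupied ones, and the marginal likelihood mixes the two binomials. The sketch below fits that basic model by maximum likelihood on simulated counts; it omits the hierarchical structure, the detection-abundance information, and the post-validation data that the framework above exploits.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom
from scipy.special import expit

def neg_log_lik(params, y, k):
    """y: detections per site out of k surveys; params are on the logit scale."""
    psi, p11, p10 = expit(params)
    lik = psi * binom.pmf(y, k, p11) + (1 - psi) * binom.pmf(y, k, p10)
    return -np.sum(np.log(lik + 1e-12))

# Simulate 200 sites, 8 surveys each, with occasional false positives.
rng = np.random.default_rng(2)
k, psi_true, p11_true, p10_true = 8, 0.6, 0.5, 0.05
z = rng.binomial(1, psi_true, 200)                          # true occupancy states
y = rng.binomial(k, np.where(z == 1, p11_true, p10_true))   # detection counts per site

# Note: the two mixture components can swap labels (psi <-> 1-psi, p11 <-> p10);
# constraining p11 > p10 resolves this in practice.
fit = minimize(neg_log_lik, x0=np.zeros(3), args=(y, k), method="Nelder-Mead")
print("estimated psi, p11, p10:", np.round(expit(fit.x), 2))
```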
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nguyen, Dang Van; NeuroSpin, Bat145, Point Courrier 156, CEA Saclay Center, 91191 Gif-sur-Yvette Cedex; Li, Jing-Rebecca, E-mail: jingrebecca.li@inria.fr
2014-04-15
The complex transverse water proton magnetization subject to diffusion-encoding magnetic field gradient pulses in a heterogeneous medium can be modeled by the multiple compartment Bloch–Torrey partial differential equation (PDE). In addition, steady-state Laplace PDEs can be formulated to produce the homogenized diffusion tensor that describes the diffusion characteristics of the medium in the long time limit. In spatial domains that model biological tissues at the cellular level, these two types of PDEs have to be completed with permeability conditions on the cellular interfaces. To solve these PDEs, we implemented a finite element method that allows jumps in the solution at the cell interfaces by using double nodes. Using a transformation of the Bloch–Torrey PDE we reduced oscillations in the searched-for solution and simplified the implementation of the boundary conditions. The spatial discretization was then coupled to the adaptive explicit Runge–Kutta–Chebyshev time-stepping method. Our proposed method is second order accurate in space and second order accurate in time. We implemented this method on the FEniCS C++ platform and show time and spatial convergence results. Finally, this method is applied to study some relevant questions in diffusion MRI.
Magnetohydrodynamic generator experimental studies
NASA Technical Reports Server (NTRS)
Pierson, E. S.
1972-01-01
The results for an experimental study of a one wavelength MHD induction generator operating on a liquid flow are presented. First, the design philosophy and the experimental generator design are summarized, including a description of the flow loop and instrumentation. Next, a Fourier series method is described for treating the fact that the magnetic flux density produced by the stator is not a pure traveling sinusoid, and some results are summarized. This approach appears to be of interest after revisions are made, but the initial results are not accurate. Finally, some of the experimental data are summarized for various methods of excitation.
Improving Acoustic Models by Watching Television
NASA Technical Reports Server (NTRS)
Witbrock, Michael J.; Hauptmann, Alexander G.
1998-01-01
Obtaining sufficient labelled training data is a persistent difficulty for speech recognition research. Although well transcribed data is expensive to produce, there is a constant stream of challenging speech data and poor transcription broadcast as closed-captioned television. We describe a reliable unsupervised method for identifying accurately transcribed sections of these broadcasts, and show how these segments can be used to train a recognition system. Starting from acoustic models trained on the Wall Street Journal database, a single iteration of our training method reduced the word error rate on an independent broadcast television news test set from 62.2% to 59.5%.
An algebraic method for constructing stable and consistent autoregressive filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu
2015-02-15
In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods; it takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces improved short-time predictions relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
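The classical stability condition mentioned above can be checked directly: an AR(p) model x_t = a_1 x_{t-1} + ... + a_p x_{t-p} + noise is stable when all roots of its characteristic polynomial lie strictly inside the unit circle. A minimal sketch in Python/NumPy, with example coefficients that are assumed rather than taken from the paper:

```python
import numpy as np

def ar_is_stable(coeffs):
    """Check the classical stability condition for an AR(p) model
    x_t = a_1 x_{t-1} + ... + a_p x_{t-p} + noise.
    The model is stable if all roots of z^p - a_1 z^{p-1} - ... - a_p
    lie strictly inside the unit circle."""
    a = np.asarray(coeffs, dtype=float)
    char_poly = np.concatenate(([1.0], -a))   # monic characteristic polynomial
    roots = np.roots(char_poly)
    return bool(np.all(np.abs(roots) < 1.0))

print(ar_is_stable([0.5, 0.3]))    # True: stable AR(2) example
print(ar_is_stable([1.1, 0.2]))    # False: a root lies outside the unit circle
```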
Redding, David W; Lucas, Tim C D; Blackburn, Tim M; Jones, Kate E
2017-01-01
Statistical approaches for inferring the spatial distribution of taxa (Species Distribution Models, SDMs) commonly rely on available occurrence data, which are often clumped and geographically restricted. Although available SDM methods address some of these factors, they could be modelled more directly and accurately using a spatially explicit approach. Software to fit models with spatial autocorrelation parameters in SDMs is now widely available, but whether such approaches for inferring SDMs improve predictions compared with other methodologies is unknown. Here, within a simulated environment using 1000 generated species' ranges, we compared the performance of two commonly used non-spatial SDM methods (Maximum Entropy Modelling, MAXENT, and boosted regression trees, BRT) with a spatial Bayesian SDM method (fitted using R-INLA), when the underlying data exhibit varying combinations of clumping and geographic restriction. Finally, we tested how the recommended methodological settings designed to account for spatially non-random patterns in the data impact inference. The spatial Bayesian SDM method was the most consistently accurate, ranking among the two most accurate methods in 7 out of 8 data sampling scenarios. Within high-coverage sample datasets, all methods performed fairly similarly. When sampling points were randomly spread, BRT was 1-3% more accurate than the other methods, and when samples were clumped, the spatial Bayesian SDM method had a 4-8% better AUC score. In contrast, when sampling points were restricted to a small section of the true range, all methods were on average 10-12% less accurate, with greater variation among the methods. Model inference under the recommended settings to account for autocorrelation was not impacted by clumping or restriction of data, except for the complexity of the spatial regression term in the spatial Bayesian model. Methods such as those made available by R-INLA can be successfully used to account for spatial autocorrelation in an SDM context and, by taking account of random effects, produce outputs that better elucidate the role of covariates in predicting species occurrence. Given that it is often unclear what drives data clumping in an empirical occurrence dataset, or indeed how geographically restricted these data are, spatially explicit Bayesian SDMs may be the better choice when modelling the spatial distribution of target species.
Improved Correction of Misclassification Bias With Bootstrap Imputation.
van Walraven, Carl
2018-07-01
Diagnostic codes used in administrative database research can create bias due to misclassification. Quantitative bias analysis (QBA) can correct for this bias and requires only code sensitivity and specificity, but it may return invalid results. Bootstrap imputation (BI) can also address misclassification bias but traditionally requires multivariate models to accurately estimate disease probability. This study compared misclassification bias correction using QBA and BI. Serum creatinine measures were used to determine severe renal failure status in 100,000 hospitalized patients. The prevalence of severe renal failure in 86 patient strata and its association with 43 covariates were determined and compared with results in which renal failure status was determined using diagnostic codes (sensitivity 71.3%, specificity 96.2%). Differences in the results (misclassification bias) were then corrected with QBA or BI (using progressively more complex methods to estimate disease probability). In total, 7.4% of patients had severe renal failure. Imputing disease status with diagnostic codes exaggerated prevalence estimates [median relative change (range), 16.6% (0.8%-74.5%)] and its association with covariates [median (range) exponentiated absolute parameter estimate difference, 1.16 (1.01-2.04)]. QBA produced invalid results 9.3% of the time and increased bias in estimates of both disease prevalence and covariate associations. BI decreased misclassification bias with increasingly accurate disease probability estimates. QBA can produce invalid results and increase misclassification bias. BI avoids invalid results and can importantly decrease misclassification bias when accurate disease probability estimates are used.
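For context, the simplest form of quantitative bias analysis corrects an observed prevalence using only the code's sensitivity and specificity. The sketch below shows that textbook correction (not necessarily the study's exact implementation) and how it can return invalid, out-of-range estimates, which is the failure mode noted in the abstract; the prevalence values are illustrative assumptions.

```python
def qba_corrected_prevalence(p_observed, sensitivity, specificity):
    """Classic misclassification correction:
    p_true = (p_observed + specificity - 1) / (sensitivity + specificity - 1).
    The result can fall outside [0, 1] when the observed prevalence is
    inconsistent with the assumed sensitivity and specificity."""
    return (p_observed + specificity - 1.0) / (sensitivity + specificity - 1.0)

# Sensitivity/specificity from the abstract; observed prevalences are assumed
se, sp = 0.713, 0.962
print(qba_corrected_prevalence(0.10, se, sp))   # ~0.092, a plausible corrected value
print(qba_corrected_prevalence(0.02, se, sp))   # negative -> invalid result
```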
Lee, Chang-Ro; Lee, Jung Hun; Park, Kwang Seung; Kim, Young Bae; Jeong, Byeong Chul; Lee, Sang Hee
2016-01-01
The emergence of carbapenem-resistant Gram-negative pathogens poses a serious threat to public health worldwide. In particular, the increasing prevalence of carbapenem-resistant Klebsiella pneumoniae is a major source of concern. K. pneumoniae carbapenemases (KPCs) and carbapenemases of the oxacillinase-48 (OXA-48) type have been reported worldwide. New Delhi metallo-β-lactamase (NDM) carbapenemases were originally identified in Sweden in 2008 and have since spread worldwide rapidly. In this review, we summarize the epidemiology of K. pneumoniae producing three carbapenemases (KPCs, NDMs, and OXA-48-like). Although the prevalence of each resistant strain varies geographically, K. pneumoniae producing KPCs, NDMs, and OXA-48-like carbapenemases have become rapidly disseminated. In addition, we used recently published molecular and genetic studies to analyze the mechanisms by which these three carbapenemases, and major K. pneumoniae clones such as ST258 and ST11, have become globally prevalent. Because carbapenemase-producing K. pneumoniae are often resistant to most β-lactam antibiotics and many other non-β-lactam molecules, the therapeutic options available to treat infections with these strains are limited to colistin, polymyxin B, fosfomycin, tigecycline, and selected aminoglycosides. Although combination therapy has been recommended for the treatment of severe carbapenemase-producing K. pneumoniae infections, the clinical evidence for this strategy is currently limited, and well-designed randomized controlled trials will be required to establish the most effective treatment regimen. Moreover, because rapid and accurate identification of the carbapenemase type found in K. pneumoniae may be difficult to achieve through phenotypic antibiotic susceptibility tests, novel molecular detection techniques are currently being developed. PMID:27379038
Gaussian mixture models as flux prediction method for central receivers
NASA Astrophysics Data System (ADS)
Grobler, Annemarie; Gauché, Paul; Smit, Willie
2016-05-01
Flux prediction methods are crucial to the design and operation of central receiver systems. Current methods, such as the circular and elliptical (bivariate) Gaussian prediction methods, are often used in field layout design and aiming strategies. For experimental or small central receiver systems, the flux profile of a single heliostat often deviates significantly from the circular and elliptical Gaussian models. Therefore, a novel method of flux prediction was developed by fitting Gaussian mixture models to flux profiles produced by flux measurement or ray tracing. A method was also developed to predict the Gaussian mixture model parameters of a single heliostat at a given time using image processing. Recording the predicted parameters in a database ensures that more accurate predictions can be made in a shorter time frame.
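As a hedged sketch of the underlying idea rather than the authors' implementation, a Gaussian mixture model can be fitted to sampled flux points and then evaluated as a smooth flux-density predictor. The example below uses scikit-learn's GaussianMixture, and all data and parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Illustrative stand-in for a measured/ray-traced flux profile: x-y hit
# positions on the receiver, drawn from an asymmetric two-lobe distribution.
hits = np.vstack([
    rng.normal([0.0, 0.0], [0.20, 0.10], (4000, 2)),
    rng.normal([0.25, 0.05], [0.10, 0.15], (2000, 2)),
])

# Fit a Gaussian mixture; the number of components would be chosen by
# model selection (e.g. BIC) in practice.
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(hits)

# Evaluate the fitted model as a normalized flux-density map on a grid.
xs, ys = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100))
grid = np.column_stack([xs.ravel(), ys.ravel()])
flux_density = np.exp(gmm.score_samples(grid)).reshape(xs.shape)
print(flux_density.max())
```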
NASA Astrophysics Data System (ADS)
Yi, B.; Rao, B.; Ding, Y. H.; Li, M.; Xu, H. Y.; Zhang, M.; Zhuang, G.; Pan, Y.
2014-11-01
The dynamic resonant magnetic perturbation (DRMP) system has been developed for the J-TEXT tokamak to study the interaction between the rotating perturbation magnetic field and the plasma. When the DRMP coils are energized by two-phase sinusoidal currents of the same frequency, a 2/1 rotating resonant magnetic perturbation component is generated. At the same time, however, a small perturbation component rotating in the opposite direction is also produced because of control errors in the currents. This small component adversely affects the experimental investigations. The mode spectrum of the generated DRMP can, in fact, be optimized through accurate control of the phase difference between the two currents. In this paper, a new phase control method based on a novel all-digital phase-locked loop (ADPLL) is proposed. The proposed method features accurate phase control and flexible phase adjustment. Modeling and analysis of the proposed ADPLL are presented to guide the design of the phase controller parameters in order to obtain better performance. Test results verify the effectiveness of the ADPLL and the validity of applying the method to the DRMP system.
Determining the mean hydraulic gradient of ground water affected by tidal fluctuations
Serfes, Michael E.
1991-01-01
Tidal fluctuations in surface-water bodies produce progressive pressure waves in adjacent aquifers. As these pressure waves propagate inland, ground-water levels and hydraulic gradients continuously fluctuate, creating a situation where a single set of water-level measurements cannot be used to accurately characterize ground-water flow. For example, a time series of water levels measured in a confined aquifer in Atlantic City, New Jersey, showed that the hydraulic gradient ranged from 0.01 to 0.001, with a 22-degree change in direction, during a tidal day of approximately 25 hours. At any point where ground water fluctuates tidally, the magnitude and direction of the hydraulic gradient fluctuate about the mean or regional hydraulic gradient. The net effect of these fluctuations on ground-water flow can be determined using the mean hydraulic gradient, which can be calculated by comparing mean ground- and surface-water elevations. Filtering methods traditionally used to determine daily mean sea level can be similarly applied to ground water to determine mean levels. Method (1) uses 71 consecutive hourly water-level observations to accurately determine the mean level. Method (2) approximates the mean level using only 25 consecutive hourly observations; however, a small error is associated with this method.
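A minimal sketch of the idea behind Method (1), averaging hourly observations over 71 hours so that the tidal signal largely cancels and the mean level remains; the simple unweighted average shown here is an assumption, since the published filter may weight the observations differently.

```python
import numpy as np

def mean_level_71h(hourly_levels):
    """Estimate the mean (tide-free) water level from 71 consecutive
    hourly observations by simple averaging; 71 hours spans close to
    three tidal days, so diurnal and semidiurnal fluctuations largely
    cancel."""
    h = np.asarray(hourly_levels, dtype=float)
    if h.size != 71:
        raise ValueError("expected 71 consecutive hourly observations")
    return h.mean()

# Synthetic example: a 0.3 m semidiurnal tide superimposed on a 5.00 m mean level
t = np.arange(71)
levels = 5.00 + 0.3 * np.sin(2 * np.pi * t / 12.42)
print(round(mean_level_71h(levels), 3))   # close to 5.00
```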
Otero-Millan, Jorge; Roberts, Dale C; Lasker, Adrian; Zee, David S; Kheradmand, Amir
2015-01-01
Torsional eye movements are rotations of the eye around the line of sight. Measuring torsion is essential to understanding how the brain controls eye position and how it creates a veridical perception of object orientation in three dimensions. Torsion is also important for diagnosis of many vestibular, neurological, and ophthalmological disorders. Currently, there are multiple devices and methods that produce reliable measurements of horizontal and vertical eye movements. Measuring torsion, however, noninvasively and reliably has been a longstanding challenge, with previous methods lacking real-time capabilities or suffering from intrusive artifacts. We propose a novel method for measuring eye movements in three dimensions using modern computer vision software (OpenCV) and concepts of iris recognition. To measure torsion, we use template matching of the entire iris and automatically account for occlusion of the iris and pupil by the eyelids. The current setup operates binocularly at 100 Hz with noise <0.1° and is accurate within 20° of gaze to the left, to the right, and up and 10° of gaze down. This new method can be widely applicable and fill a gap in many scientific and clinical disciplines.
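A much-simplified sketch of the core idea follows: the iris is unwrapped to polar coordinates with OpenCV so that torsion becomes a shift along the angular axis, which template matching can localize. The pupil center, radii, and file names are assumptions, and the published system additionally handles eyelid and pupil occlusion and runs in real time at 100 Hz, none of which is reproduced here.

```python
import cv2

def torsion_degrees(reference_eye, current_eye, center, max_radius):
    """Estimate ocular torsion as the angular shift (in degrees) that best
    aligns the polar-unwrapped iris of the current frame with a reference
    frame. Grayscale uint8 eye images are assumed; 'center' is the pupil
    center (x, y) and 'max_radius' the outer iris radius in pixels."""
    dsize = (int(max_radius), 360)     # columns = radius, rows = angle (1 deg/row)

    def unwrap(img):
        return cv2.warpPolar(img, dsize, center, max_radius, cv2.WARP_POLAR_LINEAR)

    ref, cur = unwrap(reference_eye), unwrap(current_eye)

    # Template: the middle 180 degrees of the reference iris band; sliding it
    # along the angular axis of the current frame locates the rotation.
    template = ref[90:270, :]
    scores = cv2.matchTemplate(cur, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(scores)
    return max_loc[1] - 90             # angular offset of the best match, in degrees

# Example usage with two frames loaded as grayscale images (file names assumed):
# ref_img = cv2.imread("eye_reference.png", cv2.IMREAD_GRAYSCALE)
# cur_img = cv2.imread("eye_current.png", cv2.IMREAD_GRAYSCALE)
# print(torsion_degrees(ref_img, cur_img, center=(320, 240), max_radius=120))
```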
NASA Astrophysics Data System (ADS)
Gholami, Ali; Golestaneh, Mahshid; Andalib, Zeinab
2018-03-01
Cocamidopropyl betaine (CAPB) is a zwitterionic surfactant that is synthesized from coconut oil and usually supplied in the form of an aqueous solution of 25-37% w/w. In this study, a novel method based on UV-visible spectroscopy is developed for the accurate determination of CAPB synthesized from coconut oil. Eriochrome Black T (EBT) was added to CAPB as a specific color indicator, and a red shift and color change were observed. This shift increases the wavelength selectivity of the method. The change in color intensity depends on the concentration of CAPB, so the concentration can be determined by measuring the absorbance of a solution containing CAPB. After optimizing all the effective parameters, CAPB was detected in commercial real samples. Using the proposed approach, the limit of quantification (LOQ) and relative standard deviation (RSD) were approximately 4.30 × 10⁻⁵ M and 4.8%, respectively. None of the unreacted materials or by-products produced in the synthesis of CAPB showed any interference in the determination of CAPB. This shows that the proposed method is specific and accurate, and can potentially be used for the quantitative determination of CAPB in commercial samples with satisfactory results.
NASA Astrophysics Data System (ADS)
Barsoom, B. N.; Abdelsamad, A. M. E.; Adib, N. M.
2006-07-01
A simple and accurate spectrophotometric method for the determination of arbutin (glycosylated hydroquinone) is described. It is based on the oxidation of arbutin by periodate in the presence of iodate. Excess periodate causes the liberation of iodine at pH 8.0. The unreacted periodate is determined by measuring the liberated iodine spectrophotometrically in the wavelength range 300-500 nm. A calibration curve was constructed for more accurate results, and the correlation coefficient of the linear regression analysis was -0.9778. The precision of the method was better than 6.17% R.S.D. (n = 3). Regression analysis of the Beer-Lambert plot shows good correlation in the concentration range 25-125 μg/ml. The identification limit was determined to be 25 μg/ml. A detailed study of the reaction conditions was carried out, including the effects of changing pH, time, temperature, and volume of periodate. The validity of the proposed method was tested by analyzing pure and authentic samples containing arbutin; the average percent recovery was 100.86%. An alternative method is also proposed, which involves a complexation reaction between arbutin and ferric chloride solution. The resulting yellowish-green complex was determined spectrophotometrically.
Performance of three reflectance calibration methods for airborne hyperspectral spectrometer data.
Miura, Tomoaki; Huete, Alfredo R
2009-01-01
In this study, the performances and accuracies of three methods for converting airborne hyperspectral spectrometer data to reflectance factors were characterized and compared. The "reflectance mode" (RM) method, which calibrates a spectrometer against a white reference panel prior to mounting on an aircraft, resulted in spectral reflectance retrievals that were biased and distorted. The magnitudes of these bias errors and distortions varied significantly, depending on the time of day and the length of the flight campaign. The "linear-interpolation" (LI) method, which converts airborne spectrometer data by taking a ratio against reference values linearly interpolated between the preflight and post-flight reference panel readings, resulted in precise but inaccurate reflectance retrievals. These reflectance spectra were not distorted, but were subject to bias errors of varying magnitude depending on the flight duration. The "continuous panel" (CP) method uses a multi-band radiometer to obtain continuous measurements over a reference panel throughout the flight campaign, in order to adjust the magnitudes of the linearly interpolated reference values from the preflight and post-flight panel readings. The CP method was found to be the most accurate and reliable of the three reflectance calibration methods, and its performance in retrieving accurate reflectance factors was consistent throughout the day and across flight durations. Based on the dataset analyzed in this study, the uncertainty of the CP method was estimated to be 0.0025 ± 0.0005 reflectance units for wavelength regions not affected by atmospheric absorption. The RM method can produce reasonable results only for a very short flight (e.g., < 15 minutes) conducted around local solar noon, and the flight duration should be kept shorter than 30 minutes for the LI method to produce results with reasonable accuracy. An important advantage of the CP method is that it can be used for long-duration flight campaigns (e.g., 1-2 hours). Although this study focused on reflectance calibration of airborne spectrometer data, the methods evaluated and the results obtained are directly applicable to ground spectrometer measurements.
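A hedged sketch of the linear-interpolation (LI) conversion described above: the reference-panel reading at each observation time is linearly interpolated between the preflight and post-flight panel readings and used to ratio the target signal into a reflectance factor. Variable names and numbers are illustrative, and the operational processing chain involves additional calibration steps.

```python
import numpy as np

def li_reflectance(target_signal, t_obs, panel_pre, t_pre, panel_post, t_post):
    """Linear-interpolation (LI) reflectance conversion: interpolate the
    reference-panel signal to the observation time and ratio the target
    signal against it (per wavelength band)."""
    w = (t_obs - t_pre) / (t_post - t_pre)
    panel_at_obs = (1.0 - w) * panel_pre + w * panel_post
    return target_signal / panel_at_obs

# Illustrative single-band example: panel signal drifts during a 60-minute flight
pre, post = 1000.0, 1100.0          # panel readings at t = 0 and t = 60 min (assumed)
target = 420.0                      # target reading at t = 20 min (assumed)
print(li_reflectance(target, 20.0, pre, 0.0, post, 60.0))  # ~0.407 reflectance factor
```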
Highly accurate surface maps from profilometer measurements
NASA Astrophysics Data System (ADS)
Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.
2013-04-01
Many aspheres and free-form optical surfaces are measured using a single-line-trace profilometer, which is limiting because accurate 3D corrections are not possible with a single trace. We show a method to produce an accurate, fully 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains low-order form error only (the first 36 Zernike terms). The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak-to-valley number. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and, to a small extent, by choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements: the part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, using an indicator, but the part must be edged to a clean diameter.
[An individual facial shield for a sportsman with an orofacial injury].
de Baat, C; Peters, R; van Iperen-Keiman, C M; de Vleeschouwer, M
2005-05-01
Facial shields are used when practising contact sports, high-speed sports, sports using hard balls, sticks or bats, sports using protective shields or covers, and sports played within hard boardings around the sports ground. Examples of facial shields are the commercially available helmets standardised per branch of sport. The fabrication of individual protective shields is primarily restricted to mouth guards. In individual cases a more extensive facial shield is required, for instance in the case of a surgically stabilised facial bone fracture. In order to fabricate an extensive individual facial shield, a highly accurate model of the anterior part of the head is required. Such a model can be provided by making an impression of the face, which is then poured in dental stone. Another method is to produce a stereolithographic model using computed tomography or magnetic resonance imaging. On the accurate model the facial shield can be designed and fabricated from a strictly safe material, such as polyvinylchloride or polycarbonate.
King, Andrew W; Baskerville, Adam L; Cox, Hazel
2018-03-13
An implementation of the Hartree-Fock (HF) method using a Laguerre-based wave function is described and used to accurately study the ground state of two-electron atoms in the fixed-nucleus approximation and, by comparison with fully correlated (FC) energies, to determine accurate electron correlation energies. A variational parameter A is included in the wave function and is shown to rapidly increase the convergence of the energy. The one-electron integrals are solved by series solution and an analytical form is found for the two-electron integrals. This methodology is used to produce accurate wave functions, energies, and expectation values for the helium isoelectronic sequence, including at low nuclear charge just prior to electron detachment. Additionally, the critical nuclear charge for binding two electrons within the HF approach is calculated and determined to be Z_C^HF = 1.031 177 528. This article is part of the theme issue 'Modern theoretical chemistry'. © 2018 The Author(s).
Measuring signal-to-noise ratio in partially parallel imaging MRI
Goerner, Frank L.; Clarke, Geoffrey D.
2011-01-01
Purpose: To assess five different methods of signal-to-noise ratio (SNR) measurement for partially parallel imaging (PPI) acquisitions. Methods: Measurements were performed on a spherical phantom and three volunteers using a multichannel head coil on a clinical 3T MRI system to produce echo planar, fast spin echo, gradient echo, and balanced steady state free precession image acquisitions. Two different PPI acquisitions, the generalized autocalibrating partially parallel acquisition algorithm and modified sensitivity encoding, with acceleration factors (R) of 2-4, were evaluated and compared to nonaccelerated acquisitions. Five standard SNR measurement techniques were investigated, and Bland–Altman analysis was used to determine agreement between the various SNR methods. The estimated g-factor values associated with each method of SNR calculation and PPI reconstruction method were also assessed for the effects on SNR of reconstruction method, phase encoding direction, and R-value. Results: Only two SNR measurement methods produced g-factors in agreement with theoretical expectations (g ≥ 1). Bland–Altman tests demonstrated that these two methods also gave the most similar results relative to the other three measurements. R-value was the only one of the three factors considered that showed a significant influence on SNR changes. Conclusions: Non-signal methods used in SNR evaluation do not produce results consistent with expectations in the investigated PPI protocols. Two of the methods studied provided the most accurate and useful results. Of these two, it is recommended that the image subtraction method be used for SNR calculations when evaluating PPI protocols, owing to its relative accuracy and ease of implementation. PMID:21978049
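For reference, the image-subtraction approach recommended above is commonly implemented by acquiring two identical images back to back and estimating noise from their difference. The sketch below is a generic two-acquisition (NEMA-style) estimate; ROI choice and scaling conventions vary between sites, so treat it as an assumption rather than the study's exact procedure.

```python
import numpy as np

def snr_subtraction(image1, image2, roi):
    """Two-acquisition (subtraction) SNR estimate: mean signal in an ROI of
    the averaged image divided by the noise standard deviation measured in
    the same ROI of the difference image (the sqrt(2) factor accounts for
    noise adding in the subtraction)."""
    signal = 0.5 * (image1[roi] + image2[roi]).mean()
    noise = (image1[roi] - image2[roi]).std() / np.sqrt(2.0)
    return signal / noise

# Synthetic illustration: uniform phantom signal 100 with Gaussian noise sigma = 5
rng = np.random.default_rng(0)
truth = np.full((128, 128), 100.0)
img1 = truth + rng.normal(0, 5, truth.shape)
img2 = truth + rng.normal(0, 5, truth.shape)
roi = (slice(32, 96), slice(32, 96))
print(round(snr_subtraction(img1, img2, roi), 1))   # close to 100 / 5 = 20
```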
Validation of the ANSR Listeria method for detection of Listeria spp. in environmental samples.
Wendorf, Michael; Feldpausch, Emily; Pinkava, Lisa; Luplow, Karen; Hosking, Edan; Norton, Paul; Biswas, Preetha; Mozola, Mark; Rice, Jennifer
2013-01-01
ANSR Listeria is a new diagnostic assay for detection of Listeria spp. in sponge or swab samples taken from a variety of environmental surfaces. The method is an isothermal nucleic acid amplification assay based on the nicking enzyme amplification reaction technology. Following single-step sample enrichment for 16-24 h, the assay is completed in 40 min, requiring only simple instrumentation. In inclusivity testing, 48 of 51 Listeria strains tested positive, with only the three strains of L. grayi producing negative results. Further investigation showed that L. grayi is reactive in the ANSR assay, but its ability to grow under the selective enrichment conditions used in the method is variable. In exclusivity testing, 32 species of non-Listeria, Gram-positive bacteria all produced negative ANSR assay results. Performance of the ANSR method was compared to that of the U.S. Department of Agriculture-Food Safety and Inspection Service reference culture procedure for detection of Listeria spp. in sponge or swab samples taken from inoculated stainless steel, plastic, ceramic tile, sealed concrete, and rubber surfaces. Data were analyzed using Chi-square and probability of detection models. Only one surface, stainless steel, showed a significant difference in performance between the methods, with the ANSR method producing more positive results. Results of internal trials were supported by findings from independent laboratory testing. The ANSR Listeria method can be used as an accurate, rapid, and simple alternative to standard culture methods for detection of Listeria spp. in environmental samples.
SIMS: A Hybrid Method for Rapid Conformational Analysis
Gipson, Bryant; Moll, Mark; Kavraki, Lydia E.
2013-01-01
Proteins are at the root of many biological functions, often performing complex tasks as the result of large changes in their structure. Describing the exact details of these conformational changes, however, remains a central challenge for computational biology due to the enormous computational requirements of the problem. This has engendered the development of a rich variety of useful methods designed to answer specific questions at different levels of spatial, temporal, and energetic resolution. These methods fall largely into two classes: physically accurate but computationally demanding methods, and fast, approximate methods. We introduce here a new hybrid modeling tool, the Structured Intuitive Move Selector (SIMS), designed to bridge the divide between these two classes while allowing the benefits of both to be seamlessly integrated into a single framework. This is achieved by applying a modern motion planning algorithm, borrowed from the field of robotics, in tandem with a well-established protein modeling library. SIMS can combine precise energy calculations with approximate or specialized conformational sampling routines to produce rapid, yet accurate, analysis of the large-scale conformational variability of protein systems. Several key advancements are shown, including the abstract use of generically defined moves (conformational sampling methods) and an expansive probabilistic conformational exploration. We present three example problems to which SIMS is applied and demonstrate a rapid solution for each. These include the automatic determination of "active" residues for the hinge-based system Cyanovirin-N, exploring conformational changes involving long-range coordinated motion between non-sequential residues in Ribose-Binding Protein, and the rapid discovery of a transient conformational state of Maltose-Binding Protein previously only determined by Molecular Dynamics. For all cases we provide energetic validations using well-established energy fields, demonstrating this framework as a fast and accurate tool for the analysis of a wide range of protein flexibility problems. PMID:23935893
El-Yazbi, F A; Abdine, H H; Shaalan, R A
1999-06-01
Three sensitive and accurate methods are presented for the determination of benazepril in its dosage forms. The first method uses derivative spectrophotometry to resolve the interference due to the formulation matrix. The second method depends on the color formed by the reaction of the drug with bromocresol green (BCG). The third utilizes the reaction of benazepril, after alkaline hydrolysis, with 3-methyl-2-benzothiazolinone hydrazone (MBTH), where the produced color is measured at 593 nm. The latter method was extended to develop a stability-indicating method for this drug. Moreover, the derivative method was applied to the determination of benazepril in its combination with hydrochlorothiazide. The proposed methods were applied for the analysis of benazepril in the pure form and in tablets. The coefficient of variation was less than 2%.
Baradez, Marc-Olivier; Marshall, Damian
2011-01-01
The transition from traditional culture methods towards bioreactor based bioprocessing to produce cells in commercially viable quantities for cell therapy applications requires the development of robust methods to ensure the quality of the cells produced. Standard methods for measuring cell quality parameters such as viability provide only limited information making process monitoring and optimisation difficult. Here we describe a 3D image-based approach to develop cell distribution maps which can be used to simultaneously measure the number, confluency and morphology of cells attached to microcarriers in a stirred tank bioreactor. The accuracy of the cell distribution measurements is validated using in silico modelling of synthetic image datasets and is shown to have an accuracy >90%. Using the cell distribution mapping process and principal component analysis we show how cell growth can be quantitatively monitored over a 13 day bioreactor culture period and how changes to manufacture processes such as initial cell seeding density can significantly influence cell morphology and the rate at which cells are produced. Taken together, these results demonstrate how image-based analysis can be incorporated in cell quality control processes facilitating the transition towards bioreactor based manufacture for clinical grade cells. PMID:22028809
Andrus, E. Cowles; Carter, Edward P.
1930-01-01
1. A method is described for determining the refractory period of the dog's auricle during the normal sinus rhythm. The advantages of the method are: (a) The total stimulating effects of repeated induction shocks are avoided. (b) The action current is recorded from a point one millimeter or less from the point of stimulation. (c) Alterations in the spontaneous rate of the auricle do not interfere with the accurate determination of the refractory period. 2. The values obtained for the normal refractory period and the changes produced by atropine and by stimulation of the vagus agree closely with those of previous observers. 3. The automatic features of the method make possible the determination of the refractory period under adrenalin. This drug brings about a distinct shortening of the refractory period but less than that produced by stimulation of the vagus. 4. During vagal stimulation a single induction shock, introduced soon after the end of the refractory period, frequently produces auricular fibrillation. The cause of this irregularity is discussed and its relation to clinical auricular fibrillation is suggested. PMID:19869696
Holes in the ocean: Filling voids in bathymetric lidar data
NASA Astrophysics Data System (ADS)
Coleman, John B.; Yao, Xiaobai; Jordan, Thomas R.; Madden, Marguerite
2011-04-01
The mapping of coral reefs may be efficiently accomplished by the use of airborne laser bathymetry. However, there are often data holes within the bathymetry data which must be filled in order to produce a complete representation of the coral habitat. This study presents a method to fill these data holes through data merging and interpolation. The method first merges ancillary digital sounding data with the airborne laser bathymetry data in order to populate data points in all areas, particularly in the data holes. An elevation surface is then generated by spatial interpolation from the merged data points. We conducted a case study of Dry Tortugas National Park in Florida and produced an enhanced digital elevation model of the seafloor with this method. Four interpolation techniques, including kriging, natural neighbor, spline, and inverse distance weighted, were implemented and evaluated on their ability to accurately and realistically represent the shallow-water bathymetry of the study area. The natural neighbor technique was found to be the most effective. Finally, this enhanced digital elevation model was used in conjunction with Ikonos imagery to produce a complete, three-dimensional visualization of the study area.
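A minimal illustration of the hole-filling step using SciPy's scattered-data interpolation; natural-neighbor interpolation, which the study found most effective, is not available in SciPy, so linear interpolation with a nearest-neighbour fallback stands in here purely for illustration, and all data are synthetic.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)

# Scattered depth soundings merged from lidar and ancillary sounding data
# (synthetic stand-ins): x, y positions and depths z.
xy = rng.uniform(0, 1000, (5000, 2))
z = -5.0 - 3.0 * np.sin(xy[:, 0] / 200.0) * np.cos(xy[:, 1] / 300.0)

# Regular grid onto which the merged points are interpolated to fill voids.
gx, gy = np.meshgrid(np.linspace(0, 1000, 201), np.linspace(0, 1000, 201))
dem = griddata(xy, z, (gx, gy), method="linear")

# Cells outside the convex hull of the data remain NaN and can be filled
# with a nearest-neighbour pass if a complete surface is required.
nan_mask = np.isnan(dem)
dem[nan_mask] = griddata(xy, z, (gx[nan_mask], gy[nan_mask]), method="nearest")
print(dem.shape, float(dem.min()), float(dem.max()))
```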
Computing the absolute Gibbs free energy in atomistic simulations: Applications to defects in solids
NASA Astrophysics Data System (ADS)
Cheng, Bingqing; Ceriotti, Michele
2018-02-01
The Gibbs free energy is the fundamental thermodynamic potential underlying the relative stability of different states of matter under constant-pressure conditions. However, computing this quantity from atomic-scale simulations is far from trivial, so the potential energy of a system is often used as a proxy. In this paper, we use a combination of thermodynamic integration methods to accurately evaluate the Gibbs free energies associated with defects in crystals, including the vacancy formation energy in bcc iron and the stacking fault energy in fcc nickel, iron, and cobalt. We quantify the importance of entropic and anharmonic effects in determining the free energies of defects at high temperatures, and show that the potential energy approximation as well as the harmonic approximation may produce inaccurate or even qualitatively wrong results. Our calculations demonstrate the necessity of employing accurate free energy methods, such as thermodynamic integration, to estimate the stability of crystallographic defects at high temperatures.
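A schematic of the thermodynamic-integration idea referenced above: a free-energy difference is obtained by integrating the ensemble average of dU/dlambda along a coupling parameter lambda from 0 to 1. The sketch below uses synthetic numbers purely to illustrate the quadrature step and is not the authors' workflow.

```python
import numpy as np

# Ensemble averages of dU/dlambda sampled at a set of coupling values
# (synthetic values standing in for averages measured in separate simulations).
lam = np.linspace(0.0, 1.0, 11)
dU_dlam = 2.0 - 1.5 * lam + 0.3 * lam**2          # illustrative smooth dependence

# Thermodynamic integration: Delta F = integral_0^1 <dU/dlambda> dlambda,
# evaluated here with the trapezoidal rule.
delta_F = np.sum(0.5 * (dU_dlam[1:] + dU_dlam[:-1]) * np.diff(lam))
print(f"Delta F ~ {delta_F:.3f} (energy units)")   # analytic value is 1.35
```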
Planetary-scale surface water detection from space
NASA Astrophysics Data System (ADS)
Donchyts, G.; Baart, F.; Winsemius, H.; Gorelick, N.
2017-12-01
Accurate, efficient, and high-resolution methods of surface water detection are needed for better water management. Datasets on surface water extent and dynamics are crucial for a better understanding of natural and human-made processes, and serve as input data for hydrological and hydraulic models. In spite of considerable progress in the harmonization of freely available satellite data, producing accurate and efficient higher-level surface water data products remains very challenging. This presentation will provide an overview of existing methods for surface water extent and change detection from multitemporal and multi-sensor satellite imagery. An algorithm to detect surface water changes from multi-temporal satellite imagery will be demonstrated, as well as its open-source implementation (http://aqua-monitor.deltares.nl). This algorithm was used to estimate global surface water changes at high spatial resolution, including changes driven by climate change, land reclamation, reservoir construction and decommissioning, erosion and accretion, and many other processes. This presentation will demonstrate how open satellite data and open platforms such as Google Earth Engine have helped with this research.
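The abstract does not spell out the detection algorithm, so as a generic, hedged illustration of optical surface-water detection, a common baseline is a per-pixel spectral water index with a threshold, for example the normalized difference water index (NDWI) computed from green and near-infrared reflectance. The band arrays and the threshold of 0 below are assumptions.

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """Baseline surface-water detection: NDWI = (green - NIR) / (green + NIR),
    with pixels above the threshold classified as water. 'green' and 'nir'
    are reflectance arrays of identical shape."""
    green = green.astype(float)
    nir = nir.astype(float)
    ndwi = (green - nir) / np.clip(green + nir, 1e-6, None)
    return ndwi > threshold

# Tiny synthetic example: water pixels reflect more green than near-infrared.
green = np.array([[0.10, 0.30], [0.08, 0.25]])
nir = np.array([[0.30, 0.05], [0.28, 0.04]])
print(ndwi_water_mask(green, nir))
# [[False  True]
#  [False  True]]
```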
Prapamontol, Tippawan; Sutan, Kunrunya; Laoyang, Sompong; Hongsibsong, Surat; Lee, Grace; Yano, Yukiko; Hunter, Ronald Elton; Ryan, P Barry; Barr, Dana Boyd; Panuwet, Parinya
2014-01-01
We report two analytical methods for the measurement of dialkylphosphate (DAP) metabolites of organophosphate pesticides in human urine. These methods were independently developed/modified and implemented in two separate laboratories and cross validated. The aim was to develop simple, cost effective, and reliable methods that could use available resources and sample matrices in Thailand and the United States. While several methods already exist, we found that direct application of these methods required modification of sample preparation and chromatographic conditions to render accurate, reliable data. The problems encountered with existing methods were attributable to urinary matrix interferences, and differences in the pH of urine samples and reagents used during the extraction and derivatization processes. Thus, we provide information on key parameters that require attention during method modification and execution that affect the ruggedness of the methods. The methods presented here employ gas chromatography (GC) coupled with either flame photometric detection (FPD) or electron impact ionization-mass spectrometry (EI-MS) with isotopic dilution quantification. The limits of detection were reported from 0.10ng/mL urine to 2.5ng/mL urine (for GC-FPD), while the limits of quantification were reported from 0.25ng/mL urine to 2.5ng/mL urine (for GC-MS), for all six common DAP metabolites (i.e., dimethylphosphate, dimethylthiophosphate, dimethyldithiophosphate, diethylphosphate, diethylthiophosphate, and diethyldithiophosphate). Each method showed a relative recovery range of 94-119% (for GC-FPD) and 92-103% (for GC-MS), and relative standard deviations (RSD) of less than 20%. Cross-validation was performed on the same set of urine samples (n=46) collected from pregnant women residing in the agricultural areas of northern Thailand. The results from split sample analysis from both laboratories agreed well for each metabolite, suggesting that each method can produce comparable data. In addition, results from analyses of specimens from the German External Quality Assessment Scheme (G-EQUAS) suggested that the GC-FPD method produced accurate results that can be reasonably compared to other studies. Copyright © 2013 Elsevier GmbH. All rights reserved.
NASA Astrophysics Data System (ADS)
Andryani, Diyah Septi; Bustamam, Alhadi; Lestari, Dian
2017-03-01
Clustering aims to classify different patterns into groups called clusters. In this clustering method, we use n-mer frequencies to calculate the distance matrix, which is considered more accurate than using DNA alignment. The clustering results can be used to discover biologically important sub-sections and groups of genes. Many clustering methods have been developed; hard clustering methods are considered less accurate than fuzzy clustering methods, especially for data containing outliers. Among fuzzy clustering methods, fuzzy c-means is one of the best known for its accuracy and simplicity. Fuzzy c-means clustering uses a membership function variable, which refers to how likely the data are to belong to a cluster, and works by minimizing an objective function. The membership function parameter acts as a weighting factor and is also called the fuzzifier. In this study we implement hybrid clustering using fuzzy c-means and a divisive algorithm, which can improve the accuracy of cluster membership compared to the traditional partitional approach alone. Fuzzy c-means is used in the first step to find a partition; a divisive algorithm then runs in the second step to find sub-clusters and the dendrogram of the phylogenetic tree. The best number of clusters is determined by the minimum value of the Davies-Bouldin Index (DBI) of the clustering results. The results show that the method introduced in this paper outperforms other partitioning methods. We found 3 clusters with a DBI value of 1.126628 in the first clustering step. Moreover, the second clustering step always produces smaller DBI values than the first step alone, indicating that the hybrid approach yields better clustering performance in terms of DBI.
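A compact sketch of the standard fuzzy c-means updates together with DBI-based selection of the cluster count on the hard assignments; scikit-learn supplies the DBI. The data, the fuzzifier m = 2, and the candidate cluster counts are illustrative assumptions, and the divisive second step of the hybrid method is not shown.

```python
import numpy as np
from sklearn.metrics import davies_bouldin_score

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Standard fuzzy c-means: alternate center and membership updates that
    minimize sum_i sum_k u_ik^m * ||x_i - center_k||^2, with fuzzifier m."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))              # initial memberships
    for _ in range(n_iter):
        centers = (U**m).T @ X / (U**m).sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
    return U, centers

# Illustrative data: three well-separated 2-D blobs standing in for n-mer
# frequency vectors.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(mu, 0.3, (100, 2)) for mu in ([0, 0], [3, 0], [0, 3])])

# Choose the number of clusters by the minimum Davies-Bouldin index of the
# hard assignments derived from the fuzzy memberships.
for c in (2, 3, 4, 5):
    U, _ = fuzzy_c_means(X, c)
    labels = U.argmax(axis=1)
    print(c, round(davies_bouldin_score(X, labels), 3))
```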
Accurate sparse-projection image reconstruction via nonlocal TV regularization.
Zhang, Yi; Zhang, Weihua; Zhou, Jiliu
2014-01-01
Sparse-projection image reconstruction is a useful approach to lowering the radiation dose; however, the incompleteness of the projection data degrades imaging quality. As a typical compressive sensing method, total variation has attracted great attention for this problem. Owing to its theoretical limitations, however, total variation produces blocky artifacts in smooth regions and blurs edges. To overcome this problem, in this paper we introduce nonlocal total variation into sparse-projection image reconstruction and formulate the minimization problem with a new nonlocal total variation norm. Qualitative and quantitative analyses of numerical as well as clinical results demonstrate the validity of the proposed method. Compared with other existing methods, our method more efficiently suppresses artifacts caused by low-rank reconstruction and better preserves structural information.
Vibratory high pressure coal feeder having a helical ramp
Farber, Gerald
1978-01-01
Apparatus and method for feeding powdered coal from a helical ramp into a high-pressure, heated reactor tube containing hydrogen, for hydrogenating the coal and/or for producing useful products from coal. To this end, the helical ramp is vibrated to feed the coal cleanly at an accurately controlled rate in a simple, reliable, and trouble-free manner that eliminates complicated and expensive screw feeders, and/or complicated and expensive seals, bearings, and fully rotating parts.
Downscaling NASA Climatological Data to Produce Detailed Climate Zone Maps
NASA Technical Reports Server (NTRS)
Chandler, William S.; Hoell, James M.; Westberg, David J.; Whitlock, Charles H.; Zhang, Taiping; Stackhouse, P. W.
2011-01-01
The design of energy efficient sustainable buildings is heavily dependent on accurate long-term and near real-time local weather data. To varying degrees, the current meteorological networks over the globe have been used to provide these data, albeit often from sites far removed from the desired location. The national need is for access to weather and solar resource data accurate enough to develop preliminary building designs within a short proposal time limit, usually 60 days. The NASA Prediction Of Worldwide Energy Resource (POWER) project was established by NASA to provide industry-friendly access to globally distributed solar and meteorological data. As a result, the POWER web site (power.larc.nasa.gov) now provides global information on many renewable energy parameters and several buildings-related items, but at a relatively coarse resolution. This paper describes a method of downscaling NASA atmospheric assimilation model results to higher resolution and mapping those parameters to produce building climate zone maps using estimates of temperature and precipitation. The resulting distribution of climate zones for North America, with an emphasis on the Pacific Northwest and based on just one year of data, corresponds very well to the currently defined distribution. The method has the potential to provide a consistent procedure for deriving climate zone information on a global basis that can be assessed for variability and updated regularly.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakatsuji, Hiroshi, E-mail: h.nakatsuji@qcri.or.jp; Nakashima, Hiroyuki
The free-complement (FC) method is a general method for solving the Schrödinger equation (SE): the produced wave function has the potentially exact structure of the solution of the Schrödinger equation. The variables included are determined either by using the variational principle (FC-VP) or by imposing the local Schrödinger equations (FC-LSE) at a chosen set of sampling points. The latter method, referred to as the local Schrödinger equation (LSE) method, is integral-free and therefore applicable to any atom and molecule. The purpose of this paper is to formulate the basic theories of the LSE method and explain their basic features. First, we formulate three variants of the LSE method, the AB, HS, and H^TQ methods, and explain their properties. Then, the natures of the LSE methods are clarified in some detail using the simple examples of the hydrogen atom and the Hooke's atom. Finally, the ideas obtained in this study are applied to solving the SE of the helium atom highly accurately with the FC-LSE method. The results are very encouraging: we obtained the world's most accurate energy of the helium atom within sampling-type methodologies, comparable to those obtained with the FC-VP method. Thus, the FC-LSE method is an easy and yet powerful integral-free method for solving the Schrödinger equation of general atoms and molecules.
Singlet oxygen detection in biological systems: Uses and limitations
Koh, Eugene; Fluhr, Robert
2016-01-01
The study of singlet oxygen in biological systems is challenging in many ways. Singlet oxygen is a relatively unstable, ephemeral molecule, and its properties make it highly reactive with many biomolecules, making it difficult to quantify accurately. Several methods have been developed to study this elusive molecule, but most studies thus far have focused on conditions that produce relatively large amounts of singlet oxygen. However, more sensitive methods are required as one begins to explore the levels of singlet oxygen involved in signaling and regulatory processes. Here we discuss the various methods used in the study of singlet oxygen, and outline their uses and limitations. PMID:27231787
Applying the method of fundamental solutions to harmonic problems with singular boundary conditions
NASA Astrophysics Data System (ADS)
Valtchev, Svilen S.; Alves, Carlos J. S.
2017-07-01
The method of fundamental solutions (MFS) is known to produce highly accurate numerical results for elliptic boundary value problems (BVPs) with smooth boundary conditions, posed in analytic domains. However, due to the analyticity of the shape functions in its approximation basis, the MFS is usually disregarded when the boundary functions possess singularities. In this work we present a modification of the classical MFS which can be applied to the numerical solution of the Laplace BVP with Dirichlet boundary conditions exhibiting jump discontinuities. In particular, a set of harmonic functions with discontinuous boundary traces is added to the MFS basis. The accuracy of the proposed method is compared with the results from the classical MFS.
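A compact sketch of the classical MFS for an interior Laplace Dirichlet problem on the unit disk, which is the setting the modification above builds on; the source-circle radius, point counts, and test boundary data are assumptions, and the paper's enrichment with discontinuous harmonic functions is not reproduced here.

```python
import numpy as np

# Classical MFS for Laplace's equation in the unit disk with Dirichlet data:
# approximate u(x) ~= sum_j a_j * Phi(x, y_j), Phi(x, y) = -log|x - y| / (2*pi),
# with fictitious source points y_j placed on a circle outside the domain.

n_col, n_src = 200, 80
t_col = 2 * np.pi * np.arange(n_col) / n_col
t_src = 2 * np.pi * np.arange(n_src) / n_src

boundary = np.column_stack([np.cos(t_col), np.sin(t_col)])          # collocation points
sources = 1.5 * np.column_stack([np.cos(t_src), np.sin(t_src)])     # source points

def phi(x, y):
    """Fundamental solution of the 2-D Laplacian, broadcast over point sets."""
    return -np.log(np.linalg.norm(x - y, axis=-1)) / (2 * np.pi)

# Smooth test Dirichlet data: g = x^2 - y^2 (harmonic, so it is also the solution)
g = boundary[:, 0]**2 - boundary[:, 1]**2

A = phi(boundary[:, None, :], sources[None, :, :])                  # collocation matrix
coeffs, *_ = np.linalg.lstsq(A, g, rcond=None)

# Evaluate the MFS approximation at an interior point and compare with the truth
p = np.array([0.3, 0.4])
u_mfs = phi(p[None, :], sources).dot(coeffs)
print(u_mfs, p[0]**2 - p[1]**2)   # should agree to high accuracy
```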
Shen, Shuwei; Wang, Haili; Xue, Yue; Yuan, Li; Zhou, Ximing; Zhao, Zuhua; Dong, Erbao; Liu, Bin; Liu, Wendong; Cromeens, Barrett; Adler, Brent; Besner, Gail; Xu, Ronald X
2017-09-08
Preoperative assessment of tissue anatomy and accurate surgical planning are crucial in conjoined twin separation surgery. We developed a new method that combines three-dimensional (3D) printing, assembling, and casting to produce high-fidelity anatomic models for surgical planning. The relevant anatomic features of the conjoined twins were captured by computed tomography (CT), classified into five organ groups, and reconstructed as five computer models. Among these organ groups, the skeleton was produced by fused deposition modeling (FDM) using acrylonitrile-butadiene-styrene. For the other four organ groups, shell molds were prepared by FDM and cast with silica gel to simulate soft tissues, with contrast-enhancement pigments added to simulate different CT and visual contrasts. The produced models were assembled, positioned firmly within a 3D-printed shell mold simulating the skin boundary, and cast with transparent silica gel. The resulting phantom was subjected to a further CT scan, which was compared with the patient data for fidelity evaluation. Data analysis showed that the produced model reproduced the geometric features of the original CT data with an overall mean deviation of less than 2 mm, indicating the clinical potential of this method for surgical planning in conjoined twin separation surgery.
de la Torre, Xavier; Colamonici, Cristiana; Curcio, Davide; Molaioni, Francesco; Pizzardi, Marta; Botrè, Francesco
2011-04-01
Nandrolone and/or its precursors are included in the World Anti-Doping Agency (WADA) list of prohibited substances and methods, and as such their use is banned in sport. 19-Norandrosterone (19-NA), the main metabolite of these compounds, can also be produced endogenously. The need to establish the origin of 19-NA in human urine samples obliges antidoping laboratories to use isotope ratio mass spectrometry coupled to gas chromatography (GC/C/IRMS). In this work a simple liquid chromatographic method without any additional derivatization step is proposed, which drastically simplifies the urine pretreatment procedure and yields extracts free of interferences, permitting precise and accurate IRMS analysis. The purity of the extracts was verified by parallel analysis by gas chromatography coupled to mass spectrometry with GC conditions identical to those of the GC/C/IRMS assay. The method has been validated according to ISO 17025 requirements (within-assay precision of ±0.3‰ and between-assay precision of ±0.4‰). The method has been tested with samples obtained after the administration of synthetic 19-norandrostenediol and with samples collected during pregnancy, when 19-NA is known to be produced endogenously. Twelve drugs and synthetic standards that can yield 19-NA through metabolism were shown to present δ(13)C values around -29‰ and were quite homogeneous (-28.8 ± 1.5; mean ± standard deviation), while endogenously produced 19-NA showed values comparable to other endogenously produced steroids, in the range -21 to -24‰, as already reported. The efficacy of the method was tested on real samples from routine antidoping analyses. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kilian-Meneghin, Josh; Xiong, Z.; Rudin, S.; Oines, A.; Bednarek, D. R.
2017-03-01
The purpose of this work is to evaluate methods for producing a library of 2D radiographic images to be correlated with clinical images obtained during a fluoroscopically guided procedure for automated patient-model localization. The localization algorithm will be used to improve the accuracy of the skin-dose map superimposed on the 3D patient model of the real-time Dose-Tracking-System (DTS). For the library, 2D images were generated from CT datasets of the SK-150 anthropomorphic phantom using two methods: Schmid's 3D-visualization tool and Plastimatch's digitally-reconstructed-radiograph (DRR) code. Those images, as well as a standard 2D radiographic image, were correlated to a 2D fluoroscopic image of a phantom, which represented the clinical fluoroscopic image, using the Corr2 function in Matlab. The Corr2 function takes two images and outputs the relative correlation between them, which is fed into the localization algorithm. Higher correlation means better alignment of the 3D patient model with the patient image. In this instance, it was determined that the localization algorithm will succeed when Corr2 returns a correlation of at least 50%. The 3D-visualization tool images returned 55-80% correlation relative to the fluoroscopic image, which was comparable to the correlation for the radiograph. The DRR images returned 61-90% correlation, again comparable to the radiograph. Both methods prove to be sufficient for the localization algorithm, and their images can be produced quickly; however, the DRR method produces more accurate grey levels. Using the DRR code, a library at varying angles can be produced for the localization algorithm.
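MATLAB's Corr2 (corr2) is simply the 2-D Pearson correlation coefficient between two equally sized images; an equivalent re-implementation in NumPy, generic rather than the study's code, is:

```python
import numpy as np

def corr2(a, b):
    """2-D correlation coefficient between two equally sized images,
    equivalent to MATLAB's corr2: the Pearson correlation of the
    mean-subtracted pixel values."""
    a = np.asarray(a, dtype=float) - np.mean(a)
    b = np.asarray(b, dtype=float) - np.mean(b)
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Illustration: an image correlates perfectly with a brightened copy of itself
rng = np.random.default_rng(4)
img = rng.random((64, 64))
print(corr2(img, 1.5 * img + 10.0))   # 1.0 (up to floating-point error)
```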
NASA Astrophysics Data System (ADS)
Hester, David Barry
The objective of this research was to develop methods for urban land cover analysis using QuickBird high spatial resolution satellite imagery. Such imagery has emerged as a rich commercially available remote sensing data source and has enjoyed high-profile broadcast news media and Internet applications, but methods of quantitative analysis have not been thoroughly explored. The research described here consists of three studies focused on the use of pan-sharpened 61-cm spatial resolution QuickBird imagery, the spatial resolution of which is the highest of any commercial satellite. In the first study, a per-pixel land cover classification method is developed for use with this imagery. This method utilizes a per-pixel classification approach to generate an accurate six-category high spatial resolution land cover map of a developing suburban area. The primary objective of the second study was to develop an accurate land cover change detection method for use with QuickBird land cover products. This work presents an efficient fuzzy framework for transforming map uncertainty into accurate and meaningful high spatial resolution land cover change analysis. The third study described here is an urban planning application of the high spatial resolution QuickBird-based land cover product developed in the first study. This work both meaningfully connects this exciting new data source to urban watershed management and makes an important empirical contribution to the study of suburban watersheds. Its analysis of residential roads and driveways as well as retail parking lots sheds valuable light on the impact of transportation-related land use on the suburban landscape. Broadly, these studies provide new methods for using state-of-the-art remote sensing data to inform land cover analysis and urban planning. These methods are widely adaptable and produce land cover products that are both meaningful and accurate. As additional high spatial resolution satellites are launched and the cost of high resolution imagery continues to decline, this research makes an important contribution to this exciting era in the science of remote sensing.
The prediction and mapping of geoidal undulations from GEOS-3 altimetry. [gravity anomalies]
NASA Technical Reports Server (NTRS)
Kearsley, W.
1978-01-01
From the adjusted altimeter data an approximation to the geoid height in ocean areas is obtained. Methods are developed to produce geoid maps in these areas. Geoid heights are obtained for grid points in the region to be mapped, and two of the parameters critical to the production of an accurate map are investigated. These are the spacing of the grid, which must be related to the half-wavelength of the altimeter signal whose amplitude is the desired accuracy of the contour; and the method adopted to predict the grid values. Least squares collocation was used to find geoid undulations on a 1 deg grid in the mapping area. Twenty maps, with their associated precisions, were produced and are included. These maps cover the Indian Ocean, Southwestern and Northeastern portions of the Pacific Ocean, and Southwest Atlantic and the U.S. Calibration Area.
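Least squares collocation predicts the undulation at each grid node as a covariance-weighted combination of surrounding altimeter-derived geoid heights. The sketch below illustrates the idea with an assumed Gaussian covariance model and illustrative noise and correlation-length parameters; the covariance function actually fitted in the study is not given in the abstract.

```python
import numpy as np

def gaussian_cov(dist, c0=1.0, corr_len=1.0):
    # Assumed covariance model (illustrative only).
    return c0 * np.exp(-(dist / corr_len) ** 2)

def lsc_predict(obs_xy, obs_vals, grid_xy, noise_var=0.01, c0=1.0, corr_len=1.0):
    """Least squares collocation: predict geoid heights at grid points
    from scattered observed values."""
    d_oo = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    d_go = np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
    c_oo = gaussian_cov(d_oo, c0, corr_len) + noise_var * np.eye(len(obs_xy))
    c_go = gaussian_cov(d_go, c0, corr_len)
    return c_go @ np.linalg.solve(c_oo, obs_vals)
```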
Trigger probe for determining the orientation of the power distribution of an electron beam
Elmer, John W [Danville, CA; Palmer, Todd A [Livermore, CA; Teruya, Alan T [Livermore, CA
2007-07-17
The present invention relates to a probe for determining the orientation of electron beams being profiled. To accurately time the location of an electron beam, the probe is designed to accept electrons from only a narrowly defined area. The signal produced from the probe is then used as a timing or triggering fiducial for an operably coupled data acquisition system. Such an arrangement eliminates changes in slit geometry, an additional signal feedthrough in the wall of a welding chamber and a second timing or triggering channel on a data acquisition system. As a result, the present invention improves the accuracy of the resulting data by minimizing the adverse effects of current slit triggering methods so as to accurately reconstruct electron or ion beams.
Metrics for quantifying antimicrobial use in beef feedlots.
Benedict, Katharine M; Gow, Sheryl P; Reid-Smith, Richard J; Booker, Calvin W; Morley, Paul S
2012-08-01
Accurate antimicrobial drug use data are needed to enlighten discussions regarding the impact of antimicrobial drug use in agriculture. The primary objective of this study was to investigate the perceived accuracy and clarity of different methods for reporting antimicrobial drug use information collected regarding beef feedlots. Producers, veterinarians, industry representatives, public health officials, and other knowledgeable beef industry leaders were invited to complete a web-based survey. A total of 156 participants in 33 US states, 4 Canadian provinces, and 8 other countries completed the survey. No single metric was considered universally optimal for all use circumstances or for all audiences. To effectively communicate antimicrobial drug use data, evaluation of the target audience is critical to presenting the information. Metrics that are most accurate need to be carefully and repeatedly explained to the audience.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nozirov, Farhod; Stachów, Michał; Kupka, Teobald
2014-04-14
A theoretical prediction of nuclear magnetic shieldings and indirect spin-spin coupling constants in 1,1-, cis- and trans-1,2-difluoroethylenes is reported. The results obtained using density functional theory (DFT) combined with large basis sets and gauge-independent atomic orbital calculations were critically compared with experiment and conventional, higher level correlated electronic structure methods. Accurate structural, vibrational, and NMR parameters of difluoroethylenes were obtained using several density functionals combined with dedicated basis sets. B3LYP/6-311++G(3df,2pd) optimized structures of difluoroethylenes closely reproduced experimental geometries and earlier reported benchmark coupled cluster results, while BLYP/6-311++G(3df,2pd) produced accurate harmonic vibrational frequencies. The most accurate vibrations were obtained using B3LYP/6-311++G(3df,2pd) with correction for anharmonicity. The Becke half-and-half (BHandH) density functional predicted more accurate ¹⁹F isotropic shieldings, and van Voorhis and Scuseria's τ-dependent gradient-corrected correlation functional yielded better carbon shieldings than B3LYP. A surprisingly good performance of the Hartree-Fock (HF) method in predicting nuclear shieldings in these molecules was observed. Inclusion of zero-point vibrational corrections markedly improved agreement with experiment for nuclear shieldings calculated by the HF, MP2, CCSD, and CCSD(T) methods but worsened the DFT results. A threefold improvement in accuracy when predicting ²J(FF) in 1,1-difluoroethylene was observed for the BHandH density functional compared to B3LYP (deviations from experiment of −46 vs. −115 Hz).
Walsh, Aaron M.; Crispie, Fiona; Daari, Kareem; O'Sullivan, Orla; Martin, Jennifer C.; Arthur, Cornelius T.; Claesson, Marcus J.; Scott, Karen P.
2017-01-01
ABSTRACT The rapid detection of pathogenic strains in food products is essential for the prevention of disease outbreaks. It has already been demonstrated that whole-metagenome shotgun sequencing can be used to detect pathogens in food but, until recently, strain-level detection of pathogens has relied on whole-metagenome assembly, which is a computationally demanding process. Here we demonstrated that three short-read-alignment-based methods, i.e., MetaMLST, PanPhlAn, and StrainPhlAn, could accurately and rapidly identify pathogenic strains in spinach metagenomes that had been intentionally spiked with Shiga toxin-producing Escherichia coli in a previous study. Subsequently, we employed the methods, in combination with other metagenomics approaches, to assess the safety of nunu, a traditional Ghanaian fermented milk product that is produced by the spontaneous fermentation of raw cow milk. We showed that nunu samples were frequently contaminated with bacteria associated with the bovine gut and, worryingly, we detected putatively pathogenic E. coli and Klebsiella pneumoniae strains in a subset of nunu samples. Ultimately, our work establishes that short-read-alignment-based bioinformatics approaches are suitable food safety tools, and we describe a real-life example of their utilization. IMPORTANCE Foodborne pathogens are responsible for millions of illnesses each year. Here we demonstrate that short-read-alignment-based bioinformatics tools can accurately and rapidly detect pathogenic strains in food products by using shotgun metagenomics data. The methods used here are considerably faster than both traditional culturing methods and alternative bioinformatics approaches that rely on metagenome assembly; therefore, they can potentially be used for more high-throughput food safety testing. Overall, our results suggest that whole-metagenome sequencing can be used as a practical food safety tool to prevent diseases or to link outbreaks to specific food products. PMID:28625983
Versatile technique for assessing thickness of 2D layered materials by XPS
NASA Astrophysics Data System (ADS)
Zemlyanov, Dmitry Y.; Jespersen, Michael; Zakharov, Dmitry N.; Hu, Jianjun; Paul, Rajib; Kumar, Anurag; Pacley, Shanee; Glavin, Nicholas; Saenz, David; Smith, Kyle C.; Fisher, Timothy S.; Voevodin, Andrey A.
2018-03-01
X-ray photoelectron spectroscopy (XPS) has been utilized as a versatile method for thickness characterization of various two-dimensional (2D) films. Accurate thickness can be measured simultaneously while acquiring XPS data for chemical characterization of 2D films having thickness up to approximately 10 nm. For validating the developed technique, thicknesses of few-layer graphene (FLG), MoS2 and amorphous boron nitride (a-BN) layer, produced by microwave plasma chemical vapor deposition (MPCVD), plasma enhanced chemical vapor deposition (PECVD), and pulsed laser deposition (PLD) respectively, were accurately measured. The intensity ratio between photoemission peaks recorded for the films (C 1s, Mo 3d, B 1s) and the substrates (Cu 2p, Al 2p, Si 2p) is the primary input parameter for thickness calculation, in addition to the atomic densities of the substrate and the film, and the corresponding electron attenuation length (EAL). The XPS data was used with a proposed model for thickness calculations, which was verified by cross-sectional transmission electron microscope (TEM) measurement of thickness for all the films. The XPS method determines thickness values averaged over an analysis area which is orders of magnitude larger than the typical area in cross-sectional TEM imaging, hence provides an advanced approach for thickness measurement over large areas of 2D materials. The study confirms that the versatile XPS method allows rapid and reliable assessment of the 2D material thickness and this method can facilitate in tailoring growth conditions for producing very thin 2D materials effectively over a large area. Furthermore, the XPS measurement for a typical 2D material is non-destructive and does not require special sample preparation. Therefore, after XPS analysis, exactly the same sample can undergo further processing or utilization.
Versatile technique for assessing thickness of 2D layered materials by XPS
Zemlyanov, Dmitry Y.; Jespersen, Michael; Zakharov, Dmitry N.; ...
2018-02-07
X-ray photoelectron spectroscopy (XPS) has been utilized as a versatile method for thickness characterization of various two-dimensional (2D) films. Accurate thickness can be measured simultaneously while acquiring XPS data for chemical characterization of 2D films having thickness up to approximately 10 nm. For validating the developed technique, thicknesses of few-layer graphene (FLG), MoS2 and amorphous boron nitride (a-BN) layer, produced by microwave plasma chemical vapor deposition (MPCVD), plasma enhanced chemical vapor deposition (PECVD), and pulsed laser deposition (PLD) respectively, were accurately measured. The intensity ratio between photoemission peaks recorded for the films (C 1s, Mo 3d, B 1s) and the substrates (Cu 2p, Al 2p, Si 2p) is the primary input parameter for thickness calculation, in addition to the atomic densities of the substrate and the film, and the corresponding electron attenuation length (EAL). The XPS data was used with a proposed model for thickness calculations, which was verified by cross-sectional transmission electron microscope (TEM) measurement of thickness for all the films. The XPS method determines thickness values averaged over an analysis area which is orders of magnitude larger than the typical area in cross-sectional TEM imaging, hence provides an advanced approach for thickness measurement over large areas of 2D materials. The study confirms that the versatile XPS method allows rapid and reliable assessment of the 2D material thickness and this method can facilitate in tailoring growth conditions for producing very thin 2D materials effectively over a large area. Furthermore, the XPS measurement for a typical 2D material is non-destructive and does not require special sample preparation. Therefore, after XPS analysis, exactly the same sample can undergo further processing or utilization.
Versatile technique for assessing thickness of 2D layered materials by XPS.
Zemlyanov, Dmitry Y; Jespersen, Michael; Zakharov, Dmitry N; Hu, Jianjun; Paul, Rajib; Kumar, Anurag; Pacley, Shanee; Glavin, Nicholas; Saenz, David; Smith, Kyle C; Fisher, Timothy S; Voevodin, Andrey A
2018-03-16
X-ray photoelectron spectroscopy (XPS) has been utilized as a versatile method for thickness characterization of various two-dimensional (2D) films. Accurate thickness can be measured simultaneously while acquiring XPS data for chemical characterization of 2D films having thickness up to approximately 10 nm. For validating the developed technique, thicknesses of few-layer graphene (FLG), MoS2 and amorphous boron nitride (a-BN) layer, produced by microwave plasma chemical vapor deposition (MPCVD), plasma enhanced chemical vapor deposition (PECVD), and pulsed laser deposition (PLD) respectively, were accurately measured. The intensity ratio between photoemission peaks recorded for the films (C 1s, Mo 3d, B 1s) and the substrates (Cu 2p, Al 2p, Si 2p) is the primary input parameter for thickness calculation, in addition to the atomic densities of the substrate and the film, and the corresponding electron attenuation length (EAL). The XPS data was used with a proposed model for thickness calculations, which was verified by cross-sectional transmission electron microscope (TEM) measurement of thickness for all the films. The XPS method determines thickness values averaged over an analysis area which is orders of magnitude larger than the typical area in cross-sectional TEM imaging, hence provides an advanced approach for thickness measurement over large areas of 2D materials. The study confirms that the versatile XPS method allows rapid and reliable assessment of the 2D material thickness and this method can facilitate in tailoring growth conditions for producing very thin 2D materials effectively over a large area. Furthermore, the XPS measurement for a typical 2D material is non-destructive and does not require special sample preparation. Therefore, after XPS analysis, exactly the same sample can undergo further processing or utilization.
Dallas, Lorna J; Devos, Alexandre; Fievet, Bruno; Turner, Andrew; Lyons, Brett P; Jha, Awadhesh N
2016-05-01
Accurate dosimetry is critically important for ecotoxicological and radioecological studies on the potential effects of environmentally relevant radionuclides, such as tritium (³H). Previous studies have used basic dosimetric equations to estimate dose from ³H exposure in ecologically important organisms, such as marine mussels. This study compares four different methods of estimating dose to adult mussels exposed to 1 or 15 MBq L⁻¹ tritiated water (HTO) under laboratory conditions. These methods were (1) an equation converting seawater activity concentrations to dose rate with fixed parameters; (2) input into the ERICA tool of seawater activity concentrations only; (3) input into the ERICA tool of estimated whole organism concentrations (woTACs), comprising dry activity plus estimated tissue free water tritium (TFWT) activity (TFWT volume × seawater activity concentration); and (4) input into the ERICA tool of measured whole organism activity concentrations, comprising dry activity plus measured TFWT activity (TFWT volume × TFWT activity concentration). Methods 3 and 4 are recommended for future ecotoxicological experiments as they produce values for individual animals and are not reliant on transfer predictions (estimation of concentration ratio). Method 1 may be suitable if measured whole organism concentrations are not available, as it produced results between those of methods 3 and 4. As there are technical complications in accurately measuring TFWT, we recommend that future radiotoxicological studies on mussels or other aquatic invertebrates measure whole organism activity in non-dried tissues (i.e. incorporating TFWT and dry activity as one, rather than as separate fractions) and input these data into the ERICA tool. Copyright © 2016 Elsevier Ltd. All rights reserved.
Versatile technique for assessing thickness of 2D layered materials by XPS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zemlyanov, Dmitry Y.; Jespersen, Michael; Zakharov, Dmitry N.
X-ray photoelectron spectroscopy (XPS) has been utilized as a versatile method for thickness characterization of various two-dimensional (2D) films. Accurate thickness can be measured simultaneously while acquiring XPS data for chemical characterization of 2D films having thickness up to approximately 10 nm. For validating the developed technique, thicknesses of few-layer graphene (FLG), MoS2 and amorphous boron nitride (a-BN) layer, produced by microwave plasma chemical vapor deposition (MPCVD), plasma enhanced chemical vapor deposition (PECVD), and pulsed laser deposition (PLD) respectively, were accurately measured. The intensity ratio between photoemission peaks recorded for the films (C 1s, Mo 3d, B 1s) and the substrates (Cu 2p, Al 2p, Si 2p) is the primary input parameter for thickness calculation, in addition to the atomic densities of the substrate and the film, and the corresponding electron attenuation length (EAL). The XPS data was used with a proposed model for thickness calculations, which was verified by cross-sectional transmission electron microscope (TEM) measurement of thickness for all the films. The XPS method determines thickness values averaged over an analysis area which is orders of magnitude larger than the typical area in cross-sectional TEM imaging, hence provides an advanced approach for thickness measurement over large areas of 2D materials. The study confirms that the versatile XPS method allows rapid and reliable assessment of the 2D material thickness and this method can facilitate in tailoring growth conditions for producing very thin 2D materials effectively over a large area. Furthermore, the XPS measurement for a typical 2D material is non-destructive and does not require special sample preparation. Therefore, after XPS analysis, exactly the same sample can undergo further processing or utilization.
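The abstracts above do not reproduce the thickness model itself. A common textbook form is the uniform-overlayer attenuation model, in which the film/substrate intensity ratio, the atomic densities, and the EALs determine the thickness through a transcendental equation solved numerically. The sketch below implements that generic model for illustration; the authors' proposed model may differ in detail.

```python
import numpy as np
from scipy.optimize import brentq

def overlayer_thickness(i_film, i_sub, n_film, n_sub, eal_film, eal_sub,
                        theta_deg=0.0, d_max=20.0):
    """Solve the generic uniform-overlayer XPS model for film thickness d
    (same length units as the EALs, e.g. nm). Illustrative only; not
    necessarily the exact expression used in the paper."""
    cos_t = np.cos(np.radians(theta_deg))
    target = i_film / i_sub

    def residual(d):
        film = n_film * eal_film * (1.0 - np.exp(-d / (eal_film * cos_t)))
        sub = n_sub * eal_sub * np.exp(-d / (eal_sub * cos_t))
        return film / sub - target

    return brentq(residual, 1e-6, d_max)  # root-find for the thickness
```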
Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco
2016-01-01
Sampling biodiversity is an essential step for conservation, and understanding the efficiency of sampling methods allows us to estimate the quality of our biodiversity data. Sex ratio is an important population characteristic, but until now no study has evaluated how efficient the sampling methods commonly used in biodiversity surveys are at estimating the sex ratio of populations. We used a virtual ecologist approach to investigate whether active and passive capture methods are able to accurately sample a population's sex ratio and whether differences in movement pattern and detectability between males and females produce biased estimates of sex ratios when using these methods. Our simulation allowed the recognition of individuals, similar to mark-recapture studies. We found that differences in both movement patterns and detectability between males and females produce biased estimates of sex ratios. However, increasing the sampling effort or the number of sampling days improves the ability of passive or active capture methods to properly sample the sex ratio. Thus, prior knowledge regarding movement patterns and detectability for species is important information to guide field studies aiming to understand sex ratio related patterns.
Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco
2016-01-01
Sampling biodiversity is an essential step for conservation, and understanding the efficiency of sampling methods allows us to estimate the quality of our biodiversity data. Sex ratio is an important population characteristic, but until now no study has evaluated how efficient the sampling methods commonly used in biodiversity surveys are at estimating the sex ratio of populations. We used a virtual ecologist approach to investigate whether active and passive capture methods are able to accurately sample a population's sex ratio and whether differences in movement pattern and detectability between males and females produce biased estimates of sex ratios when using these methods. Our simulation allowed the recognition of individuals, similar to mark-recapture studies. We found that differences in both movement patterns and detectability between males and females produce biased estimates of sex ratios. However, increasing the sampling effort or the number of sampling days improves the ability of passive or active capture methods to properly sample the sex ratio. Thus, prior knowledge regarding movement patterns and detectability for species is important information to guide field studies aiming to understand sex ratio related patterns. PMID:27441554
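A minimal virtual-ecologist sketch of the effect described above: individuals are detected with sex-specific probabilities over repeated sampling days, and the sex ratio is estimated from the distinct individuals recorded, as in mark-recapture. The detection probabilities and population sizes are illustrative assumptions, not values from the study.

```python
import numpy as np

def estimated_sex_ratio(n_males=500, n_females=500, p_detect_male=0.6,
                        p_detect_female=0.4, n_days=10, seed=0):
    """Proportion of males among all distinct individuals detected after
    n_days of sampling (the true proportion here is 0.5)."""
    rng = np.random.default_rng(seed)
    seen_m = np.zeros(n_males, dtype=bool)
    seen_f = np.zeros(n_females, dtype=bool)
    for _ in range(n_days):
        seen_m |= rng.random(n_males) < p_detect_male
        seen_f |= rng.random(n_females) < p_detect_female
    m, f = seen_m.sum(), seen_f.sum()
    return m / (m + f)

print(estimated_sex_ratio(n_days=5))    # biased toward the more detectable sex
print(estimated_sex_ratio(n_days=60))   # bias shrinks with more sampling days
```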
Establishment and validation of a method for multi-dose irradiation of cells in 96-well microplates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abatzoglou, Ioannis; Zois, Christos E.; Pouliliou, Stamatia
2013-02-15
Highlights: ► We established a method for multi-dose irradiation of cell cultures within a 96-well plate. ► Equations to adjust to preferable dose levels are produced and provided. ► Up to eight different dose levels can be tested in one microplate. ► This method results in fast and reliable estimation of radiation dose–response curves. -- Abstract: Microplates are useful tools in chemistry, biotechnology and molecular biology. In radiobiology research, these can be also applied to assess the effect of a certain radiation dose delivered to the whole microplate, to test radio-sensitivity, radio-sensitization or radio-protection. Whether different radiation doses can be accurately applied to a single 96-well plate, to further facilitate and accelerate research on the one hand and spare funds on the other, is a question dealt with in the current paper. Following repeated ion-chamber, TLD and radiotherapy planning dosimetry we established a method for multi-dose irradiation of cell cultures within a 96-well plate, which allows an accurate delivery of desired doses in sequential columns of the microplate. Up to eight different dose levels can be tested in one microplate. This method results in fast and reliable estimation of radiation dose–response curves.
A Method of 3D Measurement and Reconstruction for Cultural Relics in Museums
NASA Astrophysics Data System (ADS)
Zheng, S.; Zhou, Y.; Huang, R.; Zhou, L.; Xu, X.; Wang, C.
2012-07-01
Three-dimensional measurement and reconstruction during conservation and restoration of cultural relics have become an essential part of a modern museum's regular work. Although many kinds of methods, including laser scanning, computer vision and close-range photogrammetry, have been put forward, problems still exist, such as the trade-off between cost and quality of results, and between time and fineness of detail. Aimed at these problems, this paper proposes a structured-light based method for 3D measurement and reconstruction of cultural relics in museums. Firstly, based on the structured-light principle, digitization hardware has been built, and with its help dense point clouds of cultural relics' surfaces can be easily acquired. To produce accurate 3D geometry models from point cloud data, multiple processing algorithms have been developed and corresponding software has been implemented, whose functions include blunder detection and removal, point cloud alignment and merging, 3D mesh construction and simplification. Finally, high-resolution images are captured, these images are aligned with the 3D geometry model, and a realistic, accurate 3D model is constructed. Based on this method, a complete system including hardware and software has been built. Multiple kinds of cultural relics have been used to test this method, and the results demonstrate its high efficiency, high accuracy and ease of operation.
Polynomial Supertree Methods Revisited
Brinkmeyer, Malte; Griebel, Thasso; Böcker, Sebastian
2011-01-01
Supertree methods allow the reconstruction of large phylogenetic trees by combining smaller trees with overlapping leaf sets into one, more comprehensive supertree. The most commonly used supertree method, matrix representation with parsimony (MRP), produces accurate supertrees but is rather slow due to the underlying hard optimization problem. In this paper, we present an extensive simulation study comparing the performance of MRP and the polynomial supertree methods MinCut Supertree, Modified MinCut Supertree, Build-with-distances, PhySIC, PhySIC_IST, and super distance matrix. We consider both quality and resolution of the reconstructed supertrees. Our findings illustrate the tradeoff between accuracy and running time in supertree construction, as well as the pros and cons of voting- and veto-based supertree approaches. Based on our results, we make some general suggestions for supertree methods yet to come. PMID:22229028
Comparison AHP and SAW to promotion of head major department SMK Muhammadiyah 04 Medan
NASA Astrophysics Data System (ADS)
Saputra, M.; Sitompul, O. S.; Sihombing, P.
2018-04-01
A Decision Support System (DSS) is a system that can help a person make informed decisions among various types of choices, accurately and in accordance with the desired goals. Many problems can be solved by using decision support systems. In this paper, a decision support system is used to assist the Chief of the Muhammadiyah Medan branch in the selection of the department head. The criteria used for the selection of department heads are: loyalty, job performance, responsibility, obedience, honesty, cooperation, education, and leadership. The selection process uses the Analytical Hierarchy Process (AHP) and Simple Additive Weighting (SAW) methods. The data were obtained through teacher assessment questionnaires completed by principals and colleagues. This study compares the two decision methods, SAW and AHP, so that the decision maker (the principal) can be more confident in determining the candidate who will be elected head of department at the school. The final result of this research is that the first rank is obtained by Muhammad Musa, with a weight value of 0.274 under the AHP method and 0.993 under the SAW method, followed by Alvin Syahrin with weight values of 0.241 (AHP) and 0.883 (SAW), and Noviyanti with weight values of 0.193 (AHP) and 0.707 (SAW). The conclusion of this research is that the SAW method produces more accurate weight values.
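As an illustration of the SAW step, the sketch below normalizes each (benefit) criterion by its column maximum and takes a weighted sum per candidate. The weights and ratings shown are hypothetical placeholders, not the questionnaire data from the study.

```python
import numpy as np

def saw_scores(ratings, weights):
    """Simple Additive Weighting: normalize benefit criteria by their
    column maxima, then compute the weighted sum for each candidate."""
    x = np.asarray(ratings, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return (x / x.max(axis=0)) @ w

# Three hypothetical candidates rated 1-5 on the eight criteria
# (loyalty, job performance, responsibility, obedience, honesty,
# cooperation, education, leadership).
print(saw_scores([[5, 5, 4, 5, 5, 4, 4, 5],
                  [4, 5, 4, 4, 4, 4, 4, 4],
                  [4, 4, 3, 4, 3, 4, 3, 3]],
                 [0.15, 0.15, 0.12, 0.12, 0.12, 0.12, 0.10, 0.12]))
# The highest score indicates the recommended candidate.
```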
NASA Astrophysics Data System (ADS)
Coghlan, Leslie; Singleton, H. R.; Dell'Italia, L. J.; Linderholm, C. E.; Pohost, G. M.
1995-05-01
We have developed a method for measuring the detailed in vivo three dimensional geometry of the left and right ventricles using cine-magnetic resonance imaging. From data in the form of digitized short axis outlines, the normal vectors, principal curvatures and directions, and wall thickness were computed. The method was evaluated on simulated ellipsoids and on human MRI data. Measurements of normal vectors and of wall thickness were very accurate in simulated data and appeared appropriate in patient data. On simulated data, measurements of the principal curvature k1 (corresponding approximately to the short axis direction of the left ventricle) and of principal directions were quite accurate, but measurements of the other principal curvature (k2) were less accurate. The reasons behind this are considered. We expect improvements in the accuracy with thinner slices and improved representation of the surface data. Gradient echo images were acquired from 8 dogs with a 1.5T system (Philips Gyroscan) at baseline and four months after closed chest experimentally produced mitral regurgitation (MR). The product (k1 + k2) × wall thickness averaged over all slices at end-diastole was significantly lower after surgery (n equals 8, p < 0.005). These geometry changes were consistent with the expected increase in wall stress after MR.
Advanced Mass Spectrometric Methods for the Rapid and Quantitative Characterization of Proteomes
Smith, Richard D.
2002-01-01
Progress is reviewed towards the development of a global strategy that aims to extend the sensitivity, dynamic range, comprehensiveness and throughput of proteomic measurements based upon the use of high performance separations and mass spectrometry. The approach uses high accuracy mass measurements from Fourier transform ion cyclotron resonance mass spectrometry (FTICR) to validate peptide ‘accurate mass tags’ (AMTs) produced by global protein enzymatic digestions for a specific organism, tissue or cell type from ‘potential mass tags’ tentatively identified using conventional tandem mass spectrometry (MS/MS). This provides the basis for subsequent measurements without the need for MS/MS. High resolution capillary liquid chromatography separations combined with high sensitivity, and high resolution accurate FTICR measurements are shown to be capable of characterizing peptide mixtures of more than 10⁵ components. The strategy has been initially demonstrated using the microorganisms Saccharomyces cerevisiae and Deinococcus radiodurans. Advantages of the approach include the high confidence of protein identification, its broad proteome coverage, high sensitivity, and the capability for stable-isotope labeling methods for precise relative protein abundance measurements. Abbreviations: LC, liquid chromatography; FTICR, Fourier transform ion cyclotron resonance; AMT, accurate mass tag; PMT, potential mass tag; MMA, mass measurement accuracy; MS, mass spectrometry; MS/MS, tandem mass spectrometry; ppm, parts per million.
Large-scale structure prediction by improved contact predictions and model quality assessment.
Michel, Mirco; Menéndez Hurtado, David; Uziela, Karolis; Elofsson, Arne
2017-07-15
Accurate contact predictions can be used for predicting the structure of proteins. Until recently these methods were limited to very big protein families, decreasing their utility. However, recent progress by combining direct coupling analysis with machine learning methods has made it possible to predict accurate contact maps for smaller families. To what extent these predictions can be used to produce accurate models of the families is not known. We present the PconsFold2 pipeline that uses contact predictions from PconsC3, the CONFOLD folding algorithm and model quality estimations to predict the structure of a protein. We show that the model quality estimation significantly increases the number of models that reliably can be identified. Finally, we apply PconsFold2 to 6379 Pfam families of unknown structure and find that PconsFold2 can, with an estimated 90% specificity, predict the structure of up to 558 Pfam families of unknown structure. Out of these, 415 have not been reported before. Datasets as well as models of all the 558 Pfam families are available at http://c3.pcons.net/ . All programs used here are freely available. arne@bioinfo.se. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Flip-avoiding interpolating surface registration for skull reconstruction.
Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye
2018-03-30
Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.
Learning accurate and concise naïve Bayes classifiers from attribute value taxonomies and data
Kang, D.-K.; Silvescu, A.; Honavar, V.
2009-01-01
In many application domains, there is a need for learning algorithms that can effectively exploit attribute value taxonomies (AVT)—hierarchical groupings of attribute values—to learn compact, comprehensible and accurate classifiers from data—including data that are partially specified. This paper describes AVT-NBL, a natural generalization of the naïve Bayes learner (NBL), for learning classifiers from AVT and data. Our experimental results show that AVT-NBL is able to generate classifiers that are substantially more compact and more accurate than those produced by NBL on a broad range of data sets with different percentages of partially specified values. We also show that AVT-NBL is more efficient in its use of training data: AVT-NBL produces classifiers that outperform those produced by NBL using substantially fewer training examples. PMID:20351793
Lamar, Richard T; Olk, Daniel C; Mayhew, Lawrence; Bloom, Paul R
2014-01-01
Increased use of humic substances in agriculture has generated intense interest among producers, consumers, and regulators for an accurate and reliable method to quantify humic acid (HA) and fulvic acid (FA) in raw ores and products. Here we present a thoroughly validated method, the new standardized method for determination of HA and FA contents in raw humate ores and in solid and liquid products produced from them. The methods used for preparation of HA and FA were adapted according to the guidelines of the International Humic Substances Society involving alkaline extraction followed by acidification to separate HA from the fulvic fraction. This is followed by separation of FA from the fulvic fraction by adsorption on a nonionic macroporous acrylic ester resin at acid pH. It differs from previous methods in that it determines HA and FA concentrations gravimetrically on an ash-free basis. Critical steps in the method, e.g., initial test portion mass, test portion to extract volume ratio, extraction time, and acidification of alkaline extract, were optimized for maximum and consistent recovery of HA and FA. The method detection limits for HA and FA were 4.62 and 4.8 mg/L, respectively. The method quantitation limits for HA and FA were 14.7 and 15.3 mg/L, respectively.
Binet, Rachel; Deer, Deanne M; Uhlfelder, Samantha J
2014-06-01
Faster detection of contaminated foods can prevent adulterated foods from being consumed and minimize the risk of an outbreak of foodborne illness. A sensitive molecular detection method is especially important for Shigella because ingestion of as few as 10 of these bacterial pathogens can cause disease. The objectives of this study were to compare the ability of four DNA extraction methods to detect Shigella in six types of produce, post-enrichment, and to evaluate a new and rapid conventional multiplex assay that targets the Shigella ipaH, virB and mxiC virulence genes. This assay can detect less than two Shigella cells in pure culture, even when the pathogen is mixed with background microflora, and it can also differentiate natural Shigella strains from a control strain and eliminate false positive results due to accidental laboratory contamination. The four DNA extraction methods (boiling, PrepMan Ultra [Applied Biosystems], InstaGene Matrix [Bio-Rad], DNeasy Tissue kit [Qiagen]) detected 1.6 × 10³ Shigella CFU/ml post-enrichment, requiring ∼18 doublings to one cell in 25 g of produce pre-enrichment. Lower sensitivity was obtained, depending on produce type and extraction method. The InstaGene Matrix was the most consistent and sensitive and the multiplex assay accurately detected Shigella in less than 90 min, outperforming, to the best of our knowledge, molecular assays currently in place for this pathogen. Published by Elsevier Ltd.
Dynamic mode decomposition for plasma diagnostics and validation.
Taylor, Roy; Kutz, J Nathan; Morgan, Kyle; Nelson, Brian A
2018-05-01
We demonstrate the application of the Dynamic Mode Decomposition (DMD) for the diagnostic analysis of the nonlinear dynamics of a magnetized plasma in resistive magnetohydrodynamics. The DMD method is an ideal spatio-temporal matrix decomposition that correlates spatial features of computational or experimental data while simultaneously associating the spatial activity with periodic temporal behavior. DMD can produce low-rank, reduced order surrogate models that can be used to reconstruct the state of the system with high fidelity. This allows for a reduction in the computational cost and, at the same time, accurate approximations of the problem, even if the data are sparsely sampled. We demonstrate the use of the method on both numerical and experimental data, showing that it is a successful mathematical architecture for characterizing the helicity injected torus with steady inductive (HIT-SI) magnetohydrodynamics. Importantly, the DMD produces interpretable, dominant mode structures, including a stationary mode consistent with our understanding of a HIT-SI spheromak accompanied by a pair of injector-driven modes. In combination, the 3-mode DMD model produces excellent dynamic reconstructions across the domain of analyzed data.
Dynamic mode decomposition for plasma diagnostics and validation
NASA Astrophysics Data System (ADS)
Taylor, Roy; Kutz, J. Nathan; Morgan, Kyle; Nelson, Brian A.
2018-05-01
We demonstrate the application of the Dynamic Mode Decomposition (DMD) for the diagnostic analysis of the nonlinear dynamics of a magnetized plasma in resistive magnetohydrodynamics. The DMD method is an ideal spatio-temporal matrix decomposition that correlates spatial features of computational or experimental data while simultaneously associating the spatial activity with periodic temporal behavior. DMD can produce low-rank, reduced order surrogate models that can be used to reconstruct the state of the system with high fidelity. This allows for a reduction in the computational cost and, at the same time, accurate approximations of the problem, even if the data are sparsely sampled. We demonstrate the use of the method on both numerical and experimental data, showing that it is a successful mathematical architecture for characterizing the helicity injected torus with steady inductive (HIT-SI) magnetohydrodynamics. Importantly, the DMD produces interpretable, dominant mode structures, including a stationary mode consistent with our understanding of a HIT-SI spheromak accompanied by a pair of injector-driven modes. In combination, the 3-mode DMD model produces excellent dynamic reconstructions across the domain of analyzed data.
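A compact sketch of exact DMD as it is commonly formulated (snapshot splitting, truncated SVD, reduced operator, modes and amplitudes). The rank, data layout, and any HIT-SI-specific preprocessing are assumptions made for illustration rather than details taken from the paper.

```python
import numpy as np

def dmd(snapshots, rank):
    """Exact DMD: snapshots has shape (n_space, n_time). Returns DMD
    modes, eigenvalues, and mode amplitudes for a rank-`rank` model."""
    x1, x2 = snapshots[:, :-1], snapshots[:, 1:]
    u, s, vh = np.linalg.svd(x1, full_matrices=False)
    u, s, v = u[:, :rank], s[:rank], vh.conj().T[:, :rank]
    a_tilde = u.conj().T @ x2 @ v / s        # reduced linear operator
    eigvals, w = np.linalg.eig(a_tilde)
    modes = (x2 @ v / s) @ w                 # exact DMD modes
    amps = np.linalg.lstsq(modes, snapshots[:, 0], rcond=None)[0]
    return modes, eigvals, amps
```

With a rank-3 call (mirroring the 3-mode model mentioned above), the state at step k can be reconstructed as modes @ (eigvals**k * amps).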
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arinilhaq,; Widita, Rena
2014-09-30
Optical Coherence Tomography is often used in medical image acquisition to diagnose retinal changes because it is easy to use and low in price. Unfortunately, this type of examination produces a two-dimensional retinal image at the point of acquisition. Therefore, this study developed a method that combines and reconstructs 2-dimensional retinal images into three-dimensional images to display the macular volume accurately. The system is built with three main stages: data acquisition, data extraction and 3-dimensional reconstruction. At the data acquisition step, Optical Coherence Tomography produced six *.jpg images of each patient, which were further extracted with MATLAB 2010a software into six one-dimensional arrays. The six arrays are combined into a 3-dimensional matrix using a kriging interpolation method with SURFER9, resulting in 3-dimensional graphics of the macula. Finally, the system provides three-dimensional color graphs based on the distribution of normal macula data. The reconstruction system which has been designed produces three-dimensional images with a size of 481 × 481 × h (retinal thickness) pixels.
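The abstract describes kriging in SURFER9. As a simplified stand-in, the sketch below scatters thickness samples from six radial B-scans into the x-y plane and interpolates them onto a 481 × 481 grid with SciPy; the scan geometry and the cubic interpolant are assumptions made only for illustration.

```python
import numpy as np
from scipy.interpolate import griddata

def macular_thickness_map(scan_angles_deg, radii_mm, profiles,
                          grid_size=481, extent_mm=3.0):
    """Interpolate thickness profiles from radial OCT scans onto a regular
    grid. `radii_mm` should span negative to positive values so each scan
    covers a full diameter; the 481 x 481 grid size follows the abstract."""
    pts, vals = [], []
    for angle, profile in zip(scan_angles_deg, profiles):
        a = np.radians(angle)
        for r, t in zip(radii_mm, profile):
            pts.append((r * np.cos(a), r * np.sin(a)))
            vals.append(t)
    xs = np.linspace(-extent_mm, extent_mm, grid_size)
    gx, gy = np.meshgrid(xs, xs)
    return griddata(np.array(pts), np.array(vals), (gx, gy), method='cubic')
```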
Lesch, H P; Laitinen, A; Peixoto, C; Vicente, T; Makkonen, K-E; Laitinen, L; Pikkarainen, J T; Samaranayake, H; Alves, P M; Carrondo, M J T; Ylä-Herttuala, S; Airenne, K J
2011-06-01
Lentivirus can be engineered to be a highly potent vector for gene therapy applications. However, generation of clinical grade vectors in enough quantities for therapeutic use is still troublesome and limits the preclinical and clinical experiments. As a first step to solve this unmet need we recently introduced a baculovirus-based production system for lentiviral vector (LV) production using adherent cells. Herein, we have adapted and optimized the production of these vectors to a suspension cell culture system using recombinant baculoviruses delivering all elements required for a safe latest generation LV preparation. High-titer LV stocks were achieved in 293T cells grown in suspension. Produced viruses were accurately characterized and the functionality was also tested in vivo. Produced viruses were compared with viruses produced by calcium phosphate transfection method in adherent cells and polyethylenimine transfection method in suspension cells. Furthermore, a scalable and cost-effective capture purification step was developed based on a diethylaminoethyl monolithic column capable of removing most of the baculoviruses from the LV pool with 65% recovery.
Goff, Ben M; Moore, Kenneth J; Fales, Steven L; Pedersen, Jeffery F
2011-06-01
Sorghum [Sorghum bicolor (L.) Moench] has been shown to contain the cyanogenic glycoside dhurrin, which is responsible for the disorder known as prussic acid poisoning in livestock. The current standard method for estimating hydrogen cyanide (HCN) uses spectrophotometry to measure the aglycone, p-hydroxybenzaldehyde (p-HB), after hydrolysis. Errors may occur due to the inability of this method to solely estimate the absorbance of p-HB at a given wavelength. The objective of this study was to compare the use of gas chromatography (GC) and near infrared spectroscopy (NIRS) methods, along with a spectrophotometry method, to estimate the potential for prussic acid (HCNp) of sorghum and sudangrasses over three stages of maturity. It was shown that the GC produced higher HCNp estimates than the spectrophotometer for the grain sorghums, but lower concentrations for the sudangrass. Based on what is known about the analytical process of each method, the GC data are likely closer to the true HCNp concentrations of the forages. Both the GC and spectrophotometry methods yielded robust equations with the NIRS method; however, using GC as the calibration method resulted in more accurate and repeatable estimates. The HCNp values obtained using the GC quantification method are believed to be closer to the actual values of the forage, and use of this method will provide a more accurate and easily automated means of quantifying prussic acid. Copyright © 2011 Society of Chemical Industry.
NASA Astrophysics Data System (ADS)
Zhang, Leiming; Cao, Peiyu; Li, Shenggong; Yu, Guirui; Zhang, Junhui; Li, Yingnian
2016-04-01
Accurately assessing changes in phenology and their relationship with ecosystem gross primary productivity (GPP) is one of the key issues in the context of global change studies. In this study, an alpine shrubland meadow in Haibei (HBS) on the Qinghai-Tibetan Plateau and a broad-leaved Korean pine forest in Changbai Mountain (CBM) in Northeastern China were selected. Based on long-term GPP from eddy flux measurements and the Normalized Difference Vegetation Index (NDVI) from remotely sensed data, phenological indicators including the start of growing season (SOS), the end of growing season (EOS), and the growing season length (GSL) since 2003 were derived via multiple methods, and the influences of phenological variation on GPP were then explored. Compared with ground phenology observations of dominant plant species, both GPP- and NDVI-derived SOS and EOS exhibited similar interannual trends. GPP-derived SOS was quite close to NDVI-derived SOS, but GPP-derived EOS differed significantly from NDVI-derived EOS, thus leading to a significant difference between GPP- and NDVI-derived GSL. Relative to SOS, EOS presented larger differences between the extraction methods, indicating large uncertainties in accurately defining EOS. In general, among the methods used, the threshold methods produced more satisfactory assessments of phenology change. This study highlights that harmonizing flux measurements, remote sensing and ground monitoring is a big challenge that needs further consideration in phenology studies, especially for the accurate extraction of EOS. Key words: phenological variation, carbon flux, vegetation index, vegetation growth, interannual variability
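Of the extraction methods mentioned, the threshold approach is the simplest to sketch: SOS and EOS are taken as the days on which a (pre-smoothed) GPP or NDVI series first exceeds, and last falls below, a fixed fraction of its seasonal amplitude. The 20% threshold below is an illustrative choice, not necessarily the one used in the study.

```python
import numpy as np

def season_from_threshold(doy, signal, frac=0.2):
    """Return (SOS, EOS, GSL) from a pre-smoothed seasonal series sampled
    at the days of year given in `doy`."""
    d = np.asarray(doy)
    s = np.asarray(signal, dtype=float)
    thresh = s.min() + frac * (s.max() - s.min())
    above = np.where(s >= thresh)[0]
    if above.size == 0:
        return None, None, None
    sos, eos = d[above[0]], d[above[-1]]
    return sos, eos, eos - sos
```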
Wang, Hao; Straubinger, Robert M; Aletta, John M; Cao, Jin; Duan, Xiaotao; Yu, Haoying; Qu, Jun
2009-03-01
Protein arginine (Arg) methylation serves an important functional role in eucaryotic cells, and typically occurs in domains consisting of multiple Arg in close proximity. Localization of methylarginine (MA) within Arg-rich domains poses a challenge for mass spectrometry (MS)-based methods; the peptides are highly charged under electrospray ionization (ESI), which limits the number of sequence-informative products produced by collision induced dissociation (CID), and loss of the labile methylation moieties during CID precludes effective fragmentation of the peptide backbone. Here the fragmentation behavior of Arg-rich peptides was investigated comprehensively using electron-transfer dissociation (ETD) and CID for both methylated and unmodified glycine-/Arg-rich peptides (GAR), derived from residues 679-695 of human nucleolin, which contains methylation motifs that are widely-represented in biological systems. ETD produced abundant information for sequencing and MA localization, whereas CID failed to provide credible identification for any available charge state (z = 2-4). Nevertheless, CID produced characteristic neutral losses that can be employed to distinguish among different types of MA, as suggested by previous works and confirmed here with product ion scans of high accuracy/resolution by an LTQ/Orbitrap. To analyze MA-peptides in relatively complex mixtures, a method was developed that employs nano-LC coupled to alternating CID/ETD for peptide sequencing and MA localization/characterization, and an Orbitrap for accurate precursor measurement and relative quantification of MA-peptide stoichiometries. As proof of concept, GAR-peptides methylated in vitro by protein arginine N-methyltransferases PRMT1 and PRMT7 were analyzed. It was observed that PRMT1 generated a number of monomethylated (MMA) and asymmetric-dimethylated peptides, while PRMT7 produced predominantly MMA peptides and some symmetric-dimethylated peptides. This approach and the results may advance understanding of the actions of PRMTs and the functional significance of Arg methylation patterns.
Wang, Hao; Straubinger, Robert M.; Aletta, John M.; Cao, Jin; Duan, Xiaotao; Yu, Haoying; Qu, Jun
2012-01-01
Protein arginine (Arg) methylation serves an important functional role in eukaryotic cells, and typically occurs in domains consisting of multiple Arg in close proximity. Localization of methylarginine (MA) within Arg-rich domains poses a challenge for mass spectrometry (MS)-based methods; the peptides are highly-charged under electrospray ionization (ESI), which limits the number of sequence-informative products produced by collision induced dissociation (CID), and loss of the labile methylation moieties during CID precludes effective fragmentation of the peptide backbone. Here the fragmentation behavior of Arg-rich peptides was investigated comprehensively using electron transfer dissociation (ETD) and CID for both methylated and unmodified glycine-/Arg-rich peptides (GAR), derived from residues 679-695 of human nucleolin, which contains methylation motifs that are widely-represented in biological systems. ETD produced abundant information for sequencing and MA localization, whereas CID failed to provide credible identification for any available charge state (z=2-4). Nevertheless, CID produced characteristic neutral losses that can be employed to distinguish among different types of MA, as suggested by previous works and confirmed here with product ion scans of high accuracy/resolution by an LTQ/Orbitrap. To analyze MA-peptides in relatively complex mixtures, a method was developed that employs nano-LC coupled to alternating CID/ETD for peptide sequencing and MA localization/characterization, and an Orbitrap for accurate precursor measurement and relative quantification of MA-peptide stoichiometries. As proof of concept, GAR-peptides methylated in vitro by protein arginine N-methyltransferases PRMT1 and PRMT7 were analyzed. It was observed that PRMT1 generated a number of monomethylated (MMA) and asymmetric-dimethylated peptides, while PRMT7 produced predominantly MMA peptides and some symmetric-dimethylated peptides. This approach and the results may advance understanding of the actions of PRMTs and the functional significance of Arg methylation patterns. PMID:19110445
Salis, Howard; Kaznessis, Yiannis N
2005-12-01
Stochastic chemical kinetics more accurately describes the dynamics of "small" chemical systems, such as biological cells. Many real systems contain dynamical stiffness, which causes the exact stochastic simulation algorithm or other kinetic Monte Carlo methods to spend the majority of their time executing frequently occurring reaction events. Previous methods have successfully applied a type of probabilistic steady-state approximation by deriving an evolution equation, such as the chemical master equation, for the relaxed fast dynamics and using the solution of that equation to determine the slow dynamics. However, because the solution of the chemical master equation is limited to small, carefully selected, or linear reaction networks, an alternate equation-free method would be highly useful. We present a probabilistic steady-state approximation that separates the time scales of an arbitrary reaction network, detects the convergence of a marginal distribution to a quasi-steady-state, directly samples the underlying distribution, and uses those samples to accurately predict the state of the system, including the effects of the slow dynamics, at future times. The numerical method produces an accurate solution of both the fast and slow reaction dynamics while, for stiff systems, reducing the computational time by orders of magnitude. The developed theory makes no approximations on the shape or form of the underlying steady-state distribution and only assumes that it is ergodic. We demonstrate the accuracy and efficiency of the method using multiple interesting examples, including a highly nonlinear protein-protein interaction network. The developed theory may be applied to any type of kinetic Monte Carlo simulation to more efficiently simulate dynamically stiff systems, including existing exact, approximate, or hybrid stochastic simulation techniques.
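For context, the exact algorithm that such approximations accelerate is Gillespie's direct-method stochastic simulation algorithm; a minimal sketch follows. This is the baseline exact SSA only, not the paper's probabilistic steady-state approximation.

```python
import numpy as np

def gillespie(x0, stoich, propensities, t_end, seed=None):
    """Direct-method SSA. `stoich` is a list of state-change vectors and
    `propensities` a list of functions a_j(x) returning reaction rates."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = np.array([p(x) for p in propensities])
        a0 = a.sum()
        if a0 <= 0:
            break
        t += rng.exponential(1.0 / a0)      # waiting time to next event
        j = rng.choice(len(a), p=a / a0)    # which reaction fires
        x = x + np.asarray(stoich[j])
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)
```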
NASA Astrophysics Data System (ADS)
Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon
2016-03-01
In this paper, we propose a CT-CBCT registration method to accurately predict the tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performances on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCTs to which the propagated GTV contours by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean±std in cc) between the average of the manual segmentations and automatic segmentations are 3.70+/-2.30 (B-spline), 1.25+/-1.78 (demons), 0.93+/-1.14 (optical flow), and 4.39+/-3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
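The intensity-correction idea can be illustrated with a plain histogram-matching routine that maps CBCT intensities onto the planning-CT distribution. The published method applies this locally (patch by patch) and iterates it inside the registration loop, so the global version below is only a simplified sketch.

```python
import numpy as np

def match_histogram(source, reference):
    """Remap `source` (e.g., a CBCT volume) so its intensity histogram
    matches that of `reference` (the planning CT)."""
    src = source.ravel()
    s_vals, s_idx, s_counts = np.unique(src, return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    r_cdf = np.cumsum(r_counts) / reference.size
    mapped = np.interp(s_cdf, r_cdf, r_vals)   # CDF matching
    return mapped[s_idx].reshape(source.shape)
```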
State estimation for autopilot control of small unmanned aerial vehicles in windy conditions
NASA Astrophysics Data System (ADS)
Poorman, David Paul
The use of small unmanned aerial vehicles (UAVs) both in the military and civil realms is growing. This is largely due to the proliferation of inexpensive sensors and the increase in capability of small computers that has stemmed from the personal electronic device market. Methods for performing accurate state estimation for large scale aircraft have been well known and understood for decades, which usually involve a complex array of expensive high accuracy sensors. Performing accurate state estimation for small unmanned aircraft is a newer area of study and often involves adapting known state estimation methods to small UAVs. State estimation for small UAVs can be more difficult than state estimation for larger UAVs due to small UAVs employing limited sensor suites due to cost, and the fact that small UAVs are more susceptible to wind than large aircraft. The purpose of this research is to evaluate the ability of existing methods of state estimation for small UAVs to accurately capture the states of the aircraft that are necessary for autopilot control of the aircraft in a Dryden wind field. The research begins by showing which aircraft states are necessary for autopilot control in Dryden wind. Then two state estimation methods that employ only accelerometer, gyro, and GPS measurements are introduced. The first method uses assumptions on aircraft motion to directly solve for attitude information and smooth GPS data, while the second method integrates sensor data to propagate estimates between GPS measurements and then corrects those estimates with GPS information. The performance of both methods is analyzed with and without Dryden wind, in straight and level flight, in a coordinated turn, and in a wings level ascent. It is shown that in zero wind, the first method produces significant steady state attitude errors in both a coordinated turn and in a wings level ascent. In Dryden wind, it produces large noise on the estimates for its attitude states, and has a non-zero mean error that increases when gyro bias is increased. The second method is shown to not exhibit any steady state error in the tested scenarios that is inherent to its design. The second method can correct for attitude errors that arise from both integration error and gyro bias states, but it suffers from lack of attitude error observability. The attitude errors are shown to be more observable in wind, but increased integration error in wind outweighs the increase in attitude corrections that such increased observability brings, resulting in larger attitude errors in wind. Overall, this work highlights many technical deficiencies of both of these methods of state estimation that could be improved upon in the future to enhance state estimation for small UAVs in windy conditions.
Spectral Cauchy Characteristic Extraction: Gravitational Waves and Gauge Free News
NASA Astrophysics Data System (ADS)
Handmer, Casey; Szilagyi, Bela; Winicour, Jeff
2015-04-01
We present a fast, accurate spectral algorithm for the characteristic evolution of the full non-linear vacuum Einstein field equations in the Bondi framework. Developed within the Spectral Einstein Code (SpEC), we demonstrate how spectral Cauchy characteristic extraction produces gravitational News without confounding gauge effects. We explain several numerical innovations and demonstrate speed, stability, accuracy, exponential convergence, and consistency with existing methods. We highlight its capability to deliver physical insights in the study of black hole binaries.
Molecular quenching and relaxation in a plasmonic tunable system
NASA Astrophysics Data System (ADS)
Baffou, Guillaume; Girard, Christian; Dujardin, Erik; Colas Des Francs, Gérard; Martin, Olivier J. F.
2008-03-01
Molecular fluorescence decay is significantly modified when the emitting molecule is located near a plasmonic structure. When the lateral sizes of such structures are reduced to nanometer-scale cross sections, they can be used to accurately control and amplify the emission rate. In this Rapid Communication, we extend Green’s dyadic method to quantitatively investigate both radiative and nonradiative decay channels experienced by a single fluorescent molecule confined in an adjustable dielectric-metal nanogap. The technique produces data in excellent agreement with current experimental work.
Optical diffraction by ordered 2D arrays of silica microspheres
NASA Astrophysics Data System (ADS)
Shcherbakov, A. A.; Shavdina, O.; Tishchenko, A. V.; Veillas, C.; Verrier, I.; Dellea, O.; Jourlin, Y.
2017-03-01
The article presents experimental and theoretical studies of the angle-dependent diffraction properties of 2D monolayer arrays of silica microspheres. High-quality, large-area, defect-free monolayers of 1 μm diameter silica microspheres were deposited by the Langmuir-Blodgett technique under accurate optical control. The measured angular dependences of the zeroth-order and one of the first-order diffraction efficiencies of the deposited samples were simulated with the rigorous Generalized Source Method, taking particle size dispersion and lattice nonideality into account.
Rapid Identification of Sequences for Orphan Enzymes to Power Accurate Protein Annotation
Ojha, Sunil; Watson, Douglas S.; Bomar, Martha G.; Galande, Amit K.; Shearer, Alexander G.
2013-01-01
The power of genome sequencing depends on the ability to understand what genes and their protein products actually do. The automated methods used to assign functions to putative proteins in newly sequenced organisms are limited by the size of our library of proteins with both known function and sequence. Unfortunately, this library grows slowly, lagging well behind the rapid increase in novel protein sequences produced by modern genome sequencing methods. One potential source for rapidly expanding this functional library is the “back catalog” of enzymology – “orphan enzymes,” those enzymes that have been characterized and yet lack any associated sequence. There are hundreds of orphan enzymes in the Enzyme Commission (EC) database alone. In this study, we demonstrate how this orphan enzyme “back catalog” is a fertile source for rapidly advancing the state of protein annotation. Starting from three orphan enzyme samples, we applied mass-spectrometry-based analysis and computational methods (including sequence similarity networks, sequence and structural alignments, and operon context analysis) to rapidly identify the specific sequence for each orphan while avoiding the most time- and labor-intensive aspects of typical sequence identifications. We then used these three new sequences to more accurately predict the catalytic function of 385 previously uncharacterized or misannotated proteins. We expect that this kind of rapid sequence identification could be efficiently applied on a larger scale to make enzymology’s “back catalog” another powerful tool to drive accurate genome annotation. PMID:24386392
NASA Astrophysics Data System (ADS)
Prastuti, M.; Suhartono; Salehah, NA
2018-04-01
The need for energy supply, especially electricity, in Indonesia has been increasing in recent years. Furthermore, high electricity usage by people at different times leads to heteroscedasticity issues. Estimating the electricity supply that can fulfil the community's needs is very important, but the heteroscedasticity issue often makes electricity forecasting difficult. An accurate forecast of electricity consumption is one of the key challenges for an energy provider in planning resources and services and in taking control actions to balance electricity supply and demand for the community. In this paper, a hybrid ARIMAX Quantile Regression (ARIMAX-QR) approach is proposed to predict short-term electricity consumption in East Java. This method is also compared to time series regression using RMSE, MAPE, and MdAPE criteria. The data used in this research were half-hourly electricity consumption during the period September 2015 to April 2016. The results show that the proposed approach can be a competitive alternative for forecasting short-term electricity consumption in East Java. ARIMAX-QR using lag values and dummy variables as predictors yields more accurate predictions on both in-sample and out-of-sample data. Moreover, both the time series regression and ARIMAX-QR methods with lag values added as predictors capture the patterns in the data accurately. Hence, they produce better predictions than models that do not use additional lag variables.
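As a hedged sketch of the quantile-regression-with-lags ingredient described above (not the authors' full ARIMAX-QR pipeline), the following Python fragment fits a median regression on lagged load values; the series, lag choices, and names are illustrative only:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical half-hourly load series (the real study used East Java consumption data).
rng = np.random.default_rng(0)
load = pd.Series(100 + 10 * np.sin(np.arange(2000) / 48 * 2 * np.pi) + rng.normal(0, 3, 2000))

df = pd.DataFrame({"y": load})
df["lag_1"] = df["y"].shift(1)     # previous half hour
df["lag_48"] = df["y"].shift(48)   # same half hour on the previous day
df = df.dropna()

X = sm.add_constant(df[["lag_1", "lag_48"]])
median_fit = sm.QuantReg(df["y"], X).fit(q=0.5)   # tau = 0.5, i.e. median regression
print(median_fit.params)
```

The full ARIMAX-QR approach would additionally include ARIMAX-style exogenous dummy variables (e.g. calendar effects) among the regressors and could be fitted at several quantiles to characterise the heteroscedastic spread.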
Rapid identification of sequences for orphan enzymes to power accurate protein annotation.
Ramkissoon, Kevin R; Miller, Jennifer K; Ojha, Sunil; Watson, Douglas S; Bomar, Martha G; Galande, Amit K; Shearer, Alexander G
2013-01-01
The power of genome sequencing depends on the ability to understand what genes and their protein products actually do. The automated methods used to assign functions to putative proteins in newly sequenced organisms are limited by the size of our library of proteins with both known function and sequence. Unfortunately, this library grows slowly, lagging well behind the rapid increase in novel protein sequences produced by modern genome sequencing methods. One potential source for rapidly expanding this functional library is the "back catalog" of enzymology--"orphan enzymes," those enzymes that have been characterized and yet lack any associated sequence. There are hundreds of orphan enzymes in the Enzyme Commission (EC) database alone. In this study, we demonstrate how this orphan enzyme "back catalog" is a fertile source for rapidly advancing the state of protein annotation. Starting from three orphan enzyme samples, we applied mass-spectrometry-based analysis and computational methods (including sequence similarity networks, sequence and structural alignments, and operon context analysis) to rapidly identify the specific sequence for each orphan while avoiding the most time- and labor-intensive aspects of typical sequence identifications. We then used these three new sequences to more accurately predict the catalytic function of 385 previously uncharacterized or misannotated proteins. We expect that this kind of rapid sequence identification could be efficiently applied on a larger scale to make enzymology's "back catalog" another powerful tool to drive accurate genome annotation.
NASA Astrophysics Data System (ADS)
Delgado, Carlos; Cátedra, Manuel Felipe
2018-05-01
This work presents a technique that allows a very noticeable relaxation of the computational requirements for full-wave electromagnetic simulations based on the Method of Moments. A ray-tracing analysis of the geometry is performed in order to extract the critical points with significant contributions. These points are then used to generate a reduced mesh, considering the regions of the geometry that surround each critical point and taking into account the electrical path followed from the source. The electromagnetic analysis of the reduced mesh produces very accurate results, requiring a fraction of the resources that the conventional analysis would utilize.
ELECTRON MICROSCOPIC EXAMINATION OF SUBCELLULAR FRACTIONS
Baudhuin, Pierre; Evrard, Philippe; Berthet, Jacques
1967-01-01
A method is described for preparing, by filtration on Millipore filters, very thin (about 10 µ) pellicles of packed particles. These pellicles can be embedded in Epon for electron microscopic examination. They are also suitable for cytochemical assays. The method was used with various particulate fractions from rat liver. Its main advantages over the usual centrifugal packing techniques are that it produces heterogeneity solely in the direction perpendicular to the surface of the pellicle and that sections covering the whole depth of the pellicle can be photographed in a single field. It thus answers the essential criterion of random sampling and can be used for accurate quantitative evaluations. PMID:10976209
NASA Technical Reports Server (NTRS)
Wilson, R. B.; Banerjee, P. K.
1987-01-01
This Annual Status Report presents the results of work performed during the third year of the 3-D Inelastic Analysis Methods for Hot Sections Components program (NASA Contract NAS3-23697). The objective of the program is to produce a series of computer codes that permit more accurate and efficient three-dimensional analyses of selected hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The computer codes embody a progression of mathematical models and are streamlined to take advantage of geometrical features, loading conditions, and forms of material response that distinguish each group of selected components.
Salehi, Sohrab; Steif, Adi; Roth, Andrew; Aparicio, Samuel; Bouchard-Côté, Alexandre; Shah, Sohrab P
2017-03-01
Next-generation sequencing (NGS) of bulk tumour tissue can identify constituent cell populations in cancers and measure their abundance. This requires computational deconvolution of allelic counts from somatic mutations, which may be incapable of fully resolving the underlying population structure. Single cell sequencing (SCS) is a more direct method, although its replacement of NGS is impeded by technical noise and sampling limitations. We propose ddClone, which analytically integrates NGS and SCS data, leveraging their complementary attributes through joint statistical inference. We show on real and simulated datasets that ddClone produces more accurate results than can be achieved by either method alone.
Spacecraft inertia estimation via constrained least squares
NASA Technical Reports Server (NTRS)
Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.
2006-01-01
This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently, with guaranteed convergence to the global optimum, by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
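A minimal sketch of a constrained least squares inertia fit of this general kind, using CVXPY with positive definiteness of the inertia matrix imposed as an LMI; the Euler-equation residual, the synthetic data, and all names below are our assumptions, not the paper's formulation:

```python
import numpy as np
import cvxpy as cp

def skew(w):
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

# Hypothetical test data: angular rates, angular accelerations, and applied torques.
rng = np.random.default_rng(1)
N = 50
omega = rng.normal(size=(N, 3))
domega = rng.normal(size=(N, 3))
J_true = np.diag([10.0, 12.0, 8.0])
tau = domega @ J_true.T + np.cross(omega, omega @ J_true.T)   # Euler's equation

J = cp.Variable((3, 3), symmetric=True)
residuals = [J @ domega[k] + skew(omega[k]) @ (J @ omega[k]) - tau[k] for k in range(N)]
cost = cp.sum_squares(cp.vstack(residuals))
constraints = [J >> 1e-3 * np.eye(3)]        # LMI: inertia matrix positive definite
cp.Problem(cp.Minimize(cost), constraints).solve()
print(np.round(J.value, 2))
```

The point of the LMI constraint is that even with noisy or sparse test data the recovered matrix stays physically admissible, which unconstrained least squares does not guarantee.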
Wilquin, Hélène; Delevoye-Turrell, Yvonne; Dione, Mariama; Giersch, Anne
2018-01-01
Objective: Basic temporal dysfunctions have been described in patients with schizophrenia, which may impact their ability to connect and synchronize with the outer world. The present study was conducted with the aim of distinguishing between interval timing and synchronization difficulties and, more generally, the spatial-temporal organization disturbances of voluntary actions. A new sensorimotor synchronization task was developed to test these abilities. Method: Twenty-four chronic schizophrenia patients matched with 27 controls performed a spatial-tapping task in which finger taps were to be produced in synchrony with a regular metronome at six visual targets presented around a virtual circle on a tactile screen. Isochronous (time intervals of 500 ms) and non-isochronous (alternating time intervals of 300/600 ms) auditory sequences were presented. The capacity to produce time intervals accurately versus the ability to synchronize one's own actions (taps) with external events (tones) was measured. Results: Patients with schizophrenia were able to produce the tapping patterns of both isochronous and non-isochronous auditory sequences as accurately as controls, producing inter-response intervals close to the expected intervals of 500 and 900 ms, respectively. However, the synchronization performance revealed significantly more positive asynchrony means (but similar variances) in the patient group than in the control group for both types of auditory sequences. Conclusion: The pattern of results suggests that patients with schizophrenia are able to perceive and produce both simple and complex sequences of time intervals but are impaired in the ability to synchronize their actions with external events. These findings suggest a specific deficit in predictive timing, which may be at the core of early symptoms previously described in schizophrenia.
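For readers who want to reproduce the two outcome measures, a toy computation of mean asynchrony and inter-response intervals might look like the following (synthetic tap times, not the study's data):

```python
import numpy as np

# Hypothetical tap times (s) against a 500 ms isochronous metronome.
metronome = np.arange(0, 10, 0.5)
taps = metronome + np.random.default_rng(2).normal(0.03, 0.02, metronome.size)  # slight lag

asynchronies = taps - metronome   # positive = tap occurs after the tone
iris = np.diff(taps)              # inter-response intervals

print(f"mean asynchrony: {asynchronies.mean() * 1000:.1f} ms")
print(f"mean IRI: {iris.mean() * 1000:.1f} ms (target 500 ms)")
```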
Surface-Constrained Volumetric Brain Registration Using Harmonic Mappings
Joshi, Anand A.; Shattuck, David W.; Thompson, Paul M.; Leahy, Richard M.
2015-01-01
In order to compare anatomical and functional brain imaging data across subjects, the images must first be registered to a common coordinate system in which anatomical features are aligned. Intensity-based volume registration methods can align subcortical structures well, but the variability in sulcal folding patterns typically results in misalignment of the cortical surface. Conversely, surface-based registration using sulcal features can produce excellent cortical alignment but the mapping between brains is restricted to the cortical surface. Here we describe a method for volumetric registration that also produces an accurate one-to-one point correspondence between cortical surfaces. This is achieved by first parameterizing and aligning the cortical surfaces using sulcal landmarks. We then use a constrained harmonic mapping to extend this surface correspondence to the entire cortical volume. Finally, this mapping is refined using an intensity-based warp. We demonstrate the utility of the method by applying it to T1-weighted magnetic resonance images (MRI). We evaluate the performance of our proposed method relative to existing methods that use only intensity information; for this comparison we compute the inter-subject alignment of expert-labeled sub-cortical structures after registration. PMID:18092736
Analytic Evolution of Singular Distribution Amplitudes in QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tandogan Kunkel, Asli
2014-08-01
Distribution amplitudes (DAs) are the basic functions that contain information about the quark momentum. DAs are necessary to describe hard exclusive processes in quantum chromodynamics. We describe a method of analytic evolution of DAs that have singularities, such as nonzero values at the end points of the support region, jumps at some points inside the support region, and cusps. We illustrate the method by applying it to the evolution of a flat (constant) DA and an antisymmetric flat DA, and then use the method for the evolution of the two-photon generalized distribution amplitude. Our approach to DA evolution has advantages over the standard method of expansion in Gegenbauer polynomials [1, 2] and over a straightforward iteration of an initial distribution with the evolution kernel. Expansion in Gegenbauer polynomials requires an infinite number of terms in order to accurately reproduce functions in the vicinity of singular points. Straightforward iteration of an initial distribution produces logarithmically divergent terms at each iteration. In our method the logarithmic singularities are summed from the start, which immediately produces a continuous curve. Afterwards, in order to get precise results, only one or two iterations are needed.
THE EFFECT OF PULMONARY CONGESTION ON THE VENTILATION OF THE LUNGS
Drinker, Cecil K.; Peabody, Francis W.; Blumgart, Herrmann L.
1922-01-01
1. A method is described for producing pulmonary congestion, together with what may be termed a differential spirometer method for studying lung ventilation. 2. The method utilized permits an approximately accurate prediction of degrees of pulmonary edema in the living animal, and suggests avenues of approach for the very difficult problems of pulmonary capillary pressure. 3. It is shown that intravascular blood can encroach markedly upon the pulmonary air space. Although the methods used in these animal experiments do not resemble vital capacity measurements in man, their result is so definite that their applicability to clinical conditions may be considered. 4. The similarity between the experiments described and certain conditions of cardiac decompensation, of which mitral stenosis is the best example, is pointed out. PMID:19868589
Exact exchange-correlation potentials of singlet two-electron systems
NASA Astrophysics Data System (ADS)
Ryabinkin, Ilya G.; Ospadov, Egor; Staroverov, Viktor N.
2017-10-01
We suggest a non-iterative analytic method for constructing the exchange-correlation potential, v_XC(r), of any singlet ground-state two-electron system. The method is based on a convenient formula for v_XC(r) in terms of quantities determined only by the system's electronic wave function, exact or approximate, and is essentially different from the Kohn-Sham inversion technique. When applied to Gaussian-basis-set wave functions, the method yields finite-basis-set approximations to the corresponding basis-set-limit v_XC(r), whereas Kohn-Sham inversion produces physically inappropriate (oscillatory and divergent) potentials. The effectiveness of the procedure is demonstrated by computing accurate exchange-correlation potentials of several two-electron systems (the helium isoelectronic series, H2, H3+) using common ab initio methods and Gaussian basis sets.
Bulk Enthalpy Calculations in the Arc Jet Facility at NASA ARC
NASA Technical Reports Server (NTRS)
Thompson, Corinna S.; Prabhu, Dinesh; Terrazas-Salinas, Imelda; Mach, Jeffrey J.
2011-01-01
The Arc Jet Facilities at NASA Ames Research Center generate test streams with enthalpies ranging from 5 MJ/kg to 25 MJ/kg. The present work describes a rigorous method, based on equilibrium thermodynamics, for calculating the bulk enthalpy of the flow produced in two of these facilities. The motivation for this work is to determine a dimensionally-correct formula for calculating the bulk enthalpy that is at least as accurate as the conventional formulas that are currently used. Unlike previous methods, the new method accounts for the amount of argon that is present in the flow. Comparisons are made with bulk enthalpies computed from an energy balance method. An analysis of primary facility operating parameters and their associated uncertainties is presented in order to further validate the enthalpy calculations reported herein.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christiansen, R.L.; Kalbus, J.S.; Howarth, S.M.
This report documents, demonstrates, evaluates, and provides theoretical justification for methods used to convert experimental data into relative permeability relationships. The report facilitates accurate determination of relative permeabilities of anhydrite rock samples from the Salado Formation at the Waste Isolation Pilot Plant (WIPP). Relative permeability characteristic curves are necessary for WIPP Performance Assessment (PA) predictions of the potential for flow of waste-generated gas from the repository and brine flow into the repository. This report follows Christiansen and Howarth (1995), a comprehensive literature review of methods for measuring relative permeability. It focuses on unsteady-state experiments and describes five methods for obtaining relative permeability relationships from such experiments. Unsteady-state experimental methods were recommended for relative permeability measurements of low-permeability anhydrite rock samples from the Salado Formation because these tests produce accurate relative permeability information and take significantly less time to complete than steady-state tests. The five methods described are: the Welge method, the Johnson-Bossler-Naumann method, the Jones-Roszelle method, the Ramakrishnan-Cappiello method, and the Hagoort method. A summary, an example of the calculations, and a theoretical justification are provided for each of the five methods. Displacements in porous media are numerically simulated for the calculation examples. The simulated production data were processed using the methods, and the relative permeabilities obtained were compared with those input to the numerical model. A variety of operating conditions were simulated to show the sensitivity of production behavior to rock-fluid properties.
Fast and accurate mock catalogue generation for low-mass galaxies
NASA Astrophysics Data System (ADS)
Koda, Jun; Blake, Chris; Beutler, Florian; Kazin, Eyal; Marin, Felipe
2016-06-01
We present an accurate and fast framework for generating mock catalogues including low-mass haloes, based on an implementation of the COmoving Lagrangian Acceleration (COLA) technique. Multiple realisations of mock catalogues are crucial for analyses of large-scale structure, but conventional N-body simulations are too computationally expensive for the production of thousands of realisations. We show that COLA simulations can produce accurate mock catalogues with moderate computational resources for low- to intermediate-mass galaxies in 10^12 M⊙ haloes, both in real and redshift space. COLA simulations have accurate peculiar velocities, without systematic errors in the velocity power spectra for k ≤ 0.15 h Mpc^-1, and with only 3 per cent error for k ≤ 0.2 h Mpc^-1. We use COLA with 10 time steps and a Halo Occupation Distribution to produce 600 mock galaxy catalogues of the WiggleZ Dark Energy Survey. Our parallelized code for efficient generation of accurate halo catalogues is publicly available at github.com/junkoda/cola_halo.
Nakatsuji, Hiroshi
2012-09-18
Just as Newtonian law governs classical physics, the Schrödinger equation (SE) and the relativistic Dirac equation (DE) rule the world of chemistry. So, if we can solve these equations accurately, we can use computation to predict chemistry precisely. However, for approximately 80 years after the discovery of these equations, chemists believed that they could not solve SE and DE for atoms and molecules that included many electrons. This Account reviews ideas developed over the past decade to further the goal of predictive quantum chemistry. Between 2000 and 2005, I discovered a general method of solving the SE and DE accurately. As a first inspiration, I formulated the structure of the exact wave function of the SE in a compact mathematical form. The explicit inclusion of the exact wave function's structure within the variational space allows for the calculation of the exact wave function as a solution of the variational method. Although this process sounds almost impossible, it is indeed possible, and I have published several formulations and applied them to solve the full configuration interaction (CI) with a very small number of variables. However, when I examined analytical solutions for atoms and molecules, the Hamiltonian integrals in their secular equations diverged. This singularity problem occurred in all atoms and molecules because it originates from the singularity of the Coulomb potential in their Hamiltonians. To overcome this problem, I first introduced the inverse SE and then the scaled SE. The latter simpler idea led to immediate and surprisingly accurate solution for the SEs of the hydrogen atom, helium atom, and hydrogen molecule. The free complement (FC) method, also called the free iterative CI (free ICI) method, was efficient for solving the SEs. In the FC method, the basis functions that span the exact wave function are produced by the Hamiltonian of the system and the zeroth-order wave function. These basis functions are called complement functions because they are the elements of the complete functions for the system under consideration. We extended this idea to solve the relativistic DE and applied it to the hydrogen and helium atoms, without observing any problems such as variational collapse. Thereafter, we obtained very accurate solutions of the SE for the ground and excited states of the Born-Oppenheimer (BO) and non-BO states of very small systems like He, H(2)(+), H(2), and their analogues. For larger systems, however, the overlap and Hamiltonian integrals over the complement functions are not always known mathematically (integration difficulty); therefore we formulated the local SE (LSE) method as an integral-free method. Without any integration, the LSE method gave fairly accurate energies and wave functions for small atoms and molecules. We also calculated continuous potential curves of the ground and excited states of small diatomic molecules by introducing the transferable local sampling method. Although the FC-LSE method is simple, the achievement of chemical accuracy in the absolute energy of larger systems remains time-consuming. The development of more efficient methods for the calculations of ordinary molecules would allow researchers to make these calculations more easily.
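A schematic rendering of the central equations in the Account above, in our notation and only indicative; the precise definitions of the scaling function g, the coefficients C_n, and the complement functions are given in the original papers:

```latex
% Scaled Schrodinger equation and the free-complement (free ICI) recursion -- schematic only
g\,(H - E)\,\psi = 0, \qquad
\psi_{n+1} = \bigl[\,1 + C_n\, g\,(H - E_n)\,\bigr]\,\psi_n, \qquad
\psi_{\mathrm{FC}} = \sum_{I} c_I\,\phi_I .
```

Here the φ_I are the independent "complement functions" collected from expanding the recursion, and the coefficients c_I are fixed either variationally (when the integrals are available) or, in the LSE variant, from sampled local Schrödinger equations without integration.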
Field-programmable analogue arrays for the sensorless control of DC motors
NASA Astrophysics Data System (ADS)
Rivera, J.; Dueñas, I.; Ortega, S.; Del Valle, J. L.
2018-02-01
This work presents the analogue implementation of a sensorless controller for direct current motors based on the super-twisting (ST) sliding mode technique, by means of field-programmable analogue arrays (FPAAs). The novelty of this work is twofold: first, the use of the ST algorithm in a sensorless scheme for DC motors, and second, the method of implementing this type of sliding mode controller in FPAAs. The ST algorithm reduces the chattering problem produced by the deliberate use of the sign function in classical sliding mode approaches. On the other hand, the advantages of the implementation method over a digital one are that the controller is not digitally approximated, the controller gains do not need fine tuning, and the implementation does not require analogue-to-digital and digital-to-analogue converter circuits. In addition, the FPAA is a reconfigurable, lower-cost and lower-power-consumption technology. Simulation and experimental results were recorded, in which a more accurate transient response and lower power consumption were obtained with the proposed implementation method compared to a digital implementation. Also, more accurate performance of the DC motor is obtained with the proposed sensorless ST technique compared with a classical sliding mode approach.
Confidence Region of Least Squares Solution for Single-Arc Observations
NASA Astrophysics Data System (ADS)
Principe, G.; Armellin, R.; Lewis, H.
2016-09-01
The total number of active satellites, rocket bodies, and debris larger than 10 cm is currently about 20,000. Considering all resident space objects larger than 1 cm, this rises to an estimated minimum of 500,000 objects. Latest-generation sensor networks will be able to detect small-size objects, producing millions of observations per day. Due to observability constraints, it is likely that long gaps between observations will occur for small objects. This requires determining the space object (SO) orbit and accurately describing the associated uncertainty when observations are acquired on a single arc. The aim of this work is to revisit the classical least squares method taking advantage of the high order Taylor expansions enabled by differential algebra. In particular, the high order expansion of the residuals with respect to the state is used to implement an arbitrary order least squares solver, avoiding the typical approximations of differential correction methods. In addition, the same expansions are used to accurately characterize the confidence region of the solution, going beyond the classical Gaussian distributions. The properties and performance of the proposed method are discussed using optical observations of objects in LEO, HEO, and GEO.
NASA Astrophysics Data System (ADS)
Chu, Chunlei; Stoffa, Paul L.
2012-01-01
Discrete earth models are commonly represented by uniform structured grids. In order to ensure accurate numerical description of all wave components propagating through these uniform grids, the grid size must be determined by the slowest velocity of the entire model. Consequently, high velocity areas are always oversampled, which inevitably increases the computational cost. A practical solution to this problem is to use nonuniform grids. We propose a nonuniform grid implicit spatial finite difference method which utilizes nonuniform grids to obtain high efficiency and relies on implicit operators to achieve high accuracy. We present a simple way of deriving implicit finite difference operators of arbitrary stencil widths on general nonuniform grids for the first and second derivatives and, as a demonstration example, apply these operators to the pseudo-acoustic wave equation in tilted transversely isotropic (TTI) media. We propose an efficient gridding algorithm that can be used to convert uniformly sampled models onto vertically nonuniform grids. We use a 2D TTI salt model to demonstrate its effectiveness and show that the nonuniform grid implicit spatial finite difference method can produce highly accurate seismic modeling results with enhanced efficiency, compared to uniform grid explicit finite difference implementations.
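A small sketch of the kind of derivation described above: finite-difference weights on an arbitrary (nonuniform) stencil obtained by solving the Taylor-moment system. This is a generic explicit-weight construction for illustration, not the authors' implicit TTI operators:

```python
import numpy as np
from math import factorial

def fd_weights(x, x0, order):
    """Weights w such that sum_j w[j] * f(x[j]) approximates f^(order)(x0)
    on an arbitrary (possibly nonuniform) stencil x, via the Taylor/Vandermonde system."""
    x = np.asarray(x, dtype=float)
    n = x.size
    A = np.vander(x - x0, n, increasing=True).T   # A[k, j] = (x_j - x0)**k
    b = np.zeros(n)
    b[order] = factorial(order)                   # moment conditions
    return np.linalg.solve(A, b)

# Second-derivative weights on a nonuniform 3-point stencil around x0 = 0.7
print(fd_weights([0.0, 0.7, 1.8], x0=0.7, order=2))
```

Implicit (compact) operators of the kind used in the paper couple such weights on both sides of the derivative relation and require a banded solve per grid line, trading extra work for higher accuracy per stencil width.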
An alternative method of fabricating sub-micron resolution masks using excimer laser ablation
NASA Astrophysics Data System (ADS)
Hayden, C. J.; Eijkel, J. C. T.; Dalton, C.
2004-06-01
In the work presented here, an excimer laser micromachining system has been used successfully to fabricate high-resolution projection and contact masks. The contact masks were subsequently used to produce chrome-gold circular ac electro-osmotic pump (cACEOP) microelectrode arrays on glass substrates, using a conventional contact photolithography process. The contact masks were produced rapidly (~15 min each) and were found to be accurate to sub-micron resolution, demonstrating an alternative route for mask fabrication. Laser machined masks were also used in a laser-projection system, demonstrating that such fabrication techniques are also suited to projection lithography. The work addresses a need for quick reproduction of high-resolution contact masks, given their rapid degradation when compared to non-contact masks.
Encapsulation of Volatile Citronella Essential Oil by Coacervation: Efficiency and Release Study
NASA Astrophysics Data System (ADS)
Manaf, M. A.; Subuki, I.; Jai, J.; Raslan, R.; Mustapa, A. N.
2018-05-01
The volatile citronella essential oil was encapsulated by simple coacervation and complex coacervation using Arabic gum and gelatin as wall materials. Glutaraldehyde was used as the crosslinking agent. The citronella standard calibration graph, obtained with an R² of 0.9523, was used for the accurate determination of encapsulation efficiency and for the release study. The release kinetics were analysed based on Fick's law of diffusion for polymeric systems, and a linear graph of log fraction released versus log time was constructed to determine the release rate constant, k, and the diffusion exponent, n. Both coacervation methods in the present study produced encapsulation efficiencies of around 94%. The capsules produced by both coacervation processes were discussed based on capsule morphology and release kinetic mechanisms.
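A hedged sketch of the log-log fit described above (reading the "log fraction released versus log time" line as a Korsmeyer-Peppas-type power law); the data values below are invented for illustration:

```python
import numpy as np

# Hypothetical cumulative-release data: time (h) and fraction released Mt/Minf.
t = np.array([0.5, 1, 2, 4, 8, 12])
frac = np.array([0.12, 0.18, 0.26, 0.38, 0.55, 0.66])

# log(Mt/Minf) = log k + n * log t  ->  a straight line in log-log coordinates
n, logk = np.polyfit(np.log10(t), np.log10(frac), 1)
k = 10 ** logk
print(f"release exponent n = {n:.2f}, rate constant k = {k:.3f}")
```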
A vortex-filament and core model for wings with edge vortex separation
NASA Technical Reports Server (NTRS)
Pao, J. L.; Lan, C. E.
1981-01-01
A method for predicting aerodynamic characteristics of slender wings with edge vortex separation was developed. Semiempirical but simple methods were used to determine the initial positions of the free sheet and vortex core. Comparison with available data indicates that: the present method is generally accurate in predicting the lift and induced drag coefficients but the predicted pitching moment is too positive; the spanwise lifting pressure distributions estimated by the one vortex core solution of the present method are significantly better than the results of Mehrotra's method relative to the pressure peak values for the flat delta; the two vortex core system applied to the double delta and strake wing produce overall aerodynamic characteristics which have good agreement with data except for the pitching moment; and the computer time for the present method is about two thirds of that of Mehrotra's method.
A vortex-filament and core model for wings with edge vortex separation
NASA Technical Reports Server (NTRS)
Pao, J. L.; Lan, C. E.
1982-01-01
A vortex filament-vortex core method for predicting aerodynamic characteristics of slender wings with edge vortex separation was developed. Semi-empirical but simple methods were used to determine the initial positions of the free sheet and vortex core. Comparison with available data indicates that: (1) the present method is generally accurate in predicting the lift and induced drag coefficients but the predicted pitching moment is too positive; (2) the spanwise lifting pressure distributions estimated by the one vortex core solution of the present method are significantly better than the results of Mehrotra's method relative to the pressure peak values for the flat delta; (3) the two vortex core system applied to the double delta and strake wings produce overall aerodynamic characteristics which have good agreement with data except for the pitching moment; and (4) the computer time for the present method is about two thirds of that of Mehrotra's method.
New spatial upscaling methods for multi-point measurements: From normal to p-normal
NASA Astrophysics Data System (ADS)
Liu, Feng; Li, Xin
2017-12-01
Careful attention must be given to determining whether the geophysical variables of interest are normally distributed, since the assumption of a normal distribution may not accurately reflect the probability distribution of some variables. As a generalization of the normal distribution, the p-normal distribution and its corresponding maximum likelihood estimation (the least power estimation, LPE) were introduced in upscaling methods for multi-point measurements. Six methods, including three normal-based methods, i.e., arithmetic average, least square estimation, block kriging, and three p-normal-based methods, i.e., LPE, geostatistics LPE and inverse distance weighted LPE are compared in two types of experiments: a synthetic experiment to evaluate the performance of the upscaling methods in terms of accuracy, stability and robustness, and a real-world experiment to produce real-world upscaling estimates using soil moisture data obtained from multi-scale observations. The results show that the p-normal-based methods produced lower mean absolute errors and outperformed the other techniques due to their universality and robustness. We conclude that introducing appropriate statistical parameters into an upscaling strategy can substantially improve the estimation, especially if the raw measurements are disorganized; however, further investigation is required to determine which parameter is the most effective among variance, spatial correlation information and parameter p.
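As a minimal illustration of the least power estimation (LPE) idea in its simplest form, a location (block-mean) estimate that minimizes the sum of p-th power residuals; the data and names are hypothetical, and the paper's geostatistical and inverse-distance-weighted LPE variants add spatial weighting on top of this:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lpe_location(x, p):
    """Least-power estimate of location: argmin_mu sum_i |x_i - mu|**p.
    p = 2 recovers the mean, p = 1 the median."""
    obj = lambda mu: np.sum(np.abs(x - mu) ** p)
    return minimize_scalar(obj, bounds=(x.min(), x.max()), method="bounded").x

x = np.array([0.21, 0.24, 0.22, 0.26, 0.40])   # hypothetical point-scale soil moisture values
for p in (1.0, 1.5, 2.0):
    print(p, round(lpe_location(x, p), 4))
```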
FMLRC: Hybrid long read error correction using an FM-index.
Wang, Jeremy R; Holt, James; McMillan, Leonard; Jones, Corbin D
2018-02-09
Long read sequencing is changing the landscape of genomic research, especially de novo assembly. Despite the high error rate inherent to long read technologies, increased read lengths dramatically improve the continuity and accuracy of genome assemblies. However, the cost and throughput of these technologies limits their application to complex genomes. One solution is to decrease the cost and time to assemble novel genomes by leveraging "hybrid" assemblies that use long reads for scaffolding and short reads for accuracy. We describe a novel method leveraging a multi-string Burrows-Wheeler Transform with auxiliary FM-index to correct errors in long read sequences using a set of complementary short reads. We demonstrate that our method efficiently produces significantly more high quality corrected sequence than existing hybrid error-correction methods. We also show that our method produces more contiguous assemblies, in many cases, than existing state-of-the-art hybrid and long-read only de novo assembly methods. Our method accurately corrects long read sequence data using complementary short reads. We demonstrate higher total throughput of corrected long reads and a corresponding increase in contiguity of the resulting de novo assemblies. Improved throughput and computational efficiency than existing methods will help better economically utilize emerging long read sequencing technologies.
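For orientation only, the following toy fragment shows the FM-index backward-search primitive that underlies BWT-based methods such as the one described; it is not the FMLRC algorithm itself (which builds a multi-string BWT of the short reads and uses it to correct long reads), and the naive counting here stands in for proper rank/occurrence structures:

```python
def bwt(text):
    """Burrows-Wheeler transform via a naive suffix sort; fine for a small sketch."""
    text += "$"
    sa = sorted(range(len(text)), key=lambda i: text[i:])
    return "".join(text[i - 1] for i in sa)

def fm_count(bwt_str, pattern):
    """Count occurrences of pattern using FM-index backward search."""
    chars = sorted(set(bwt_str))
    C, total = {}, 0
    for c in chars:                       # C[c] = #characters strictly smaller than c
        C[c] = total
        total += bwt_str.count(c)
    occ = lambda c, i: bwt_str[:i].count(c)   # Occ(c, i), naive here
    lo, hi = 0, len(bwt_str)
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + occ(c, lo)
        hi = C[c] + occ(c, hi)
        if lo >= hi:
            return 0
    return hi - lo

print(fm_count(bwt("ACGTACGTTACG"), "ACG"))   # -> 3
```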
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahaney, W.C.; Boyer, M.G.
1986-08-01
Microflora (bacteria and fungi) distributions in several paleosols from Mount Kenya, East Africa, provide important information about contamination of buried soil horizons dated by radiocarbon. High counts of bacteria and fungi in buried soils provide evidence for contamination by plant root effects or ground water movement. Profiles with decreasing counts versus depth appear to produce internally consistent and accurate radiocarbon dates. Profiles with disjunct or bimodal distributions of microflora at various depths produce internally inconsistent chronological sequences of radiocarbon-dated buried surfaces. Preliminary results suggest that numbers up to 5 × 10² g⁻¹ for bacteria in buried A horizons do not appear to affect the validity of ¹⁴C dates. Beyond this threshold value, contamination appears to produce younger dates, the difference between true age and ¹⁴C age increasing with the amount of microflora contamination.
Liu, Rui; Milkie, Daniel E; Kerlin, Aaron; MacLennan, Bryan; Ji, Na
2014-01-27
In traditional zonal wavefront sensing for adaptive optics, after local wavefront gradients are obtained, the entire wavefront can be calculated by assuming that the wavefront is a continuous surface. Such an approach leads to sub-optimal performance in reconstructing wavefronts which are either discontinuous or undersampled by the zonal wavefront sensor. Here, we report a new method to reconstruct the wavefront by directly measuring local wavefront phases in parallel using a multidither coherent optical adaptive technique. This method determines the relative phases of each pupil segment independently, and thus produces an accurate wavefront even for discontinuous wavefronts. We implemented this method in an adaptive optical two-photon fluorescence microscope and demonstrated its superior performance in correcting large or discontinuous aberrations.
Verification of a ground-based method for simulating high-altitude, supersonic flight conditions
NASA Astrophysics Data System (ADS)
Zhou, Xuewen; Xu, Jian; Lv, Shuiyan
Ground-based methods for accurately representing high-altitude, high-speed flight conditions have been an important research topic in the aerospace field. Based on an analysis of the requirements for high-altitude supersonic flight tests, a ground-based test bed was designed combining a Laval nozzle, as often found in wind tunnels, with a rocket sled system. Sled tests were used to verify the performance of the test bed. The test results indicated that the test bed produced a uniform flow field with a static pressure and density equivalent to atmospheric conditions at an altitude of 13-15 km and at a flow velocity of approximately Mach 2.4. This test method has the advantages of accuracy, fewer experimental limitations, and reusability.
Computer controlled fluorometer device and method of operating same
Kolber, Z.; Falkowski, P.
1990-07-17
A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means. 13 figs.
Computer controlled fluorometer device and method of operating same
Kolber, Zbigniew; Falkowski, Paul
1990-01-01
A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means.
Reducing misfocus-related motion artefacts in laser speckle contrast imaging.
Ringuette, Dene; Sigal, Iliya; Gad, Raanan; Levi, Ofer
2015-01-01
Laser Speckle Contrast Imaging (LSCI) is a flexible, easy-to-implement technique for measuring blood flow speeds in-vivo. In order to obtain reliable quantitative data from LSCI the object must remain in the focal plane of the imaging system for the duration of the measurement session. However, since LSCI suffers from inherent frame-to-frame noise, it often requires a moving average filter to produce quantitative results. This frame-to-frame noise also makes the implementation of rapid autofocus system challenging. In this work, we demonstrate an autofocus method and system based on a novel measure of misfocus which serves as an accurate and noise-robust feedback mechanism. This measure of misfocus is shown to enable the localization of best focus with sub-depth-of-field sensitivity, yielding more accurate estimates of blood flow speeds and blood vessel diameters.
Accurate integration over atomic regions bounded by zero-flux surfaces.
Polestshuk, Pavel M
2013-01-30
An approach for integration over a region bounded by a zero-flux surface is described. This approach, based on a surface triangulation technique, is efficiently implemented in a newly developed program, TWOE. The method is tested on several atomic properties, including the source function. TWOE results are compared with those produced by well-known existing programs. Absolute errors in the computed atomic properties are shown to range typically from 10(-6) to 10(-5) au. The demonstrative examples show that the present implementation converges well in the computed atomic properties with increasing angular grid size and allows highly accurate data to be obtained even in the most difficult cases. It is believed that the developed program can serve as a bridgehead for implementing atomic partitioning of any desired molecular property with high accuracy. Copyright © 2012 Wiley Periodicals, Inc.
Metrics for quantifying antimicrobial use in beef feedlots
Benedict, Katharine M.; Gow, Sheryl P.; Reid-Smith, Richard J.; Booker, Calvin W.; Morley, Paul S.
2012-01-01
Accurate antimicrobial drug use data are needed to enlighten discussions regarding the impact of antimicrobial drug use in agriculture. The primary objective of this study was to investigate the perceived accuracy and clarity of different methods for reporting antimicrobial drug use information collected regarding beef feedlots. Producers, veterinarians, industry representatives, public health officials, and other knowledgeable beef industry leaders were invited to complete a web-based survey. A total of 156 participants in 33 US states, 4 Canadian provinces, and 8 other countries completed the survey. No single metric was considered universally optimal for all use circumstances or for all audiences. To effectively communicate antimicrobial drug use data, evaluation of the target audience is critical to presenting the information. Metrics that are most accurate need to be carefully and repeatedly explained to the audience. PMID:23372190
A vector scanning processing technique for pulsed laser velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Edwards, Robert V.
1989-01-01
Pulsed-laser-sheet velocimetry yields two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high-precision (1-percent) velocity estimates, but can require hours of processing time on specialized array processors. Sometimes, however, a less accurate (about 5 percent) data-reduction technique which also gives unambiguous velocity vector information is acceptable. Here, a direct space-domain processing technique is described and shown to be far superior to previous methods in achieving these objectives. It uses a novel data coding and reduction technique and has no 180-deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 min on an 80386-based PC, producing a two-dimensional velocity-vector map of the flowfield. Pulsed-laser velocimetry data can thus be reduced quickly and reasonably accurately, without specialized array processing hardware.
NASA Astrophysics Data System (ADS)
Webb, Mathew A.; Hall, Andrew; Kidd, Darren; Minansy, Budiman
2016-05-01
Assessment of local spatial climatic variability is important in the planning of planting locations for horticultural crops. This study investigated three regression-based calibration methods (i.e. traditional versus two optimized methods) to relate short-term 12-month data series from 170 temperature loggers and 4 weather station sites with data series from nearby long-term Australian Bureau of Meteorology climate stations. The techniques trialled to interpolate climatic temperature variables, such as frost risk, growing degree days (GDDs) and chill hours, were regression kriging (RK), regression trees (RTs) and random forests (RFs). All three calibration methods produced accurate results, with the RK-based calibration method delivering the most accurate validation measures: coefficients of determination (R²) of 0.92, 0.97 and 0.95 and root-mean-square errors of 1.30, 0.80 and 1.31 °C, for daily minimum, daily maximum and hourly temperatures, respectively. Compared with the traditional method of calibration using direct linear regression between short-term and long-term stations, the RK-based calibration method improved R² and reduced root-mean-square error (RMSE) by at least 5 % and 0.47 °C for daily minimum temperature, 1 % and 0.23 °C for daily maximum temperature and 3 % and 0.33 °C for hourly temperature. Spatial modelling indicated insignificant differences between the interpolation methods, with the RK technique tending to be the slightly better method due to the high degree of spatial autocorrelation between logger sites.
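The "traditional" baseline referred to above, direct linear regression of a short-term logger against a nearby long-term station with R² and RMSE reported, can be sketched as follows (synthetic numbers; the RK/RT/RF methods add spatial covariates and interpolation on top of such a calibration):

```python
import numpy as np

# Hypothetical paired daily-minimum temperatures: long-term station vs short-term logger.
station = np.array([2.1, 4.5, 7.8, 10.2, 12.9, 6.3, 3.4])
logger = np.array([2.9, 5.0, 8.6, 10.9, 13.8, 7.1, 4.0])

slope, intercept = np.polyfit(station, logger, 1)   # simple linear calibration
pred = slope * station + intercept

rmse = np.sqrt(np.mean((logger - pred) ** 2))
r2 = 1 - np.sum((logger - pred) ** 2) / np.sum((logger - logger.mean()) ** 2)
print(f"R^2 = {r2:.3f}, RMSE = {rmse:.2f} degC")
```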
Accurate and reproducible functional maps in 127 human cell types via 2D genome segmentation
Hardison, Ross C.
2017-01-01
The Roadmap Epigenomics Consortium has published whole-genome functional annotation maps in 127 human cell types by integrating data from studies of multiple epigenetic marks. These maps have been widely used for studying gene regulation in cell type-specific contexts and predicting the functional impact of DNA mutations on disease. Here, we present a new map of functional elements produced by applying a method called IDEAS on the same data. The method has several unique advantages and outperforms existing methods, including that used by the Roadmap Epigenomics Consortium. Using five categories of independent experimental datasets, we compared the IDEAS and Roadmap Epigenomics maps. While the overall concordance between the two maps is high, the maps differ substantially in the prediction details and in their consistency of annotation of a given genomic position across cell types. The annotation from IDEAS is uniformly more accurate than the Roadmap Epigenomics annotation and the improvement is substantial based on several criteria. We further introduce a pipeline that improves the reproducibility of functional annotation maps. Thus, we provide a high-quality map of candidate functional regions across 127 human cell types and compare the quality of different annotation methods in order to facilitate biomedical research in epigenomics. PMID:28973456
NASA Astrophysics Data System (ADS)
Vyas, N.; Sammons, R. L.; Addison, O.; Dehghani, H.; Walmsley, A. D.
2016-09-01
Biofilm accumulation on biomaterial surfaces is a major health concern and significant research efforts are directed towards producing biofilm resistant surfaces and developing biofilm removal techniques. To accurately evaluate biofilm growth and disruption on surfaces, accurate methods which give quantitative information on biofilm area are needed, as current methods are indirect and inaccurate. We demonstrate the use of machine learning algorithms to segment biofilm from scanning electron microscopy images. A case study showing disruption of biofilm from rough dental implant surfaces using cavitation bubbles from an ultrasonic scaler is used to validate the imaging and analysis protocol developed. Streptococcus mutans biofilm was disrupted from sandblasted, acid etched (SLA) Ti discs and polished Ti discs. Significant biofilm removal occurred due to cavitation from ultrasonic scaling (p < 0.001). The mean sensitivity and specificity values for segmentation of the SLA surface images were 0.80 ± 0.18 and 0.62 ± 0.20 respectively and 0.74 ± 0.13 and 0.86 ± 0.09 respectively for polished surfaces. Cavitation has potential to be used as a novel way to clean dental implants. This imaging and analysis method will be of value to other researchers and manufacturers wishing to study biofilm growth and removal.
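The sensitivity and specificity figures quoted above are pixel-wise comparisons of a predicted segmentation against ground truth; a minimal sketch of that computation (toy masks, not the study's SEM data) is:

```python
import numpy as np

def sensitivity_specificity(pred, truth):
    """Pixel-wise sensitivity and specificity of a binary segmentation mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    return tp / (tp + fn), tn / (tn + fp)

# Toy 2D masks standing in for a segmented SEM image and its ground truth
truth = np.zeros((4, 4), dtype=int); truth[1:3, 1:3] = 1
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:4] = 1
print(sensitivity_specificity(pred, truth))   # -> (1.0, 0.833...)
```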
Yang, Xing-Jian; Dang, Zhi; Zhang, Fang-Li; Lin, Zhao-Ying; Zou, Meng-Yao; Tao, Xue-Qin; Lu, Gui-Ning
2013-01-01
This study describes the development of a method based on Soxhlet extraction combined with high-performance liquid chromatography (Soxhlet-HPLC) for the accurate detection of BDE-209 in soils. The solvent effect of working standard solutions in HPLC is discussed. Results showed that a 1 : 1 mixture of methanol and acetone was the optimal condition, which could totally dissolve the BDE-209 in environmental samples and avoid the decrease of the peak area and the peak deformation of BDE-209 in HPLC. A preliminary experiment was conducted on configured grassland soil (1 μg/g) to validate the feasibility of the method. The method produced reliable reproducibility (simulated soils, n = 4, RSD 1.0%) and was further verified by the analysis of e-waste-contaminated soils (RSD range 5.9-11.4%). The contamination level of BDE-209 at the burning site was consistent with the previous study of Longtang town but lower than Guiyu town, and the higher concentration of BDE-209 in the paddy field mainly resulted from the long-standing disassembling area nearby. This accurate and fast method was successfully developed to extract and analyze BDE-209 in soil samples, showing its potential for replacing GC in the determination of BDE-209 in soil samples. PMID:24302876
Lattice Boltzmann Method of Different BGA Orientations on I-Type Dispensing Method
Gan, Z. L.; Ishak, M. H. H.; Abdullah, M. Z.; Khor, Soon Fuat
2016-01-01
This paper studies the three-dimensional (3D) simulation of fluid flow through a ball grid array (BGA) to replicate the real underfill encapsulation process. The effect of different solder bump arrangements of the BGA on the flow front, pressure, and velocity of the fluid is investigated. The flow front, pressure, and velocity at different time intervals are determined and analyzed for potential problems relating to solder bump damage. The simulation results from the Lattice Boltzmann Method (LBM) code are validated against experimental findings as well as a conventional Finite Volume Method (FVM) code to ensure a highly accurate simulation setup. Based on the findings, good agreement can be seen between the LBM and FVM simulations as well as the experimental observations. It was shown that only LBM is capable of capturing micro-void formation. This study also shows an increasing trend in fluid filling time for BGAs with perimeter, middle-empty, and full orientations. The perimeter orientation has higher fluid pressure at the middle region of the BGA surface compared to the middle-empty and full orientations. This research sheds new light on highly accurate simulation of the encapsulation process using LBM and helps to further increase the reliability of the packages produced. PMID:27454872
Hughes, Paul; Deng, Wenjie; Olson, Scott C; Coombs, Robert W; Chung, Michael H; Frenkel, Lisa M
2016-03-01
Accurate analysis of minor populations of drug-resistant HIV requires analysis of a sufficient number of viral templates. We assessed the effect of experimental conditions on the analysis of HIV pol 454 pyrosequences generated from plasma using (1) the "Insertion-deletion (indel) and Carry Forward Correction" (ICC) pipeline, which clusters sequence reads using a nonsubstitution approach and can correct for indels and carry forward errors, and (2) the "Primer Identification (ID)" method, which facilitates construction of a consensus sequence to correct for sequencing errors and allelic skewing. The Primer ID and ICC methods produced similar estimates of viral diversity, but differed in the number of sequence variants generated. Sequence preparation for ICC was comparably simple, but was limited by an inability to assess the number of templates analyzed and allelic skewing. The more costly Primer ID method corrected for allelic skewing and provided the number of viral templates analyzed, which revealed that amplifiable HIV templates varied across specimens and did not correlate with clinical viral load. This latter observation highlights the value of the Primer ID method, which by determining the number of templates amplified, enables more accurate assessment of minority species in the virus population, which may be relevant to prescribing effective antiretroviral therapy.
Spectro-photometric determinations of Mn, Fe and Cu in aluminum master alloys
NASA Astrophysics Data System (ADS)
Rehan; Naveed, A.; Shan, A.; Afzal, M.; Saleem, J.; Noshad, M. A.
2016-08-01
Highly reliable, fast, and cost-effective spectrophotometric methods have been developed for the determination of Mn, Fe, and Cu in aluminum master alloys, based on calibration curves prepared from laboratory standards. The calibration curves are designed to give maximum sensitivity and minimum instrumental error (Mn 1 mg/100 ml-2 mg/100 ml, Fe 0.01 mg/100 ml-0.2 mg/100 ml, and Cu 2 mg/100 ml-10 mg/100 ml). The developed spectrophotometric methods produce accurate results when analyzing Mn, Fe, and Cu in certified reference materials. In particular, these methods are suitable for all types of Al-Mn, Al-Fe, and Al-Cu master alloys (5%, 10%, 50%, etc.). Moreover, the sampling practices suggested herein include a reasonable amount of analytical sample, which truly represents the whole lot of a particular master alloy. A successive dilution technique was utilized to bring samples within the calibration curve range. Furthermore, the developed methods were also found suitable for the analysis of the said elements in ordinary aluminum alloys. However, it was observed that Cu showed considerable interference with Fe; the latter may not be accurately measured in the presence of Cu greater than 0.01%.
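A minimal sketch of the calibration-curve workflow described above: fit a Beer-Lambert-type line to laboratory standards and invert it for an unknown sample. The concentrations and absorbances below are invented for illustration:

```python
import numpy as np

# Hypothetical calibration standards for Fe (mg/100 ml) and their measured absorbances
conc = np.array([0.01, 0.05, 0.10, 0.15, 0.20])
absorbance = np.array([0.012, 0.061, 0.118, 0.181, 0.239])

slope, intercept = np.polyfit(conc, absorbance, 1)   # Beer-Lambert: A = m*c + b

def concentration(a_sample):
    """Invert the calibration line for an unknown sample absorbance."""
    return (a_sample - intercept) / slope

print(f"sample at A = 0.150 -> {concentration(0.150):.3f} mg/100 ml")
```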
The design of a turboshaft speed governor using modern control techniques
NASA Technical Reports Server (NTRS)
Delosreyes, G.; Gouchoe, D. R.
1986-01-01
The objectives of this program were: to verify the model of off schedule compressor variable geometry in the T700 turboshaft engine nonlinear model; to evaluate the use of the pseudo-random binary noise (PRBN) technique for obtaining engine frequency response data; and to design a high performance power turbine speed governor using modern control methods. Reduction of T700 engine test data generated at NASA-Lewis indicated that the off schedule variable geometry effects were accurate as modeled. Analysis also showed that the PRBN technique combined with the maximum likelihood model identification method produced a Bode frequency response that was as accurate as the response obtained from standard sinewave testing methods. The frequency response verified the accuracy of linear models consisting of engine partial derivatives and used for design. A power turbine governor was designed using the Linear Quadratic Regulator (LQR) method of full state feedback control. A Kalman filter observer was used to estimate helicopter main rotor blade velocity. Compared to the baseline T700 power turbine speed governor, the LQR governor reduced droop up to 25 percent for a 490 shaft horsepower transient in 0.1 sec simulating a wind gust, and up to 85 percent for a 700 shaft horsepower transient in 0.5 sec simulating a large collective pitch angle transient.
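As a hedged illustration of the LQR design step mentioned above (full-state feedback from a solved Riccati equation), the following fragment uses SciPy on an invented two-state plant; the matrices are illustrative, not the T700 partial-derivative model:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical linearized plant (e.g. power-turbine speed and one rotor state)
A = np.array([[-1.0, 0.5],
              [0.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # state weighting
R = np.array([[0.1]])      # control weighting

P = solve_continuous_are(A, B, Q, R)       # solve the continuous-time Riccati equation
K = np.linalg.solve(R, B.T @ P)            # LQR full-state feedback gain, u = -K x
print(K)
```

In the program described, states that cannot be measured directly (such as main rotor blade velocity) would be supplied to this feedback law by the Kalman filter observer rather than measured.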
Estimating net solar radiation using Landsat Thematic Mapper and digital elevation data
NASA Technical Reports Server (NTRS)
Dubayah, R.
1992-01-01
A radiative transfer algorithm is combined with digital elevation and satellite reflectance data to model spatial variability in net solar radiation at fine spatial resolution. The method is applied to the tall-grass prairie of the 16 x 16 sq km FIFE site (First ISLSCP Field Experiment) of the International Satellite Land Surface Climatology Project. Spectral reflectances as measured by the Landsat Thematic Mapper (TM) are corrected for atmospheric and topographic effects using field measurements and accurate 30-m digital elevation data in a detailed model of atmosphere-surface interaction. The spectral reflectances are then integrated to produce estimates of surface albedo in the range 0.3-3.0 microns. This map of albedo is used in an atmospheric and topographic radiative transfer model to produce a map of net solar radiation. A map of apparent net solar radiation is also derived using only the TM reflectance data, uncorrected for topography, and the average field-measured downwelling solar irradiance. Comparison with field measurements at 10 sites on the prairie shows that the topographically derived radiation map accurately captures the spatial variability in net solar radiation, but the apparent map does not.
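A minimal sketch of the topographic part of such a net solar radiation estimate: the direct-beam irradiance is scaled by the local solar incidence angle on the slope and reduced by the surface albedo. Diffuse-sky and terrain-reflected terms, and the atmospheric correction used in the study, are omitted, and the numbers are illustrative.

```python
# Direct-beam net shortwave radiation on a tilted surface (simplified sketch).
import numpy as np

def incidence_cosine(slope_deg, aspect_deg, sun_zenith_deg, sun_azimuth_deg):
    """Cosine of the angle between the sun and the surface normal of a slope."""
    s = np.radians(slope_deg)
    a = np.radians(aspect_deg)
    z = np.radians(sun_zenith_deg)
    phi = np.radians(sun_azimuth_deg)
    return np.cos(z) * np.cos(s) + np.sin(z) * np.sin(s) * np.cos(phi - a)

def net_solar(albedo, direct_irradiance, slope_deg, aspect_deg,
              sun_zenith_deg, sun_azimuth_deg):
    """Net shortwave radiation (W m^-2) on a tilted surface, direct beam only."""
    cos_i = np.clip(incidence_cosine(slope_deg, aspect_deg,
                                     sun_zenith_deg, sun_azimuth_deg), 0.0, None)
    cos_z = np.cos(np.radians(sun_zenith_deg))
    return (1.0 - albedo) * direct_irradiance * cos_i / cos_z

# Example: albedo 0.20, 10-degree south-facing slope, sun 30 degrees from zenith.
print(net_solar(albedo=0.20, direct_irradiance=800.0, slope_deg=10.0,
                aspect_deg=180.0, sun_zenith_deg=30.0, sun_azimuth_deg=180.0))
```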
NASA Astrophysics Data System (ADS)
Chinowsky, Timothy M.; Yee, Sinclair S.
2002-02-01
Surface plasmon resonance (SPR) affinity sensing, the problem of bulk refractive index (RI) interference in SPR sensing, and a sensor developed to overcome this problem are briefly reviewed. The sensor uses a design based on Texas Instruments' Spreeta SPR sensor to simultaneously measure both bulk and surface RI. The bulk RI measurement is then used to compensate the surface measurement and remove the effects of bulk RI interference. To achieve accurate compensation, robust data analysis and calibration techniques are necessary. Simple linear data analysis techniques derived from measurements of the sensor response were found to provide a versatile, low noise method for extracting measurements of bulk and surface refractive index from the raw sensor data. Automatic calibration using RI gradients was used to correct the linear estimates, enabling the sensor to produce accurate data even when the sensor has a complicated nonlinear response which varies with time. The calibration procedure is described, and the factors influencing calibration accuracy are discussed. Data analysis and calibration principles are illustrated with an experiment in which sucrose and detergent solutions are used to produce changes in bulk and surface RI, respectively.
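A minimal sketch of the linear-estimation idea described above: assume two raw sensor signals respond linearly to bulk and surface refractive index (RI), calibrate the 2x2 response matrix from known calibration steps by least squares, and invert it so the surface measurement is compensated for bulk-RI interference. The channel responses and calibration steps below are made-up numbers, not Spreeta data.

```python
import numpy as np

# Calibration: columns are known (bulk RI, surface RI) step changes,
# rows of R_cal are the measured raw responses of the two channels to each step.
dRI_cal = np.array([[1.0e-3, 0.0,    5.0e-4],   # bulk RI steps
                    [0.0,    2.0e-4, 1.0e-4]])  # surface RI steps
R_cal = np.array([[3.1, 0.45, 1.62],            # channel 1 responses
                  [2.0, 0.60, 1.30]])           # channel 2 responses

# Least-squares fit of the response matrix M in  R = M @ dRI.
M, *_ = np.linalg.lstsq(dRI_cal.T, R_cal.T, rcond=None)
M = M.T

# Measurement: invert the linear model to separate bulk and surface RI changes;
# the compensated surface signal is then free of bulk-RI interference.
raw = np.array([1.9, 1.4])
bulk_dRI, surface_dRI = np.linalg.solve(M, raw)
print(f"bulk dRI = {bulk_dRI:.2e}, surface dRI = {surface_dRI:.2e}")
```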
NASA Astrophysics Data System (ADS)
Eshghi, M.; Alesheikh, A. A.
2015-12-01
Recent advances in spatial data collection technologies and online services have dramatically increased the contribution of ordinary people to producing, sharing, and using geographic information. The collection of spatial data by citizens, as well as its dissemination on the internet, has led to a huge source of spatial data termed Volunteered Geographic Information (VGI) by Mike Goodchild. Although VGI has produced previously unavailable data assets and enriched existing ones, its quality can be highly variable and open to challenge. This presents several challenges to potential end users who are concerned about the validation and quality assurance of the data that are collected. Almost all existing research on identifying accurate VGI data within existing VGI datasets is based on either (a) comparing the VGI data with accurate official data, or (b) in cases where there is no access to correct data, looking for an alternative way to determine the quality of the VGI data. In this paper, an attempt has been made to develop a useful method to reach this goal. In this process, the positional accuracy of linear features in OSM data for Tehran, Iran, has been analyzed.
Novel harmonic regularization approach for variable selection in Cox's proportional hazards model.
Chu, Ge-Jin; Liang, Yong; Wang, Jia-Xuan
2014-01-01
Variable selection is an important issue in regression and a number of variable selection methods have been proposed involving nonconvex penalty functions. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox's proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on the artificial datasets and four real microarray gene expression datasets, such as real diffuse large B-cell lymphoma (DCBCL), the lung cancer, and the AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso series methods.
Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule
NASA Astrophysics Data System (ADS)
Jin, Qinian; Wang, Wei
2018-03-01
The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
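A minimal sketch of the iteratively regularized Gauss-Newton (IRGN) iteration itself, on a toy nonlinear ill-posed problem: x_{k+1} = x_k + (J_k^T J_k + a_k I)^{-1} [J_k^T (y - F(x_k)) + a_k (x_0 - x_k)], with a geometrically decaying regularization parameter a_k. The forward operator, noise level, and iteration count below are illustrative; the paper's heuristic rule for choosing the stopping index without knowledge of the noise level is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
A = np.exp(-0.3 * np.abs(np.subtract.outer(np.arange(n), np.arange(n))))  # smoothing kernel

def F(x):                       # mildly nonlinear, ill-posed forward map (toy example)
    return A @ np.sin(x)

def jacobian(x):
    return A @ np.diag(np.cos(x))

x_true = np.sin(np.linspace(0, np.pi, n))
y = F(x_true) + 1e-3 * rng.standard_normal(n)   # noisy data

x0 = np.zeros(n)                # a priori guess
x = x0.copy()
alpha, q = 1.0, 0.5             # a_k = alpha * q**k
for k in range(15):
    J = jacobian(x)
    a_k = alpha * q**k
    lhs = J.T @ J + a_k * np.eye(n)
    rhs = J.T @ (y - F(x)) + a_k * (x0 - x)
    x = x + np.linalg.solve(lhs, rhs)
    print(k, np.linalg.norm(y - F(x)))          # residual; a stopping rule picks k
```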
Spectroscopic investigations of microwave generated plasmas
NASA Technical Reports Server (NTRS)
Hawley, Martin C.; Haraburda, Scott S.; Dinkel, Duane W.
1991-01-01
The study deals with the plasma behavior as applied to spacecraft propulsion from the perspective of obtaining better design and modeling capabilities. The general theory of spectroscopy is reviewed, and existing methods for converting emission-line intensities into such quantities as temperatures and densities are outlined. Attention is focused on the single-atomic-line and two-line radiance ratio methods, atomic Boltzmann plot, and species concentration. Electronic temperatures for a helium plasma are determined as a function of pressure and a gas-flow rate using these methods, and the concentrations of ions and electrons are predicted from the Saha-Eggert equations using the sets of temperatures obtained as a function of the gas-flow rate. It is observed that the atomic Boltzmann method produces more reliable results for the electronic temperature, while the results obtained from the single-line method reflect the electron temperatures accurately.
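A minimal sketch of the atomic Boltzmann-plot method mentioned above: for several emission lines, ln(I·λ/(g·A)) is linear in the upper-level energy E_u with slope -1/(k_B·T), so a straight-line fit yields the electronic excitation temperature. The line constants and intensities below are synthetic, generated from an assumed 12,000 K plasma, not real helium line data.

```python
import numpy as np

k_B = 8.617333e-5          # Boltzmann constant, eV/K
T_true = 12000.0           # assumed temperature used only to synthesize intensities

lam  = np.array([501.6, 587.6, 667.8, 706.5])      # wavelengths, nm (placeholders)
g    = np.array([3.0, 15.0, 3.0, 9.0])             # statistical weights
A_ul = np.array([1.3e7, 7.1e7, 6.4e7, 2.8e7])      # transition probabilities, s^-1
E_u  = np.array([23.09, 23.07, 23.74, 22.72])      # upper-level energies, eV

# Synthetic line intensities I ~ (g*A/lambda) * exp(-E_u / (k_B*T)) plus noise.
rng = np.random.default_rng(1)
I = (g * A_ul / lam) * np.exp(-E_u / (k_B * T_true)) * (1 + 0.05 * rng.standard_normal(4))

ylog = np.log(I * lam / (g * A_ul))
slope, intercept = np.polyfit(E_u, ylog, 1)        # slope = -1/(k_B*T)
print(f"Recovered excitation temperature ~ {-1.0 / (k_B * slope):.0f} K")
```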
The Multiscale Robin Coupled Method for flows in porous media
NASA Astrophysics Data System (ADS)
Guiraldello, Rafael T.; Ausas, Roberto F.; Sousa, Fabricio S.; Pereira, Felipe; Buscaglia, Gustavo C.
2018-02-01
A multiscale mixed method aiming at the accurate approximation of velocity and pressure fields in heterogeneous porous media is proposed. The procedure is based on a new domain decomposition method in which the local problems are subject to Robin boundary conditions. The domain decomposition procedure is defined in terms of two independent spaces on the skeleton of the decomposition, corresponding to interface pressures and fluxes, that can be chosen with great flexibility to accommodate local features of the underlying permeability fields. The well-posedness of the new domain decomposition procedure is established and its connection with the method of Douglas et al. (1993) [12], is identified, also allowing us to reinterpret the known procedure as an optimized Schwarz (or Two-Lagrange-Multiplier) method. The multiscale property of the new domain decomposition method is indicated, and its relation with the Multiscale Mortar Mixed Finite Element Method (MMMFEM) and the Multiscale Hybrid-Mixed (MHM) Finite Element Method is discussed. Numerical simulations are presented aiming at illustrating several features of the new method. Initially we illustrate the possibility of switching from MMMFEM to MHM by suitably varying the Robin condition parameter in the new multiscale method. Then we turn our attention to realistic flows in high-contrast, channelized porous formations. We show that for a range of values of the Robin condition parameter our method provides better approximations for pressure and velocity than those computed with either the MMMFEM and the MHM. This is an indication that our method has the potential to produce more accurate velocity fields in the presence of rough, realistic permeability fields of petroleum reservoirs.
Parameswaran, Vidhya; Anilkumar, S; Lylajam, S; Rajesh, C; Narayan, Vivek
2016-01-01
This in vitro study compared the shade matching abilities of an intraoral spectrophotometer and the conventional visual method using two shade guides. The results of previous investigations between color perceived by human observers and color assessed by instruments have been inconclusive. The objectives were to determine the accuracies and interrater agreement of both methods and the effectiveness of two shade guides with either method. In the visual method, 10 examiners with normal color vision matched target control shade tabs taken from the two shade guides (VITAPAN Classical™ and VITAPAN 3D Master™) with other full sets of the respective shade guides. Each tab was matched 3 times to determine the repeatability of the visual examiners. The spectrophotometric shade matching was performed by two independent examiners using an intraoral spectrophotometer (VITA Easyshade™) with five repetitions for each tab. Results revealed that the visual method had greater accuracy than the spectrophotometer. The spectrophotometer, however, exhibited significantly better interrater agreement compared with the visual method. While the VITAPAN Classical shade guide was more accurate with the spectrophotometer, the VITAPAN 3D Master shade guide proved better with the visual method. This in vitro study clearly delineates the advantages and limitations of both methods. There were significant differences between the methods, with the visual method producing more accurate results than the spectrophotometric method. The spectrophotometer showed far better interrater agreement scores irrespective of the shade guide used. Even though visual shade matching is subjective, it is not inferior and should not be underrated. A judicious combination of both techniques is imperative to attain a successful and esthetic outcome.
Equilibrium gas-oil ratio measurements using a microfluidic technique.
Fisher, Robert; Shah, Mohammad Khalid; Eskin, Dmitry; Schmidt, Kurt; Singh, Anil; Molla, Shahnawaz; Mostowfi, Farshid
2013-07-07
A method for measuring the equilibrium GOR (gas-oil ratio) of reservoir fluids using microfluidic technology is developed. Live crude oils (crude oil with dissolved gas) are injected into a long serpentine microchannel at reservoir pressure. The fluid forms a segmented flow as it travels through the channel. Gas and liquid phases are produced from the exit port of the channel that is maintained at atmospheric conditions. The process is analogous to the production of crude oil from a formation. By using compositional analysis and thermodynamic principles of hydrocarbon fluids, we show excellent equilibrium between the produced gas and liquid phases is achieved. The GOR of a reservoir fluid is a key parameter in determining the equation of state of a crude oil. Equations of state that are commonly used in petroleum engineering and reservoir simulations describe the phase behaviour of a fluid at equilibrium state. Therefore, to accurately determine the coefficients of an equation of state, the produced gas and liquid phases have to be as close to the thermodynamic equilibrium as possible. In the examples presented here, the GORs measured with the microfluidic technique agreed with GOR values obtained from conventional methods. Furthermore, when compared to conventional methods, the microfluidic technique was simpler to perform, required less equipment, and yielded better repeatability.
Comparison of Climatological Planetary Boundary Layer Depth Estimates Using the GEOS-5 AGCM
NASA Technical Reports Server (NTRS)
Mcgrath-Spangler, Erica Lynn; Molod, Andrea M.
2014-01-01
Planetary boundary layer (PBL) processes, including those influencing the PBL depth, control many aspects of weather and climate and accurate models of these processes are important for forecasting changes in the future. However, evaluation of model estimates of PBL depth are difficult because no consensus on PBL depth definition currently exists and various methods for estimating this parameter can give results that differ by hundreds of meters or more. In order to facilitate comparisons between the Goddard Earth Observation System (GEOS-5) and other modeling and observational systems, seven PBL depth estimation methods are used to produce PBL depth climatologies and are evaluated and compared here. All seven methods evaluate the same atmosphere so all differences are related solely to the definition chosen. These methods depend on the scalar diffusivity, bulk and local Richardson numbers, and the diagnosed horizontal turbulent kinetic energy (TKE). Results are aggregated by climate class in order to allow broad generalizations. The various PBL depth estimations give similar midday results with some exceptions. One method based on horizontal turbulent kinetic energy produces deeper PBL depths in the winter associated with winter storms. In warm, moist conditions, the method based on a bulk Richardson number gives results that are shallower than those given by the methods based on the scalar diffusivity. The impact of turbulence driven by radiative cooling at cloud top is most significant during the evening transition and along several regions across the oceans and methods sensitive to this cooling produce deeper PBL depths where it is most active. Additionally, Richardson number-based methods collapse better at night than methods that depend on the scalar diffusivity. This feature potentially affects tracer transport.
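A minimal sketch of one of the PBL-depth definitions compared above, the bulk Richardson number method: the PBL top is taken as the height where the bulk Richardson number first exceeds a critical value (0.25 here). The sounding below is a made-up convective profile, not GEOS-5 output.

```python
import numpy as np

def pbl_depth_bulk_richardson(z, theta_v, u, v, ri_crit=0.25):
    """Height (m) where the bulk Richardson number first exceeds ri_crit."""
    g = 9.81
    dz = z - z[0]
    wind2 = u**2 + v**2 + 1e-6                      # avoid division by zero
    ri_b = g * (theta_v - theta_v[0]) * dz / (theta_v[0] * wind2)
    above = np.where(ri_b > ri_crit)[0]
    if above.size == 0:
        return z[-1]
    k = above[0]
    # Linear interpolation between the two bracketing levels.
    frac = (ri_crit - ri_b[k - 1]) / (ri_b[k] - ri_b[k - 1])
    return z[k - 1] + frac * (z[k] - z[k - 1])

# Synthetic profile: well-mixed layer below ~1 km capped by a stable layer.
z = np.arange(10.0, 3000.0, 50.0)
theta_v = 300.0 + np.where(z < 1000.0, 0.0, 0.005 * (z - 1000.0))
u = np.full_like(z, 5.0)
v = np.full_like(z, 2.0)
print(f"Bulk-Ri PBL depth ~ {pbl_depth_bulk_richardson(z, theta_v, u, v):.0f} m")
```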
NASA Astrophysics Data System (ADS)
Kim, J.; Park, K.
2016-12-01
In order to evaluate the performance of operational forecast models in the Korea Operational Oceanographic System (KOOS), which has been developed by the Korea Institute of Ocean Science and Technology (KIOST), a skill assessment (SA) tool has been developed that provides multiple skill metrics, including not only correlation and error skills obtained by comparing predictions with observations but also pattern clustering with numerical models, satellite data, and observations. The KOOS produces 72-hour forecasts of atmospheric and hydrodynamic variables (wind, pressure, current, tide, wave, temperature, and salinity) every 12 hours by running numerical models such as WRF, ROMS, MOM5, WW-III, and SWAN, and the SA is used to evaluate these forecasts. Several kinds of numerical models, such as WRF, ROMS, MOM5, MOHID, and WW-III, are operated operationally. Quantitative assessment of operational ocean forecast models is very important for providing accurate forecast information, not only to the general public but also to support ocean-related applications. In this work, we propose a method of pattern clustering using machine learning and GIS-based spatial analytics to evaluate the spatial distribution of numerical model output against spatial observation data such as satellite and HF radar measurements. For the clustering, we use 10- to 15-year-long reanalysis data computed by the KOOS, ECMWF, and HYCOM to build best-matching clusters, which are classified by physical meaning and temporal variation, and then compare them with the forecast data. Moreover, for evaluating currents, we develop a method for extracting the dominant flow and apply it to hydrodynamic models and HF radar sea surface current data. The pattern clustering method allows a more accurate and effective assessment of ocean forecast model performance by comparing not only specific observation positions, which are determined by observation stations, but also the spatio-temporal distribution over the whole model domain. We believe that our proposed method will be very useful for examining and evaluating large amounts of numerical modeling data as well as satellite data.
Hall, Mary Beth; Hatfield, Ronald D
2015-11-01
Microbial glycogen measurement is used to account for fates of carbohydrate substrates. It is commonly applied to washed cells or pure cultures which can be accurately subsampled, allowing the use of smaller sample sizes. However, the nonhomogeneous fermentation pellets produced with strained rumen inoculum cannot be accurately subsampled, requiring analysis of the entire pellet. In this study, two microbial glycogen methods were compared for analysis of such fermentation pellets: boiling samples for 3h in 30% KOH (KOH) or for 15min in 0.2M NaOH (NaOH), followed by enzymatic hydrolysis with α-amylase and amyloglucosidase, and detection of released glucose. Total α-glucan was calculated as glucose×0.9. KOH and NaOH did not differ in the α-glucan detected in fermentation pellets (29.9 and 29.6mg, respectively; P=0.61). Recovery of different control α-glucans was also tested using KOH, NaOH, and a method employing 45min of bead beating (BB). For purified beef liver glycogen (water-soluble) recovery, BB (95.0%)>KOH (91.4%)>NaOH (87.4%; P<0.05), and for wheat starch (water-insoluble granules) recovery, NaOH (96.9%)>BB (93.8%)>KOH (91.0%; P<0.05). Recovery of isolated protozoal glycogen (water-insoluble granules) did not differ among KOH (87.0%), NaOH (87.6%), and BB (86.0%; P=0.81), but recoveries for all were below 90%. Differences among substrates in the need for gelatinization and susceptibility to destruction by alkali likely affected the results. In conclusion, KOH and NaOH glycogen methods provided comparable determinations of fermentation pellet α-glucan. The tests on purified α-glucans indicated that assessment of recovery in glycogen methods can differ by the control α-glucan selected. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Klein, R.; Adler, A.; Beanlands, R. S.; de Kemp, R. A.
2007-02-01
A rubidium-82 (82Rb) elution system is described for use with positron emission tomography. Due to the short half-life of 82Rb (76 s), the system physics must be modelled precisely to account for transport delay and the associated activity decay and dispersion. Saline flow is switched between a 82Sr/82Rb generator and a bypass line to achieve a constant-activity elution of 82Rb. Pulse width modulation (PWM) of a solenoid valve is compared to simple threshold control as a means to simulate a proportional valve. A predictive-corrective control (PCC) algorithm is developed which produces a constant-activity elution within the constraints of long feedback delay and short elution time. The system model parameters are adjusted through a self-tuning algorithm to minimize error versus the requested time-activity profile. The system is self-calibrating with 2.5% repeatability, independent of generator activity and elution flow rate. Accurate 30 s constant-activity elutions of 10-70% of the total generator activity are achieved using both control methods. The combined PWM-PCC method provides significant improvement in precision and accuracy of the requested elution profiles. The 82Rb elution system produces accurate and reproducible constant-activity elution profiles of 82Rb activity, independent of parent 82Sr activity in the generator. More reproducible elution profiles may improve the quality of clinical and research PET perfusion studies using 82Rb.
Klein, R; Adler, A; Beanlands, R S; Dekemp, R A
2007-02-07
A rubidium-82 ((82)Rb) elution system is described for use with positron emission tomography. Due to the short half-life of (82)Rb (76 s), the system physics must be modelled precisely to account for transport delay and the associated activity decay and dispersion. Saline flow is switched between a (82)Sr/(82)Rb generator and a bypass line to achieve a constant-activity elution of (82)Rb. Pulse width modulation (PWM) of a solenoid valve is compared to simple threshold control as a means to simulate a proportional valve. A predictive-corrective control (PCC) algorithm is developed which produces a constant-activity elution within the constraints of long feedback delay and short elution time. The system model parameters are adjusted through a self-tuning algorithm to minimize error versus the requested time-activity profile. The system is self-calibrating with 2.5% repeatability, independent of generator activity and elution flow rate. Accurate 30 s constant-activity elutions of 10-70% of the total generator activity are achieved using both control methods. The combined PWM-PCC method provides significant improvement in precision and accuracy of the requested elution profiles. The (82)Rb elution system produces accurate and reproducible constant-activity elution profiles of (82)Rb activity, independent of parent (82)Sr activity in the generator. More reproducible elution profiles may improve the quality of clinical and research PET perfusion studies using (82)Rb.
Modeling the Lyα Forest in Collisionless Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorini, Daniele; Oñorbe, José; Lukić, Zarija
2016-08-11
Cosmological hydrodynamic simulations can accurately predict the properties of the intergalactic medium (IGM), but only under the condition of retaining the high spatial resolution necessary to resolve density fluctuations in the IGM. This resolution constraint prohibits simulating large volumes, such as those probed by BOSS and future surveys, like DESI and 4MOST. To overcome this limitation, we present in this paper "Iteratively Matched Statistics" (IMS), a novel method to accurately model the Lyα forest with collisionless N-body simulations, where the relevant density fluctuations are unresolved. We use a small-box, high-resolution hydrodynamic simulation to obtain the probability distribution function (PDF) and the power spectrum of the real-space Lyα forest flux. These two statistics are iteratively mapped onto a pseudo-flux field of an N-body simulation, which we construct from the matter density. We demonstrate that our method can reproduce the PDF, line of sight and 3D power spectra of the Lyα forest with good accuracy (7%, 4%, and 7% respectively). We quantify the performance of the commonly used Gaussian smoothing technique and show that it has significantly lower accuracy (20%–80%), especially for N-body simulations with achievable mean inter-particle separations in large-volume simulations. Finally, we show that IMS produces reasonable and smooth spectra, making it a powerful tool for modeling the IGM in large cosmological volumes and for producing realistic "mock" skies for Lyα forest surveys.
MODELING THE Ly α FOREST IN COLLISIONLESS SIMULATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorini, Daniele; Oñorbe, José; Hennawi, Joseph F.
2016-08-20
Cosmological hydrodynamic simulations can accurately predict the properties of the intergalactic medium (IGM), but only under the condition of retaining the high spatial resolution necessary to resolve density fluctuations in the IGM. This resolution constraint prohibits simulating large volumes, such as those probed by BOSS and future surveys, like DESI and 4MOST. To overcome this limitation, we present “Iteratively Matched Statistics” (IMS), a novel method to accurately model the Lyα forest with collisionless N-body simulations, where the relevant density fluctuations are unresolved. We use a small-box, high-resolution hydrodynamic simulation to obtain the probability distribution function (PDF) and the power spectrum of the real-space Lyα forest flux. These two statistics are iteratively mapped onto a pseudo-flux field of an N-body simulation, which we construct from the matter density. We demonstrate that our method can reproduce the PDF, line of sight and 3D power spectra of the Lyα forest with good accuracy (7%, 4%, and 7% respectively). We quantify the performance of the commonly used Gaussian smoothing technique and show that it has significantly lower accuracy (20%–80%), especially for N-body simulations with achievable mean inter-particle separations in large-volume simulations. In addition, we show that IMS produces reasonable and smooth spectra, making it a powerful tool for modeling the IGM in large cosmological volumes and for producing realistic “mock” skies for Lyα forest surveys.
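A minimal sketch of the PDF-matching ingredient of a method like IMS: the values of a "pseudo-flux" field are replaced, rank by rank, with values drawn from the target flux PDF measured in a high-resolution simulation (quantile mapping). The full method iterates this together with a power-spectrum matching step, which is omitted here, and the two random fields below are stand-ins rather than actual simulation outputs.

```python
import numpy as np

def match_pdf(field, target_samples):
    """Remap `field` so its one-point PDF matches that of `target_samples`."""
    flat = field.ravel()
    ranks = np.argsort(np.argsort(flat))              # rank of each pixel
    quantiles = (ranks + 0.5) / flat.size
    matched = np.quantile(np.asarray(target_samples), quantiles)
    return matched.reshape(field.shape)

rng = np.random.default_rng(2)
pseudo_flux = rng.lognormal(mean=0.0, sigma=1.0, size=(64, 64))   # stand-in for N-body pseudo-flux
target_flux = rng.beta(a=0.5, b=0.5, size=100000)                 # stand-in for hydro flux samples

mapped = match_pdf(pseudo_flux, target_flux)
# The mapped field now has (to sampling accuracy) the target PDF while keeping
# the spatial ordering of the original pseudo-flux field.
print(np.quantile(mapped, [0.1, 0.5, 0.9]), np.quantile(target_flux, [0.1, 0.5, 0.9]))
```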
Fast-SG: an alignment-free algorithm for hybrid assembly.
Di Genova, Alex; Ruz, Gonzalo A; Sagot, Marie-France; Maass, Alejandro
2018-05-01
Long-read sequencing technologies are the ultimate solution for genome repeats, allowing near reference-level reconstructions of large genomes. However, long-read de novo assembly pipelines are computationally intense and require a considerable amount of coverage, thereby hindering their broad application to the assembly of large genomes. Alternatively, hybrid assembly methods that combine short- and long-read sequencing technologies can reduce the time and cost required to produce de novo assemblies of large genomes. Here, we propose a new method, called Fast-SG, that uses a new ultrafast alignment-free algorithm specifically designed for constructing a scaffolding graph using lightweight data structures. Fast-SG can construct the graph from either short or long reads. This allows the reuse of efficient algorithms designed for short-read data and permits the definition of novel modular hybrid assembly pipelines. Using comprehensive standard datasets and benchmarks, we show how Fast-SG outperforms the state-of-the-art short-read aligners when building the scaffolding graph and can be used to extract linking information from either raw or error-corrected long reads. We also show how a hybrid assembly approach using Fast-SG with shallow long-read coverage (5X) and moderate computational resources can produce long-range and accurate reconstructions of the genomes of Arabidopsis thaliana (Ler-0) and human (NA12878). Fast-SG opens a door to achieve accurate hybrid long-range reconstructions of large genomes with low effort, high portability, and low cost.
Male production in stingless bees: variable outcomes of queen-worker conflict.
Tóth, Eva; Strassmann, Joan E; Nogueira-Neto, Paulo; Imperatriz-Fonseca, Vera L; Queller, David C
2002-12-01
The genetic structure of social insect colonies is predicted to affect the balance between cooperation and conflict. Stingless bees are of special interest in this respect because they are singly mated relatives of the multiply mated honeybees. Multiple mating is predicted to lead to workers policing each others' male production with the result that virtually all males are produced by the queen, and this prediction is borne out in honey bees. Single mating by the queen, as in stingless bees, causes workers to be more related to each others' sons than to the queen's sons, so they should not police each other. We used microsatellite markers to confirm single mating in eight species of stingless bees and then tested the prediction that workers would produce males. Using a likelihood method, we found some worker male production in six of the eight species, although queens produced some males in all of them. Thus the predicted contrast with honeybees is observed, but not perfectly, perhaps because workers either lack complete control or because of costs of conflict. The data are consistent with the view that there is ongoing conflict over male production. Our method of estimating worker male production appears to be more accurate than exclusion, which sometimes underestimates the proportion of males that are worker produced.
Inaccurate DNA synthesis in cell extracts of yeast producing active human DNA polymerase iota.
Makarova, Alena V; Grabow, Corinn; Gening, Leonid V; Tarantul, Vyacheslav Z; Tahirov, Tahir H; Bessho, Tadayoshi; Pavlov, Youri I
2011-01-31
Mammalian Pol ι has an unusual combination of properties: it is stimulated by Mn(2+) ions, can bypass some DNA lesions and misincorporates "G" opposite template "T" more frequently than incorporates the correct "A." We recently proposed a method of detection of Pol ι activity in animal cell extracts, based on primer extension opposite the template T with a high concentration of only two nucleotides, dGTP and dATP (incorporation of "G" versus "A" method of Gening, abbreviated as "misGvA"). We provide unambiguous proof of the "misGvA" approach concept and extend the applicability of the method for the studies of variants of Pol ι in the yeast model system with different cation cofactors. We produced human Pol ι in baker's yeast, which do not have a POLI ortholog. The "misGvA" activity is absent in cell extracts containing an empty vector, or producing catalytically dead Pol ι, or Pol ι lacking exon 2, but is robust in the strain producing wild-type Pol ι or its catalytic core, or protein with the active center L62I mutant. The signature pattern of primer extension products resulting from inaccurate DNA synthesis by extracts of cells producing either Pol ι or human Pol η is different. The DNA sequence of the template is critical for the detection of the infidelity of DNA synthesis attributed to DNA Pol ι. The primer/template and composition of the exogenous DNA precursor pool can be adapted to monitor replication fidelity in cell extracts expressing various error-prone Pols or mutator variants of accurate Pols. Finally, we demonstrate that the mutation rates in yeast strains producing human DNA Pols ι and η are not elevated over the control strain, despite highly inaccurate DNA synthesis by their extracts.
Impervious surface mapping with Quickbird imagery
Lu, Dengsheng; Hetrick, Scott; Moran, Emilio
2010-01-01
This research selects two study areas with different urban developments, sizes, and spatial patterns to explore the suitable methods for mapping impervious surface distribution using Quickbird imagery. The selected methods include per-pixel based supervised classification, segmentation-based classification, and a hybrid method. A comparative analysis of the results indicates that per-pixel based supervised classification produces a large number of “salt-and-pepper” pixels, and segmentation based methods can significantly reduce this problem. However, neither method can effectively solve the spectral confusion of impervious surfaces with water/wetland and bare soils and the impacts of shadows. In order to accurately map impervious surface distribution from Quickbird images, manual editing is necessary and may be the only way to extract impervious surfaces from the confused land covers and the shadow problem. This research indicates that the hybrid method consisting of thresholding techniques, unsupervised classification and limited manual editing provides the best performance. PMID:21643434
Wavelet imaging cleaning method for atmospheric Cherenkov telescopes
NASA Astrophysics Data System (ADS)
Lessard, R. W.; Cayón, L.; Sembroski, G. H.; Gaidos, J. A.
2002-07-01
We present a new method of image cleaning for imaging atmospheric Cherenkov telescopes. The method is based on the utilization of wavelets to identify noise pixels in images of gamma-ray and hadronic induced air showers. This method selects more signal pixels with Cherenkov photons than traditional image processing techniques. In addition, the method is equally efficient at rejecting pixels with noise alone. The inclusion of more signal pixels in an image of an air shower allows for a more accurate reconstruction, especially at lower gamma-ray energies that produce low levels of light. We present the results of Monte Carlo simulations of gamma-ray and hadronic air showers which show improved angular resolution using this cleaning procedure. Data from the Whipple Observatory's 10-m telescope are utilized to show the efficacy of the method for extracting a gamma-ray signal from the background of hadronic generated images.
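A minimal sketch of the general idea of wavelet-based image cleaning: decompose the camera image, hard-threshold the wavelet coefficients against a noise estimate, reconstruct, and keep the pixels that survive as "signal" pixels. This is a generic denoising sketch built with PyWavelets on a toy image, not the actual Whipple 10-m analysis pipeline.

```python
import numpy as np
import pywt

def wavelet_clean(image, n_sigma=3.0, wavelet="db2", level=2):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Noise scale from the finest-level diagonal detail coefficients (MAD estimate).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = n_sigma * sigma
    cleaned = [coeffs[0]] + [
        tuple(pywt.threshold(d, thresh, mode="hard") for d in detail)
        for detail in coeffs[1:]
    ]
    denoised = pywt.waverec2(cleaned, wavelet)[: image.shape[0], : image.shape[1]]
    signal_mask = denoised > thresh
    return denoised, signal_mask

# Toy "camera image": a Gaussian shower-like blob plus night-sky noise.
rng = np.random.default_rng(3)
y, x = np.mgrid[0:64, 0:64]
image = 20.0 * np.exp(-((x - 40) ** 2 + (y - 30) ** 2) / 30.0) + rng.normal(0, 2.0, (64, 64))
denoised, mask = wavelet_clean(image)
print("signal pixels kept:", int(mask.sum()))
```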
Eye-motion-corrected optical coherence tomography angiography using Lissajous scanning.
Chen, Yiwei; Hong, Young-Joo; Makita, Shuichi; Yasuno, Yoshiaki
2018-03-01
To correct eye motion artifacts in en face optical coherence tomography angiography (OCT-A) images, a Lissajous scanning method with subsequent software-based motion correction is proposed. The standard Lissajous scanning pattern is modified to be compatible with OCT-A and a corresponding motion correction algorithm is designed. The effectiveness of our method was demonstrated by comparing en face OCT-A images with and without motion correction. The method was further validated by comparing motion-corrected images with scanning laser ophthalmoscopy images, and the repeatability of the method was evaluated using a checkerboard image. A motion-corrected en face OCT-A image from a blinking case is presented to demonstrate the ability of the method to deal with eye blinking. Results show that the method can produce accurate motion-free en face OCT-A images of the posterior segment of the eye in vivo.
Utilization of Low Gravity Environment for Measuring Liquid Viscosity
NASA Technical Reports Server (NTRS)
Antar, Basil N.; Ethridge, Edwin
1998-01-01
The method of drop coalescence is used for determining the viscosity of highly viscous undercooled liquids. A low gravity environment is necessary in order to allow examination of large volumes, affording much higher accuracy for the viscosity calculations than is possible for the smaller volumes available under 1-g conditions. The drop coalescence method is preferred over the drop oscillation technique since the latter method can only be applied to liquids with vanishingly small viscosities. The technique developed relies both on a highly accurate solution of the Navier-Stokes equations and on data from experiments conducted in a near zero gravity environment. Results are presented for method validation experiments recently performed on board the NASA/KC-135 aircraft, while the numerical solution was produced using the Boundary Element Method. In these tests the viscosity of a highly viscous liquid, glycerine at room temperature, was determined using the liquid coalescence method. The results from these experiments will be discussed.
NASA Astrophysics Data System (ADS)
Sharudin, R. W.; AbdulBari Ali, S.; Zulkarnain, M.; Shukri, M. A.
2018-05-01
This study reports on the integration of artificial neural networks (ANNs) with experimental data to predict the solubility of the carbon dioxide (CO2) blowing agent in SEBS, seeking the highest possible regression coefficient (R2). Foaming of a thermoplastic elastomer with CO2 is strongly affected by the CO2 solubility. The ability of the ANN to predict interpolated values of CO2 solubility was investigated by comparing training results obtained with different network training methods. The final CO2 solubility predictions of the ANN followed the trend of the experimental results. Comparison of the training methods showed that Gradient Descent with Momentum & Adaptive LR (traingdx) required a longer training time and more accurate input to produce better output, with a final regression value of 0.88, whereas the Levenberg-Marquardt (trainlm) technique produced better output in a shorter training time, with a final regression value of 0.91.
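A minimal sketch of fitting a small feed-forward ANN to solubility-style data and reporting R2. The abstract refers to MATLAB training functions (traingdx, trainlm); here a generic scikit-learn MLP with an L-BFGS solver stands in, and the pressure/temperature/solubility values are synthetic placeholders rather than the study's measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(4)
# Hypothetical inputs: saturation pressure (MPa) and temperature (deg C).
X = np.column_stack([rng.uniform(2, 12, 80), rng.uniform(30, 80, 80)])
# Hypothetical CO2 solubility in the polymer (g CO2 / g polymer) with mild noise.
y = 0.008 * X[:, 0] - 0.0004 * X[:, 1] + 0.05 + rng.normal(0, 0.002, 80)

model = MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                     solver="lbfgs", max_iter=5000, random_state=0)
model.fit(X[:60], y[:60])                     # train on the first 60 samples
r2 = r2_score(y[60:], model.predict(X[60:]))  # evaluate on held-out samples
print(f"held-out R^2 = {r2:.2f}")
```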
Kwak, Kichang; Yoon, Uicheul; Lee, Dong-Kyun; Kim, Geon Ha; Seo, Sang Won; Na, Duk L; Shim, Hack-Joon; Lee, Jong-Min
2013-09-01
The hippocampus has been known to be an important structure as a biomarker for Alzheimer's disease (AD) and other neurological and psychiatric diseases. However, this requires accurate, robust and reproducible delineation of hippocampal structures. In this study, an automated hippocampal segmentation method based on a graph-cuts algorithm combined with atlas-based segmentation and morphological opening was proposed. First of all, the atlas-based segmentation was applied to define the initial hippocampal region as a priori information for graph-cuts. The definition of initial seeds was further elaborated by incorporating estimation of partial volume probabilities at each voxel. Finally, morphological opening was applied to reduce false positives in the result processed by graph-cuts. In the experiments with twenty-seven healthy normal subjects, the proposed method showed more reliable results (similarity index=0.81±0.03) than the conventional atlas-based segmentation method (0.72±0.04). Also, as for segmentation accuracy, which is measured in terms of the ratios of false positives and false negatives, the proposed method (precision=0.76±0.04, recall=0.86±0.05) produced better ratios than the conventional methods (0.73±0.05, 0.72±0.06), demonstrating its plausibility for accurate, robust and reliable segmentation of the hippocampus. Copyright © 2013 Elsevier Inc. All rights reserved.
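A minimal sketch of the similarity (Dice) index used to compare an automatic segmentation against a reference one, as in the numbers quoted above; the two small masks below are toy arrays, not MRI labels.

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks."""
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

manual = np.zeros((10, 10), dtype=int)
manual[2:7, 3:8] = 1          # toy reference label
automatic = np.zeros((10, 10), dtype=int)
automatic[3:8, 3:8] = 1       # toy automatic label, shifted by one row
print(f"Dice = {dice(automatic, manual):.2f}")
```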
Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos
2014-01-01
Assessing the structural integrity of the hippocampus (HC) is an essential step toward prevention, diagnosis, and follow-up of various brain disorders due to the implication of the structural changes of the HC in those disorders. In this respect, the development of automatic segmentation methods that can accurately, reliably, and reproducibly segment the HC has attracted considerable attention over the past decades. This paper presents an innovative 3-D fully automatic method to be used on top of the multiatlas concept for the HC segmentation. The method is based on a subject-specific set of 3-D optimal local maps (OLMs) that locally control the influence of each energy term of a hybrid active contour model (ACM). The complete set of the OLMs for a set of training images is defined simultaneously via an optimization scheme. At the same time, the optimal ACM parameters are also calculated. Therefore, heuristic parameter fine-tuning is not required. Training OLMs are subsequently combined, by applying an extended multiatlas concept, to produce the OLMs that are anatomically more suitable to the test image. The proposed algorithm was tested on three different and publicly available data sets. Its accuracy was compared with that of state-of-the-art methods demonstrating the efficacy and robustness of the proposed method. PMID:27170866
NASA Astrophysics Data System (ADS)
Feng, Shuo; Liu, Dejun; Cheng, Xing; Fang, Huafeng; Li, Caifang
2017-04-01
Magnetic anomalies produced by underground ferromagnetic pipelines because of the polarization of the earth's magnetic field are used to obtain information on the location, buried depth and other parameters of pipelines. In order to achieve fast inversion and interpretation of measured data, it is necessary to develop a fast and stable forward method. Magnetic dipole reconstruction (MDR), as a kind of integration numerical method, is well suited for simulating a thin pipeline anomaly. In MDR the pipeline model must be cut into small magnetic dipoles through different segmentation methods. The segmentation method has an impact on the stability and speed of the forward calculation. Rapid and accurate simulation of deep-buried pipelines has been achieved with the existing segmentation method. However, in practical measurement the depth of an underground pipe is uncertain, and for shallow-buried pipelines the existing segmentation may generate significant errors. This paper aims at solving this problem in three stages. First, the cause of the inaccuracy is analyzed by simulation experiments. Second, a new variable-interval section segmentation is proposed based on the existing segmentation; it helps the MDR method obtain simulation results quickly while ensuring accuracy for models at different depths. Finally, the measured data are inverted using the new segmentation method. The results show that the inversion based on the new segmentation can quickly and accurately recover the depth parameters of underground pipes without being limited by pipeline depth.
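A minimal sketch of the MDR forward step: the pipeline is cut into equivalent point dipoles and the anomaly at a surface point is the sum of their dipole fields, B = (μ0/4π)[3(m·r̂)r̂ - m]/r³. The pipe geometry, segment spacing, and moment per segment below are illustrative values, not a calibrated pipeline model.

```python
import numpy as np

MU0_4PI = 1e-7  # mu_0 / (4*pi), T*m/A

def dipole_field(m, r):
    """Magnetic field of a point dipole with moment m (A*m^2) at offset r (m)."""
    rnorm = np.linalg.norm(r)
    rhat = r / rnorm
    return MU0_4PI * (3.0 * np.dot(m, rhat) * rhat - m) / rnorm**3

def pipeline_anomaly(obs_point, depth=2.0, half_length=20.0, spacing=0.5,
                     moment_per_segment=np.array([0.0, 5.0, 2.0])):
    """Total field at obs_point from dipoles spaced along a straight buried pipe."""
    xs = np.arange(-half_length, half_length + spacing, spacing)
    total = np.zeros(3)
    for x in xs:                                   # pipe runs along the x axis
        segment_pos = np.array([x, 0.0, -depth])
        total += dipole_field(moment_per_segment, obs_point - segment_pos)
    return total

# Vertical component of the anomaly along a surface profile crossing the pipe.
for y in (-4.0, -2.0, 0.0, 2.0, 4.0):
    bz = pipeline_anomaly(np.array([0.0, y, 0.0]))[2]
    print(f"y = {y:+.1f} m  Bz = {bz:.2e} T")
```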
Beyond Group: Multiple Person Tracking via Minimal Topology-Energy-Variation.
Gao, Shan; Ye, Qixiang; Xing, Junliang; Kuijper, Arjan; Han, Zhenjun; Jiao, Jianbin; Ji, Xiangyang
2017-12-01
Tracking multiple persons is a challenging task when persons move in groups and occlude each other. Existing group-based methods have extensively investigated how to make group division more accurate in a tracking-by-detection framework; however, few of them quantify the group dynamics from the perspective of targets' spatial topology or consider the group in a dynamic view. Inspired by the sociological properties of pedestrians, we propose a novel socio-topology model with a topology-energy function to factor the group dynamics of moving persons and groups. In this model, minimizing the topology-energy variance in a two-level energy form is expected to produce smooth topology transitions, stable group tracking, and accurate target association. To search for the strong minimum in energy variation, we design discrete group-tracklet jump moves embedded in the gradient descent method, which ensures that the moves reduce the energy variation of group and trajectory alternately in the varying topology dimension. Experimental results on both RGB and RGB-D data sets show the superiority of our proposed model for multiple person tracking in crowd scenes.
Computational Pollutant Environment Assessment from Propulsion-System Testing
NASA Technical Reports Server (NTRS)
Wang, Ten-See; McConnaughey, Paul; Chen, Yen-Sen; Warsi, Saif
1996-01-01
An asymptotic plume growth method based on a time-accurate three-dimensional computational fluid dynamics formulation has been developed to assess the exhaust-plume pollutant environment from a simulated RD-170 engine hot-fire test on the F1 Test Stand at Marshall Space Flight Center. Researchers have long known that rocket-engine hot firing has the potential for forming thermal nitric oxides, as well as producing carbon monoxide when hydrocarbon fuels are used. Because of the complex physics involved, most attempts to predict the pollutant emissions from ground-based engine testing have used simplified methods, which may grossly underpredict and/or overpredict the pollutant formations in a test environment. The objective of this work has been to develop a computational fluid dynamics-based methodology that replicates the underlying test-stand flow physics to accurately and efficiently assess pollutant emissions from ground-based rocket-engine testing. A nominal RD-170 engine hot-fire test was computed, and pertinent test-stand flow physics was captured. The predicted total emission rates compared reasonably well with those of the existing hydrocarbon engine hot-firing test data.
Computation of Steady and Unsteady Laminar Flames: Theory
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas; Radhakrishnan, Krishnan; Zhou, Ruhai
1999-01-01
In this paper we describe the numerical analysis underlying our efforts to develop an accurate and reliable code for simulating flame propagation using complex physical and chemical models. We discuss our spatial and temporal discretization schemes, which in our current implementations range in order from two to six. In space we use staggered meshes to define discrete divergence and gradient operators, allowing us to approximate complex diffusion operators while maintaining ellipticity. Our temporal discretization is based on the use of preconditioning to produce a highly efficient linearly implicit method with good stability properties. High order for time accurate simulations is obtained through the use of extrapolation or deferred correction procedures. We also discuss our techniques for computing stationary flames. The primary issue here is the automatic generation of initial approximations for the application of Newton's method. We use a novel time-stepping procedure, which allows the dynamic updating of the flame speed and forces the flame front towards a specified location. Numerical experiments are presented, primarily for the stationary flame problem. These illustrate the reliability of our techniques, and the dependence of the results on various code parameters.
NASA Technical Reports Server (NTRS)
Comfort, R. H.; Baugher, C. R.; Chappell, C. R.
1982-01-01
A procedure for analyzing low-energy (less than approximately 100 eV) ion data from the plasma composition experiment on ISEE 1 is set forth. The method is based on a derived analytic expression for particle flux to a limited aperture retarding potential analyzer (RPA) in the thin sheath approximation, which makes allowance for some effects of a charged spacecraft on plasma particle trajectories. Calculations using simulated data are employed in testing the efficacy and accuracy of the technique. On the basis of an analysis of these calculation results and the mathematical model, the method is seen as being able to provide accurate ion temperatures from all good plasmaspheric RPA data. It is noted that corresponding densities and spacecraft potentials should be accurate when spacecraft potentials are negative but that they are subject to error for positive spacecraft potentials, particularly when ion Mach numbers are much less than 1. An analysis of data from a representative ISEE 1 pass produces a plasmasphere temperature profile that is consistent in overall structure with previous observations.
Real-time three-dimensional soft tissue reconstruction for laparoscopic surgery.
Kowalczuk, Jędrzej; Meyer, Avishai; Carlson, Jay; Psota, Eric T; Buettner, Shelby; Pérez, Lance C; Farritor, Shane M; Oleynikov, Dmitry
2012-12-01
Accurate real-time 3D models of the operating field have the potential to enable augmented reality for endoscopic surgery. A new system is proposed to create real-time 3D models of the operating field that uses a custom miniaturized stereoscopic video camera attached to a laparoscope and an image-based reconstruction algorithm implemented on a graphics processing unit (GPU). The proposed system was evaluated in a porcine model that approximates the viewing conditions of in vivo surgery. To assess the quality of the models, a synthetic view of the operating field was produced by overlaying a color image on the reconstructed 3D model, and an image rendered from the 3D model was compared with a 2D image captured from the same view. Experiments conducted with an object of known geometry demonstrate that the system produces 3D models accurate to within 1.5 mm. The ability to produce accurate real-time 3D models of the operating field is a significant advancement toward augmented reality in minimally invasive surgery. An imaging system with this capability will potentially transform surgery by helping novice and expert surgeons alike to delineate variance in internal anatomy accurately.
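A minimal sketch of the stereo triangulation underlying such image-based reconstruction: for a rectified stereo pair, depth Z = f·B/d, where f is the focal length in pixels, B the camera baseline, and d the disparity. The camera parameters and disparities below are placeholders, not those of the miniature stereoscopic laparoscope described above.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px=700.0, baseline_m=0.004):
    """Depth map (m) from a disparity map (pixels) of a rectified stereo pair."""
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(d, np.inf)     # infinite depth where no match was found
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

disparities = np.array([[40.0, 35.0], [28.0, 0.0]])   # 0 = no stereo match
print(disparity_to_depth(disparities))                # depths of a few centimetres
```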
Lu, Fred Sun; Hou, Suqin; Baltrusaitis, Kristin; Shah, Manan; Leskovec, Jure; Sosic, Rok; Hawkins, Jared; Brownstein, John; Conidi, Giuseppe; Gunn, Julia; Gray, Josh; Zink, Anna
2018-01-01
Background Influenza outbreaks pose major challenges to public health around the world, leading to thousands of deaths a year in the United States alone. Accurate systems that track influenza activity at the city level are necessary to provide actionable information that can be used for clinical, hospital, and community outbreak preparation. Objective Although Internet-based real-time data sources such as Google searches and tweets have been successfully used to produce influenza activity estimates ahead of traditional health care–based systems at national and state levels, influenza tracking and forecasting at finer spatial resolutions, such as the city level, remain an open question. Our study aimed to present a precise, near real-time methodology capable of producing influenza estimates ahead of those collected and published by the Boston Public Health Commission (BPHC) for the Boston metropolitan area. This approach has great potential to be extended to other cities with access to similar data sources. Methods We first tested the ability of Google searches, Twitter posts, electronic health records, and a crowd-sourced influenza reporting system to detect influenza activity in the Boston metropolis separately. We then adapted a multivariate dynamic regression method named ARGO (autoregression with general online information), designed for tracking influenza at the national level, and showed that it effectively uses the above data sources to monitor and forecast influenza at the city level 1 week ahead of the current date. Finally, we presented an ensemble-based approach capable of combining information from models based on multiple data sources to more robustly nowcast as well as forecast influenza activity in the Boston metropolitan area. The performances of our models were evaluated in an out-of-sample fashion over 4 influenza seasons within 2012-2016, as well as a holdout validation period from 2016 to 2017. Results Our ensemble-based methods incorporating information from diverse models based on multiple data sources, including ARGO, produced the most robust and accurate results. The observed Pearson correlations between our out-of-sample flu activity estimates and those historically reported by the BPHC were 0.98 in nowcasting influenza and 0.94 in forecasting influenza 1 week ahead of the current date. Conclusions We show that information from Internet-based data sources, when combined using an informed, robust methodology, can be effectively used as early indicators of influenza activity at fine geographic resolutions. PMID:29317382
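A minimal sketch of an ARGO-style model as described above: an L1-regularized linear regression of this week's flu activity on its own recent lags plus exogenous Internet signals (for example, search or tweet volumes). The synthetic series below stand in for the BPHC flu rates and the Google/Twitter predictors used in the paper, and the lag count and penalty are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
T = 200
flu = 10 + 5 * np.sin(np.arange(T) * 2 * np.pi / 52) + rng.normal(0, 0.5, T)
search = flu + rng.normal(0, 1.0, T)          # noisy exogenous proxy signal
tweets = 0.5 * flu + rng.normal(0, 1.0, T)    # second proxy signal

lags = 3
rows, targets = [], []
for t in range(lags, T):
    rows.append(np.concatenate([flu[t - lags:t], [search[t], tweets[t]]]))
    targets.append(flu[t])
X, y = np.array(rows), np.array(targets)

model = Lasso(alpha=0.05).fit(X[:-20], y[:-20])      # train on all but the last 20 weeks
pred = model.predict(X[-20:])                        # out-of-sample nowcasts
print("correlation:", np.corrcoef(pred, y[-20:])[0, 1].round(3))
```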
NASA Astrophysics Data System (ADS)
Gleason, M. J.; Pitlick, J.; Buttenfield, B. P.
2011-12-01
Terrestrial laser scanning (TLS) represents a new and particularly effective remote sensing technique for investigating geomorphologic processes. Unfortunately, TLS data are commonly characterized by extremely large volume, heterogeneous point distribution, and erroneous measurements, raising challenges for applied researchers. To facilitate efficient and accurate use of TLS in geomorphology, and to improve accessibility for TLS processing in commercial software environments, we are developing a filtering method for raw TLS data to: eliminate data redundancy; produce a more uniformly spaced dataset; remove erroneous measurements; and maintain the ability of the TLS dataset to accurately model terrain. Our method conducts local aggregation of raw TLS data using a 3-D search algorithm based on the geometrical expression of expected random errors in the data. This approach accounts for the estimated accuracy and precision limitations of the instruments and procedures used in data collection, thereby allowing for identification and removal of potential erroneous measurements prior to data aggregation. Initial tests of the proposed technique on a sample TLS point cloud required a modest processing time of approximately 100 minutes to reduce dataset volume over 90 percent (from 12,380,074 to 1,145,705 points). Preliminary analysis of the filtered point cloud revealed substantial improvement in homogeneity of point distribution and minimal degradation of derived terrain models. We will test the method on two independent TLS datasets collected in consecutive years along a non-vegetated reach of the North Fork Toutle River in Washington. We will evaluate the tool using various quantitative, qualitative, and statistical methods. The crux of this evaluation will include a bootstrapping analysis to test the ability of the filtered datasets to model the terrain at roughly the same accuracy as the raw datasets.
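A minimal sketch of error-radius point aggregation for a TLS cloud: points falling within a radius comparable to the expected measurement error are merged into their centroid, thinning redundant returns while keeping the surface shape. The fixed spherical radius and the random cloud below are illustrative; the filter described above uses a 3-D search shaped by the instrument's error model rather than a fixed sphere.

```python
import numpy as np
from scipy.spatial import cKDTree

def aggregate_points(points, radius=0.02):
    """Merge neighboring points (within `radius` metres) into centroids."""
    tree = cKDTree(points)
    unvisited = np.ones(len(points), dtype=bool)
    centroids = []
    for i in range(len(points)):
        if not unvisited[i]:
            continue
        neighbors = tree.query_ball_point(points[i], r=radius)
        neighbors = [j for j in neighbors if unvisited[j]]
        unvisited[neighbors] = False
        centroids.append(points[neighbors].mean(axis=0))
    return np.array(centroids)

rng = np.random.default_rng(6)
cloud = rng.uniform(0, 1, size=(20000, 3))            # stand-in for a raw TLS scan
thinned = aggregate_points(cloud, radius=0.02)
print(len(cloud), "->", len(thinned), "points")
```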
The use of cognitive task analysis to improve instructional descriptions of procedures.
Clark, Richard E; Pugh, Carla M; Yates, Kenneth A; Inaba, Kenji; Green, Donald J; Sullivan, Maura E
2012-03-01
Surgical training relies heavily on the ability of expert surgeons to provide complete and accurate descriptions of a complex procedure. However, research from a variety of domains suggests that experts often omit critical information about the judgments, analysis, and decisions they make when solving a difficult problem or performing a complex task. In this study, we compared three methods for capturing surgeons' descriptions of how to perform the procedure for inserting a femoral artery shunt (unaided free-recall, unaided free-recall with simulation, and cognitive task analysis methods) to determine which method produced more accurate and complete results. Cognitive task analysis was approximately 70% more complete and accurate than free-recall alone or free-recall during a simulation of the procedure. Ten expert trauma surgeons at a major urban trauma center were interviewed separately and asked to describe how to perform an emergency shunt procedure. Four surgeons provided an unaided free-recall description of the shunt procedure, five surgeons provided an unaided free-recall description of the procedure using visual aids and surgical instruments (simulation), and one (chosen randomly) was interviewed using cognitive task analysis (CTA) methods. An 11th vascular surgeon approved the final CTA protocol. The CTA interview with only one expert surgeon resulted in significantly greater accuracy and completeness of the descriptions compared with the unaided free-recall interviews with multiple expert surgeons. Surgeons in the unaided group omitted nearly 70% of necessary decision steps. In the free-recall group, heavy use of simulation improved surgeons' completeness when describing the steps of the procedure. CTA significantly increases the completeness and accuracy of surgeons' instructional descriptions of surgical procedures. In addition, simulation during unaided free-recall interviews may improve the completeness of interview data. Copyright © 2012 Elsevier Inc. All rights reserved.
Robust surface reconstruction by design-guided SEM photometric stereo
NASA Astrophysics Data System (ADS)
Miyamoto, Atsushi; Matsuse, Hiroki; Koutaki, Gou
2017-04-01
We present a novel approach that addresses the blind reconstruction problem in scanning electron microscope (SEM) photometric stereo for complicated semiconductor patterns to be measured. In our previous work, we developed a bootstrapping de-shadowing and self-calibration (BDS) method, which automatically calibrates the parameter of the gradient measurement formulas and resolves shadowing errors for estimating an accurate three-dimensional (3D) shape and underlying shadowless images. Experimental results on 3D surface reconstruction demonstrated the significance of the BDS method for simple shapes, such as an isolated line pattern. However, we found that complicated shapes, such as line-and-space (L&S) and multilayered patterns, produce deformed and inaccurate measurement results. This problem is due to brightness fluctuations in the SEM images, which are mainly caused by the energy fluctuations of the primary electron beam, variations in the electronic expanse inside a specimen, and electrical charging of specimens. Despite these being essential difficulties encountered in SEM photometric stereo, it is difficult to model accurately all the complicated physical phenomena of electronic behavior. We improved the robustness of the surface reconstruction in order to deal with these practical difficulties with complicated shapes. Here, design data are useful clues as to the pattern layout and layer information of integrated semiconductors. We used the design data as a guide of the measured shape and incorporated a geometrical constraint term to evaluate the difference between the measured and designed shapes into the objective function of the BDS method. Because the true shape does not necessarily correspond to the designed one, we use an iterative scheme to develop proper guide patterns and a 3D surface that provides both a less distorted and more accurate 3D shape after convergence. Extensive experiments on real image data demonstrate the robustness and effectiveness of our method.
NASA Astrophysics Data System (ADS)
Arida, Maya Ahmad
The concept of sustainable development has existed since 1972 and has since become one of the most important approaches to conserving natural resources and energy; with rising energy costs and increasing awareness of the effects of global warming, the development of building energy-saving methods and models has become ever more necessary for a sustainable future. According to the U.S. Energy Information Administration (EIA), buildings in the U.S. today consume 72 percent of the electricity produced and use 55 percent of U.S. natural gas. Buildings account for about 40 percent of the energy consumed in the United States, more than industry and transportation, and of this energy, heating and cooling systems use about 55 percent. If energy-use trends continue, buildings will become the largest consumer of global energy by 2025. This thesis proposes procedures and analysis techniques for building energy system modeling and optimization using time-series autoregressive artificial neural networks. The model predicts whole-building energy consumption as a function of four input variables: dry-bulb and wet-bulb outdoor air temperatures, hour of day, and type of day. The proposed model and the optimization process are tested using data collected from an existing building located in Greensboro, NC. The testing results show that the model captures the system performance very well, and an optimization method was developed to automate the search for the model structure that produces the most accurate predictions against the actual data. The results show that the developed model can provide results sufficiently accurate for use in various energy-efficiency and savings-estimation applications.
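As a rough illustration of the kind of model described above, the sketch below fits a small autoregressive neural network that predicts building energy use from dry-bulb temperature, wet-bulb temperature, hour of day and day type plus lagged consumption; the lag count, network size and field names are assumptions for illustration, not details taken from the thesis.

```python
# Minimal sketch (not the thesis code) of a time-series autoregressive ANN for
# whole-building energy use. Inputs, lag count and field names are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_features(data, n_lags=3):
    """data: dict of equal-length arrays with keys 'dry_bulb', 'wet_bulb',
    'hour', 'day_type', 'energy'. Returns design matrix X and target y."""
    X, y = [], []
    for t in range(n_lags, len(data["energy"])):
        row = [data["dry_bulb"][t], data["wet_bulb"][t],
               data["hour"][t], data["day_type"][t]]
        row += [data["energy"][t - k] for k in range(1, n_lags + 1)]  # AR terms
        X.append(row)
        y.append(data["energy"][t])
    return np.array(X), np.array(y)

# X, y = make_features(hourly_data)
# model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)
```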
A COMPARISON OF STELLAR ELEMENTAL ABUNDANCE TECHNIQUES AND MEASUREMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hinkel, Natalie R.; Young, Patrick A.; Pagano, Michael D.
2016-09-01
Stellar elemental abundances are important for understanding the fundamental properties of a star or stellar group, such as age and evolutionary history, as well as the composition of an orbiting planet. However, as abundance measurement techniques have progressed, there has been little standardization between individual methods and their comparisons. As a result, different stellar abundance procedures determine measurements that vary beyond the quoted error for the same elements within the same stars. The purpose of this paper is to better understand the systematic variations between methods and offer recommendations for producing more accurate results in the future. We invited a number of participants from around the world (Australia, Portugal, Sweden, Switzerland, and the United States) to calculate 10 element abundances (C, O, Na, Mg, Al, Si, Fe, Ni, Ba, and Eu) using the same stellar spectra for four stars (HD 361, HD 10700, HD 121504, and HD 202206). Each group produced measurements for each star using (1) their own autonomous techniques, (2) standardized stellar parameters, (3) a standardized line list, and (4) both standardized parameters and a line list. We present the resulting stellar parameters, absolute abundances, and a metric of data similarity that quantifies the homogeneity of the data. We conclude that standardization of some kind, particularly stellar parameters, improves the consistency between methods. However, because results did not converge as more free parameters were standardized, it is clear there are inherent issues within the techniques that need to be reconciled. Therefore, we encourage more conversation and transparency within the community such that stellar abundance determinations can be reproducible as well as accurate and precise.
Flexible taxonomic assignment of ambiguous sequencing reads
2011-01-01
Background To characterize the diversity of bacterial populations in metagenomic studies, sequencing reads need to be accurately assigned to taxonomic units in a given reference taxonomy. Reads that cannot be reliably assigned to a unique leaf in the taxonomy (ambiguous reads) are typically assigned to the lowest common ancestor of the set of species that match it. This introduces a potentially severe error in the estimation of bacteria present in the sample due to false positives, since all species in the subtree rooted at the ancestor are implicitly assigned to the read even though many of them may not match it. Results We present a method that maps each read to a node in the taxonomy that minimizes a penalty score while balancing the relevance of precision and recall in the assignment through a parameter q. This mapping can be obtained in time linear in the number of matching sequences, because LCA queries to the reference taxonomy take constant time. When applied to six different metagenomic datasets, our algorithm produces different taxonomic distributions depending on whether coverage or precision is maximized. Including information on the quality of the reads reduces the number of unassigned reads but increases the number of ambiguous reads, stressing the relevance of our method. Finally, two measures of performance are described and results with a set of artificially generated datasets are discussed. Conclusions The assignment strategy of sequencing reads introduced in this paper is a versatile and quick method for studying bacterial communities. The bacterial composition of the analyzed samples can vary significantly depending on how ambiguous reads are assigned, which in turn depends on the value of the q parameter. Validation of our results in an artificial dataset confirms that a combination of values of q produces the most accurate results. PMID:21211059
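The general idea of a penalty-minimizing assignment can be sketched as follows; the tree encoding and the exact penalty form below are illustrative assumptions, not the score defined in the paper.

```python
# Illustrative sketch only: assign an ambiguous read to the taxonomy node that
# minimizes a penalty trading false positives against false negatives via q.
# The tree encoding and penalty form are assumptions, not the paper's exact score.

def assign_read(matches, candidates, leaves_under, q=0.5):
    """matches: set of leaf species the read matches.
    candidates: taxonomy nodes to consider (e.g. ancestors of the matches).
    leaves_under: dict node -> set of leaf species in that node's subtree.
    q in [0, 1]: higher q favours precision, lower q favours recall."""
    def penalty(node):
        subtree = leaves_under[node]
        false_pos = len(subtree - matches)   # species implied but not matched
        false_neg = len(matches - subtree)   # matched species not covered
        return q * false_pos + (1.0 - q) * false_neg
    return min(candidates, key=penalty)
```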
Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al.
Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang
2015-09-21
A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.
Recent Advances in the Method of Forces: Integrated Force Method of Structural Analysis
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.
1998-01-01
Stress that can be induced in an elastic continuum can be determined directly through the simultaneous application of the equilibrium equations and the compatibility conditions. In the literature, this direct stress formulation is referred to as the integrated force method. This method, which uses forces as the primary unknowns, complements the popular equilibrium-based stiffness method, which considers displacements as the unknowns. The integrated force method produces accurate stress, displacement, and frequency results even for modest finite element models. This version of the force method should be developed as an alternative to the stiffness method because the latter method, which has been researched for the past several decades, may have entered its developmental plateau. Stress plays a primary role in the development of aerospace and other products, and its analysis is difficult. Therefore, it is advisable to use both methods to calculate stress and eliminate errors through comparison. This paper examines the role of the integrated force method in analysis, animation and design.
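Schematically, and with notation that is assumed here for illustration rather than taken from the paper, the force method couples the equilibrium equations with the compatibility conditions in a single linear system solved for the forces:

```latex
% Schematic of the coupled force-method system; notation assumed:
% [B] equilibrium matrix, [C] compatibility matrix, [G] flexibility matrix,
% {F} force unknowns, {P} applied loads, {\delta R} initial deformations.
\[
  [B]\{F\} = \{P\} \quad \text{(equilibrium)}, \qquad
  [C][G]\{F\} = \{\delta R\} \quad \text{(compatibility)},
\]
\[
  \begin{bmatrix} B \\ C\,G \end{bmatrix}\{F\}
  = \begin{Bmatrix} P \\ \delta R \end{Bmatrix}.
\]
```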
Zheng, Jinkai; Fang, Xiang; Cao, Yong; Xiao, Hang; He, Lili
2013-01-01
To develop an accurate and convenient method for monitoring the production of citrus-derived bioactive 5-demethylnobiletin from demethylation reaction of nobiletin, we compared surface enhanced Raman spectroscopy (SERS) methods with a conventional HPLC method. Our results show that both the substrate-based and solution-based SERS methods correlated with HPLC method very well. The solution method produced lower root mean square error of calibration and higher correlation coefficient than the substrate method. The solution method utilized an ‘affinity chromatography’-like procedure to separate the reactant nobiletin from the product 5-demthylnobiletin based on their different binding affinity to the silver dendrites. The substrate method was found simpler and faster to collect the SERS ‘fingerprint’ spectra of the samples as no incubation between samples and silver was needed and only trace amount of samples were required. Our results demonstrated that the SERS methods were superior to HPLC method in conveniently and rapidly characterizing and quantifying 5-demethylnobiletin production. PMID:23885986
Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zhong-Li, E-mail: zl.liu@163.com; Zhang, Xiu-Lu; Cai, Ling-Cang
A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.
Radiometric Correction of Multitemporal Hyperspectral Uas Image Mosaics of Seedling Stands
NASA Astrophysics Data System (ADS)
Markelin, L.; Honkavaara, E.; Näsi, R.; Viljanen, N.; Rosnell, T.; Hakala, T.; Vastaranta, M.; Koivisto, T.; Holopainen, M.
2017-10-01
Novel miniaturized multi- and hyperspectral imaging sensors on board unmanned aerial vehicles have recently shown great potential in various environmental monitoring and measuring tasks such as precision agriculture and forest management. These systems can be used to collect dense 3D point clouds and spectral information over small areas such as single forest stands or sample plots. Accurate radiometric processing and atmospheric correction are required when data sets from different dates and sensors, collected in varying illumination conditions, are combined. The performance of a novel radiometric block adjustment method, developed at the Finnish Geospatial Research Institute, is evaluated with a multitemporal hyperspectral data set of seedling stands collected during spring and summer 2016. Illumination conditions during the campaigns varied from bright to overcast. We use two different methods to produce homogeneous image mosaics and hyperspectral point clouds: image-wise relative correction, and image-wise relative correction with BRDF. The radiometric datasets are converted to reflectance using reference panels, and changes in the reflectance spectra are analysed. The tested methods improved image mosaic homogeneity by 5 % to 25 %. The results show that the evaluated method can produce consistent reflectance mosaics and reflectance spectral shapes across different areas and dates.
New high order schemes in BATS-R-US
NASA Astrophysics Data System (ADS)
Toth, G.; van der Holst, B.; Daldorff, L.; Chen, Y.; Gombosi, T. I.
2013-12-01
The University of Michigan global magnetohydrodynamics code BATS-R-US has long relied on block-adaptive mesh refinement (AMR) to increase accuracy in regions of interest, together with a second order accurate TVD scheme. While AMR can in principle produce arbitrarily accurate results, there are still practical limitations due to computational resources. To further improve the accuracy of the BATS-R-US code, we have recently implemented a 4th order accurate finite volume scheme (McCorquodale and Colella, 2011), the 5th order accurate Monotonicity Preserving scheme (MP5, Suresh and Huynh, 1997) and the 5th order accurate CWENO5 scheme (Capdeville, 2008). In the first implementation the high order accuracy is achieved in the uniform parts of the Cartesian grids, and we still use the second order TVD scheme at resolution changes. For spherical grids the new schemes are only second order accurate so far, but still much less diffusive than the TVD scheme. We show a few verification tests that demonstrate the order of accuracy, as well as challenging space physics applications. The high order schemes are less robust than the TVD scheme, and it requires some tricks and effort to make the code work. When the high order scheme works, however, we find that in most cases it can obtain similar or better results than the TVD scheme on twice finer grids. For three dimensional time dependent simulations this means that the high order scheme is almost 10 times faster and requires 8 times less storage than the second order method.
A neural network method to correct bidirectional effects in water-leaving radiance
NASA Astrophysics Data System (ADS)
Fan, Yongzhen; Li, Wei; Voss, Kenneth J.; Gatebe, Charles K.; Stamnes, Knut
2017-02-01
The standard method to convert the measured water-leaving radiances from the observation direction to the nadir direction developed by Morel and coworkers requires knowledge of the chlorophyll concentration (CHL). Also, the standard method was developed for open ocean water, which makes it unsuitable for turbid coastal waters. We introduce a neural network method to convert the water-leaving radiance (or the corresponding remote sensing reflectance) from the observation direction to the nadir direction. This method does not require any prior knowledge of the water constituents or the inherent optical properties (IOPs). This method is fast, accurate and can be easily adapted to different remote sensing instruments. Validation using NuRADS measurements in different types of water shows that this method is suitable for both open ocean and coastal waters. In open ocean or chlorophyll-dominated waters, our neural network method produces corrections similar to those of the standard method. In turbid coastal waters, especially sediment-dominated waters, a significant improvement was obtained compared to the standard method.
NASA Astrophysics Data System (ADS)
Rich, D. R.; Bowman, J. D.; Crawford, B. E.; Delheij, P. P. J.; Espy, M. A.; Haseyama, T.; Jones, G.; Keith, C. D.; Knudson, J.; Leuschner, M. B.; Masaike, A.; Masuda, Y.; Matsuda, Y.; Penttilä, S. I.; Pomeroy, V. R.; Smith, D. A.; Snow, W. M.; Szymanski, J. J.; Stephenson, S. L.; Thompson, A. K.; Yuan, V.
2002-04-01
The capability of performing accurate absolute measurements of neutron beam polarization opens a number of exciting opportunities in fundamental neutron physics and in neutron scattering. At the LANSCE pulsed neutron source we have measured the neutron beam polarization with an absolute accuracy of 0.3% in the neutron energy range from 40 meV to 10 eV using an optically pumped polarized 3He spin filter and a relative transmission measurement technique. 3He was polarized using the Rb spin-exchange method. We describe the measurement technique, present our results, and discuss some of the systematic effects associated with the method.
Water Level Prediction of Lake Cascade Mahakam Using Adaptive Neural Network Backpropagation (ANNBP)
NASA Astrophysics Data System (ADS)
Mislan; Gaffar, A. F. O.; Haviluddin; Puspitasari, N.
2018-04-01
Information on natural hazards and flood events is indispensable for prevention and mitigation. One cause of flooding is the rise of water in the areas around a lake; therefore, forecasting the lake's water level is required to anticipate flooding. The purpose of this paper is to apply a computational intelligence method, namely Adaptive Neural Network Backpropagation (ANNBP), to forecasting the water level of Lake Cascade Mahakam. Based on the experiments, the performance of the ANNBP indicated that the lake water level predictions were accurate, as measured by mean square error (MSE) and mean absolute percentage error (MAPE); in other words, the computational intelligence method can produce good accuracy. Hybridization and optimization of the computational intelligence approach are the focus of future work.
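For reference, the two error metrics quoted above can be computed as in the short sketch below; the example values are placeholders, not the study's data.

```python
# MSE and MAPE for a forecast, as used to judge prediction accuracy above.
import numpy as np

def mse(actual, predicted):
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.mean((actual - predicted) ** 2)

def mape(actual, predicted):
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.mean(np.abs((actual - predicted) / actual)) * 100.0  # percent

# Example with placeholder water levels (metres):
# mse([1.2, 1.4, 1.3], [1.1, 1.5, 1.3]) ≈ 0.0067; mape(...) ≈ 5.2 %
```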
A Hybrid Approach on Tourism Demand Forecasting
NASA Astrophysics Data System (ADS)
Nor, M. E.; Nurul, A. I. M.; Rusiman, M. S.
2018-04-01
Tourism has become one of the important industries contributing to a country's economy. Tourism demand forecasting provides valuable information to policy makers, decision makers and organizations related to the tourism industry so that they can make crucial decisions and plans. However, it is challenging to produce an accurate forecast, since economic data such as tourism data are affected by social, economic and environmental factors. In this study, an equally-weighted hybrid method, which is a combination of Box-Jenkins and Artificial Neural Networks, was applied to forecast Malaysia's tourism demand. The forecasting performance was assessed by taking each individual method as a benchmark. The results showed that this hybrid approach outperformed the other two models.
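An equally-weighted combination of two forecasts is simple to express; the sketch below averages a Box-Jenkins (ARIMA) forecast and a neural-network forecast, treating the two component forecasts as given inputs rather than reproducing how the study fitted them.

```python
# Equally-weighted hybrid forecast: average of the two component forecasts.
import numpy as np

def hybrid_forecast(arima_pred, ann_pred, weights=(0.5, 0.5)):
    """Combine two forecast series; equal weights give the hybrid described above."""
    arima_pred, ann_pred = np.asarray(arima_pred), np.asarray(ann_pred)
    w1, w2 = weights
    return w1 * arima_pred + w2 * ann_pred

# e.g. hybrid_forecast([105.0, 110.0], [101.0, 108.0]) -> array([103., 109.])
```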
Computer-based route-definition system for peripheral bronchoscopy.
Graham, Michael W; Gibbs, Jason D; Higgins, William E
2012-04-01
Multi-detector computed tomography (MDCT) scanners produce high-resolution images of the chest. Given a patient's MDCT scan, a physician can use an image-guided intervention system to first plan and later perform bronchoscopy to diagnostic sites situated deep in the lung periphery. An accurate definition of complete routes through the airway tree leading to the diagnostic sites, however, is vital for avoiding navigation errors during image-guided bronchoscopy. We present a system for the robust definition of complete airway routes suitable for image-guided bronchoscopy. The system incorporates both automatic and semiautomatic MDCT analysis methods for this purpose. Using an intuitive graphical user interface, the user invokes automatic analysis on a patient's MDCT scan to produce a series of preliminary routes. Next, the user visually inspects each route and quickly corrects the observed route defects using the built-in semiautomatic methods. Application of the system to a human study for the planning and guidance of peripheral bronchoscopy demonstrates the efficacy of the system.
Integrated large view angle hologram system with multi-slm
NASA Astrophysics Data System (ADS)
Yang, ChengWei; Liu, Juan
2017-10-01
Recently, holographic display has attracted much attention for its ability to generate real-time 3D reconstructed images. CGH provides an effective way to produce holograms, and a spatial light modulator (SLM) is used to reconstruct the image. However, the reconstruction system is usually heavy and complex, and the view angle is limited by the pixel size and spatial bandwidth product (SBP) of the SLM. In this paper, a light, portable holographic display system is proposed by integrating the optical elements and host computer units, which significantly reduces the space taken in the horizontal direction. The CGH is produced based on Fresnel diffraction and the point source method. To reduce memory usage and image distortion, we use an optimized accurate compressed look-up table method (AC-LUT) to compute the hologram. In the system, six SLMs are concatenated into a curved plane, each loading the phase-only hologram for a different angle of the object, so that the horizontal view angle of the reconstructed image can be expanded to about 21.8°.
Layer Number and Stacking Order Imaging of Few-layer Graphenes by Transmission Electron Microscopy
NASA Astrophysics Data System (ADS)
Ping, Jinglei; Fuhrer, Michael
2012-02-01
A method using transmission electron microscopy (TEM) selected area electron diffraction (SAED) patterns and dark field (DF) images is developed to identify graphene layer number and stacking order by comparing intensity ratios of SAED spots with theory. Graphene samples are synthesized by ambient-pressure chemical vapor deposition and then etched by hydrogen at high temperature to produce samples with crystalline stacking but varying layer number on the nanometer scale. Combined DF images from first- and second-order diffraction spots are used to produce images with layer-number and stacking-order contrast at few-nanometer resolution. This method is shown to be accurate enough for quantitative stacking-order identification of graphenes up to at least four layers. This work was partially supported by Science of Precision Multifunctional Nanostructures for Electrical Energy Storage, an Energy Frontier Research Center funded by the U.S. DOE, Office of Science, Office of Basic Energy Sciences under Award Number DESC0001160.
YoTube: Searching Action Proposal Via Recurrent and Static Regression Networks
NASA Astrophysics Data System (ADS)
Zhu, Hongyuan; Vial, Romain; Lu, Shijian; Peng, Xi; Fu, Huazhu; Tian, Yonghong; Cao, Xianbin
2018-06-01
In this paper, we present YoTube, a novel network fusion framework for searching action proposals in untrimmed videos, where each action proposal corresponds to a spatio-temporal video tube that potentially locates one human action. Our method consists of a recurrent YoTube detector and a static YoTube detector, where the recurrent YoTube explores the regression capability of RNNs for candidate bounding-box prediction using learnt temporal dynamics, and the static YoTube produces the bounding boxes using rich appearance cues in a single frame. Both networks are trained using RGB and optical flow in order to fully exploit the rich appearance, motion and temporal context, and their outputs are fused to produce accurate and robust proposal boxes. Action proposals are finally constructed by linking these boxes using dynamic programming with a novel trimming method to handle untrimmed videos effectively and efficiently. Extensive experiments on the challenging UCF-101 and UCF-Sports datasets show that our proposed technique obtains superior performance compared with the state of the art.
Trimming Line Design using New Development Method and One Step FEM
NASA Astrophysics Data System (ADS)
Chung, Wan-Jin; Park, Choon-Dal; Yang, Dong-yol
2005-08-01
In most automobile panel manufacturing, trimming is generally performed prior to flanging. Finding a feasible trimming line is crucial for obtaining an accurate edge profile after flanging. The section-based method develops the blank along section planes and finds the trimming line by generating a loop of end points. This method suffers from inaccurate results in regions with out-of-section motion. On the other hand, the simulation-based method can produce a more accurate trimming line through an iterative strategy. However, due to time limitations and the lack of information at the initial die design stage, it is still not widely accepted in the industry. In this study, a new fast method to find a feasible trimming line is proposed. One-step FEM is used to analyze the flanging process, because the desired final shape after flanging can be defined and most strain paths in flanging are simple. When using one-step FEM, the main obstacle is the generation of the initial guess. A robust initial-guess generation method is developed to handle badly shaped meshes, very different mesh sizes and undercut parts. The new method develops the 3D triangular mesh in a propagational way from the final mesh onto the drawing tool surface. In order to remedy mesh distortion during development, an energy minimization technique is utilized. The trimming line is extracted from the outer boundary after the one-step FEM simulation. This method offers many benefits, since the trimming line can be obtained in the early design stage. The developed method has been successfully applied to complex industrial applications such as the flanging of fender and door outer panels.
Combining remotely sensed and other measurements for hydrologic areal averages
NASA Technical Reports Server (NTRS)
Johnson, E. R.; Peck, E. L.; Keefer, T. N.
1982-01-01
A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.
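The weighting idea above can be sketched as a simple weighted mean in which each measurement's weight is proportional to the basin area it is judged to represent; the actual correlation-area weights come from the statistical analysis described in the paper, so the code below is only a schematic with made-up numbers.

```python
# Schematic weighted areal average: weights proportional to the basin area
# each measurement (point, line, or areal) is taken to represent.
def mean_areal_value(measurements):
    """measurements: list of (value, represented_area) pairs."""
    total_area = sum(area for _, area in measurements)
    return sum(value * area for value, area in measurements) / total_area

# e.g. a gauge (12 mm over 30 km^2) and an airborne swath (9 mm over 170 km^2):
# mean_areal_value([(12.0, 30.0), (9.0, 170.0)]) -> 9.45 mm
```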
Incorrect Match Detection Method for Arctic Sea-Ice Reconstruction Using Uav Images
NASA Astrophysics Data System (ADS)
Kim, J.-I.; Kim, H.-C.
2018-05-01
Shapes and surface roughness, which are considered key indicators for understanding Arctic sea-ice, can be measured from a digital surface model (DSM) of the target area. Unmanned aerial vehicles (UAVs) flying at low altitudes in principle enable accurate DSM generation. However, the characteristics of sea-ice, with its textureless surface and incessant motion, make image matching for DSM generation difficult. In this paper, we propose a method for effectively detecting incorrect matches before correcting a sea-ice DSM derived from UAV images. The proposed method variably adjusts the size of a search window to analyze the matching results of the generated DSM and distinguish incorrect matches. Experimental results showed that the sea-ice DSM contained large errors along textureless surfaces, and that the incorrect matches could be effectively detected by the proposed method.
A Focusing Method in the Calibration Process of Image Sensors Based on IOFBs
Fernández, Pedro R.; Lázaro, José L.; Gardel, Alfredo; Cano, Ángel E.; Bravo, Ignacio
2010-01-01
A focusing procedure in the calibration process of image sensors based on Incoherent Optical Fiber Bundles (IOFBs) is described, using the information extracted from the fibers. This procedure differs from other currently known focusing methods because of the non-spatial in-out correspondence between fibers, which produces a natural codification of the transmitted image. Measuring focus is essential prior to carrying out calibration in order to guarantee accurate processing and decoding. Four algorithms have been developed to estimate the focus measure: two based on mean grey level, and the other two based on variance. In this paper, a few simple focus measures are defined and compared. Some experimental results regarding the focus measure and the accuracy of the developed methods are discussed in order to demonstrate their effectiveness. PMID:22315526
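Focus measures of the two families mentioned (mean grey level and variance) can be written compactly; the sketch below shows generic versions computed over the pixels of an image, without reproducing the paper's specific algorithms.

```python
# Generic focus measures of the two families mentioned above: one based on
# mean grey level, one on grey-level variance (sharper focus -> higher variance).
import numpy as np

def focus_mean(image):
    return float(np.mean(image))

def focus_variance(image):
    img = np.asarray(image, dtype=float)
    return float(np.mean((img - img.mean()) ** 2))

# In practice the chosen measure is tracked while the focus is adjusted, and
# the position maximizing it is kept.
```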
NASA Astrophysics Data System (ADS)
Rosenbaum, Joyce E.
2011-12-01
Commercial air traffic is anticipated to increase rapidly in the coming years. The impact of aviation noise on communities surrounding airports is, therefore, a growing concern. Accurate prediction of noise can help to mitigate the impact on communities and foster smoother integration of aerospace engineering advances. The problem of accurate sound level prediction requires careful inclusion of all mechanisms that affect propagation, in addition to correct source characterization. Terrain, ground type, meteorological effects, and source directivity can have a substantial influence on the noise level. Because they are difficult to model, these effects are often included only by rough approximation. This dissertation presents a model designed for sound propagation over uneven terrain, with mixed ground type and realistic meteorological conditions. The model is a hybrid of two numerical techniques: the parabolic equation (PE) and fast field program (FFP) methods, which allow for physics-based inclusion of propagation effects and ensure the low frequency content, a factor in community impact, is predicted accurately. Extension of the hybrid model to a pseudo-three-dimensional representation allows it to produce aviation noise contour maps in the standard form. In order for the model to correctly characterize aviation noise sources, a method of representing arbitrary source directivity patterns was developed for the unique form of the parabolic equation starting field. With this advancement, the model can represent broadband, directional moving sound sources, traveling along user-specified paths. This work was prepared for possible use in the research version of the sound propagation module in the Federal Aviation Administration's new standard predictive tool.
Selfe, James; Hardaker, Natalie; Thewlis, Dominic; Karki, Anna
2006-12-01
Objective: To develop an anatomic marker system (AMS) as an accurate, reliable method of thermal imaging data analysis, for use in cryotherapy research. Design: Investigation of the accuracy of a new thermal imaging technique. Setting: Hospital orthopedic outpatient department in England. Participants: Consecutive sample of 9 patients referred to an anterior knee pain clinic. Interventions: Not applicable. Thermally inert markers were placed at specific anatomic locations, defining an area over the anterior knee of patients with anterior knee pain. A baseline thermal image was taken. Patients underwent a 3-minute thermal washout of the affected knee. Thermal images were collected at a rate of 1 image per minute for a 20-minute re-warming period. A Matlab (version 7.0) program was written to digitize the marker positions and subsequently calculate the mean of the area over the anterior knee. Virtual markers were then defined as 15% distal from the proximal marker, 30% proximal from the distal markers, 15% lateral from the medial marker, and 15% medial from the lateral marker. The virtual markers formed an ellipse, which defined an area representative of the patella shape. Within the ellipse, the mean value of the full pixels determined the mean temperature of this region. Ten raters were recruited to use the program, and interrater reliability was investigated. The intraclass correlation coefficient produced coefficients within acceptable bounds, ranging from .82 to .97, indicating adequate interrater reliability. The AMS provides an accurate, reliable method for thermal imaging data analysis and is a reliable tool with which to advance cryotherapy research.
Naishadham, Krishna; Piou, Jean E; Ren, Lingyun; Fathy, Aly E
2016-12-01
Ultra wideband (UWB) Doppler radar has many biomedical applications, including remote diagnosis of cardiovascular disease, triage and real-time personnel tracking in rescue missions. It uses narrow pulses to probe the human body and detect tiny cardiopulmonary movements by spectral analysis of the backscattered electromagnetic (EM) field. With the help of super-resolution spectral algorithms, UWB radar is capable of increased accuracy for estimating vital signs such as heart and respiration rates in adverse signal-to-noise conditions. A major challenge for biomedical radar systems is detecting the heartbeat of a subject with high accuracy, because of minute thorax motion (less than 0.5 mm) caused by the heartbeat. The problem becomes compounded by EM clutter and noise in the environment. In this paper, we introduce a new algorithm based on the state space method (SSM) for the extraction of cardiac and respiration rates from UWB radar measurements. SSM produces range-dependent system poles that can be classified parametrically with spectral peaks at the cardiac and respiratory frequencies. It is shown that SSM produces accurate estimates of the vital signs without producing harmonics and inter-modulation products that plague signal resolution in widely used FFT spectrograms.
Near infrared spectral linearisation in quantifying soluble solids content of intact carambola.
Omar, Ahmad Fairuz; MatJafri, Mohd Zubir
2013-04-12
This study presents a novel application of near infrared (NIR) spectral linearisation for measuring the soluble solids content (SSC) of carambola fruits. NIR spectra were measured using reflectance and interactance methods. In this study, only the interactance measurement technique successfully generated a reliable measurement result, with a coefficient of determination (R2) of 0.724 and a root mean square error of prediction (RMSEP) of 0.461 °Brix. The results from this technique produced a highly accurate and stable prediction model compared with multiple linear regression techniques.
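The two figures of merit quoted (R2 and RMSEP) are standard; a minimal computation is sketched below, with the reference and predicted SSC arrays as placeholders.

```python
# Coefficient of determination (R^2) and root mean square error of prediction
# (RMSEP) for a prediction set; the input arrays are placeholders.
import numpy as np

def r2_rmsep(reference, predicted):
    reference, predicted = np.asarray(reference), np.asarray(predicted)
    residuals = reference - predicted
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmsep = np.sqrt(np.mean(residuals ** 2))
    return r2, rmsep  # RMSEP is in the same units as the data (here, degrees Brix)
```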
Near Infrared Spectral Linearisation in Quantifying Soluble Solids Content of Intact Carambola
Omar, Ahmad Fairuz; MatJafri, Mohd Zubir
2013-01-01
This study presents a novel application of near infrared (NIR) spectral linearisation for measuring the soluble solids content (SSC) of carambola fruits. NIR spectra were measured using reflectance and interactance methods. In this study, only the interactance measurement technique successfully generated a reliable measurement result, with a coefficient of determination (R2) of 0.724 and a root mean square error of prediction (RMSEP) of 0.461 °Brix. The results from this technique produced a highly accurate and stable prediction model compared with multiple linear regression techniques. PMID:23584118
NASA Technical Reports Server (NTRS)
Wright, G.; Bryan, J. B.
1986-01-01
Faster production of large optical mirrors may result from combining single-point diamond crushing of the glass with polishing using a small area tool to smooth the surface and remove the damaged layer. Diamond crushing allows a surface contour accurate to 0.5 microns to be generated, and the small area computer-controlled polishing tool allows the surface roughness to be removed without destroying the initial contour. Final contours with an accuracy of 0.04 microns have been achieved.
Stresses Produced in Airplane Wings by Gusts
NASA Technical Reports Server (NTRS)
Kussner, Hans Georg
1932-01-01
Accurate prediction of gust stress being out of the question because of the multiplicity of the free air movements, the exploration of gust stress is restricted to static method which must be based upon: 1) stress measurements in free flight; 2) check of design specifications of approved type airplanes. With these empirical data the stress must be compared which can be computed for a gust of known intensity and structure. This "maximum gust" then must be so defined as to cover the whole ambit of empiricism and thus serve as prediction for new airplane designs.
Proof of concept of a simple computer-assisted technique for correcting bone deformities.
Ma, Burton; Simpson, Amber L; Ellis, Randy E
2007-01-01
We propose a computer-assisted technique for correcting bone deformities using the Ilizarov method. Our technique is an improvement over prior art in that it does not require a tracking system, navigation hardware and software, or intraoperative registration. Instead, we rely on a postoperative CT scan to obtain all of the information necessary to plan the correction and compute a correction schedule for the patient. Our laboratory experiments using plastic phantoms produced deformity corrections accurate to within 3.0 degrees of rotation and 1 mm of lengthening.
A fast and accurate surface plasmon resonance system
NASA Astrophysics Data System (ADS)
Espinosa Sánchez, Y. M.; Luna Moreno, D.; Noé Arias, E.; Garnica Campos, G.
2012-10-01
In this work we propose a Surface Plasmon Resonance (SPR) system driven by LabVIEW software, which produces fast, simple and accurate measurements of samples. The system acquires 2000 data points over a 20-degree range in 20 seconds, with 0.01 degrees of resolution. All the information is sent from the computer to the microcontroller as an array of bytes in hexadecimal format to be analyzed. Besides SPR measurements, the system can also be used to measure the critical angle and the Brewster angle using the Abeles method.
A Soil-free System for Assaying Nematicidal Activity of Chemicals
Preiser, F. A.; Babu, J. R.; Haidri, A. A.
1981-01-01
A biological assay system for studying the nematicidal activity of chemicals has been devised using a model consisting of cucumber (Cucumis sativus L. cv. Long Marketer) seedlings growing in the diSPo® growth-pouch apparatus. Meloidogyne incognita was used as the test organism. The response was quantified in terms of the numbers of galls produced. Statistical procedures were applied to estimate the ED50 values of currently available nematicides. This system permits accurate quantification of galling and requires much less space and effort than the currently used methods. PMID:19300800
A Soil-free System for Assaying Nematicidal Activity of Chemicals.
Preiser, F A; Babu, J R; Haidri, A A
1981-10-01
A biological assay system for studying the nematicidal activity of chemicals has been devised using a model consisting of cucumber (Cucumis sativus L. cv. Long Marketer) seedlings growing in the diSPo(R) growth-pouch apparatus. Meloidogyne incognita was used as the test organism. The response was quantified in terms of the numbers of galls produced. Statistical procedures were applied to estimate the ED(50) values of currently available nematicides. This system permits accurate quantification of galling and requires much less space and effort than the currently used methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Jong, Wibe A.; Harrison, Robert J.; Dixon, David A.
A parallel implementation of the spin-free one-electron Douglas-Kroll(-Hess) Hamiltonian (DKH) in NWChem is discussed. An efficient and accurate method to calculate DKH gradients is introduced. It is shown that the use of standard (non-relativistic) contracted basis sets can produce erroneous results for elements beyond the first row. The generation of DKH-contracted cc-pVXZ (X = D, T, Q, 5) basis sets for H, He, B - Ne, Al - Ar, and Ga - Br is discussed.
Parameswaran, Vidhya; Anilkumar, S.; Lylajam, S.; Rajesh, C.; Narayan, Vivek
2016-01-01
Background and Objectives: This in vitro study compared the shade matching abilities of an intraoral spectrophotometer and the conventional visual method using two shade guides. Previous investigations comparing color perceived by human observers with color assessed by instruments have been inconclusive. The objectives were to determine the accuracies and interrater agreement of both methods and the effectiveness of two shade guides with either method. Methods: In the visual method, 10 examiners with normal color vision matched target control shade tabs taken from the two shade guides (VITAPAN Classical™ and VITAPAN 3D Master™) with other full sets of the respective shade guides. Each tab was matched 3 times to determine the repeatability of the visual examiners. The spectrophotometric shade matching was performed by two independent examiners using an intraoral spectrophotometer (VITA Easyshade™) with five repetitions for each tab. Results: The visual method had greater accuracy than the spectrophotometer. The spectrophotometer, however, exhibited significantly better interrater agreement than the visual method. While the VITAPAN Classical shade guide was more accurate with the spectrophotometer, the VITAPAN 3D Master shade guide proved better with the visual method. Conclusion: This in vitro study clearly delineates the advantages and limitations of both methods. There were significant differences between the methods, with the visual method producing more accurate results than the spectrophotometric method. The spectrophotometer showed far better interrater agreement scores irrespective of the shade guide used. Even though visual shade matching is subjective, it is not inferior and should not be underrated. A judicious combination of both techniques is imperative to attain a successful and esthetic outcome. PMID:27746599
Performance of vegetation indices from Landsat time series in deforestation monitoring
NASA Astrophysics Data System (ADS)
Schultz, Michael; Clevers, Jan G. P. W.; Carter, Sarah; Verbesselt, Jan; Avitabile, Valerio; Quang, Hien Vu; Herold, Martin
2016-10-01
The performance of Landsat time series (LTS) of eight vegetation indices (VIs) was assessed for monitoring deforestation across the tropics. Three sites were selected based on differing remote sensing observation frequencies, deforestation drivers and environmental factors. The LTS of each VI was analysed using the Breaks For Additive Season and Trend (BFAST) Monitor method to identify deforestation. A robust reference database was used to evaluate the performance regarding spatial accuracy, sensitivity to observation frequency and the combined use of multiple VIs. The canopy-cover-sensitive Normalized Difference Fraction Index (NDFI) was the most accurate. Among those tested, wetness-related VIs (the Normalized Difference Moisture Index (NDMI) and Tasselled Cap wetness (TCw)) were spatially more accurate than greenness-related VIs (the Normalized Difference Vegetation Index (NDVI) and Tasselled Cap greenness (TCg)). When VIs were fused at the feature level, spatial accuracy improved and overestimation of change was reduced. NDVI and NDFI produced the most robust results when observation frequency varied.
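For reference, the normalized-difference indices named above follow standard band-ratio definitions; a sketch for Landsat-style surface-reflectance bands is given below (NDFI and the Tasselled Cap transforms require additional fraction or coefficient sets and are omitted here).

```python
# Standard normalized-difference indices from Landsat surface-reflectance bands.
# Band variables are arrays of reflectance; names are generic, not sensor-specific,
# and denominators are assumed to be nonzero for this sketch.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

def ndmi(nir, swir1):
    return (nir - swir1) / (nir + swir1)

# A deforestation monitor such as BFAST Monitor is then run on the per-pixel
# time series of the chosen index to flag structural breaks.
```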
Absolute wavelength calibration of a Doppler spectrometer with a custom Fabry-Perot optical system
NASA Astrophysics Data System (ADS)
Baltzer, M. M.; Craig, D.; Den Hartog, D. J.; Nishizawa, T.; Nornberg, M. D.
2016-11-01
An Ion Doppler Spectrometer (IDS) is used for fast measurements of C VI line emission (343.4 nm) in the Madison Symmetric Torus. Absolutely calibrated flow measurements are difficult because the IDS records data within 0.25 nm of the line. Commercial calibration lamps do not produce lines in this narrow range. A light source using an ultraviolet LED and etalon was designed to provide a fiducial marker 0.08 nm wide. The light is coupled into the IDS at f/4, and a holographic diffuser increases homogeneity of the final image. Random and systematic errors in data analysis were assessed. The calibration is accurate to 0.003 nm, allowing for flow measurements accurate to 3 km/s. This calibration is superior to the previous method which used a time-averaged measurement along a chord believed to have zero net Doppler shift.
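The quoted link between wavelength accuracy and flow accuracy follows from the non-relativistic Doppler relation v ≈ c·Δλ/λ; a small consistency check with the numbers in the abstract is sketched below.

```python
# Doppler shift to line-of-sight flow velocity: v ≈ c * (delta_lambda / lambda0).
C_KM_S = 2.998e5  # speed of light in km/s

def flow_velocity(delta_lambda_nm, lambda0_nm=343.4):
    return C_KM_S * delta_lambda_nm / lambda0_nm

# A 0.003 nm calibration uncertainty on the 343.4 nm C VI line corresponds to
# flow_velocity(0.003) ≈ 2.6 km/s, consistent with the ~3 km/s figure above.
```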
Loescher, Christine M; Morton, David W; Razic, Slavica; Agatonovic-Kustrin, Snezana
2014-09-01
Chromatography techniques such as HPTLC and HPLC are commonly used to produce a chemical fingerprint of a plant, allowing identification and quantification of the main constituents within the plant. The aims of this study were to compare HPTLC and HPLC for qualitative and quantitative analysis of the major constituents of Calendula officinalis, and to investigate the effect of different extraction techniques on the composition of C. officinalis extracts from different parts of the plant. The results found HPTLC to be effective for qualitative analysis; however, HPLC was found to be more accurate for quantitative analysis. A combination of the two methods may be useful in a quality control setting, as it would allow rapid qualitative analysis of herbal material while maintaining accurate quantification of extract composition.