Accuracy and Calibration of High Explosive Thermodynamic Equations of State
NASA Astrophysics Data System (ADS)
Baker, Ernest L.; Capellos, Christos; Stiel, Leonard I.; Pincay, Jack
2010-10-01
The Jones-Wilkins-Lee-Baker (JWLB) equation of state (EOS) was developed to describe overdriven detonation more accurately while maintaining an accurate description of the expansion work output of high explosive products. The increased mathematical complexity of the JWLB EOS provides increased accuracy for practical problems of interest; larger numbers of parameters are often justified by improved physics descriptions, but they also mean increased calibration complexity. A generalized extent-of-aluminum-reaction Jones-Wilkins-Lee (JWL)-based EOS was developed in order to describe more accurately the observed expansion behavior of aluminized explosive detonation products. A calibration method based on nonlinear optimization was developed to describe the unreacted, partially reacted, and completely reacted explosive. A reasonable calibration of a generalized extent-of-aluminum-reaction JWLB EOS as a function of aluminum reaction fraction has not yet been achieved, due to the increased mathematical complexity of the JWLB form.
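The JWL family of product equations of state referenced above has a standard closed form. As an illustrative sketch only (the classic three-term JWL, not the JWLB extension, with nominal TNT-like parameters rather than any calibration from this work), the pressure on the reference adiabat can be evaluated as:

```python
import math

def jwl_pressure(v, a, b, r1, r2, omega, e):
    """Standard JWL equation of state: pressure on the reference adiabat
    as a function of relative volume v = V/V0. JWLB adds further
    exponential terms and a volume-dependent Gamma, omitted here."""
    return (a * (1.0 - omega / (r1 * v)) * math.exp(-r1 * v)
            + b * (1.0 - omega / (r2 * v)) * math.exp(-r2 * v)
            + omega * e / v)

# Illustrative TNT-like parameters (Pa and J/m^3); nominal textbook values,
# not a calibrated product of the cited work.
A, B = 3.712e11, 3.231e9
R1, R2, OMEGA, E0 = 4.15, 0.95, 0.30, 7.0e9

# Pressure along the expansion from v = 1 to v = 7.
pressures = [jwl_pressure(v / 10.0, A, B, R1, R2, OMEGA, E0)
             for v in range(10, 71, 5)]
```

The pressure decreases monotonically along the expansion for these parameters, which is the qualitative behavior the calibration work above must reproduce quantitatively.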
An efficient approach to BAC based assembly of complex genomes.
Visendi, Paul; Berkman, Paul J; Hayashi, Satomi; Golicz, Agnieszka A; Bayer, Philipp E; Ruperao, Pradeep; Hurgobin, Bhavna; Montenegro, Juan; Chan, Chon-Kit Kenneth; Staňková, Helena; Batley, Jacqueline; Šimková, Hana; Doležel, Jaroslav; Edwards, David
2016-01-01
There has been exponential growth in the number of genome sequencing projects since the introduction of next generation DNA sequencing technologies. Genome projects have increasingly involved assembly of whole genome data, which produces inferior assemblies compared to traditional Sanger sequencing of genomic fragments cloned into bacterial artificial chromosomes (BACs). While whole genome shotgun sequencing using next generation sequencing (NGS) is relatively fast and inexpensive, it is extremely challenging for highly complex genomes, where polyploidy or high repeat content confounds accurate assembly, or where a highly accurate 'gold' reference is required. Several attempts have been made to improve genome sequencing approaches by incorporating NGS methods, with variable success. We present the application of a novel BAC sequencing approach which combines indexed pools of BACs, Illumina paired read sequencing, a sequence assembler specifically designed for complex BAC assembly, and a custom bioinformatics pipeline. We demonstrate this method by sequencing and assembling BAC cloned fragments from the bread wheat and sugarcane genomes. We demonstrate that our assembly approach is accurate, robust, cost effective and scalable, with applications for complete genome sequencing in large and complex genomes.
NASA Technical Reports Server (NTRS)
Craidon, C. B.
1983-01-01
A computer program was developed to extend the geometry input capabilities of previous versions of a supersonic zero-lift wave drag computer program. The arbitrary geometry input description is flexible enough to describe almost any complex aircraft concept, so highly accurate wave drag analysis can now be performed: complex geometries can be represented accurately rather than modified to meet the requirements of a restricted input format.
Characterization of known protein complexes using k-connectivity and other topological measures
Gallagher, Suzanne R; Goldberg, Debra S
2015-01-01
Many protein complexes are densely packed, so proteins within complexes often interact with several other proteins in the complex. Steric constraints prevent most proteins from simultaneously binding more than a handful of other proteins, regardless of the number of proteins in the complex. Because of this, as complex size increases, several measures of the complex decrease within protein-protein interaction networks. However, k-connectivity, the number of vertices or edges that must be removed in order to disconnect a graph, may remain consistently high for protein complexes. The property of k-connectivity has been little used previously in the investigation of protein-protein interactions. To understand the discriminative power of k-connectivity and other topological measures for identifying unknown protein complexes, we characterized these properties in known Saccharomyces cerevisiae protein complexes, in networks generated both from highly accurate X-ray crystallography experiments, which give an accurate model of each complex, and from high-throughput yeast 2-hybrid studies, in which new complexes may be discovered. We also computed these properties for appropriate random subgraphs. We found that clustering coefficient, mutual clustering coefficient, and k-connectivity are better indicators of known protein complexes than edge density, degree, or betweenness. This suggests new directions for future protein complex-finding algorithms. PMID:26913183
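The k-connectivity measure discussed above can be computed exactly for complex-sized subgraphs by brute force. A minimal sketch (vertex connectivity only, practical for small graphs, not whole interaction networks):

```python
from itertools import combinations

def is_connected(nodes, edges):
    """Simple traversal-based connectivity test on an undirected graph."""
    nodes = list(nodes)
    if len(nodes) <= 1:
        return True
    adj = {n: set() for n in nodes}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(nodes)

def vertex_connectivity(nodes, edges):
    """Smallest number of vertices whose removal disconnects the graph,
    by exhaustively testing removal sets of increasing size."""
    nodes = list(nodes)
    for k in range(len(nodes) - 1):
        for removed in combinations(nodes, k):
            rest = [n for n in nodes if n not in removed]
            kept = [(u, v) for u, v in edges if u in rest and v in rest]
            if not is_connected(rest, kept):
                return k
    return len(nodes) - 1

# Example: a 4-cycle stays connected after any single removal,
# but removing two opposite vertices disconnects it.
cycle4 = [(1, 2), (2, 3), (3, 4), (4, 1)]
k = vertex_connectivity([1, 2, 3, 4], cycle4)
```

For realistic network sizes one would use a max-flow based algorithm instead; the brute-force version above is only meant to make the definition concrete.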
Li, Shan; Dong, Xia; Su, Zhengchang
2013-07-30
Although prokaryotic gene transcription has been studied for decades, many aspects of the process remain poorly understood. In particular, recent studies have revealed that the transcriptomes of many prokaryotes are far more complex than previously thought. Genes in an operon are often alternatively and dynamically transcribed under different conditions, and a large portion of genes and intergenic regions have antisense RNA (asRNA) and non-coding RNA (ncRNA) transcripts, respectively. Ironically, similar studies had not been conducted in the model bacterium E. coli K12, so it was unknown whether the bacterium possesses similarly complex transcriptomes. Furthermore, although RNA-seq has become the major method for analyzing the complexity of prokaryotic transcriptomes, it is still a challenging task to accurately assemble full-length transcripts from short RNA-seq reads. To fill these gaps, we profiled the transcriptomes of E. coli K12 under different culture conditions and growth phases using a highly specific directional RNA-seq technique that can capture various types of transcripts in the bacterial cells, combined with a highly accurate and robust algorithm and tool, TruHMM (http://bioinfolab.uncc.edu/TruHmm_package/), for assembling full-length transcripts. We found that 46.9 ~ 63.4% of expressed operons were utilized in their putative alternative forms, 72.23 ~ 89.54% of genes had putative asRNA transcripts and 51.37 ~ 72.74% of intergenic regions had putative ncRNA transcripts under different culture conditions and growth phases. As has been demonstrated in many other prokaryotes, E. coli K12 also has highly complex and dynamic transcriptomes under different culture conditions and growth phases. Such complex and dynamic transcriptomes might play important roles in the physiology of the bacterium. TruHMM is a highly accurate and robust algorithm for assembling full-length transcripts in prokaryotes from directional RNA-seq short reads. PMID:23899370
Seo, Jung Hee; Mittal, Rajat
2010-01-01
A new sharp-interface immersed boundary method based approach for the computation of low-Mach number flow-induced sound around complex geometries is described. The underlying approach is based on a hydrodynamic/acoustic splitting technique in which the incompressible flow is first computed using a second-order accurate immersed boundary solver. This is followed by the computation of sound using the linearized perturbed compressible equations (LPCE). The primary contribution of the current work is the development of a versatile, high-order accurate immersed boundary method for solving the LPCE in complex domains. This new method enforces the boundary condition on the immersed boundary to high order by combining the ghost-cell approach with a weighted least-squares error method based on a high-order approximating polynomial. The method is validated for canonical acoustic wave scattering and flow-induced noise problems. Applications of this technique to relatively complex cases of practical interest are also presented. PMID:21318129
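The ghost-cell/weighted-least-squares idea can be illustrated in one dimension: fit a low-order polynomial to nearby field samples, weighting the boundary point heavily, and evaluate the fit at the ghost location. A minimal sketch (a 1D quadratic fit with assumed sample locations, not the authors' high-order multidimensional scheme):

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ghost_value(x_pts, f_pts, weights, x_ghost):
    """Weighted least-squares quadratic fit p(x) = c0 + c1*x + c2*x^2 through
    nearby field samples (boundary point given a large weight), evaluated
    at the ghost-cell location via the normal equations."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for x, f, w in zip(x_pts, f_pts, weights):
        phi = [1.0, x, x * x]
        for i in range(3):
            b[i] += w * phi[i] * f
            for j in range(3):
                A[i][j] += w * phi[i] * phi[j]
    c = solve3(A, b)
    return c[0] + c[1] * x_ghost + c[2] * x_ghost ** 2

# Samples of u(x) = 1 + 2x + 3x^2, with the boundary point x = 0 heavily
# weighted; since u is quadratic, the fit recovers u(-0.5) = 0.75 exactly.
g = ghost_value([0.0, 0.5, 1.0, 1.5], [1.0, 2.75, 6.0, 10.75],
                [100.0, 1.0, 1.0, 1.0], -0.5)
```

A quadratic fit of quadratic data is exact for any weights; with noisy or higher-order fields, the weights control how strongly the boundary condition is enforced.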
Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions
NASA Astrophysics Data System (ADS)
Chen, Nan; Majda, Andrew J.
2018-02-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. In particular, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Unlike traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
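The parametric half of the hybrid strategy can be illustrated in one dimension: instead of point particles, each ensemble member contributes a conditional Gaussian, and the full PDF is their mixture. A minimal sketch with synthetic means and variances (the ensemble model here is illustrative, not the authors' conditional Gaussian system):

```python
import math, random

def mixture_pdf(x, means, variances):
    """Uniform-weight Gaussian mixture: one conditional Gaussian per
    ensemble member, in place of delta-function particles."""
    return sum(math.exp(-(x - m) ** 2 / (2.0 * v)) / math.sqrt(2.0 * math.pi * v)
               for m, v in zip(means, variances)) / len(means)

random.seed(0)
# A small, O(100)-member ensemble of conditional means and variances;
# the heavy-tailed spread of the variances mimics intermittency.
means = [random.gauss(0.0, 1.0) for _ in range(100)]
variances = [0.2 + random.expovariate(2.0) for _ in range(100)]

# Crude trapezoidal check that the mixture integrates to one.
xs = [-20.0 + 0.02 * i for i in range(2001)]
ps = [mixture_pdf(x, means, variances) for x in xs]
mass = sum(0.02 * 0.5 * (p0 + p1) for p0, p1 in zip(ps, ps[1:]))
```

Because each Gaussian covers a finite region of state space, far fewer members are needed than for a particle histogram of comparable smoothness.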
Complex regression Doppler optical coherence tomography
NASA Astrophysics Data System (ADS)
Elahi, Sahar; Gu, Shi; Thrane, Lars; Rollins, Andrew M.; Jenkins, Michael W.
2018-04-01
We introduce a new method to measure Doppler shifts more accurately and extend the dynamic range of Doppler optical coherence tomography (OCT). The two-point estimate of the conventional Doppler method is replaced with a regression applied to high-density B-scans in polar coordinates. We built a high-speed OCT system using a 1.68-MHz Fourier domain mode locked laser to acquire high-density B-scans (16,000 A-lines) at frame rates (~100 fps) high enough to accurately capture the dynamics of the beating embryonic heart. Flow phantom experiments confirm that the complex regression lowers the minimum detectable velocity from 12.25 mm/s to 374 μm/s, whereas the maximum velocity of 400 mm/s is measured without phase wrapping. Complex regression Doppler OCT also demonstrates higher accuracy and precision compared with the conventional method, particularly when the signal-to-noise ratio is low. The extended dynamic range allows monitoring of blood flow over several stages of development in embryos without adjusting the imaging parameters. In addition, applying complex averaging recovers hidden features in structural images.
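The regression idea can be sketched for a single pixel: unwrap the phase of successive complex A-line samples and fit a least-squares slope, rather than differencing two points. A minimal illustration on synthetic data (the sampling rate and Doppler frequency are assumed values, not the instrument's):

```python
import cmath, math

def doppler_regression(samples, dt):
    """Estimate the Doppler phase slope (rad/s) by least-squares regression
    on the unwrapped phase of consecutive complex samples, instead of the
    conventional two-point phase difference."""
    phases, prev = [], 0.0
    for s in samples:
        p = cmath.phase(s)
        while p - prev > math.pi:      # simple phase unwrapping
            p -= 2.0 * math.pi
        while p - prev < -math.pi:
            p += 2.0 * math.pi
        phases.append(p)
        prev = p
    n = len(phases)
    t = [i * dt for i in range(n)]
    tm = sum(t) / n
    pm = sum(phases) / n
    return (sum((ti - tm) * (pi - pm) for ti, pi in zip(t, phases))
            / sum((ti - tm) ** 2 for ti in t))

# Synthetic scatterer producing a 2 kHz Doppler shift, sampled at 100 kHz.
f_true = 2000.0
dt = 1.0e-5
signal = [cmath.exp(2j * math.pi * f_true * i * dt) for i in range(64)]
slope = doppler_regression(signal, dt)   # should be ~2*pi*f_true rad/s
```

Averaging over many samples is what lowers the velocity noise floor relative to the two-point estimate; with noise added to the synthetic signal, the regression estimate degrades far more gracefully.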
Dissecting innate immune responses with the tools of systems biology.
Smith, Kelly D; Bolouri, Hamid
2005-02-01
Systems biology strives to derive accurate predictive descriptions of complex systems such as innate immunity. The innate immune system is essential for host defense, yet the resulting inflammatory response must be tightly regulated. Current understanding indicates that this system is controlled by complex regulatory networks, which maintain homoeostasis while accurately distinguishing pathogenic infections from harmless exposures. Recent studies have used high throughput technologies and computational techniques that presage predictive models and will be the foundation of a systems level understanding of innate immunity.
Brooks, Mark A; Gewartowski, Kamil; Mitsiki, Eirini; Létoquart, Juliette; Pache, Roland A; Billier, Ysaline; Bertero, Michela; Corréa, Margot; Czarnocki-Cieciura, Mariusz; Dadlez, Michal; Henriot, Véronique; Lazar, Noureddine; Delbos, Lila; Lebert, Dorothée; Piwowarski, Jan; Rochaix, Pascal; Böttcher, Bettina; Serrano, Luis; Séraphin, Bertrand; van Tilbeurgh, Herman; Aloy, Patrick; Perrakis, Anastassis; Dziembowski, Andrzej
2010-09-08
For high-throughput structural studies of protein complexes whose composition is inferred from proteomics data, it is crucial that candidate complexes are selected accurately. Herein, we exemplify a procedure that combines a bioinformatics tool for complex selection with in vivo validation, to deliver structural results in a medium-throughput manner. We selected a set of 20 yeast complexes, which were predicted to be feasible by an automated bioinformatics algorithm, by manual inspection of primary data, or by literature searches. These complexes were validated with two straightforward and efficient biochemical assays, and heterologous expression technologies of complex components were then used to produce the complexes and assess their feasibility experimentally. Approximately one-half of the selected complexes were useful for structural studies, and we detail one particular success story. Our results underscore the importance of accurate target selection and validation in avoiding transient, unstable, or simply nonexistent complexes from the outset. Copyright © 2010 Elsevier Ltd. All rights reserved.
A spectral dynamic stiffness method for free vibration analysis of plane elastodynamic problems
NASA Astrophysics Data System (ADS)
Liu, X.; Banerjee, J. R.
2017-03-01
A highly efficient and accurate analytical spectral dynamic stiffness (SDS) method for modal analysis of plane elastodynamic problems, based on both plane stress and plane strain assumptions, is presented in this paper. First, the general solution satisfying the governing differential equation exactly is derived by applying two types of one-dimensional modified Fourier series. Then the SDS matrix for an element is formulated symbolically using the general solution. The SDS matrices are assembled directly in a similar way to the finite element method, demonstrating the method's capability to model complex structures. Arbitrary boundary conditions are represented accurately in the form of the modified Fourier series. The Wittrick-Williams algorithm is then used as the solution technique, with the mode count problem (J0) of a fully-clamped element resolved. The proposed method gives highly accurate solutions with remarkable computational efficiency, covering low, medium and high frequency ranges. The method is applied to both plane stress and plane strain problems with simple as well as complex geometries. All results from the theory in this paper are accurate up to the last figures quoted, to serve as benchmarks.
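The Wittrick-Williams step can be illustrated on a discrete system: the number of negative pivots in a factorization of the dynamic stiffness matrix K - omega^2*M counts the natural frequencies below the trial frequency. A minimal sketch on a two-mass spring chain (the clamped-element count J0 that the paper resolves has no analog in this fully discrete toy):

```python
def sign_count(matrix):
    """Number of negative pivots in the Gaussian (LDL^T) factorization of a
    symmetric matrix: the Sturm-sequence count at the heart of the
    Wittrick-Williams algorithm."""
    a = [row[:] for row in matrix]
    n = len(a)
    negatives = 0
    for k in range(n):
        pivot = a[k][k]
        if pivot < 0.0:
            negatives += 1
        for i in range(k + 1, n):
            f = a[i][k] / pivot
            for j in range(k, n):
                a[i][j] -= f * a[k][j]
    return negatives

def modes_below(omega2, K, M):
    """Count of natural frequencies of K*x = omega^2*M*x below a trial
    omega^2, via the sign count of the dynamic stiffness K - omega^2*M."""
    n = len(K)
    D = [[K[i][j] - omega2 * M[i][j] for j in range(n)] for i in range(n)]
    return sign_count(D)

# Two unit masses joined by three unit springs between fixed walls:
# the exact eigenvalues of K are omega^2 = 1 and 3.
K = [[2.0, -1.0], [-1.0, 2.0]]
M = [[1.0, 0.0], [0.0, 1.0]]
```

Bisection on this count brackets each natural frequency without ever missing a mode, which is the algorithm's key guarantee.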
An efficient hybrid technique in RCS predictions of complex targets at high frequencies
NASA Astrophysics Data System (ADS)
Algar, María-Jesús; Lozano, Lorena; Moreno, Javier; González, Iván; Cátedra, Felipe
2017-09-01
Most computer codes for Radar Cross Section (RCS) prediction use Physical Optics (PO) and the Physical Theory of Diffraction (PTD) combined with Geometrical Optics (GO) and the Geometrical Theory of Diffraction (GTD). The latter approaches are computationally cheaper and much more accurate for curved surfaces, but are not applicable to the computation of the RCS of all surfaces of a complex object, due to the presence of caustic problems in the analysis of concave surfaces or flat surfaces in the far field. The main contribution of this paper is the development of a hybrid method based on a new combination of two asymptotic techniques, GTD and PO, considering the advantages and avoiding the disadvantages of each. The new combination yields a very efficient and accurate method to analyze the RCS of complex structures at high frequencies. The proposed method has been validated by comparing RCS results obtained with the proposed approach against the rigorous Method of Moments (MoM) for some simple cases; some complex cases have also been examined at high frequencies, contrasting the results with PO. This study shows the accuracy and efficiency of the hybrid method and its suitability for computing the RCS of very large and complex targets at high frequencies.
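High-frequency RCS codes of this kind are typically checked against closed-form cases. One standard physical-optics result that serves this purpose is the monostatic RCS of a flat, perfectly conducting rectangular plate in a principal plane (a textbook formula, not the paper's hybrid method):

```python
import math

def po_plate_rcs(a, b, wavelength, theta):
    """Physical-optics monostatic RCS (m^2) of a perfectly conducting
    a-by-b plate, principal-plane cut at incidence angle theta (rad):
    sigma = 4*pi*(a*b/lambda)^2 * (sinc(k*a*sin(theta)) * cos(theta))^2."""
    k = 2.0 * math.pi / wavelength
    x = k * a * math.sin(theta)
    sinc = 1.0 if x == 0.0 else math.sin(x) / x
    return 4.0 * math.pi * (a * b / wavelength) ** 2 * (sinc * math.cos(theta)) ** 2

# A 0.3 m square plate at 10 GHz (wavelength ~3 cm):
# broadside RCS reduces to 4*pi*A^2/lambda^2 ~ 113 m^2.
lam = 0.03
sigma0 = po_plate_rcs(0.3, 0.3, lam, 0.0)
```

PO is accurate near broadside and degrades toward grazing, which is precisely where a GTD/PTD hybrid like the one above takes over.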
Visual communication with Haitian women: a look at pictorial literacy.
Gustafson, M B
1986-06-01
A study of village women in Haiti presenting baseline data from their responses to stylized health education pictures is reported. The study questioned the assumption that pictorial messages are accurately recognized and self-explanatory to nonliterate Haitian village women. The investigator, using a descriptive survey, sought answers to a major question and a related one: what do nonliterate Haitian village women recognize in selected health education pictures, and are there differences in picture recognition traceable to the complexity of the pictures? There were 110 women (25 from a mountain village, 25 from a plains village, 25 from a seacoast village, and 35 urban dwellers) who responded to 9 health education pictures. The women ranged in age from 18 to 80 years; 32 (29%) had gone to school for anywhere from an "unknown time" to 8 years, and 47% of those who had gone to school indicated that they could read. The investigator rated the verbatim responses to the pictures for accuracy as: accurate, overinclusive, underinclusive, inaccurate, and do not know. The quantitative analysis of the data revealed that accuracy decreased as complexity increased. This is best shown by the 129 (39%) accurate responses at the low complexity level, 6 (1.8%) at the moderate level, and no accurate responses at the high complexity level. An unexpected finding was that the highest number of inaccurate responses (n = 83, 25.1%) occurred at the low complexity level, while the moderate and high levels both showed 36 (10.8%). In addition to the differences in recognition accuracy based on picture complexity, there were significant differences on the chi-square test, confirming that picture recognition is traceable to the complexity of the picture. These findings are consistent with the picture complexity studies of Holmes, Jelliffe, and Kwansa.
Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions
NASA Astrophysics Data System (ADS)
Chen, N.; Majda, A.
2017-12-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. In particular, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. Therefore, it is computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Unlike traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems.
It is shown in a stringent set of test problems that the method only requires an order of O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
NASA Astrophysics Data System (ADS)
Zhang, Kai; Yang, Fanlin; Zhang, Hande; Su, Dianpeng; Li, QianQian
2017-06-01
The correlation between seafloor morphological features and biological complexity has been identified in numerous recent studies. This research focused on the potential for accurate characterization of coral reefs based on high-resolution bathymetry from multiple sources. A standard deviation (STD) based method for quantitatively characterizing terrain complexity was developed that includes robust estimation to correct for irregular bathymetry and a calibration for the depth-dependent variability of measurement noise. Airborne lidar and shipborne sonar bathymetry measurements from Yuanzhi Island, South China Sea, were merged to generate seamless high-resolution coverage of coral bathymetry from the shoreline to deep water. The new algorithm was applied to the Yuanzhi Island surveys to generate maps of quantitative terrain complexity, which were then compared to in situ video observations of coral abundance. The terrain complexity parameter is significantly correlated with seafloor coral abundance, demonstrating the potential for accurately and efficiently mapping coral abundance through seafloor surveys, including combinations of surveys using different sensors.
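The STD-based idea can be sketched as a moving-window standard deviation over a bathymetry grid (without the robust estimation and depth-dependent noise calibration the authors add):

```python
import math

def terrain_complexity(depth, win):
    """Standard deviation of depth in a (2*win+1)^2 moving window: a bare
    version of an STD-based terrain-complexity metric. The cited work adds
    robust estimation and noise calibration, omitted here."""
    rows, cols = len(depth), len(depth[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            vals = [depth[r][c]
                    for r in range(max(0, i - win), min(rows, i + win + 1))
                    for c in range(max(0, j - win), min(cols, j + win + 1))]
            m = sum(vals) / len(vals)
            out[i][j] = math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))
    return out

# Synthetic grids: flat sand versus rough coral-like relief (depths in m).
flat = [[-10.0] * 4 for _ in range(4)]
rough = [[-10.0 + ((i * 7 + j * 3) % 5) * 0.5 for j in range(4)] for i in range(4)]
c_flat = terrain_complexity(flat, 1)
c_rough = terrain_complexity(rough, 1)
```

A flat seafloor yields zero complexity everywhere, while any relief within the window produces a positive value, which is the signal correlated with coral abundance above.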
NASA Astrophysics Data System (ADS)
Zhai, Yu; Li, Hui; Le Roy, Robert J.
2018-04-01
Spectroscopically accurate Potential Energy Surfaces (PESs) are fundamental for explaining and predicting the infrared and microwave spectra of van der Waals (vdW) complexes, and the model used for the potential energy function is critically important for providing accurate, robust and portable analytical PESs. The Morse/Long-Range (MLR) model has proved to be one of the most general, flexible and accurate one-dimensional (1D) model potentials: it has physically meaningful parameters, is smooth and differentiable everywhere to all orders, and extrapolates sensibly at both long and short range. The Multi-Dimensional Morse/Long-Range (mdMLR) potential energy model described herein is based on that 1D MLR model, and has proved effective and accurate in constructing potentials for various types of vdW complexes. In this paper, we review the current status of development of the mdMLR model and its application to vdW complexes. The future of the mdMLR model is also discussed. This review can serve as a tutorial for the construction of an mdMLR PES.
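A minimal one-dimensional MLR evaluation, with a single inverse-power long-range term and the exponent coefficient held at its limiting value (a deliberate simplification of the full beta(r) polynomial), illustrates the model's built-in limits V(re) = 0 and V(r) -> De - C6/r^6; the parameter values below are arbitrary illustrations, not a fitted potential:

```python
import math

def mlr_potential(r, re, de, p, c6):
    """One-dimensional Morse/Long-Range form
        V(r) = De * (1 - (uLR(r)/uLR(re)) * exp(-beta * yp(r)))^2
    with uLR(r) = C6/r^6 and beta fixed at its limiting value
    beta_inf = ln(2*De/uLR(re)), which enforces the correct
    long-range behavior V(r) -> De - C6/r^6."""
    u = c6 / r ** 6
    u_re = c6 / re ** 6
    yp = (r ** p - re ** p) / (r ** p + re ** p)   # radial mapping variable
    beta_inf = math.log(2.0 * de / u_re)
    return de * (1.0 - (u / u_re) * math.exp(-beta_inf * yp)) ** 2

# Illustrative parameters: re = 4, well depth De = 100, p = 3, C6 = 1e4
# (consistent units assumed, e.g. Angstrom and cm^-1).
v_min = mlr_potential(4.0, 4.0, 100.0, 3, 1.0e4)    # zero at the minimum
v_far = mlr_potential(40.0, 4.0, 100.0, 3, 1.0e4)   # ~ De - C6/r^6
```

In a real fit, beta is a polynomial in yp constrained to reach beta_inf asymptotically, which is what lets one function interpolate spectroscopic data and still extrapolate physically.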
Saive, Anne-Lise; Royet, Jean-Pierre; Garcia, Samuel; Thévenet, Marc; Plailly, Jane
2015-01-01
Episodic memory is defined as the conscious retrieval of specific past events. Whether accurate episodic retrieval requires a recollective experience or if a feeling of knowing is sufficient remains unresolved. We recently devised an ecological approach to investigate the controlled cued-retrieval of episodes composed of unnamable odors (What) located spatially (Where) within a visual context (Which context). By combining the Remember/Know procedure with our laboratory-ecological approach in an original way, the present study demonstrated that the accurate odor-evoked retrieval of complex and multimodal episodes overwhelmingly required conscious recollection. A feeling of knowing, even when associated with a high level of confidence, was not sufficient to generate accurate episodic retrieval. Interestingly, we demonstrated that the recollection of accurate episodic memories was promoted by odor retrieval-cue familiarity and describability. In conclusion, our study suggested that semantic knowledge about retrieval-cues increased the recollection which is the state of awareness required for the accurate retrieval of complex episodic memories. PMID:26630170
Accurate complex scaling of three dimensional numerical potentials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cerioni, Alessandro; Genovese, Luigi; Duchemin, Ivan
2013-05-28
The complex scaling method, which consists in continuing spatial coordinates into the complex plane, is a well-established method that allows one to compute resonant eigenfunctions of the time-independent Schrödinger operator. Whenever it is desirable to apply complex scaling to investigate resonances in physical systems defined on numerical discrete grids, the most direct approach relies on the application of a similarity transformation to the original, unscaled Hamiltonian. We show that such an approach can be conveniently implemented in the Daubechies wavelet basis set, featuring a very promising level of generality, high accuracy, and no need for artificial convergence parameters. Complex scaling of three dimensional numerical potentials can be efficiently and accurately performed. By carrying out an illustrative resonant state computation in the case of a one-dimensional model potential, we then show that our wavelet-based approach may disclose new exciting opportunities in the field of computational non-Hermitian quantum mechanics.
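The similarity-transformation viewpoint can be sketched in one dimension with finite differences: scaling x -> x*exp(i*theta) multiplies the kinetic term by exp(-2i*theta) and rotates the potential's argument, turning the Hermitian Hamiltonian into a complex-symmetric matrix (a toy discretization with an invented model potential, not the wavelet implementation):

```python
import cmath

def scaled_hamiltonian(n, dx, theta, potential):
    """Three-point finite-difference matrix of the 1D complex-scaled
    Hamiltonian H_theta = -(1/2)*exp(-2i*theta)*d^2/dx^2 + V(x*exp(i*theta)).
    Complex scaling turns the Hermitian H into a complex-symmetric one,
    whose complex eigenvalues encode resonance positions and widths."""
    scale = cmath.exp(-2j * theta)
    h = [[0j] * n for _ in range(n)]
    for i in range(n):
        x = (i + 1) * dx
        h[i][i] = scale / dx ** 2 + potential(x * cmath.exp(1j * theta))
        if i + 1 < n:
            h[i][i + 1] = h[i + 1][i] = -scale / (2.0 * dx ** 2)
    return h

# A smooth model barrier, evaluated at complex-rotated coordinates.
V = lambda z: 10.0 * cmath.exp(-(z - 3.0) ** 2)
H = scaled_hamiltonian(8, 0.5, 0.3, V)
```

The matrix is symmetric (H[i][j] = H[j][i]) but not Hermitian: its diagonal acquires an imaginary part, which is exactly what allows complex (resonant) eigenvalues.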
Benchmarks and Reliable DFT Results for Spin Gaps of Small Ligand Fe(II) Complexes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Suhwan; Kim, Min-Cheol; Sim, Eunji
2017-05-01
All-electron fixed-node diffusion Monte Carlo provides benchmark spin gaps for four Fe(II) octahedral complexes. Standard quantum chemical methods (semilocal DFT and CCSD(T)) fail badly for the energy difference between their high- and low-spin states. Density-corrected DFT is both significantly more accurate and reliable, and yields a consistent prediction for the Fe-Porphyrin complex.
Cybulski, Hubert; Henriksen, Christian; Dawes, Richard; Wang, Xiao-Gang; Bora, Neha; Avila, Gustavo; Carrington, Tucker; Fernández, Berta
2018-05-09
A new, highly accurate ab initio ground-state intermolecular potential-energy surface (IPES) for the CO-N2 complex is presented. Thousands of interaction energies calculated with the CCSD(T) method and Dunning's aug-cc-pVQZ basis set extended with midbond functions were fitted to an analytical function. The global minimum of the potential is characterized by an almost T-shaped structure and has an energy of -118.2 cm-1. The symmetry-adapted Lanczos algorithm was used to compute rovibrational energies (up to J = 20) on the new IPES. The RMSE with respect to experiment was found to be on the order of 0.038 cm-1 which confirms the very high accuracy of the potential. This level of agreement is among the best reported in the literature for weakly bound systems and considerably improves on those of previously published potentials.
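Fitting computed interaction energies to an analytical form is, in its simplest incarnation, a linear least-squares problem. A toy one-dimensional analog with a two-term C12/r^12 - C6/r^6 model (far simpler than the angular expansion fitted in the paper, and with synthetic rather than ab initio data):

```python
def fit_lj(radii, energies):
    """Linear least squares for V(r) = c12/r^12 - c6/r^6 via the 2x2
    normal equations: the coefficients enter linearly, so no nonlinear
    optimizer is needed for this toy model."""
    s11 = sum(r ** -24 for r in radii)
    s12 = sum(-(r ** -18) for r in radii)
    s22 = sum(r ** -12 for r in radii)
    b1 = sum(e * r ** -12 for r, e in zip(radii, energies))
    b2 = sum(-e * r ** -6 for r, e in zip(radii, energies))
    det = s11 * s22 - s12 * s12
    c12 = (b1 * s22 - s12 * b2) / det
    c6 = (s11 * b2 - s12 * b1) / det
    return c12, c6

# Synthetic "ab initio" points generated from known coefficients;
# the fit should recover them.
true_c12, true_c6 = 1.0e6, 1.0e3
rs = [3.0 + 0.25 * i for i in range(12)]
es = [true_c12 / r ** 12 - true_c6 / r ** 6 for r in rs]
c12, c6 = fit_lj(rs, es)
```

Real IPES fits add angular basis functions and weights, but the core step (solving normal equations for linear expansion coefficients) is the same.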
Approximating high-dimensional dynamics by barycentric coordinates with linear programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit for the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and by allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming.
Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability to model and predict them accurately. A general mathematical model requires many parameters, which are difficult to fit from the relatively short high-dimensional time series typically observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends barycentric coordinates to high-dimensional phase space by employing linear programming and treats the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
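The barycentric step described above can be sketched as a small linear program: a query point is expressed as a convex combination of stored phase-space points while the L1 approximation error is minimized. This is a minimal sketch of the core idea, not the authors' full prediction pipeline; the function name and the use of SciPy's HiGHS solver are illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

def barycentric_weights(X, q):
    """Approximate query point q as a convex combination of the rows of X,
    minimizing the L1 approximation error ||w @ X - q||_1 via an LP.
    Variables are [w (m weights), s (d slack bounds on |error|)]."""
    m, d = X.shape
    c = np.concatenate([np.zeros(m), np.ones(d)])      # minimize sum of slacks
    A_ub = np.block([[X.T, -np.eye(d)],                #  X^T w - q <= s
                     [-X.T, -np.eye(d)]])              # -X^T w + q <= s
    b_ub = np.concatenate([q, -q])
    A_eq = np.concatenate([np.ones(m), np.zeros(d)])[None, :]  # sum(w) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + d), method="highs")
    return res.x[:m]
```

For a query inside the convex hull of the stored points, the optimal slacks are zero and the weights reduce to exact barycentric coordinates.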
CNC Machining Of The Complex Copper Electrodes
NASA Astrophysics Data System (ADS)
Popan, Ioan Alexandru; Balc, Nicolae; Popan, Alina
2015-07-01
This paper presents the machining process for complex copper electrodes. Machining complex shapes in copper is difficult because the material is soft and sticky. This research presents the main steps for producing these copper electrodes with high dimensional accuracy and good surface quality. Special tooling solutions are required for this machining process, and optimal process parameters were found for accurate CNC equipment using smart CAD/CAM software.
Sun, Lifan; Ji, Baofeng; Lan, Jian; He, Zishu; Pu, Jiexin
2017-01-01
The key to successful maneuvering complex extended object tracking (MCEOT) using range extent measurements provided by high resolution sensors lies in accurate and effective modeling of both the extension dynamics and the centroid kinematics. During object maneuvers, the extension dynamics of an object with a complex shape is highly coupled with the centroid kinematics. However, this difficult but important problem is rarely considered and solved explicitly. In view of this, this paper proposes a general approach to modeling a maneuvering complex extended object based on Minkowski sum, so that the coupled turn maneuvers in both the centroid states and extensions can be described accurately. The new model has a concise and unified form, in which the complex extension dynamics can be simply and jointly characterized by multiple simple sub-objects’ extension dynamics based on Minkowski sum. The proposed maneuvering model fits range extent measurements very well due to its favorable properties. Based on this model, an MCEOT algorithm dealing with motion and extension maneuvers is also derived. Two different cases of the turn maneuvers with known/unknown turn rates are specifically considered. The proposed algorithm which jointly estimates the kinematic state and the object extension can also be easily implemented. Simulation results demonstrate the effectiveness of the proposed modeling and tracking approaches. PMID:28937629
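The Minkowski-sum composition of sub-object extensions mentioned above can be illustrated geometrically. The following brute-force sketch for convex polygons (sum every vertex pair, then take the convex hull) illustrates the set operation itself, not the authors' measurement model.

```python
def cross(o, a, b):
    """2-D cross product of vectors OA and OB."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices, collinear points dropped."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def build(points):
        out = []
        for p in points:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out
    lower, upper = build(pts), build(pts[::-1])
    return lower[:-1] + upper[:-1]

def minkowski_sum(P, Q):
    """Minkowski sum of two convex polygons given as vertex lists:
    sum every vertex pair, then take the convex hull (brute force)."""
    return convex_hull([(p[0] + q[0], p[1] + q[1]) for p in P for q in Q])
```

Summing a unit square with itself yields a 2 x 2 square, matching the intuition that the composed extension grows with its sub-objects.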
Robert E. Keane
2013-01-01
Wildland fuelbeds are exceptionally complex, consisting of diverse particles of many sizes, types and shapes with abundances and properties that are highly variable in time and space. This complexity makes it difficult to accurately describe, classify, sample and map fuels for wildland fire research and management. As a result, many fire behaviour and effects software...
Efficient computation of the joint sample frequency spectra for multiple populations.
Kamm, John A; Terhorst, Jonathan; Song, Yun S
2017-01-01
A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
Efficient computation of the joint sample frequency spectra for multiple populations
Kamm, John A.; Terhorst, Jonathan; Song, Yun S.
2016-01-01
A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity. PMID:28239248
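As a baseline for the expected SFS that these methods compute, the single-population neutral case with constant population size has the classical closed form E[xi_i] = theta / i for i = 1..n-1; joint-SFS machinery such as momi generalizes far beyond this. A minimal sketch:

```python
def expected_sfs(n, theta=1.0):
    """Expected site frequency spectrum for n sampled chromosomes under
    the standard neutral coalescent with constant population size:
    E[xi_i] = theta / i for i = 1..n-1 (the single-population baseline
    that joint-SFS methods generalize to complex demographies)."""
    return [theta / i for i in range(1, n)]
```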
Structure-from-motion for MAV image sequence analysis with photogrammetric applications
NASA Astrophysics Data System (ADS)
Schönberger, J. L.; Fraundorfer, F.; Frahm, J.-M.
2014-08-01
MAV systems have found increased attention in the photogrammetric community as an (autonomous) image acquisition platform for accurate 3D reconstruction. For an accurate reconstruction in feasible time, the acquired imagery requires specialized SfM software. Current systems typically use high-resolution sensors in pre-planned flight missions from far distance. We describe and evaluate a new SfM pipeline specifically designed for sequential, close-distance, and low-resolution imagery from mobile cameras with relatively high frame-rate and high overlap. Experiments demonstrate reduced computational complexity by leveraging the temporal consistency, comparable accuracy and point density with respect to state-of-the-art systems.
Analysis and application of classification methods of complex carbonate reservoirs
NASA Astrophysics Data System (ADS)
Li, Xiongyan; Qin, Ruibao; Ping, Haitao; Wei, Dan; Liu, Xiaomei
2018-06-01
There are abundant carbonate reservoirs from the Cenozoic to Mesozoic era in the Middle East. Due to variation in the sedimentary environment and diagenetic processes of carbonate reservoirs, several porosity types coexist. Because of the complex lithologies and pore types, as well as the impact of microfractures, the pore structure is very complicated, and it is therefore difficult to accurately calculate reservoir parameters. In order to accurately evaluate carbonate reservoirs, building on the pore structure evaluation, classification methods based on capillary pressure curves and on flow units are analyzed. Although carbonate reservoirs can be classified from capillary pressure curves, the resulting relationship between porosity and permeability is not ideal. On the basis of flow units, a high-precision functional relationship between porosity and permeability can be established after classification, so carbonate reservoirs can be quantitatively evaluated through flow-unit classification. In the dolomite reservoirs, the average absolute error of calculated permeability decreases from 15.13 to 7.44 mD; similarly, the average absolute error of calculated permeability of limestone reservoirs is reduced from 20.33 to 7.37 mD. Only by accurately characterizing pore structures and classifying reservoir types can reservoir parameters be calculated accurately, which makes both steps essential to the evaluation of complex carbonate reservoirs in the Middle East.
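A common way to operationalize flow units is the flow zone indicator in the style of Amaefule et al.; whether this exact formulation matches the paper's workflow is an assumption here. A minimal sketch:

```python
import math

def flow_zone_indicator(phi, k_md):
    """Flow zone indicator (Amaefule et al. style; an assumed formulation,
    the paper's exact workflow may differ):
    RQI = 0.0314 * sqrt(k/phi) with k in mD and phi fractional,
    FZI = RQI / (phi / (1 - phi)). Samples with similar FZI belong to
    one flow unit and show a tight porosity-permeability trend."""
    rqi = 0.0314 * math.sqrt(k_md / phi)
    phi_z = phi / (1.0 - phi)
    return rqi / phi_z
```

Samples are then grouped by FZI, and a porosity-permeability regression is fitted per group, which is what tightens the relationship after classification.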
MULTI-K: accurate classification of microarray subtypes using ensemble k-means clustering
Kim, Eun-Youn; Kim, Seon-Young; Ashlock, Daniel; Nam, Dougu
2009-01-01
Background Uncovering subtypes of disease from microarray samples has important clinical implications such as survival time and sensitivity of individual patients to specific therapies. Unsupervised clustering methods have been used to classify this type of data. However, most existing methods focus on clusters with compact shapes and do not reflect the geometric complexity of the high dimensional microarray clusters, which limits their performance. Results We present a cluster-number-based ensemble clustering algorithm, called MULTI-K, for microarray sample classification, which demonstrates remarkable accuracy. The method amalgamates multiple k-means runs by varying the number of clusters and identifies clusters that manifest the most robust co-memberships of elements. In addition to the original algorithm, we newly devised the entropy-plot to control the separation of singletons or small clusters. MULTI-K, unlike the simple k-means or other widely used methods, was able to capture clusters with complex and high-dimensional structures accurately. MULTI-K outperformed other methods including a recently developed ensemble clustering algorithm in tests with five simulated and eight real gene-expression data sets. Conclusion The geometric complexity of clusters should be taken into account for accurate classification of microarray data, and ensemble clustering applied to the number of clusters tackles the problem very well. The C++ code and the data sets tested are available from the authors. PMID:19698124
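The co-membership idea behind MULTI-K can be sketched by amalgamating k-means runs with varying k into a consensus matrix. This is a simplified illustration (plain Lloyd iterations, no entropy-plot), not the published algorithm.

```python
import numpy as np

def kmeans_labels(X, k, iters=50, seed=0):
    """Plain Lloyd's k-means; returns cluster labels."""
    X = np.asarray(X, float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    labels = np.zeros(len(X), int)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def consensus_matrix(X, ks, runs_per_k=3):
    """Fraction of ensemble runs in which two samples share a cluster,
    with the cluster number k varied across runs."""
    n = len(X)
    C = np.zeros((n, n))
    runs = 0
    for k in ks:
        for r in range(runs_per_k):
            labels = kmeans_labels(X, k, seed=1000 * k + r)
            C += labels[:, None] == labels[None, :]
            runs += 1
    return C / runs
```

Pairs with consensus near 1 across many values of k are the robust co-memberships from which the final clusters are read off.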
NASA Astrophysics Data System (ADS)
Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.
2018-02-01
In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.
A Project Manager’s Personal Attributes as Predictors for Success
2007-03-01
Northouse (2004) explains that leadership is a highly researched topic with much written. Yet, a definitive description of this phenomenon is difficult to...express because of its complexity. Even though leadership has varied descriptions and conceptualizations, Northouse states that the concept of...characteristic of leadership is not an accurate predictor of performance. Leadership is a complex, multi-faceted attribute (Northouse, 2004) and specific
Clinical decision-making by midwives: managing case complexity.
Cioffi, J; Markham, R
1997-02-01
In making clinical judgements, it is argued that midwives use 'shortcuts' or heuristics based on estimated probabilities to simplify the decision-making task. Midwives (n = 30) were given simulated patient assessment situations of high and low complexity and were required to think aloud. Analysis of verbal protocols showed that subjective probability judgements (heuristics) were used more frequently in the high than low complexity case and predominated in the last quarter of the assessment period for the high complexity case. 'Representativeness' was identified more frequently in the high than in the low case, but was the dominant heuristic in both. Reports completed after each simulation suggest that heuristics based on memory for particular conditions affect decisions. It is concluded that midwives use heuristics, derived mainly from their clinical experiences, in an attempt to save cognitive effort and to facilitate reasonably accurate decisions in the decision-making process.
Kang, Dongwan D.; Froula, Jeff; Egan, Rob; ...
2015-01-01
Grouping large genomic fragments assembled from shotgun metagenomic sequences to deconvolute complex microbial communities, or metagenome binning, enables the study of individual organisms and their interactions. Because of the complex nature of these communities, existing metagenome binning methods often miss a large number of microbial species. In addition, most of the tools are not scalable to large datasets. Here we introduce automated software called MetaBAT that integrates empirical probabilistic distances of genome abundance and tetranucleotide frequency for accurate metagenome binning. MetaBAT outperforms alternative methods in accuracy and computational efficiency on both synthetic and real metagenome datasets. Lastly, it automatically forms hundreds of high quality genome bins on a very large assembly consisting of millions of contigs in a matter of hours on a single node. MetaBAT is open source software and available at https://bitbucket.org/berkeleylab/metabat.
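The tetranucleotide frequency signature that MetaBAT combines with abundance can be computed directly from a contig sequence. A minimal sketch follows (no reverse-complement collapsing, which real binners typically apply):

```python
from itertools import product

def tetranucleotide_freq(seq):
    """Normalized tetranucleotide frequency vector (256 entries, AAAA..TTTT),
    the composition signature used alongside abundance in metagenome binning.
    Windows containing non-ACGT characters are skipped."""
    kmers = ["".join(p) for p in product("ACGT", repeat=4)]
    counts = dict.fromkeys(kmers, 0)
    seq = seq.upper()
    total = 0
    for i in range(len(seq) - 3):
        k = seq[i:i + 4]
        if k in counts:
            counts[k] += 1
            total += 1
    return [counts[k] / total if total else 0.0 for k in kmers]
```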
Fast and accurate spectral estimation for online detection of partial broken bar in induction motors
NASA Astrophysics Data System (ADS)
Samanta, Anik Kumar; Naha, Arunava; Routray, Aurobinda; Deb, Alok Kanti
2018-01-01
In this paper, an online and real-time system is presented for detecting partial broken rotor bar (BRB) of inverter-fed squirrel cage induction motors under light load condition. This system with minor modifications can detect any fault that affects the stator current. A fast and accurate spectral estimator based on the theory of Rayleigh quotient is proposed for detecting the spectral signature of BRB. The proposed spectral estimator can precisely determine the relative amplitude of fault sidebands and has low complexity compared to available high-resolution subspace-based spectral estimators. Detection of low-amplitude fault components has been improved by removing the high-amplitude fundamental frequency using an extended-Kalman based signal conditioner. Slip is estimated from the stator current spectrum for accurate localization of the fault component. Complexity and cost of sensors are minimal as only a single-phase stator current is required. The hardware implementation has been carried out on an Intel i7 based embedded target ported through the Simulink Real-Time. Evaluation of threshold and detectability of faults with different conditions of load and fault severity are carried out with empirical cumulative distribution function.
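As an illustration of a quotient-form spectral estimate, the power near a candidate fault frequency can be probed as a(f)^H R a(f) / m on the sample autocorrelation matrix. This is a Bartlett-style sketch, not the authors' exact low-complexity estimator.

```python
import numpy as np

def rayleigh_quotient_spectrum(x, freqs, fs, m=64):
    """Estimate power near candidate frequencies via the quotient
    a(f)^H R a(f) / m, where R is the m x m sample autocorrelation
    matrix built from overlapping snapshots of x and a(f) is a
    complex-sinusoid steering vector at frequency f (Hz)."""
    x = np.asarray(x, float)
    S = np.lib.stride_tricks.sliding_window_view(x, m)  # overlapping snapshots
    R = (S.T @ S) / len(S)                              # sample autocorrelation
    n = np.arange(m)
    out = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * n / fs)             # steering vector
        out.append(float(np.real(a.conj() @ R @ a)) / m)
    return np.array(out)
```

Scanning such a quotient over the expected sideband locations is the kind of targeted evaluation that keeps complexity below full subspace-based estimators.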
NASA Astrophysics Data System (ADS)
Cary, John R.; Abell, D.; Amundson, J.; Bruhwiler, D. L.; Busby, R.; Carlsson, J. A.; Dimitrov, D. A.; Kashdan, E.; Messmer, P.; Nieter, C.; Smithe, D. N.; Spentzouris, P.; Stoltz, P.; Trines, R. M.; Wang, H.; Werner, G. R.
2006-09-01
As the size and cost of particle accelerators escalate, high-performance computing plays an increasingly important role; optimization through accurate, detailed computer modeling increases performance and reduces costs. But consequently, computer simulations face enormous challenges. Early approximation methods, such as expansions in distance from the design orbit, were unable to supply detailed accurate results, such as in the computation of wake fields in complex cavities. Since the advent of message-passing supercomputers with thousands of processors, earlier approximations are no longer necessary, and it is now possible to compute wake fields, the effects of dampers, and self-consistent dynamics in cavities accurately. In this environment, the focus has shifted towards the development and implementation of algorithms that scale to large numbers of processors. So-called charge-conserving algorithms evolve the electromagnetic fields without the need for any global solves (which are difficult to scale up to many processors). Using cut-cell (or embedded) boundaries, these algorithms can simulate the fields in complex accelerator cavities with curved walls. New implicit algorithms, which are stable for any time-step, conserve charge as well, allowing faster simulation of structures with details small compared to the characteristic wavelength. These algorithmic and computational advances have been implemented in the VORPAL7 Framework, a flexible, object-oriented, massively parallel computational application that allows run-time assembly of algorithms and objects, thus composing an application on the fly.
NASA Astrophysics Data System (ADS)
Khalili, Ashkan; Jha, Ratneshwar; Samaratunga, Dulip
2016-11-01
Wave propagation analysis in 2-D composite structures is performed efficiently and accurately through the formulation of a User-Defined Element (UEL) based on the wavelet spectral finite element (WSFE) method. The WSFE method is based on the first-order shear deformation theory which yields accurate results for wave motion at high frequencies. The 2-D WSFE model is highly efficient computationally and provides a direct relationship between system input and output in the frequency domain. The UEL is formulated and implemented in Abaqus (commercial finite element software) for wave propagation analysis in 2-D composite structures with complexities. Frequency domain formulation of WSFE leads to complex valued parameters, which are decoupled into real and imaginary parts and presented to Abaqus as real values. The final solution is obtained by forming a complex value using the real number solutions given by Abaqus. Five numerical examples are presented in this article, namely undamaged plate, impacted plate, plate with ply drop, folded plate and plate with stiffener. Wave motions predicted by the developed UEL correlate very well with Abaqus simulations. The results also show that the UEL largely retains computational efficiency of the WSFE method and extends its ability to model complex features.
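The real/imaginary decoupling described above amounts to solving a complex linear system through an equivalent real block system, which a real-valued solver can handle. A minimal sketch of the algebra (not the UEL itself): (Kr + iKi)(zr + izi) = fr + ifi expands to the 2x2 block form below.

```python
import numpy as np

def solve_complex_as_real(K, f):
    """Solve the complex system K z = f using only real arithmetic:
    [[Kr, -Ki], [Ki, Kr]] @ [zr; zi] = [fr; fi], then recombine
    the real solution halves into a complex vector."""
    Kr, Ki = K.real, K.imag
    A = np.block([[Kr, -Ki], [Ki, Kr]])
    b = np.concatenate([f.real, f.imag])
    zri = np.linalg.solve(A, b)
    n = len(f)
    return zri[:n] + 1j * zri[n:]
```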
Wang, Huping; Han, Wenyu; Takagi, Junichi; Cong, Yao
2018-05-11
Cryo-electron microscopy (cryo-EM) has been established as one of the central tools in the structural study of macromolecular complexes. Although intermediate- or low-resolution structural information through negative staining or cryo-EM analysis remains highly valuable, we lack general and efficient ways to achieve unambiguous subunit identification in these applications. Here, we took advantage of the extremely high affinity between a dodecapeptide "PA" tag and the NZ-1 antibody Fab fragment to develop an efficient "yeast inner-subunit PA-NZ-1 labeling" strategy that when combined with cryo-EM could precisely identify subunits in macromolecular complexes. Using this strategy combined with cryo-EM 3D reconstruction, we were able to visualize the characteristic NZ-1 Fab density attached to the PA tag inserted into a surface-exposed loop in the middle of the sequence of CCT6 subunit present in the Saccharomyces cerevisiae group II chaperonin TRiC/CCT. This procedure facilitated the unambiguous localization of CCT6 in the TRiC complex. The PA tag was designed to contain only 12 amino acids and a tight turn configuration; when inserted into a loop, it usually has a high chance of maintaining the epitope structure and low likelihood of perturbing the native structure and function of the target protein compared to other tagging systems. We also found that the association between PA and NZ-1 can sustain the cryo freezing conditions, resulting in very high occupancy of the Fab in the final cryo-EM images. Our study demonstrated the robustness of this strategy combined with cryo-EM in efficient and accurate subunit identification in challenging multi-component complexes. Copyright © 2018 Elsevier Ltd. All rights reserved.
Chaturvedi, Shalini; Siegel, Derick; Wagner, Carrie L; Park, Jaehong; van de Velde, Helgi; Vermeulen, Jessica; Fung, Man-Cheong; Reddy, Manjula; Hall, Brett; Sasser, Kate
2015-01-01
Aim Interleukin-6 (IL-6), a multifunctional cytokine, exists in several forms ranging from a low molecular weight (MW 20–30 kDa) non-complexed form to high MW (200–450 kDa), complexes. Accurate baseline IL-6 assessment is pivotal to understand clinical responses to IL-6-targeted treatments. Existing assays measure only the low MW, non-complexed IL-6 form. The present work aimed to develop a validated assay to measure accurately total IL-6 (complexed and non-complexed) in serum or plasma as matrix in a high throughput and easily standardized format for clinical testing. Methods Commercial capture and detection antibodies were screened against humanized IL-6 and evaluated in an enzyme-linked immunosorbent assay format. The best antibody combinations were screened to identify an antibody pair that gave minimum background and maximum recovery of IL-6 in the presence of 100% serum matrix. A plate-based total IL-6 assay was developed and transferred to the Meso Scale Discovery (MSD) platform for large scale clinical testing. Results The top-performing antibody pair from 36 capture and four detection candidates was validated on the MSD platform. The lower limit of quantification in human serum samples (n = 6) was 9.77 pg l–1, recovery ranged from 93.13–113.27%, the overall pooled coefficients of variation were 20.12% (inter-assay) and 8.67% (intra-assay). High MW forms of IL-6, in size fractionated serum samples from myelodysplastic syndrome and rheumatoid arthritis patients, were detected by the assay but not by a commercial kit. Conclusion This novel panoptic (sees all forms) IL-6 MSD assay that measures both high and low MW forms may have clinical utility. PMID:25847183
An improved switching converter model using discrete and average techniques
NASA Technical Reports Server (NTRS)
Shortt, D. J.; Lee, F. C.
1982-01-01
The nonlinear modeling and analysis of dc-dc converters have been done by averaging and discrete-sampling techniques. The averaging technique is simple, but inaccurate as the modulation frequencies approach the theoretical limit of one-half the switching frequency. The discrete technique is accurate even at high frequencies, but is very complex and cumbersome. An improved model is developed by combining the aforementioned techniques. This new model is easy to implement in circuit and state variable forms and is accurate up to the theoretical limit.
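The regime in which averaging is adequate is easy to demonstrate: when the modulation content is far below the switching frequency, a switched input and its duty-cycle average drive a filter to nearly the same state. All component values in this sketch are hypothetical.

```python
def simulate_rc(vin, R, C, T, dt):
    """Forward-Euler integration of an RC low-pass: dv/dt = (vin(t) - v)/(R*C)."""
    v, n = 0.0, int(T / dt)
    for i in range(n):
        v += dt * (vin(i * dt) - v) / (R * C)
    return v

# Hypothetical buck-like example: switched input vs. its duty-cycle average.
Vin, D, Ts = 10.0, 0.5, 1e-5          # supply, duty cycle, switching period
R_, C_ = 1e3, 1e-5                    # R*C = 10 ms >> Ts, so averaging holds
v_switched = simulate_rc(lambda t: Vin if (t % Ts) < D * Ts else 0.0,
                         R_, C_, T=0.1, dt=1e-6)
v_averaged = simulate_rc(lambda t: D * Vin, R_, C_, T=0.1, dt=1e-6)
```

As modulation approaches half the switching frequency this agreement breaks down, which is exactly the gap the combined discrete/average model addresses.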
NASA Astrophysics Data System (ADS)
Cross, M.
2016-12-01
An improved process for the identification of tree types from satellite imagery for tropical forests is needed for more accurate assessments of the impact of forests on the global climate. La Selva Biological Station in Costa Rica was the tropical forest area selected for this particular study. WorldView-3 imagery was utilized because of its high spatial, spectral and radiometric resolution, its availability, and its potential to differentiate species in a complex forest setting. The first step was to establish confidence in the high spatial and high radiometric resolution imagery from WorldView-3 in delineating tree types within a complex forest setting. In achieving this goal, ASD field spectrometer data were collected for specific tree species to establish solid ground control within the study site. The spectrometer data were collected from the top of each specific tree canopy utilizing established towers located at La Selva Biological Station so as to match the near-nadir view of the WorldView-3 imagery. The ASD data were processed utilizing the spectral response functions for each of the WorldView-3 bands to convert the ASD data into a band-specific reflectivity. This allowed direct comparison of the ASD spectrometer reflectance data to the WorldView-3 multispectral imagery. The WorldView-3 imagery was processed to surface reflectance using two standard atmospheric correction procedures and the proprietary DigitalGlobe Atmospheric Compensation (AComp) product. The most accurate correction process was identified through comparison to the spectrometer data collected. A series of statistical measures were then utilized to assess the accuracy of the processed imagery and determine which imagery bands are best suited for tree type identification. From this analysis, a segmentation/classification process was performed to identify individual tree type locations within the study area.
It is envisioned the results of this study will improve traditional forest classification processes, provide more accurate assessments of species density and distribution, facilitate a more accurate biomass estimate of the tropical forest which will impact the accuracy of tree carbon storage estimates, and ultimately assist in developing a better overall characterization of tropical rainforest dynamics.
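Converting spectrometer data to band-specific reflectivity, as described above, is an SRF-weighted average over wavelength. A minimal sketch (function and argument names are illustrative):

```python
import numpy as np

def band_reflectance(srf, rho):
    """Band-equivalent reflectance: weight the measured reflectance
    spectrum rho(lambda) by the sensor band's spectral response
    function, rho_band = sum(SRF * rho) / sum(SRF). Both inputs are
    sampled on the same wavelength grid."""
    srf, rho = np.asarray(srf, float), np.asarray(rho, float)
    return float((srf * rho).sum() / srf.sum())
```

A spectrally flat target returns its own reflectance for any SRF, which is a quick sanity check on the convolution.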
Solid rocket booster internal flow analysis by highly accurate adaptive computational methods
NASA Technical Reports Server (NTRS)
Huang, C. Y.; Tworzydlo, W.; Oden, J. T.; Bass, J. M.; Cullen, C.; Vadaketh, S.
1991-01-01
The primary objective of this project was to develop an adaptive finite element flow solver for simulating internal flows in the solid rocket booster. Described here is a unique flow simulator code for analyzing highly complex flow phenomena in the solid rocket booster. New methodologies and features incorporated into this analysis tool are described.
An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2000-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. And for smooth problems, this is best accomplished with very high order in space and time methods on small stencils. But the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid aligned boundaries and to 2nd order for irregular boundaries.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duru, Kenneth, E-mail: kduru@stanford.edu; Dunham, Eric M.; Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA
Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge–Kutta scheme, thus yielding a globally fourth order accurate method in both space and time.
We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.
NASA Astrophysics Data System (ADS)
Duru, Kenneth; Dunham, Eric M.
2016-01-01
Dynamic propagation of shear ruptures on a frictional interface in an elastic solid is a useful idealization of natural earthquakes. The conditions relating discontinuities in particle velocities across fault zones and tractions acting on the fault are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated for many wavelengths away from the fault. Therefore, reliable and efficient numerical simulations require both provably stable and high order accurate numerical methods. We present a high order accurate finite difference method for: a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; b) dynamic propagation of earthquake ruptures along nonplanar faults; and c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts (SBP) finite difference operators in space. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates we prove numerical stability. The finite difference stencils used in this paper are sixth order accurate in the interior and third order accurate close to the boundaries. However, the method is applicable to any spatial operator with a diagonal norm satisfying the SBP property. Time stepping is performed with a 4th order accurate explicit low storage Runge-Kutta scheme, thus yielding a globally fourth order accurate method in both space and time. 
We show numerical simulations on band limited self-similar fractal faults revealing the complexity of rupture dynamics on rough faults.
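The summation-by-parts property that underpins the energy estimates above can be seen concretely in the simplest case. Below is a minimal sketch using the standard second-order SBP first-derivative operator (not the sixth-order stencils of the paper): the norm-weighted matrix Q = HD satisfies Q + Qᵀ = B, which is exactly what the discrete energy method exploits.

```python
import numpy as np

def sbp_first_derivative(n, h):
    """Second-order SBP first-derivative operator D = H^{-1} Q on n points, spacing h."""
    D = np.zeros((n, n))
    for i in range(1, n - 1):                      # central differences in the interior
        D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h
    D[0, 0], D[0, 1] = -1.0 / h, 1.0 / h           # one-sided at the boundaries
    D[-1, -2], D[-1, -1] = -1.0 / h, 1.0 / h
    H = h * np.eye(n)                              # diagonal norm (quadrature) matrix
    H[0, 0] = H[-1, -1] = h / 2.0                  # boundary-modified weights
    return D, H

n, h = 21, 0.05
D, H = sbp_first_derivative(n, h)
Q = H @ D
B = np.zeros((n, n))
B[0, 0], B[-1, -1] = -1.0, 1.0
# SBP property Q + Q^T = B: mimics integration by parts, enabling energy estimates
print(np.max(np.abs(Q + Q.T - B)))                 # ~0 up to roundoff
x = np.linspace(0.0, 1.0, n)
print(np.max(np.abs(D @ x - 1.0)))                 # exact for linear functions
```

The boundary terms collected in B are what the penalty (SAT) terms then control when interface and friction conditions are imposed weakly.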
ERIC Educational Resources Information Center
Regan, Blake B.
2012-01-01
This study examined the relationship between high school exit exams and mathematical proficiency. With the No Child Left Behind (NCLB) Act requiring all students to be proficient in mathematics by 2014, it is imperative that high-stakes assessments accurately evaluate all aspects of student achievement, appropriately set the yardstick by which…
Real-time tumor motion estimation using respiratory surrogate via memory-based learning
NASA Astrophysics Data System (ADS)
Li, Ruijiang; Lewis, John H.; Berbeco, Ross I.; Xing, Lei
2012-08-01
Respiratory tumor motion is a major challenge in radiation therapy for thoracic and abdominal cancers. Effective motion management requires an accurate knowledge of the real-time tumor motion. External respiration monitoring devices (optical, etc.) provide a noninvasive, non-ionizing, low-cost and practical approach to obtain the respiratory signal. Due to the highly complex and nonlinear relations between tumor and surrogate motion, this approach's ultimate success hinges on the ability to accurately infer the tumor motion from respiratory surrogates. Given their widespread use in the clinic, such a method is critically needed. We propose to use a powerful memory-based learning method to find the complex relations between tumor motion and respiratory surrogates. The method first stores the training data in memory and then finds relevant data to answer a particular query. Nearby data points are assigned high relevance (or weights) and conversely distant data are assigned low relevance. By fitting relatively simple models to local patches instead of fitting one single global model, it is able to capture highly nonlinear and complex relations between the internal tumor motion and external surrogates accurately. Due to the local nature of weighting functions, the method is inherently robust to outliers in the training data. Moreover, both training and adapting to new data are performed almost instantaneously with memory-based learning, making it suitable for dynamically following variable internal/external relations. We evaluated the method using respiratory motion data from 11 patients. The data set consists of simultaneous measurement of 3D tumor motion and 1D abdominal surface (used as the surrogate signal in this study). There are a total of 171 respiratory traces, with an average peak-to-peak amplitude of ∼15 mm and average duration of ∼115 s per trace.
Given only 5 s (roughly one breath) pretreatment training data, the method achieved an average 3D error of 1.5 mm and 95th percentile error of 3.4 mm on unseen test data. The average 3D error was further reduced to 1.4 mm when the model was tuned to its optimal setting for each respiratory trace. In one trace where a few outliers are present in the training data, the proposed method achieved an error reduction of as much as ∼50% compared with the best linear model (1.0 mm versus 2.1 mm). The memory-based learning technique is able to accurately capture the highly complex and nonlinear relations between tumor and surrogate motion in an efficient manner (a few milliseconds per estimate). Furthermore, the algorithm is particularly suitable to handle situations where the training data are contaminated by large errors or outliers. These desirable properties make it an ideal candidate for accurate and robust tumor gating/tracking using respiratory surrogates.
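As a rough illustration of the memory-based idea (a generic locally weighted linear regression, not the authors' implementation), the sketch below assigns Gaussian relevance weights to stored surrogate-tumor pairs and fits a simple linear model around each query. The surrogate-to-tumor relation here is a made-up nonlinear function, not patient data.

```python
import numpy as np

def lwr_predict(query, X, Y, bandwidth=1.0):
    # Relevance weights: nearby stored samples count heavily, distant ones barely at all
    w = np.exp(-0.5 * ((X - query) / bandwidth) ** 2)
    A = np.column_stack([np.ones_like(X), X])            # simple local linear model
    beta = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * Y))
    return beta[0] + beta[1] * query

# Hypothetical nonlinear surrogate-to-tumor relation (mm)
X = np.linspace(0.0, 10.0, 200)                          # stored surrogate positions
Y = 5.0 * np.sin(0.6 * X) + 0.3 * X                      # stored tumor positions
print(round(lwr_predict(5.0, X, Y), 2))                  # close to 5*sin(3) + 1.5
```

Because each query triggers only a weighted fit over stored samples, "training" is just memorizing data, which is why adaptation to new breathing patterns is nearly instantaneous.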
Lu, Tong; Tai, Chiew-Lan; Yang, Huafei; Cai, Shijie
2009-08-01
We present a novel knowledge-based system to automatically convert real-life engineering drawings to content-oriented high-level descriptions. The proposed method essentially divides the complex interpretation process into two parts: knowledge representation and knowledge-based interpretation. We propose a new hierarchical descriptor-based knowledge representation method to organize the various types of engineering objects and their complex high-level relations. The descriptors are defined using an Extended Backus-Naur Form (EBNF), facilitating modification and maintenance. When interpreting a set of related engineering drawings, the knowledge-based interpretation system first constructs an EBNF-tree from the knowledge representation file, then searches for potential engineering objects guided by a depth-first order of the nodes in the EBNF-tree. Experimental results and comparisons with other interpretation systems demonstrate that our knowledge-based system is accurate and robust for high-level interpretation of complex real-life engineering projects.
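The depth-first search order over descriptor nodes can be sketched as follows; the descriptor hierarchy and names below are illustrative stand-ins, not taken from the paper.

```python
def depth_first(name, children):
    """Return descriptor names in the depth-first order used to search for objects."""
    order = [name]
    for child, sub in children.items():      # visit each child subtree before siblings
        order += depth_first(child, sub)
    return order

# Hypothetical descriptor hierarchy for a drawing (illustrative names only)
tree = {
    "TitleBlock": {},
    "View": {"Contour": {}, "DimensionSet": {"Arrow": {}, "Text": {}}},
}
print(depth_first("Drawing", tree))
# → ['Drawing', 'TitleBlock', 'View', 'Contour', 'DimensionSet', 'Arrow', 'Text']
```

Visiting a composite descriptor before its parts mirrors how a high-level object (a dimension set) is hypothesized first and then confirmed by locating its constituent primitives.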
High-resolution method for evolving complex interface networks
NASA Astrophysics Data System (ADS)
Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.
2018-04-01
In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, and (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction, show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to the Semi-Lagrangian regional level-set method while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set methods.
Highly Accurate Calculations of the Phase Diagram of Cold Lithium
NASA Astrophysics Data System (ADS)
Shulenburger, Luke; Baczewski, Andrew
The phase diagram of lithium is particularly complicated, exhibiting many different solid phases under the modest application of pressure. Experimental efforts to identify these phases using diamond anvil cells have been complemented by ab initio theory, primarily using density functional theory (DFT). Due to the multiplicity of crystal structures whose enthalpy is nearly degenerate and the uncertainty introduced by density functional approximations, we apply the highly accurate many-body diffusion Monte Carlo (DMC) method to the study of the solid phases at low temperature. These calculations span many different phases, including several with low symmetry, demonstrating the viability of DMC as a method for calculating phase diagrams for complex solids. Our results can be used as a benchmark to test the accuracy of various density functionals. This can strengthen confidence in DFT based predictions of more complex phenomena such as the anomalous melting behavior predicted for lithium at high pressures. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Diffraction Correlation to Reconstruct Highly Strained Particles
NASA Astrophysics Data System (ADS)
Brown, Douglas; Harder, Ross; Clark, Jesse; Kim, J. W.; Kiefer, Boris; Fullerton, Eric; Shpyrko, Oleg; Fohtung, Edwin
2015-03-01
Through the use of coherent x-ray diffraction, a three-dimensional diffraction pattern of a highly strained nano-crystal can be recorded in reciprocal space by a detector. Only the intensities are recorded, resulting in a loss of the complex phase. The recorded diffraction pattern therefore requires computational processing to reconstruct the density and complex distribution of the diffracted nano-crystal. For highly strained crystals, standard methods using HIO and ER algorithms are no longer sufficient to reconstruct the diffraction pattern. Our solution is to correlate the symmetry in reciprocal space to generate an a priori shape constraint to guide the computational reconstruction of the diffraction pattern. This approach has improved the ability to accurately reconstruct highly strained nano-crystals.
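For context, a minimal 1D sketch of the ER (error-reduction) iteration that such reconstructions build on: alternate between imposing the measured Fourier magnitudes and imposing an object-domain support constraint. This is the generic textbook scheme with a known support, not the paper's correlation-derived shape constraint.

```python
import numpy as np

rng = np.random.default_rng(0)
n, support = 64, 16
x_true = np.zeros(n)
x_true[:support] = rng.random(support)       # object confined to a known support
mags = np.abs(np.fft.fft(x_true))            # "measured" intensities; phase is lost

x = rng.random(n)                            # random starting guess
errs = []
for _ in range(200):
    F = np.fft.fft(x)
    errs.append(np.linalg.norm(np.abs(F) - mags))
    F = mags * np.exp(1j * np.angle(F))      # Fourier constraint: keep phase, fix magnitude
    x = np.real(np.fft.ifft(F))
    x[support:] = 0.0                        # object constraint: support...
    x[x < 0.0] = 0.0                         # ...and non-negativity
print(errs[-1] < errs[0])                    # ER error is non-increasing
```

For strongly strained (complex-valued) objects, these simple projections stagnate, which is the failure mode that motivates the additional a priori shape constraint described above.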
Mechanism for accurate, protein-assisted DNA annealing by Deinococcus radiodurans DdrB
Sugiman-Marangos, Seiji N.; Weiss, Yoni M.; Junop, Murray S.
2016-01-01
Accurate pairing of DNA strands is essential for repair of DNA double-strand breaks (DSBs). How cells achieve accurate annealing when large regions of single-strand DNA are unpaired has remained unclear despite many efforts focused on understanding proteins, which mediate this process. Here we report the crystal structure of a single-strand annealing protein [DdrB (DNA damage response B)] in complex with a partially annealed DNA intermediate to 2.2 Å. This structure and supporting biochemical data reveal a mechanism for accurate annealing involving DdrB-mediated proofreading of strand complementarity. DdrB promotes high-fidelity annealing by constraining specific bases from unauthorized association and only releases annealed duplex when bound strands are fully complementary. To our knowledge, this mechanism provides the first understanding for how cells achieve accurate, protein-assisted strand annealing under biological conditions that would otherwise favor misannealing. PMID:27044084
Lowe, Woan; March, Jordon K; Bunnell, Annette J; O'Neill, Kim L; Robison, Richard A
2014-01-01
Methods for the rapid detection and differentiation of the Burkholderia pseudomallei complex, comprising B. pseudomallei, B. mallei, and B. thailandensis, have been the topic of recent research due to the high degree of phenotypic and genotypic similarities of these species. B. pseudomallei and B. mallei are recognized by the CDC as tier 1 select agents. The high mortality rates of glanders and melioidosis, their potential use as bioweapons, and their low infectious dose underscore the need for rapid and accurate detection methods. Although B. thailandensis is generally avirulent in mammals, this species displays very similar phenotypic characteristics to that of B. pseudomallei. Optimal identification of these species remains problematic, due to the difficulty in developing a sensitive, selective, and accurate assay. The development of PCR technologies has revolutionized diagnostic testing and these detection methods have become popular due to their speed, sensitivity, and accuracy. The purpose of this review is to provide a comprehensive overview and evaluation of the advancements in PCR-based detection and differentiation methodologies for the B. pseudomallei complex, and examine their potential uses in diagnostic and environmental testing.
Constitutional Chromoanagenesis of Distal 13q in a Young Adult with Recurrent Strokes.
Burnside, Rachel D; Harris, April; Speyer, Darrow; Burgin, W Scott; Rose, David Z; Sanchez-Valle, Amarilis
2016-01-01
Constitutional chromoanagenesis events, which include chromoanasynthesis and chromothripsis and result in highly complex rearrangements, have been reported for only a few individuals. While rare, these phenomena have likely been underestimated in a constitutional setting as technologies that can accurately detect such complexity are relatively new to the mature field of clinical cytogenetics. G-banding is not likely to accurately identify chromoanasynthesis or chromothripsis, since the banding patterns of chromosomes are likely to be misidentified or oversimplified due to a much lower resolution. We describe a patient who was initially referred for cytogenetic testing as a child for speech delay. As a young adult, he was referred again for recurrent strokes. Chromosome analysis was performed, and the rearrangement resembled a simple duplication of 13q32q34. However, SNP microarray analysis showed a complex pattern of copy number gains and a loss consistent with chromoanasynthesis involving distal 13q (13q32.1q34). This report emphasizes the value of performing microarray analysis for individuals with abnormal or complex chromosome rearrangements. © 2016 S. Karger AG, Basel.
Advanced EUV mask and imaging modeling
NASA Astrophysics Data System (ADS)
Evanschitzky, Peter; Erdmann, Andreas
2017-10-01
The exploration and optimization of image formation in partially coherent EUV projection systems with complex source shapes requires flexible, accurate, and efficient simulation models. This paper reviews advanced mask diffraction and imaging models for the highly accurate and fast simulation of EUV lithography systems, addressing important aspects of the current technical developments. The simulation of light diffraction from the mask employs an extended rigorous coupled wave analysis (RCWA) approach, which is optimized for EUV applications. In order to be able to deal with current EUV simulation requirements, several additional models are included in the extended RCWA approach: a field decomposition and a field stitching technique enable the simulation of larger complex structured mask areas. An EUV multilayer defect model including a database approach makes the fast and fully rigorous defect simulation and defect repair simulation possible. A hybrid mask simulation approach combining real and ideal mask parts allows the detailed investigation of the origin of different mask 3-D effects. The image computation is done with a fully vectorial Abbe-based approach. Arbitrary illumination and polarization schemes and adapted rigorous mask simulations guarantee a high accuracy. A fully vectorial sampling-free description of the pupil with Zernikes and Jones pupils and an optimized representation of the diffraction spectrum enable the computation of high-resolution images with high accuracy and short simulation times. A new pellicle model supports the simulation of arbitrary membrane stacks, pellicle distortions, and particles/defects on top of the pellicle. Finally, an extension for highly accurate anamorphic imaging simulations is included. The application of the models is demonstrated by typical use cases.
High-precision measurement of the X-ray Cu Kα spectrum
Mendenhall, Marcus H.; Henins, Albert; Hudson, Lawrence T.; Szabo, Csilla I.; Windover, Donald; Cline, James P.
2017-01-01
The structure of the X-ray emission lines of the Cu Kα complex has been remeasured on a newly commissioned instrument, in a manner directly traceable to the Système Internationale definition of the meter. In this measurement, the region from 8000 eV to 8100 eV has been covered with a highly precise angular scale, and well-defined system efficiency, providing accurate wavelengths and relative intensities. This measurement updates the standard multi-Lorentzian-fit parameters from Härtwig, Hölzer, et al., and is in modest disagreement with their results for the wavelength of the Kα1 line when compared via quadratic fitting of the peak top; the intensity ratio of Kα1 to Kα2 agrees within the combined error bounds. However, the position of the fitted top of Kα1 is very sensitive to the fit parameters, so it is not believed to be a robust value to quote without further qualification. We also provide accurate intensity and wavelength information for the so-called Kα3,4 “satellite” complex. Supplementary data is provided which gives the entire shape of the spectrum in this region, allowing it to be used directly in cases where simplified, multi-Lorentzian fits to it are not sufficiently accurate. PMID:28757682
Extension of optical lithography by mask-litho integration with computational lithography
NASA Astrophysics Data System (ADS)
Takigawa, T.; Gronlund, K.; Wiley, J.
2010-05-01
Wafer lithography process windows can be enlarged by using source mask co-optimization (SMO). Recently, SMO including freeform wafer scanner illumination sources has been developed. Freeform sources are generated by a programmable illumination system using a micro-mirror array or by custom Diffractive Optical Elements (DOE). The combination of freeform sources and complex masks generated by SMO shows increased wafer lithography process window and reduced MEEF. Full-chip mask optimization using a source optimized by SMO can generate complex masks with small variable feature size sub-resolution assist features (SRAF). These complex masks create challenges for accurate mask pattern writing and low false-defect inspection. The accuracy of the small variable-sized mask SRAF patterns is degraded by short range mask process proximity effects. To address the accuracy needed for these complex masks, we developed a highly accurate mask process correction (MPC) capability. It is also difficult to achieve low false-defect inspections of complex masks with conventional mask defect inspection systems. A printability check system, Mask Lithography Manufacturability Check (M-LMC), was developed and integrated with the 199-nm high-NA inspection system NPI. M-LMC successfully identifies printable defects from the mass of raw defect images collected during the inspection of a complex mask. Long range mask CD uniformity errors are compensated by scanner dose control. A mask CD uniformity error map obtained by a mask metrology system is used as input data to the scanner. Using this method, wafer CD uniformity is improved. As reviewed above, mask-litho integration technology with computational lithography is becoming increasingly important.
Accurate high-throughput structure mapping and prediction with transition metal ion FRET
Yu, Xiaozhen; Wu, Xiongwu; Bermejo, Guillermo A.; Brooks, Bernard R.; Taraska, Justin W.
2013-01-01
Mapping the landscape of a protein’s conformational space is essential to understanding its functions and regulation. The limitations of many structural methods have made this process challenging for most proteins. Here, we report that transition metal ion FRET (tmFRET) can be used in a rapid, highly parallel screen to determine distances from multiple locations within a protein at extremely low concentrations. The distances generated through this screen for maltose binding protein (MBP) match distances from the crystal structure to within a few angstroms. Furthermore, energy transfer accurately detects structural changes during ligand binding. Finally, fluorescence-derived distances can be used to guide molecular simulations to find low energy states. Our results open the door to rapid, accurate mapping and prediction of protein structures at low concentrations, in large complex systems, and in living cells. PMID:23273426
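Converting a measured FRET efficiency into a distance uses the standard Förster relation. The sketch below assumes an illustrative Förster radius typical of the short-range transition metal pairs used in tmFRET; the value is not taken from the paper.

```python
def fret_distance(E, R0):
    """Invert the Förster relation E = 1 / (1 + (R/R0)**6) to get distance from efficiency."""
    return R0 * (1.0 / E - 1.0) ** (1.0 / 6.0)

R0 = 12.0                                  # illustrative Förster radius in Å (assumption)
E = 1.0 / (1.0 + (15.0 / R0) ** 6)         # efficiency that would be measured at R = 15 Å
print(round(fret_distance(E, R0), 1))      # → 15.0
```

The steep sixth-power dependence is what makes tmFRET sensitive to angstrom-scale changes near R0, and insensitive far from it, so probe-pair placement matters.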
Finding Meaning: Sense Inventories for Improved Word Sense Disambiguation
ERIC Educational Resources Information Center
Brown, Susan Windisch
2010-01-01
The deep semantic understanding necessary for complex natural language processing tasks, such as automatic question-answering or text summarization, would benefit from highly accurate word sense disambiguation (WSD). This dissertation investigates what makes an appropriate and effective sense inventory for WSD. Drawing on theories and…
UHF (Ultra-High-Frequency) Propagation in Vegetative Media.
1980-04-01
... where k = 2π/λ is the wave number and the asterisk indicates complex conjugate. In order to obtain useful results for average values that are... easy to make an accurate estimation of the expected effects under one set of conditions on the basis of experimental observations carried out under... systems propagating horizontally through vegetation. The large quantity of measured data demonstrates the complex effects upon path loss of irregu...
NASA Technical Reports Server (NTRS)
Midea, Anthony C.; Austin, Thomas; Pao, S. Paul; DeBonis, James R.; Mani, Mori
2005-01-01
Nozzle boattail drag is significant for the High Speed Civil Transport (HSCT) and can be as high as 25 percent of the overall propulsion system thrust at transonic conditions. Thus, nozzle boattail drag has the potential to create a thrust drag pinch and can reduce HSCT aircraft aerodynamic efficiencies at transonic operating conditions. In order to accurately predict HSCT performance, it is imperative that nozzle boattail drag be accurately predicted. Previous methods to predict HSCT nozzle boattail drag were suspect in the transonic regime. In addition, previous prediction methods were unable to account for complex nozzle geometry and were not flexible enough for engine cycle trade studies. A computational fluid dynamics (CFD) effort was conducted by NASA and McDonnell Douglas to evaluate the magnitude and characteristics of HSCT nozzle boattail drag at transonic conditions. A team of engineers used various CFD codes and provided consistent, accurate boattail drag coefficient predictions for a family of HSCT nozzle configurations. The CFD results were incorporated into a nozzle drag database that encompassed the entire HSCT flight regime and provided the basis for an accurate and flexible prediction methodology.
NASA Technical Reports Server (NTRS)
Midea, Anthony C.; Austin, Thomas; Pao, S. Paul; DeBonis, James R.; Mani, Mori
1999-01-01
Nozzle boattail drag is significant for the High Speed Civil Transport (HSCT) and can be as high as 25% of the overall propulsion system thrust at transonic conditions. Thus, nozzle boattail drag has the potential to create a thrust-drag pinch and can reduce HSCT aircraft aerodynamic efficiencies at transonic operating conditions. In order to accurately predict HSCT performance, it is imperative that nozzle boattail drag be accurately predicted. Previous methods to predict HSCT nozzle boattail drag were suspect in the transonic regime. In addition, previous prediction methods were unable to account for complex nozzle geometry and were not flexible enough for engine cycle trade studies. A computational fluid dynamics (CFD) effort was conducted by NASA and McDonnell Douglas to evaluate the magnitude and characteristics of HSCT nozzle boattail drag at transonic conditions. A team of engineers used various CFD codes and provided consistent, accurate boattail drag coefficient predictions for a family of HSCT nozzle configurations. The CFD results were incorporated into a nozzle drag database that encompassed the entire HSCT flight regime and provided the basis for an accurate and flexible prediction methodology.
NASA Astrophysics Data System (ADS)
Maries, Georgiana; Ahokangas, Elina; Mäkinen, Joni; Pasanen, Antti; Malehmir, Alireza
2017-05-01
A novel high-resolution (2-4 m source and receiver spacing) reflection and refraction seismic survey was carried out for aquifer characterization and to confirm the existing depositional model of the interlobate esker of Virttaankangas, which is part of the Säkylänharju-Virttaankangas glaciofluvial esker-chain complex in southwest Finland. The interlobate esker complex hosting the managed aquifer recharge (MAR) plant is the source of the entire water supply for the city of Turku and its surrounding municipalities. An accurate delineation of the aquifer is therefore critical for long-term MAR planning and sustainable use of the esker resources. Moreover, an additional target was to resolve the poorly known stratigraphy of the 70-100-m-thick glacial deposits overlying a zone of fractured bedrock. Bedrock surface as well as fracture zones were confirmed through combined reflection seismic and refraction tomography results and further validated against existing borehole information. The high-resolution seismic data proved successful in accurately delineating the esker cores and revealing complex stratigraphy from fan lobes to kettle holes, providing valuable information for potential new pumping wells. This study illustrates the potential of geophysical methods for fast and cost-effective esker studies, in particular the digital-based landstreamer and its combination with geophone-based wireless recorders, where the cover sediments are reasonably thick.
Simple to complex modeling of breathing volume using a motion sensor.
John, Dinesh; Staudenmayer, John; Freedson, Patty
2013-06-01
To compare simple and complex modeling techniques to estimate categories of low, medium, and high ventilation (VE) from ActiGraph™ activity counts. Vertical axis ActiGraph™ GT1M activity counts, oxygen consumption and VE were measured during treadmill walking and running, sports, household chores and labor-intensive employment activities. Categories of low (<19.3 l/min), medium (19.3 to 35.4 l/min) and high (>35.4 l/min) VEs were derived from activity intensity classifications (light <2.9 METs, moderate 3.0 to 5.9 METs and vigorous >6.0 METs). We examined the accuracy of two simple techniques (multiple regression and activity count cut-point analyses) and one complex technique (random forest) in predicting VE from activity counts. Prediction accuracy of the complex random forest technique was marginally better than the simple multiple regression method. Both techniques accurately predicted VE categories almost 80% of the time. The multiple regression and random forest techniques were more accurate (85 to 88%) in predicting medium VE. Both techniques predicted high VE (70 to 73%) with greater accuracy than low VE (57 to 60%). ActiGraph™ cut-points for low, medium and high VEs were <1381, 1381 to 3660 and >3660 cpm. There were minor differences in prediction accuracy between the multiple regression and the random forest technique. This study provides methods to objectively estimate VE categories using activity monitors that can easily be deployed in the field. Objective estimates of VE should provide a better understanding of the dose-response relationship between internal exposure to pollutants and disease. Copyright © 2013 Elsevier B.V. All rights reserved.
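The reported cut-points translate directly into a field-deployable classifier; a minimal sketch:

```python
def ve_category(counts_per_min):
    """Classify ActiGraph counts (cpm) into VE categories via the cut-points reported above."""
    if counts_per_min < 1381:
        return "low"       # VE < 19.3 l/min
    elif counts_per_min <= 3660:
        return "medium"    # VE 19.3 to 35.4 l/min
    return "high"          # VE > 35.4 l/min

print([ve_category(c) for c in (800, 2500, 5000)])  # → ['low', 'medium', 'high']
```

This is the "simple" cut-point technique from the abstract; the random forest alternative would replace the thresholds with an ensemble fit on the same counts.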
Investigating the Validity of Two Widely Used Quantitative Text Tools
ERIC Educational Resources Information Center
Cunningham, James W.; Hiebert, Elfrieda H.; Mesmer, Heidi Anne
2018-01-01
In recent years, readability formulas have gained new prominence as a basis for selecting texts for learning and assessment. Variables that quantitative tools count (e.g., word frequency, sentence length) provide valid measures of text complexity insofar as they accurately predict representative and high-quality criteria. The longstanding…
Background/Question/Methods Solar radiation is a significant environmental driver that impacts the quality and resilience of terrestrial and aquatic habitats, yet its spatiotemporal variations are complicated to model accurately at high resolution over large, complex watersheds. ...
Søreide, K; Thorsen, K; Søreide, J A
2015-02-01
Mortality prediction models for patients with perforated peptic ulcer (PPU) have not yielded consistent or highly accurate results. Given the complex nature of this disease, which has many non-linear associations with outcomes, we explored artificial neural networks (ANNs) to predict the complex interactions between the risk factors of PPU and death among patients with this condition. ANN modelling using a standard feed-forward, back-propagation neural network with three layers (i.e., an input layer, a hidden layer and an output layer) was used to predict the 30-day mortality of consecutive patients from a population-based cohort undergoing surgery for PPU. A receiver-operating characteristic (ROC) analysis was used to assess model accuracy. Of the 172 patients, 168 had their data included in the model; the data of 117 (70%) were used for the training set, and the data of 51 (30%) were used for the test set. The accuracy, as evaluated by area under the ROC curve (AUC), was best for an inclusive, multifactorial ANN model (AUC 0.90, 95% CIs 0.85-0.95; p < 0.001). This model outperformed standard predictive scores, including Boey and PULP. The importance of each variable decreased as the number of factors included in the ANN model increased. The prediction of death was most accurate when using an ANN model with several univariate influences on the outcome. This finding demonstrates that PPU is a highly complex disease for which clinical prognoses are likely difficult. The incorporation of computerised learning systems might enhance clinical judgments to improve decision making and outcome prediction.
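The ROC analysis used to score such a model can be reproduced with the rank-statistic form of the AUC (the probability that a randomly chosen positive case outscores a randomly chosen negative one). The scores below are toy values, not patient data:

```python
def roc_auc(scores, labels):
    """AUC as the probability a random positive outscores a random negative (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy model outputs: higher score = predicted death (label 1)
print(roc_auc([0.9, 0.8, 0.4, 0.3, 0.1], [1, 1, 0, 1, 0]))  # → 0.8333333333333334
```

An AUC of 0.90, as reported for the inclusive ANN model, means nine of ten such random positive/negative pairs are ranked correctly.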
NASA Astrophysics Data System (ADS)
He, Runnan; Wang, Kuanquan; Li, Qince; Yuan, Yongfeng; Zhao, Na; Liu, Yang; Zhang, Henggui
2017-12-01
Cardiovascular diseases are associated with high morbidity and mortality. However, it is still a challenge to diagnose them accurately and efficiently. Electrocardiogram (ECG), a bioelectrical signal of the heart, provides crucial information about the dynamical functions of the heart, playing an important role in cardiac diagnosis. As the QRS complex in ECG is associated with ventricular depolarization, accurate QRS detection is vital for interpreting ECG features. In this paper, we propose a real-time, accurate, and effective algorithm for QRS detection. In the algorithm, a proposed preprocessor with a band-pass filter was first applied to remove baseline wander and power-line interference from the signal. After denoising, a method combining K-Nearest Neighbor (KNN) and Particle Swarm Optimization (PSO) was used for accurate QRS detection in ECGs with different morphologies. The proposed algorithm was tested and validated using 48 ECG records from the MIT-BIH arrhythmia database (MITDB) and achieved a high averaged detection accuracy, sensitivity and positive predictivity of 99.43, 99.69, and 99.72%, respectively, indicating a notable improvement over extant algorithms reported in the literature.
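A crude stand-in for the band-pass preprocessing step (a difference of moving averages, not the paper's filter design) illustrates baseline-wander removal on a synthetic trace; the 10 Hz sinusoid is only a rough proxy for QRS-band energy.

```python
import numpy as np

def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

def bandpass(x, fs, short_s=0.025, long_s=0.60):
    # Subtract a long-window mean (baseline wander), then smooth with a short window (noise)
    detrended = x - moving_average(x, int(long_s * fs))
    return moving_average(detrended, int(short_s * fs))

fs = 360                                       # MITDB sampling rate in Hz
t = np.arange(0, 2, 1 / fs)
ecg = np.sin(2 * np.pi * 10 * t)               # stand-in for QRS-band content
drift = 0.5 * np.sin(2 * np.pi * 0.3 * t)      # slow baseline wander
filtered = bandpass(ecg + drift, fs)
# Away from the edges, the filtered trace tracks the QRS-band component closely
print(np.corrcoef(filtered[250:-250], ecg[250:-250])[0, 1] > 0.9)
```

After a denoising stage like this, candidate beats can be passed to the KNN/PSO classifier the abstract describes.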
High Fidelity Simulations of Plume Impingement to the International Space Station
NASA Technical Reports Server (NTRS)
Lumpkin, Forrest E., III; Marichalar, Jeremiah; Stewart, Benedicte D.
2012-01-01
With the retirement of the Space Shuttle, the United States now depends on recently developed commercial spacecraft to supply the International Space Station (ISS) with cargo. These new vehicles supplement ones from international partners including the Russian Progress, the European Autonomous Transfer Vehicle (ATV), and the Japanese H-II Transfer Vehicle (HTV). Furthermore, to carry crew to the ISS and supplement the capability currently provided exclusively by the Russian Soyuz, new designs and a refinement to a cargo vehicle design are in work. Many of these designs include features such as nozzle scarfing or simultaneous firing of multiple thrusters resulting in complex plumes. This results in a wide variety of complex plumes impinging upon the ISS. Therefore, to ensure safe "proximity operations" near the ISS, the need for accurate and efficient high fidelity simulation of plume impingement to the ISS is as high as ever. A capability combining computational fluid dynamics (CFD) and the Direct Simulation Monte Carlo (DSMC) techniques has been developed to properly model the large density variations encountered as the plume expands from the high pressure in the combustion chamber to the near vacuum conditions at the orbiting altitude of the ISS. Details of the computational tools employed by this method, including recent software enhancements and the best practices needed to achieve accurate simulations, are discussed. Several recent examples of the application of this high fidelity capability are presented. These examples highlight many of the real world, complex features of plume impingement that occur when "visiting vehicles" operate in the vicinity of the ISS.
Developments in deep brain stimulation using time dependent magnetic fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowther, L.J.; Nlebedim, I.C.; Jiles, D.C.
2012-03-07
The effect of head model complexity upon the strength of field in different brain regions for transcranial magnetic stimulation (TMS) has been investigated. Experimental measurements were used to verify the validity of magnetic field calculations and induced electric field calculations for three 3D human head models of varying complexity. Results show the inability for simplified head models to accurately determine the site of high fields that lead to neuronal stimulation and highlight the necessity for realistic head modeling for TMS applications.
Roussy, Georges; Dichtel, Bernard; Chaabane, Haykel
2003-01-01
By using a new integrated circuit, which is marketed for Bluetooth applications, it is possible to simplify the method of measuring the complex impedance, complex reflection coefficient and complex transmission coefficient in an industrial microwave setup. The Analog Devices circuit AD 8302, which measures gain and phase up to 2.7 GHz, operates with variable level input signals and is less sensitive to both amplitude and frequency fluctuations of industrial magnetrons than are mixers and AM crystal detectors. Therefore, accurate gain and phase measurements can be performed with low-stability generators. A mechanical setup with an AD 8302 is described; the calibration procedure and its performance are presented.
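The conversion from a gain/phase detector's output voltages to a complex reflection coefficient, and from there to an impedance, is a short calculation. The sketch below is illustrative only: the slope and intercept constants are nominal, datasheet-style values (not taken from the paper) and a real setup replaces them by a calibration against known standards; the helper names are hypothetical.

```python
import cmath
import math

# Nominal transfer characteristics, assumed for illustration only;
# a real setup calibrates these against known reflection standards.
MAG_SLOPE = 0.030     # V per dB of magnitude ratio
PHASE_SLOPE = 0.010   # V per degree of phase difference
V_MAG_CENTER = 0.9    # output voltage at 0 dB
V_PHASE_MAX = 1.8     # output voltage at 0 degrees (assumed convention)

def reflection_coefficient(v_mag: float, v_phs: float) -> complex:
    """Complex reflection coefficient from gain/phase detector voltages.

    Note: gain/phase detectors of this kind report only the magnitude of
    the phase difference, so the sign of the angle must be resolved
    externally."""
    gain_db = (v_mag - V_MAG_CENTER) / MAG_SLOPE
    phase_deg = (V_PHASE_MAX - v_phs) / PHASE_SLOPE
    return cmath.rect(10.0 ** (gain_db / 20.0), math.radians(phase_deg))

def impedance(gamma: complex, z0: float = 50.0) -> complex:
    """Load impedance seen through a line of characteristic impedance z0."""
    return z0 * (1 + gamma) / (1 - gamma)
```

As a sanity check, a matched load (gamma = 0) maps back to z0, and a reading of 0 dB at 0 degrees maps to gamma = 1.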
Multiscale entropy-based methods for heart rate variability complexity analysis
NASA Astrophysics Data System (ADS)
Silva, Luiz Eduardo Virgilio; Cabella, Brenno Caetano Troca; Neves, Ubiraci Pereira da Costa; Murta Junior, Luiz Otavio
2015-03-01
Physiologic complexity is an important concept for characterizing time series from biological systems, and combined with multiscale analysis it can contribute to the comprehension of many complex phenomena. Although multiscale entropy has been applied to physiological time series, it measures irregularity as a function of scale. In this study we propose and evaluate a set of three complexity metrics as functions of time scale. The complexity metrics are derived from nonadditive entropy supported by the generation of surrogate data, i.e. SDiffqmax, qmax and qzero. To assess the accuracy of the proposed complexity metrics, receiver operating characteristic (ROC) curves were built and the areas under the curves were computed for three physiological situations. Heart rate variability (HRV) time series from normal sinus rhythm, atrial fibrillation, and congestive heart failure data sets were analyzed. Results show that the proposed complexity metrics are accurate and robust when compared to classic entropic irregularity metrics. Furthermore, SDiffqmax is the most accurate at lower scales, whereas qmax and qzero are the most accurate when higher time scales are considered. The multiscale complexity analysis described here shows potential to assess complex physiological time series and deserves further investigation in a wider context.
Evaluating the effectiveness of the MASW technique in a geologically complex terrain
NASA Astrophysics Data System (ADS)
Anukwu, G. C.; Khalil, A. E.; Abdullah, K. B.
2018-04-01
MASW surveys carried out at a number of sites in Pulau Pinang, Malaysia, showed complicated dispersion curves, which consequently made the inversion into a soil shear-velocity model ambiguous. This research work details efforts to identify the source of these complicated dispersion curves. As a starting point, the complexity of the phase velocity spectrum is assumed to be due to either the surveying parameters or the elastic properties of the soil structures. For the former, the surveying was carried out using different parameters. The complexities persisted across the different surveying parameters, an indication that the elastic properties of the soil structure could be the cause. To test this assumption, a synthetic modelling approach was adopted using information from boreholes, the literature and geologically plausible models. Results suggest that the presence of irregular variation in the stiffness of the soil layers, high stiffness contrast and relatively shallow bedrock produces a quite complex f-v spectrum, especially at frequencies below 20 Hz, making it difficult to accurately extract the dispersion curve below this frequency. As such, for the MASW technique, especially in complex geological situations as demonstrated, great care should be taken during data processing and inversion to obtain a model that accurately depicts the subsurface.
Automated Approach to Very High-Order Aeroacoustic Computations. Revision
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.; Goodrich, John W.
2001-01-01
Computational aeroacoustics requires efficient, high-resolution simulation tools. For smooth problems, this is best accomplished with very high-order in space and time methods on small stencils. However, the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high-order methods (> 15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.
Jiang, Shang-Da; Maganas, Dimitrios; Levesanos, Nikolaos; Ferentinos, Eleftherios; Haas, Sabrina; Thirunavukkuarasu, Komalavalli; Krzystek, J; Dressel, Martin; Bogani, Lapo; Neese, Frank; Kyritsis, Panayotis
2015-10-14
The high-spin (S = 1) tetrahedral Ni(II) complex [Ni{(i)Pr2P(Se)NP(Se)(i)Pr2}2] was investigated by magnetometric, spectroscopic, and quantum chemical methods. Angle-resolved magnetometry studies revealed the orientation of the magnetization principal axes. The very large zero-field splitting (zfs), D = 45.40(2) cm(-1), E = 1.91(2) cm(-1), of the complex was accurately determined by far-infrared magnetic spectroscopy, directly observing transitions between the spin sublevels of the triplet ground state. These are the largest zfs values ever directly determined for a high-spin Ni(II) complex. Ab initio calculations further probed the electronic structure of the system, elucidating the factors controlling the sign and magnitude of D. The latter is dominated by spin-orbit coupling contributions of the Ni ions, whereas the corresponding effects of the Se atoms are remarkably smaller.
Kaleidoscopic imaging patterns of complex structures fabricated by laser-induced deformation
Zhang, Haoran; Yang, Fengyou; Dong, Jianjie; Du, Lena; Wang, Chuang; Zhang, Jianming; Guo, Chuan Fei; Liu, Qian
2016-01-01
Complex surface structures have stimulated a great deal of interest due to their many potential applications in surface devices. However, the fabrication of complex surface micro-/nanostructures always faces great challenges in achieving precise design, good controllability, low cost and high throughput. Here, we present a route for the accurate design and highly controllable fabrication of surface quasi-three-dimensional (quasi-3D) structures based on thermal deformation of simple two-dimensional laser-induced patterns. A complex quasi-3D structure, a coaxially nested convex–concave microlens array, serves as an example demonstrating our capability for design and fabrication of surface elements with this method. Moreover, by using only one relief mask with the convex–concave microlens structure, we have obtained hundreds of target patterns at different imaging planes, offering a cost-effective solution for mass production in lithography and imprinting, and portending a paradigm in quasi-3D manufacturing. PMID:27910852
Design of Restoration Method Based on Compressed Sensing and TwIST Algorithm
NASA Astrophysics Data System (ADS)
Zhang, Fei; Piao, Yan
2018-04-01
In order to effectively improve the subjective and objective quality of degraded images at low sampling rates, save storage space and reduce computational complexity at the same time, this paper proposes a joint restoration algorithm combining compressed sensing with two-step iterative shrinkage/thresholding (TwIST). The algorithm applies the TwIST algorithm, originally used in image restoration, to compressed sensing theory. A small amount of sparse high-frequency information is obtained in the frequency domain, and the TwIST algorithm based on compressed sensing theory is then used to accurately reconstruct the high-frequency image. The experimental results show that the proposed algorithm achieves better subjective visual effects and objective quality while accurately restoring degraded images.
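As a hedged sketch of the two-step iteration itself (not the paper's implementation): each step applies one gradient/soft-threshold move and then combines the two most recent iterates. The parameters alpha and beta below are illustrative choices, and the measurement operator is abstracted as a plain matrix.

```python
def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return [max(abs(a) - t, 0.0) * (1.0 if a >= 0 else -1.0) for a in v]

def matvec(A, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in A]

def mat_t_vec(A, y):
    return [sum(A[i][j] * y[i] for i in range(len(A)))
            for j in range(len(A[0]))]

def twist(A, y, lam=0.1, alpha=1.2, beta=1.0, iters=100):
    """Two-step iterative shrinkage/thresholding (illustrative parameters).

    x_{t+1} = (1 - alpha) x_{t-1} + (alpha - beta) x_t + beta * G(x_t),
    where G is one iterative-shrinkage step applied to x_t."""
    n = len(A[0])
    x_prev, x = [0.0] * n, [0.0] * n
    for _ in range(iters):
        r = [yi - ai for yi, ai in zip(y, matvec(A, x))]
        g = soft([xi + gi for xi, gi in zip(x, mat_t_vec(A, r))], lam)
        x_prev, x = x, [(1 - alpha) * xp + (alpha - beta) * xc + beta * gc
                        for xp, xc, gc in zip(x_prev, x, g)]
    return x
```

With an orthonormal operator the iteration converges to the soft-thresholded data, a useful sanity check; for a general sensing matrix, the operator norm must be controlled for stability.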
Sampling and modeling riparian forest structure and riparian microclimate
Bianca N.I. Eskelson; Paul D. Anderson; Hailemariam Temesgen
2013-01-01
Riparian areas are extremely variable and dynamic, and represent some of the most complex terrestrial ecosystems in the world. The high variability within and among riparian areas poses challenges in developing efficient sampling and modeling approaches that accurately quantify riparian forest structure and riparian microclimate. Data from eight stream reaches that are...
The Facts Are on the Table: Analyzing the Geometry of Coin Collisions
ERIC Educational Resources Information Center
Theilmann, Florian
2014-01-01
In a typical high school course, the complex physics of collisions is broken up into the dichotomy of perfectly elastic versus completely inelastic collisions. Real-life collisions, however, generally fall between these two extremes. An accurate treatment is still possible, as demonstrated in an investigation of coin collisions. Simple…
When high working memory capacity is and is not beneficial for predicting nonlinear processes.
Fischer, Helen; Holt, Daniel V
2017-04-01
Predicting the development of dynamic processes is vital in many areas of life. Previous findings are inconclusive as to whether higher working memory capacity (WMC) is always associated with using more accurate prediction strategies, or whether higher WMC can also be associated with using overly complex strategies that do not improve accuracy. In this study, participants predicted a range of systematically varied nonlinear processes based on exponential functions where prediction accuracy could or could not be enhanced using well-calibrated rules. Results indicate that higher WMC participants seem to rely more on well-calibrated strategies, leading to more accurate predictions for processes with highly nonlinear trajectories in the prediction region. Predictions of lower WMC participants, in contrast, point toward an increased use of simple exemplar-based prediction strategies, which perform just as well as more complex strategies when the prediction region is approximately linear. These results imply that with respect to predicting dynamic processes, working memory capacity limits are not generally a strength or a weakness, but that this depends on the process to be predicted.
Computation of Steady and Unsteady Laminar Flames: Theory
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas; Radhakrishnan, Krishnan; Zhou, Ruhai
1999-01-01
In this paper we describe the numerical analysis underlying our efforts to develop an accurate and reliable code for simulating flame propagation using complex physical and chemical models. We discuss our spatial and temporal discretization schemes, which in our current implementations range in order from two to six. In space we use staggered meshes to define discrete divergence and gradient operators, allowing us to approximate complex diffusion operators while maintaining ellipticity. Our temporal discretization is based on the use of preconditioning to produce a highly efficient linearly implicit method with good stability properties. High order for time accurate simulations is obtained through the use of extrapolation or deferred correction procedures. We also discuss our techniques for computing stationary flames. The primary issue here is the automatic generation of initial approximations for the application of Newton's method. We use a novel time-stepping procedure, which allows the dynamic updating of the flame speed and forces the flame front towards a specified location. Numerical experiments are presented, primarily for the stationary flame problem. These illustrate the reliability of our techniques, and the dependence of the results on various code parameters.
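The staggered-mesh construction can be illustrated in one dimension. This is a minimal sketch with hypothetical helper names, not the authors' code: the gradient lives at cell faces, the divergence maps face values back to interior nodes, and their composition is the standard second difference, which keeps the discrete diffusion operator elliptic and is exact for quadratics.

```python
def gradient(u, h):
    """Staggered gradient: differences located at faces i + 1/2."""
    return [(u[i + 1] - u[i]) / h for i in range(len(u) - 1)]

def divergence(g, h):
    """Staggered divergence: maps face values to interior nodes."""
    return [(g[i + 1] - g[i]) / h for i in range(len(g) - 1)]

# Composing the two yields the classic second difference
# (u[i-1] - 2 u[i] + u[i+1]) / h**2 at each interior node.
```

Applied to a quadratic such as u(x) = x**2, the composed operator returns the exact second derivative (2) at every interior node, up to floating-point rounding.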
Quantum Electrodynamical Shifts in Multivalent Heavy Ions.
Tupitsyn, I I; Kozlov, M G; Safronova, M S; Shabaev, V M; Dzuba, V A
2016-12-16
The quantum electrodynamics (QED) corrections are directly incorporated into the most accurate treatment of the correlation corrections for ions with complex electronic structure of interest to metrology and tests of fundamental physics. We compared the performance of four different QED potentials for various systems to assess the accuracy of QED calculations and to make a prediction of highly charged ion properties urgently needed for planning future experiments. We find that all four potentials give consistent and reliable results for ions of interest. For the strongly bound electrons, the nonlocal potentials are more accurate than the local potential.
NASA Technical Reports Server (NTRS)
Chow, Chuen-Yen; Ryan, James S.
1987-01-01
While the zonal grid system of Transonic Navier-Stokes (TNS) provides excellent modeling of complex geometries, improved shock capturing, and a higher Mach number range will be required if flows about hypersonic aircraft are to be modeled accurately. A computational fluid dynamics (CFD) code, the Compressible Navier-Stokes (CNS), is under development to combine the required high Mach number capability with the existing TNS geometry capability. One of several candidate flow solvers for inclusion in the CNS is that of F3D. This upwinding flow solver promises improved shock capturing, and more accurate hypersonic solutions overall, compared to the solver currently used in TNS.
NASA Technical Reports Server (NTRS)
George, Kerry; Wu, Honglu; Willingham, Veronica; Cucinotta, Francis A.
2002-01-01
High-LET radiation is more efficient in producing complex-type chromosome exchanges than sparsely ionizing radiation, and this can potentially be used as a biomarker of radiation quality. To investigate if complex chromosome exchanges are induced by the high-LET component of space radiation exposure, damage was assessed in astronauts' blood lymphocytes before and after long-duration missions of 3-4 months. The frequency of simple translocations increased significantly for most of the crewmembers studied. However, there were few complex exchanges detected and only one crewmember had a significant increase after flight. It has been suggested that the yield of complex chromosome damage could be underestimated when analyzing metaphase cells collected at one time point after irradiation, and that analysis of chemically induced PCC may be more accurate since problems with complicated cell-cycle delays are avoided. However, in this case the yields of chromosome damage were similar for metaphase and PCC analysis of astronauts' lymphocytes. It appears that the use of complex-type exchanges as a biomarker of radiation quality in vivo after low-dose chronic exposure in mixed radiation fields is hampered by statistical uncertainties.
Iterative feature refinement for accurate undersampled MR image reconstruction
NASA Astrophysics Data System (ADS)
Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong
2016-05-01
Accelerating MR scan is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, the existing CSMRI approaches still have limitations such as fine structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled K-space data. Integrating IFR with CSMRI which is equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that are originally discarded without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.
Actinic imaging and evaluation of phase structures on EUV lithography masks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mochi, Iacopo; Goldberg, Kenneth; Huh, Sungmin
2010-09-28
The authors describe the implementation of a phase-retrieval algorithm to reconstruct the phase and complex amplitude of structures on EUV lithography masks. Many native defects commonly found on EUV reticles are difficult to detect and review accurately because they have a strong phase component. Understanding the complex amplitude of mask features is essential for predictive modeling of defect printability and defect repair. Besides printing in a stepper, the most accurate way to characterize such defects is with actinic inspection, performed at the EUV design wavelength. Phase defects and phase structures show a distinct through-focus behavior that enables qualitative evaluation of the object phase from two or more high-resolution intensity measurements. For the first time, the phase of structures and defects on EUV masks was quantitatively reconstructed from aerial image measurements, using a modified version of a phase-retrieval algorithm developed to test optical phase-shifting reticles.
Freire, Ricardo O; Rocha, Gerd B; Simas, Alfredo M
2006-03-01
Predicting the geometries of lanthanide coordination compounds efficiently and accurately is central for the design of new ligands capable of forming stable and highly luminescent complexes. Accordingly, we present in this paper a report on the capability of various ab initio effective core potential calculations in reproducing the coordination polyhedron geometries of lanthanide complexes. Starting with all combinations of HF, B3LYP and MP2(Full) with STO-3G, 3-21G, 6-31G, 6-31G* and 6-31+G basis sets for [Eu(H2O)9]3+ and closing with more manageable calculations for the larger complexes, we computed the fully predicted ab initio geometries for a total of 80 calculations on 52 complexes of Sm(III), Eu(III), Gd(III), Tb(III), Dy(III), Ho(III), Er(III) and Tm(III), the largest containing 164 atoms. Our results indicate that RHF/STO-3G/ECP appears to be the most efficient model chemistry in terms of coordination polyhedron crystallographic geometry predictions from isolated lanthanide complex ion calculations. Moreover, both augmenting the basis set and/or including electron correlation generally enlarged the deviations and degraded the quality of the predicted coordination polyhedron crystallographic geometry. Our results further indicate that Cosentino et al.'s suggestion of using RHF/3-21G/ECP geometries appears to be a more robust, but not necessarily more accurate, recommendation to be adopted for the general lanthanide complex case.
STEPS: efficient simulation of stochastic reaction-diffusion models in realistic morphologies.
Hepburn, Iain; Chen, Weiliang; Wils, Stefan; De Schutter, Erik
2012-05-10
Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. We describe STEPS, a stochastic reaction-diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction-diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. 
STEPS simulates models of cellular reaction-diffusion systems with complex boundaries with high accuracy and high performance in C/C++, controlled by a powerful and user-friendly Python interface. STEPS is free for use and is available at http://steps.sourceforge.net/
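STEPS builds on the exact stochastic simulation algorithm (SSA), implementing the composition-and-rejection variant for scalability. As a hedged illustration of the underlying logic only (not STEPS code), Gillespie's direct method can be sketched as:

```python
import math
import random

def gillespie_direct(x0, reactions, t_end, seed=0):
    """Gillespie's direct-method SSA.

    `reactions` is a list of (propensity_fn, state_change) pairs. The
    composition-and-rejection variant replaces the linear search below
    with a grouped lookup that scales better with many reactions."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    while True:
        props = [a(x) for a, _ in reactions]
        a0 = sum(props)
        if a0 == 0.0:  # no reaction can fire: system is exhausted
            break
        tau = -math.log(1.0 - rng.random()) / a0  # exponential waiting time
        if t + tau > t_end:
            break
        t += tau
        target = rng.random() * a0  # pick a reaction proportionally
        acc = 0.0
        for prop, (_, change) in zip(props, reactions):
            acc += prop
            if target < acc:
                x = [xi + d for xi, d in zip(x, change)]
                break
    return x
```

For an irreversible reaction A -> B the trajectory is random but the endpoint is not: given enough time, all of A is consumed.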
Burgess, Alexandra J.; Retkute, Renata; Pound, Michael P.; Foulkes, John; Preston, Simon P.; Jensen, Oliver E.; Pridmore, Tony P.; Murchie, Erik H.
2015-01-01
Photoinhibition reduces photosynthetic productivity; however, it is difficult to quantify accurately in complex canopies partly because of a lack of high-resolution structural data on plant canopy architecture, which determines complex fluctuations of light in space and time. Here, we evaluate the effects of photoinhibition on long-term carbon gain (over 1 d) in three different wheat (Triticum aestivum) lines, which are architecturally diverse. We use a unique method for accurate digital three-dimensional reconstruction of canopies growing in the field. The reconstruction method captures unique architectural differences between lines, such as leaf angle, curvature, and leaf density, thus providing a sensitive method of evaluating the productivity of actual canopy structures that previously were difficult or impossible to obtain. We show that complex data on light distribution can be automatically obtained without conventional manual measurements. We use a mathematical model of photosynthesis parameterized by field data consisting of chlorophyll fluorescence, light response curves of carbon dioxide assimilation, and manual confirmation of canopy architecture and light attenuation. Model simulations show that photoinhibition alone can result in substantial reduction in carbon gain, but this is highly dependent on exact canopy architecture and the diurnal dynamics of photoinhibition. The use of such highly realistic canopy reconstructions also allows us to conclude that even a moderate change in leaf angle in upper layers of the wheat canopy led to a large increase in the number of leaves in a severely light-limited state. PMID:26282240
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Xiangjian; State Key Laboratory of Molecular Reaction Dynamics and Center for Theoretical Computational Chemistry, Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian 116023; Zhang, Zhaojun, E-mail: zhangzhj@dicp.ac.cn, E-mail: zhangdh@dicp.ac.cn
2016-03-14
Understanding the role of reactant ro-vibrational degrees of freedom (DOFs) in the reaction dynamics of polyatomic molecular dissociation on metal surfaces is of great importance for exploring complex chemical reaction mechanisms. Here, we present an extensive quantum dynamics study of the dissociative chemisorption of CH4 on a rigid Ni(111) surface, developing an accurate nine-dimensional quantum dynamical model that includes the azimuthal DOF. Based on a highly accurate fifteen-dimensional potential energy surface built from first principles, our simulations elucidate that the dissociation probability of CH4 depends strongly on azimuth and surface impact site. Some improvements are suggested to obtain accurate dissociation probabilities from quantum dynamics simulations.
Kim, Sangmin; Raphael, Patrick D; Oghalai, John S; Applegate, Brian E
2016-04-01
Swept-laser sources offer a number of advantages for Phase-sensitive Optical Coherence Tomography (PhOCT). However, inter- and intra-sweep variability leads to calibration errors that adversely affect phase sensitivity. While there are several approaches to overcoming this problem, our preferred method is to simply calibrate every sweep of the laser. This approach offers high accuracy and phase stability at the expense of a substantial processing burden. In this approach, the Hilbert phase of the interferogram from a reference interferometer provides the instantaneous wavenumber of the laser, but is computationally expensive. Fortunately, the Hilbert transform may be approximated by a Finite Impulse-Response (FIR) filter. Here we explore the use of several FIR filter based Hilbert transforms for calibration, explicitly considering the impact of filter choice on phase sensitivity and OCT image quality. Our results indicate that the complex FIR filter approach is the most robust and accurate among those considered. It provides similar image quality and slightly better phase sensitivity than the traditional FFT-IFFT based Hilbert transform while consuming fewer resources in an FPGA implementation. We also explored utilizing the Hilbert magnitude of the reference interferogram to calculate an ideal window function for spectral amplitude calibration. The ideal window function is designed to carefully control sidelobes on the axial point spread function. We found that after a simple chromatic correction, calculating the window function using the complex FIR filter and the reference interferometer gave similar results to window functions calculated using a mirror sample and the FFT-IFFT Hilbert transform. Hence, the complex FIR filter can enable accurate and high-speed calibration of the magnitude and phase of spectral interferograms.
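A hedged sketch of the FIR approximation to the Hilbert transform (a windowed type-III design; illustrative, not the authors' FPGA implementation): the filter output supplies the imaginary part of the analytic signal, while the real part is the input delayed by the filter's group delay.

```python
import math

def fir_hilbert_taps(num_taps=31):
    """Odd-length, Hamming-windowed type-III FIR Hilbert transformer."""
    assert num_taps % 2 == 1
    mid = num_taps // 2
    taps = []
    for n in range(num_taps):
        k = n - mid
        # Ideal Hilbert impulse response: 2/(pi*k) for odd k, 0 for even k.
        ideal = 0.0 if k % 2 == 0 else 2.0 / (math.pi * k)
        window = 0.54 - 0.46 * math.cos(2.0 * math.pi * n / (num_taps - 1))
        taps.append(ideal * window)
    return taps

def analytic_signal(x, taps):
    """Approximate analytic signal for the fully overlapped samples."""
    mid = len(taps) // 2
    out = []
    for n in range(mid, len(x) - mid):
        imag = sum(taps[j] * x[n + mid - j] for j in range(len(taps)))
        out.append(complex(x[n], imag))  # real part delayed by `mid` samples
    return out
```

For a pure cosine well inside the passband, the analytic-signal magnitude stays near 1, and the unwrapped phase supplies the instantaneous wavenumber needed for sweep-by-sweep calibration.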
Dunning, F Mark; Piazza, Timothy M; Zeytin, Füsûn N; Tucker, Ward C
2014-03-03
Accurate detection and quantification of botulinum neurotoxin (BoNT) in complex matrices is required for pharmaceutical, environmental, and food sample testing. Rapid BoNT testing of foodstuffs is needed during outbreak forensics, patient diagnosis, and food safety testing while accurate potency testing is required for BoNT-based drug product manufacturing and patient safety. The widely used mouse bioassay for BoNT testing is highly sensitive but lacks the precision and throughput needed for rapid and routine BoNT testing. Furthermore, the bioassay's use of animals has resulted in calls by drug product regulatory authorities and animal-rights proponents in the US and abroad to replace the mouse bioassay for BoNT testing. Several in vitro replacement assays have been developed that work well with purified BoNT in simple buffers, but most have not been shown to be applicable to testing in highly complex matrices. Here, a protocol for the detection of BoNT in complex matrices using the BoTest Matrix assays is presented. The assay consists of three parts: The first part involves preparation of the samples for testing, the second part is an immunoprecipitation step using anti-BoNT antibody-coated paramagnetic beads to purify BoNT from the matrix, and the third part quantifies the isolated BoNT's proteolytic activity using a fluorogenic reporter. The protocol is written for high throughput testing in 96-well plates using both liquid and solid matrices and requires about 2 hr of manual preparation with total assay times of 4-26 hr depending on the sample type, toxin load, and desired sensitivity. Data are presented for BoNT/A testing with phosphate-buffered saline, a drug product, culture supernatant, 2% milk, and fresh tomatoes and includes discussion of critical parameters for assay success.
Fernandez-Miranda, Juan C; Pathak, Sudhir; Engh, Johnathan; Jarbo, Kevin; Verstynen, Timothy; Yeh, Fang-Cheng; Wang, Yibao; Mintz, Arlan; Boada, Fernando; Schneider, Walter; Friedlander, Robert
2012-08-01
High-definition fiber tracking (HDFT) is a novel combination of processing, reconstruction, and tractography methods that can track white matter fibers from cortex, through complex fiber crossings, to cortical and subcortical targets with subvoxel resolution. To perform neuroanatomical validation of HDFT and to investigate its neurosurgical applications. Six neurologically healthy adults and 36 patients with brain lesions were studied. Diffusion spectrum imaging data were reconstructed with a Generalized Q-Ball Imaging approach. Fiber dissection studies were performed in 20 human brains, and selected dissection results were compared with tractography. HDFT provides accurate replication of known neuroanatomical features such as the gyral and sulcal folding patterns, the characteristic shape of the claustrum, the segmentation of the thalamic nuclei, the decussation of the superior cerebellar peduncle, the multiple fiber crossing at the centrum semiovale, the complex angulation of the optic radiations, the terminal arborization of the arcuate tract, and the cortical segmentation of the dorsal Broca area. From a clinical perspective, we show that HDFT provides accurate structural connectivity studies in patients with intracerebral lesions, allowing qualitative and quantitative white matter damage assessment, aiding in understanding lesional patterns of white matter structural injury, and facilitating innovative neurosurgical applications. High-grade gliomas produce significant disruption of fibers, and low-grade gliomas cause fiber displacement. Cavernomas cause both displacement and disruption of fibers. Our HDFT approach provides an accurate reconstruction of white matter fiber tracts with unprecedented detail in both the normal and pathological human brain. Further studies to validate the clinical findings are needed.
NASA Technical Reports Server (NTRS)
Cao, Fang; Fichot, Cedric G.; Hooker, Stanford B.; Miller, William L.
2014-01-01
Photochemical processes driven by high-energy ultraviolet radiation (UVR) in inshore, estuarine, and coastal waters play an important role in global biogeochemical cycles and biological systems. A key to modeling photochemical processes in these optically complex waters is an accurate description of the vertical distribution of UVR in the water column, which can be obtained using the diffuse attenuation coefficients of downwelling irradiance (Kd(λ)). The SeaUV/SeaUVc algorithms (Fichot et al., 2008) can accurately retrieve Kd(λ) (λ = 320, 340, 380, 412, 443, and 490 nm) in oceanic and coastal waters using multispectral remote sensing reflectances (Rrs(λ), SeaWiFS bands). However, the SeaUV/SeaUVc algorithms are currently not optimized for use in optically complex, inshore waters, where they tend to severely underestimate Kd(λ). Here, a new training data set of optical properties collected in optically complex, inshore waters was used to re-parameterize the published SeaUV/SeaUVc algorithms, resulting in improved Kd(λ) retrievals for turbid, estuarine waters. Although the updated SeaUV/SeaUVc algorithms perform best in optically complex waters, the published SeaUV/SeaUVc models still perform well in most coastal and oceanic waters. Therefore, we propose a composite set of SeaUV/SeaUVc algorithms, optimized for Kd(λ) retrieval in almost all marine systems, ranging from oceanic to inshore waters. The composite algorithm set can retrieve Kd from ocean color with good accuracy across this wide range of water types (e.g., within 13% mean relative error for Kd(340)). A validation step using three independent, in situ data sets indicates that the composite SeaUV/SeaUVc can generate accurate Kd values from 320 to 490 nm using satellite imagery on a global scale. Taking advantage of the inherent benefits of our statistical methods, we pooled the validation data with the training set, obtaining an optimized composite model for estimating Kd(λ) in UV wavelengths for almost all marine waters.
This optimized composite set of SeaUV/SeaUVc algorithms will provide the optical community with improved ability to quantify the role of solar UV radiation in photochemical and photobiological processes in the ocean.
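The re-parameterization step can be illustrated with a toy least-squares fit of log-transformed Kd against log-transformed band reflectances. The synthetic data, coefficients, and band-to-Kd mapping below are assumptions for illustration only, not the published SeaUV/SeaUVc coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: Rrs at six bands (sr^-1) and Kd(340) (m^-1)
n = 200
rrs = rng.uniform(0.001, 0.02, size=(n, 6))
true_coef = np.array([-1.2, 0.8, -0.5, 0.3, 0.9, -0.4])   # assumed
log_kd = np.log(rrs) @ true_coef + 0.7 + rng.normal(0, 0.05, n)

# Re-parameterize: ordinary least squares on log-transformed reflectances
X = np.column_stack([np.log(rrs), np.ones(n)])
coef, *_ = np.linalg.lstsq(X, log_kd, rcond=None)

# Retrieval error on the (in-sample) training data
kd_hat = np.exp(X @ coef)
rel_err = np.abs(kd_hat - np.exp(log_kd)) / np.exp(log_kd)
print(rel_err.mean())
```

A real re-parameterization would hold out validation stations and fit one model per wavelength; this sketch only shows the mechanics of refitting on a new training set.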
NASA Astrophysics Data System (ADS)
Su, Wei; Lindsay, Scott; Liu, Haihu; Wu, Lei
2017-08-01
Rooted in gas kinetics, the lattice Boltzmann method (LBM) is a powerful tool for modeling hydrodynamics. In the past decade, it has been extended to simulate rarefied gas flows beyond the Navier-Stokes level, either by using high-order Gauss-Hermite quadrature, or by introducing a relaxation time that is a function of the gas-wall distance. While the former method, with a limited number of discrete velocities (e.g., D2Q36), is accurate up to the early transition flow regime, the latter method (especially the multiple-relaxation-time (MRT) LBM), with the same discrete velocities as those used in simulating hydrodynamics (i.e., D2Q9), is accurate up to the free-molecular flow regime in planar Poiseuille flow. This is quite astonishing in the sense that fewer discrete velocities are more accurate. In this paper, by solving the Bhatnagar-Gross-Krook kinetic equation accurately via the discrete velocity method, we find that the high-order Gauss-Hermite quadrature cannot describe the large variation in the velocity distribution function when the rarefaction effect is strong, but the MRT-LBM can capture the flow velocity well because it is equivalent to solving the Navier-Stokes equations with an effective shear viscosity. Since the MRT-LBM has only been validated in simple channel flows, and for complex geometries it is difficult to find the effective viscosity, it is necessary to assess its performance for the simulation of rarefied gas flows. Our numerical simulations based on the accurate discrete velocity method suggest that the accuracy of the MRT-LBM is reduced significantly in the simulation of rarefied gas flows through rough surfaces and porous media. Our simulation results could serve as benchmark cases for future development of the LBM for modeling and simulation of rarefied gas flows in complex geometries.
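The hydrodynamic-level D2Q9 LBM mentioned above can be sketched for the planar Poiseuille benchmark. This is a minimal single-relaxation-time (BGK) sketch, not the MRT scheme of the paper; the lattice size, relaxation time, forcing, and full-way bounce-back walls are assumptions.

```python
import numpy as np

# Minimal D2Q9 BGK lattice Boltzmann: body-force-driven channel
# (Poiseuille) flow with full-way bounce-back walls.
nx, ny = 4, 33                      # periodic in x; solid rows y = 0, ny-1
tau, g = 0.8, 1e-6                  # relaxation time, body force along x
nu = (tau - 0.5) / 3.0              # kinematic viscosity of the model
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
opp = [0, 3, 4, 1, 2, 7, 8, 5, 6]   # opposite directions for bounce-back

f = w[:, None, None] * np.ones((9, nx, ny))
for _ in range(10000):
    rho = f.sum(0)
    ux = np.einsum('i,ixy->xy', c[:, 0], f) / rho
    uy = np.einsum('i,ixy->xy', c[:, 1], f) / rho
    cu = 3 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    feq = w[:, None, None] * rho * (1 + cu + 0.5 * cu**2
                                    - 1.5 * (ux**2 + uy**2))
    # BGK collision plus a simple body-force term
    f += -(f - feq) / tau + 3 * w[:, None, None] * c[:, 0, None, None] * g
    for i in range(9):              # streaming step
        f[i] = np.roll(f[i], shift=(c[i, 0], c[i, 1]), axis=(0, 1))
    f[:, :, 0] = f[opp, :, 0]       # bounce-back at bottom wall
    f[:, :, -1] = f[opp, :, -1]     # bounce-back at top wall

profile = ux[0, 1:-1]               # steady velocity across the channel
u_max_analytic = g * (ny - 2) ** 2 / (8 * nu)
print(profile.max(), u_max_analytic)
```

The steady profile is parabolic with its maximum near the analytic value g·H²/(8ν); this is exactly the Navier-Stokes-level behavior the paper contrasts with kinetic (discrete velocity method) solutions at high Knudsen number.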
Disease Surveillance on Complex Social Networks
Herrera, Jose L.; Srinivasan, Ravi; Brownstein, John S.; Galvani, Alison P.; Meyers, Lauren Ancel
2016-01-01
As infectious disease surveillance systems expand to include digital, crowd-sourced, and social network data, public health agencies are gaining unprecedented access to high-resolution data and have an opportunity to selectively monitor informative individuals. Contact networks, which are the webs of interaction through which diseases spread, determine whether and when individuals become infected, and thus who might serve as early and accurate surveillance sensors. Here, we evaluate three strategies for selecting sensors—sampling the most connected, random, and friends of random individuals—in three complex social networks—a simple scale-free network, an empirical Venezuelan college student network, and an empirical Montreal wireless hotspot usage network. Across five different surveillance goals—early and accurate detection of epidemic emergence and peak, and general situational awareness—we find that the optimal choice of sensors depends on the public health goal, the underlying network and the reproduction number of the disease (R0). For diseases with a low R0, the most connected individuals provide the earliest and most accurate information about both the onset and peak of an outbreak. However, identifying network hubs is often impractical, and they can be misleading if monitored for general situational awareness, if the underlying network has significant community structure, or if R0 is high or unknown. Taking a theoretical approach, we also derive the optimal surveillance system for early outbreak detection but find that real-world identification of such sensors would be nearly impossible. By contrast, the friends-of-random strategy offers a more practical and robust alternative. It can be readily implemented without prior knowledge of the network, and by identifying sensors with higher than average, but not the highest, epidemiological risk, it provides reasonably early and accurate information. PMID:27415615
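The friends-of-random strategy described above exploits the friendship paradox: a random neighbor of a random node tends to have higher degree than the node itself. A minimal sketch on an assumed scale-free (preferential-attachment) network; the graph generator and sizes are illustrative, not the paper's empirical networks.

```python
import random

random.seed(1)

def barabasi_albert(n, m):
    """Toy preferential-attachment graph via the repeated-nodes trick."""
    adj = {i: set() for i in range(n)}
    targets, repeated = list(range(m)), []
    for v in range(m, n):
        for t in set(targets):       # attach new node v to m targets
            adj[v].add(t)
            adj[t].add(v)
        repeated.extend(targets)     # nodes repeated in proportion to degree
        repeated.extend([v] * m)
        targets = random.sample(repeated, m)
    return adj

adj = barabasi_albert(2000, 3)
sampled = random.sample(list(adj), 200)                     # random individuals
friends = [random.choice(sorted(adj[v])) for v in sampled]  # one friend each

mean_deg = lambda vs: sum(len(adj[v]) for v in vs) / len(vs)
mean_rand, mean_friend = mean_deg(sampled), mean_deg(friends)
print(mean_rand, mean_friend)
```

The friends sample has a markedly higher mean degree than the uniform sample, which is why friends-of-random sensors sit at above-average epidemiological risk without requiring knowledge of the full network.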
NASA Astrophysics Data System (ADS)
Mitilineos, Stelios A.; Argyreas, Nick D.; Thomopoulos, Stelios C. A.
2009-05-01
A fusion-based localization technique for location-based services in indoor environments is introduced herein, based on ultrasound time-of-arrival measurements from multiple off-the-shelf range-estimating sensors which are used in a market-available localization system. In-situ field measurement results indicated that the respective off-the-shelf system was unable to estimate position in most cases, while the underlying sensors are of low quality and yield highly inaccurate range and position estimates. An extensive analysis is performed and a model of the sensor performance characteristics is established. A low-complexity but accurate sensor fusion and localization technique is then developed, which consists of evaluating multiple sensor measurements and selecting the one that is considered most accurate based on the underlying sensor model. Optimality, in the sense of a genie selecting the optimum sensor, is subsequently evaluated and compared to the proposed technique. The experimental results indicate that the proposed fusion method exhibits near-optimal performance and, albeit theoretically suboptimal, it largely overcomes most flaws of the underlying single-sensor system, resulting in a localization system of increased accuracy, robustness, and availability.
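The selection-style fusion described above (trust the measurement the sensor model ranks most accurate, and compare against a genie that always happens to pick the best one) can be sketched as follows. The per-sensor error model values are hypothetical, not the paper's fitted sensor characteristics.

```python
import numpy as np

rng = np.random.default_rng(7)
true_range = 5.0                    # metres; unknown to the system

# Hypothetical per-sensor error model (std dev, m) built from field data
sigma = np.array([0.50, 0.10, 0.25, 1.00])
meas = true_range + rng.normal(0.0, sigma)   # one reading per sensor

# Proposed fusion: select the sensor the model ranks most accurate
est_selected = meas[np.argmin(sigma)]

# Genie bound: an oracle picks whichever measurement happens to be best
est_genie = meas[np.argmin(np.abs(meas - true_range))]
print(est_selected, est_genie)
```

By construction the genie is at least as good on every trial; the paper's empirical finding is that model-based selection comes close to this bound in practice.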
NASA Technical Reports Server (NTRS)
Maccormack, R. W.
1978-01-01
The calculation of flow fields past aircraft configurations at flight Reynolds numbers is considered. Progress in devising accurate and efficient numerical methods, in understanding and modeling the physics of turbulence, and in developing reliable and powerful computer hardware is discussed. Emphasis is placed on efficient solutions to the Navier-Stokes equations.
From Complex Data to Actionable Information: Institutional Research Supporting Enrollment Management
ERIC Educational Resources Information Center
Anderson, Douglas K.; Milner, Bridgett J.; Foley, Chris J.
2008-01-01
Producing analyses that are accurate, timely, and simple is a constant challenge for institutional researchers. The stakes are high: when the analysis is incomplete, arrives too late, does not adequately address the question, or is simply too much to comprehend, decision makers fall back on anecdotal thinking or gut-level reactions that can lead…
Tangible Models and Haptic Representations Aid Learning of Molecular Biology Concepts
ERIC Educational Resources Information Center
Johannes, Kristen; Powers, Jacklyn; Couper, Lisa; Silberglitt, Matt; Davenport, Jodi
2016-01-01
Can novel 3D models help students develop a deeper understanding of core concepts in molecular biology? We adapted 3D molecular models, developed by scientists, for use in high school science classrooms. The models accurately represent the structural and functional properties of complex DNA and Virus molecules, and provide visual and haptic…
Kirk M. Stueve; Ian W. Housman; Patrick L. Zimmerman; Mark D. Nelson; Jeremy B. Webb; Charles H. Perry; Robert A. Chastain; Dale D. Gormanson; Chengquan Huang; Sean P. Healey; Warren B. Cohen
2011-01-01
Accurate landscape-scale maps of forests and associated disturbances are critical to augment studies on biodiversity, ecosystem services, and the carbon cycle, especially in terms of understanding how the spatial and temporal complexities of damage sustained from disturbances influence forest structure and function. Vegetation change tracker (VCT) is a highly automated...
Visual Signaling in a High-Search Virtual World-Based Assessment: A SAVE Science Design Study
ERIC Educational Resources Information Center
Nelson, Brian C.; Kim, Younsu; Slack, Kent
2016-01-01
Education policy in the United States centers K-12 assessment efforts primarily on standardized tests. However, such tests may not provide an accurate and reliable representation of what students understand about the complexity of science. Research indicates that students tend to pass science tests, even if they do not understand the concepts…
Fürnstahl, Philipp; Vlachopoulos, Lazaros; Schweizer, Andreas; Fucentese, Sandro F; Koch, Peter P
2015-08-01
The accurate reduction of tibial plateau malunions can be challenging without guidance. In this work, we report on a novel technique that combines 3-dimensional computer-assisted planning with patient-specific surgical guides for improving the reliability and accuracy of complex intraarticular corrective osteotomies. Preoperative planning based on 3-dimensional bone models was performed to simulate fragment mobilization and reduction in 3 cases. Surgical implementation of the preoperative plan using patient-specific cutting and reduction guides was evaluated; benefits and limitations of the approach were identified and discussed. The preliminary results are encouraging and show that complex, intraarticular corrective osteotomies can be accurately performed with this technique. For selected patients with complex malunions around the tibial plateau, this method might be an attractive option, with the potential to facilitate achieving the most accurate correction possible.
Rodriguez-Rivas, Juan; Marsili, Simone; Juan, David; Valencia, Alfonso
2016-01-01
Protein–protein interactions are fundamental for the proper functioning of the cell. As a result, protein interaction surfaces are subject to strong evolutionary constraints. Recent developments have shown that residue coevolution provides accurate predictions of heterodimeric protein interfaces from sequence information. So far these approaches have been limited to the analysis of families of prokaryotic complexes for which large multiple sequence alignments of homologous sequences can be compiled. We explore the hypothesis that coevolution points to structurally conserved contacts at protein–protein interfaces, which can be reliably projected to homologous complexes with distantly related sequences. We introduce a domain-centered protocol to study the interplay between residue coevolution and structural conservation of protein–protein interfaces. We show that sequence-based coevolutionary analysis systematically identifies residue contacts at prokaryotic interfaces that are structurally conserved at the interface of their eukaryotic counterparts. In turn, this allows the prediction of conserved contacts at eukaryotic protein–protein interfaces with high confidence using solely mutational patterns extracted from prokaryotic genomes. Even in the context of high divergence in sequence (the twilight zone), where standard homology modeling of protein complexes is unreliable, our approach provides sequence-based accurate information about specific details of protein interactions at the residue level. Selected examples of the application of prokaryotic coevolutionary analysis to the prediction of eukaryotic interfaces further illustrate the potential of this approach. PMID:27965389
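A toy illustration of covariation-based contact detection: in the assumed alignment below, columns 0 and 2 mutate in a correlated way while the others do not, and mutual information over column pairs singles them out. Real coevolutionary methods use global statistical models (e.g., direct-coupling analysis) on large alignments; this sketch only shows the covariation signal itself.

```python
import itertools
import math
from collections import Counter

# Hypothetical MSA: columns 0 and 2 covary (A<->E paired with E<->K),
# column 1 is conserved, column 3 varies independently.
msa = ["ALEG", "ALEC", "ELKG", "ELKC", "ALEC", "ELKG", "ALEG", "ELKC"]

def mutual_information(col_i, col_j):
    """Shannon mutual information (bits) between two alignment columns."""
    n = len(msa)
    pi = Counter(s[col_i] for s in msa)
    pj = Counter(s[col_j] for s in msa)
    pij = Counter((s[col_i], s[col_j]) for s in msa)
    return sum((c / n) * math.log2((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pij.items())

pairs = list(itertools.combinations(range(4), 2))
mi = {p: mutual_information(*p) for p in pairs}
best_pair = max(mi, key=mi.get)
print(best_pair, mi[best_pair])
```

The top-scoring pair is the covarying one, which in the structural interpretation of the paper corresponds to a candidate residue-residue contact at the interface.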
Accurate in situ measurement of complex refractive index and particle size in intralipid emulsions
NASA Astrophysics Data System (ADS)
Dong, Miao L.; Goyal, Kashika G.; Worth, Bradley W.; Makkar, Sorab S.; Calhoun, William R.; Bali, Lalit M.; Bali, Samir
2013-08-01
We demonstrate a first accurate measurement of the complex refractive index in an intralipid emulsion, and thereby extract the average scatterer particle size using standard Mie scattering calculations. Our method is based on measurement and modeling of the reflectance of a divergent laser beam from the sample surface. In the absence of any definitive reference data for the complex refractive index or particle size in highly turbid intralipid emulsions, we base our claim of accuracy on the fact that our work offers several critically important advantages over previously reported attempts. First, our measurements are in situ in the sense that they do not require any sample dilution, thus eliminating dilution errors. Second, our theoretical model does not employ any fitting parameters other than the two quantities we seek to determine, i.e., the real and imaginary parts of the refractive index, thus eliminating ambiguities arising from multiple extraneous fitting parameters. Third, we fit the entire reflectance-versus-incident-angle data curve instead of focusing on only the critical-angle region, which is just a small subset of the data. Finally, despite our use of highly scattering opaque samples, our experiment uniquely satisfies a key assumption behind the Mie scattering formalism, namely, that no multiple scattering occurs. Further proof of our method's validity is given by the fact that our measured particle size finds good agreement with the value obtained by dynamic light scattering.
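The two-parameter fit of the whole reflectance-versus-angle curve can be illustrated with a toy Fresnel inversion. This is not the authors' divergent-beam model: a plane-wave, s-polarized reflection from the dense side of a glass interface (n = 1.515), a synthetic noiseless curve, and a brute-force grid search are all simplifying assumptions.

```python
import numpy as np

def reflectance_s(n_rel, theta):
    """s-polarized Fresnel power reflectance for a (complex) relative index."""
    cos_t = np.cos(theta)
    root = np.sqrt(n_rel ** 2 - np.sin(theta) ** 2 + 0j)
    r = (cos_t - root) / (cos_t + root)
    return np.abs(r) ** 2

theta = np.linspace(0.5, 1.5, 400)           # internal incidence angles (rad)
n_true = 1.352 + 0.004j                      # assumed sample index
data = reflectance_s(n_true / 1.515, theta)  # "measured" curve, glass side

# Two free parameters only: grid search over Re(n) and Im(n), fitting the
# entire curve rather than just the critical-angle region
re_grid = np.arange(1.30, 1.40, 0.001)
im_grid = np.arange(0.0, 0.01, 0.001)
best, best_err = None, np.inf
for nr in re_grid:
    for ni in im_grid:
        model = reflectance_s((nr + 1j * ni) / 1.515, theta)
        err = np.sum((model - data) ** 2)
        if err < best_err:
            best, best_err = nr + 1j * ni, err
print(best)
```

Absorption (the imaginary part) is constrained mainly by the reflectance dip beyond the critical angle, which is why fitting the full curve recovers both parameters.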
Gaze entropy reflects surgical task load.
Di Stasi, Leandro L; Diaz-Piedra, Carolina; Rieiro, Héctor; Sánchez Carrión, José M; Martin Berrido, Mercedes; Olivares, Gonzalo; Catena, Andrés
2016-11-01
Task (over-)load imposed on surgeons is a main contributing factor to surgical errors. Recent research has shown that gaze metrics represent a valid and objective index to assess operator task load in non-surgical scenarios. Thus, gaze metrics have the potential to improve workplace safety by providing accurate measurements of task load variations. However, the direct relationship between gaze metrics and surgical task load has not yet been investigated. We studied the effects of surgical task complexity on the gaze metrics of surgical trainees. We recorded the eye movements of 18 surgical residents, using a mobile eye-tracker system, during the performance of three high-fidelity virtual simulations of laparoscopic exercises of increasing complexity: the Clip Applying exercise, the Cutting Big exercise, and the Translocation of Objects exercise. We also measured performance accuracy and subjective ratings of complexity. Gaze entropy and velocity increased linearly with task complexity: the visual exploration pattern became less stereotyped (i.e., more random) and faster during the more complex exercises. Residents performed the Clip Applying and Cutting Big exercises better than the Translocation of Objects exercise, and their perceived task complexity differed accordingly. Our data show that gaze metrics are a valid and reliable surgical task load index. These findings have the potential to improve patient safety by providing accurate measurements of surgeon task (over-)load and might provide future indices to assess residents' learning curves, independently of expensive virtual simulators or time-consuming expert evaluation.
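Gaze entropy is commonly computed as the Shannon entropy of the fixation distribution over areas of interest (AOIs): a stereotyped scan concentrates fixations on few AOIs (low entropy), while a more random scan spreads them out (high entropy). A minimal sketch with hypothetical AOI sequences; the specific sequences are assumptions, not the study's recordings.

```python
import math
from collections import Counter

def gaze_entropy(fixations):
    """Shannon entropy (bits) of the fixation distribution over AOIs."""
    counts = Counter(fixations)
    n = len(fixations)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

simple_task = ["A"] * 8 + ["B"] * 2              # stereotyped scan
complex_task = ["A", "B", "C", "D", "E"] * 2     # dispersed, random-like scan
print(gaze_entropy(simple_task), gaze_entropy(complex_task))
```

The dispersed sequence reaches the maximum entropy for five AOIs (log2 5 ≈ 2.32 bits), mirroring the finding that entropy rises with task complexity.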
NASA Technical Reports Server (NTRS)
Leitold, Veronika; Keller, Michael; Morton, Douglas C.; Cook, Bruce D.; Shimabukuro, Yosio E.
2015-01-01
Background: Carbon stocks and fluxes in tropical forests remain large sources of uncertainty in the global carbon budget. Airborne lidar remote sensing is a powerful tool for estimating aboveground biomass, provided that lidar measurements penetrate dense forest vegetation to generate accurate estimates of surface topography and canopy heights. Tropical forest areas with complex topography present a challenge for lidar remote sensing. Results: We compared digital terrain models (DTM) derived from airborne lidar data from a mountainous region of the Atlantic Forest in Brazil to 35 ground control points measured with survey grade GNSS receivers. The terrain model generated from full-density (approx. 20 returns/sq m) data was highly accurate (mean signed error of 0.19 +/-0.97 m), while those derived from reduced-density datasets (8/sq m, 4/sq m, 2/sq m and 1/sq m) were increasingly less accurate. Canopy heights calculated from reduced-density lidar data declined as data density decreased due to the inability to accurately model the terrain surface. For lidar return densities below 4/sq m, the bias in height estimates translated into errors of 80-125 Mg/ha in predicted aboveground biomass. Conclusions: Given the growing emphasis on the use of airborne lidar for forest management, carbon monitoring, and conservation efforts, the results of this study highlight the importance of careful survey planning and consistent sampling for accurate quantification of aboveground biomass stocks and dynamics. Approaches that rely primarily on canopy height to estimate aboveground biomass are sensitive to DTM errors from variability in lidar sampling density.
Zhao, Huaying; Fu, Yan; Glasser, Carla; Andrade Alba, Eric J; Mayer, Mark L; Patterson, George; Schuck, Peter
2016-01-01
The dynamic assembly of multi-protein complexes underlies fundamental processes in cell biology. A mechanistic understanding of assemblies requires accurate measurement of their stoichiometry, affinity and cooperativity, and frequently consideration of multiple co-existing complexes. Sedimentation velocity analytical ultracentrifugation equipped with fluorescence detection (FDS-SV) allows the characterization of protein complexes free in solution with high size resolution, at concentrations in the nanomolar and picomolar range. Here, we extend the capabilities of FDS-SV with a single excitation wavelength from single-component to multi-component detection using photoswitchable fluorescent proteins (psFPs). We exploit their characteristic quantum yield of photo-switching to imprint spatio-temporal modulations onto the sedimentation signal that reveal different psFP-tagged protein components in the mixture. This novel approach facilitates studies of heterogeneous multi-protein complexes at orders of magnitude lower concentrations and for higher-affinity systems than previously possible. Using this technique we studied high-affinity interactions between the amino-terminal domains of GluA2 and GluA3 AMPA receptors. DOI: http://dx.doi.org/10.7554/eLife.17812.001 PMID:27436096
Hughes, Timothy J; Kandathil, Shaun M; Popelier, Paul L A
2015-02-05
As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments up to the hexadecapole moment, mapped to the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G(**), B3LYP/aug-cc-pVDZ, and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted within 1 kJ mol(-1), decreasing to 60-70% of test cases for the larger base-pair complexes. Models built on moments obtained at the B3LYP and M06-2X levels generally outperformed those at the HF level. For all systems the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol(-1).
Computational structure analysis of biomacromolecule complexes by interface geometry.
Mahdavi, Sedigheh; Salehzadeh-Yazdi, Ali; Mohades, Ali; Masoudi-Nejad, Ali
2013-12-01
The ability to analyze and compare protein-nucleic acid and protein-protein interaction interfaces is of critical importance for understanding the biological functions and essential processes occurring in cells. Since high-resolution three-dimensional (3D) structures of biomacromolecule complexes are available, computational characterization of interface geometry has become an important research topic in molecular biology. In this study, the interfaces of a set of 180 protein-nucleic acid and protein-protein complexes are computed to understand the principles of their interactions. The weighted Voronoi diagram of the atoms and the alpha complex provide an accurate description of the interface atoms. Our method is implemented both in the presence and in the absence of water molecules. A comparison among the three types of interaction interfaces shows that RNA-protein complexes have the largest interfaces. The results show a high correlation coefficient between our method and the PISA server, in the presence and absence of water molecules, for both the Voronoi model and the traditional model based on solvent accessibility, as well as high validation parameters in comparison with the classical model. Copyright © 2013 Elsevier Ltd. All rights reserved.
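The abstract's interface definition rests on a weighted Voronoi/alpha-complex construction; a far simpler distance-cutoff criterion is often used as a first approximation to the same idea. The sketch below is that cruder proxy, not the authors' method: `cutoff` and the toy coordinates are assumptions for illustration.

```python
import numpy as np

def interface_atoms(chain_a, chain_b, cutoff=5.0):
    """Indices of atoms in each chain lying within `cutoff` of any atom of
    the partner chain (a distance-based stand-in for the weighted-Voronoi /
    alpha-complex interface construction)."""
    d = np.linalg.norm(chain_a[:, None, :] - chain_b[None, :, :], axis=-1)
    return (np.flatnonzero((d < cutoff).any(axis=1)),
            np.flatnonzero((d < cutoff).any(axis=0)))

# two toy "chains": atom 0 of A sits 3 units from atom 1 of B, others are far
A = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0]])
B = np.array([[0.0, 40.0, 0.0], [3.0, 0.0, 0.0]])
ia, ib = interface_atoms(A, B)
```

The Voronoi formulation replaces the fixed cutoff with atom-size-aware cell adjacency, which is what makes the paper's interface areas more robust.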
Baghaie, Ahmadreza; Pahlavan Tafti, Ahmad; Owen, Heather A; D'Souza, Roshan M; Yu, Zeyun
2017-01-01
The Scanning Electron Microscope (SEM), one of the major research and industrial instruments for imaging micro-scale samples and surfaces, has attracted extensive attention since its emergence. However, the acquired micrographs remain two-dimensional (2D). In the current work a novel and highly accurate approach is proposed to recover the hidden third dimension by using multi-view image acquisition of the microscopic samples combined with pre/post-processing steps, including sparse feature-based stereo rectification, nonlocal-based optical flow estimation for dense matching, and finally depth estimation. Employing the proposed approach, three-dimensional (3D) reconstructions of highly complex microscopic samples were achieved to facilitate the interpretation of the topology and geometry of surface/shape attributes of the samples. As a byproduct of the proposed approach, high-definition 3D printed models of the samples can be generated as a tangible means of physical understanding. Extensive comparisons with the state-of-the-art reveal the strength and superiority of the proposed method in uncovering the details of highly complex microscopic samples.
Performance of Improved High-Order Filter Schemes for Turbulent Flows with Shocks
NASA Technical Reports Server (NTRS)
Kotov, Dmitry Vladimirovich; Yee, Helen M C.
2013-01-01
The performance of the filter scheme with an improved dissipation control parameter has been demonstrated for different flow types. The scheme with a locally varying control parameter is shown to obtain more accurate results than its counterparts with a global or constant value. At the same time, no additional tuning is needed to achieve high accuracy of the method when using the local technique. However, further improvement of the method might be needed for even more complex and/or extreme flows.
Complex Chemical Reaction Networks from Heuristics-Aided Quantum Chemistry.
Rappoport, Dmitrij; Galvin, Cooper J; Zubarev, Dmitry Yu; Aspuru-Guzik, Alán
2014-03-11
While structures and reactivities of many small molecules can be computed efficiently and accurately using quantum chemical methods, heuristic approaches remain essential for modeling complex structures and large-scale chemical systems. Here, we present a heuristics-aided quantum chemical methodology applicable to complex chemical reaction networks such as those arising in cell metabolism and prebiotic chemistry. Chemical heuristics offer an expedient way of traversing high-dimensional reactive potential energy surfaces and are combined here with quantum chemical structure optimizations, which yield the structures and energies of the reaction intermediates and products. Application of heuristics-aided quantum chemical methodology to the formose reaction reproduces the experimentally observed reaction products, major reaction pathways, and autocatalytic cycles.
NASA Astrophysics Data System (ADS)
Owers, Christopher J.; Rogers, Kerrylee; Woodroffe, Colin D.
2018-05-01
Above-ground biomass represents a small yet significant contributor to carbon storage in coastal wetlands. Despite this, above-ground biomass is often poorly quantified, particularly in areas where vegetation structure is complex. Traditional methods for providing accurate estimates involve harvesting vegetation to develop mangrove allometric equations and to quantify saltmarsh biomass in quadrats. However, broad-scale application of these methods may not capture structural variability in vegetation, resulting in a loss of detail and estimates with considerable uncertainty. Terrestrial laser scanning (TLS) collects high-resolution three-dimensional point clouds capable of providing detailed structural morphology of vegetation. This study demonstrates that TLS is a suitable non-destructive method for estimating biomass of structurally complex coastal wetland vegetation. We compare volumetric modelling techniques, 3-D surface reconstruction and rasterised volume, with point cloud elevation histogram modelling to estimate biomass. Our results show that current volumetric modelling approaches for estimating TLS-derived biomass are comparable to traditional mangrove allometrics and saltmarsh harvesting. However, volumetric modelling approaches oversimplify vegetation structure by under-utilising the large amount of structural information provided by the point cloud. The point cloud elevation histogram model presented in this study, as an alternative to volumetric modelling, utilises all of the information within the point cloud, as opposed to sub-sampling based on specific criteria. This method is simple but highly effective for both mangrove (r2 = 0.95) and saltmarsh (r2 > 0.92) vegetation. Our results provide evidence that application of TLS in coastal wetlands is an effective non-destructive method to accurately quantify biomass for structurally complex vegetation.
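The elevation-histogram idea reduces each plot's point cloud to a fixed-length vector of height-bin fractions, which can then be regressed against harvested biomass. A minimal sketch, assuming synthetic clouds, arbitrary bin counts, and a linear calibration model (the paper's actual regression form is not specified here):

```python
import numpy as np

def elevation_histogram_features(points, n_bins=20, z_max=3.0):
    """Bin point-cloud returns by height into a fixed-length histogram,
    normalised so plots of different point density are comparable."""
    z = points[:, 2]                        # elevation column of an (N, 3) cloud
    hist, _ = np.histogram(z, bins=n_bins, range=(0.0, z_max))
    return hist / max(hist.sum(), 1)        # fraction of returns per height bin

# toy calibration: biomass regressed on histogram features from harvest plots
rng = np.random.default_rng(1)
clouds = [rng.uniform(0, 3, size=(500, 3)) for _ in range(10)]
X = np.array([elevation_histogram_features(c) for c in clouds])
biomass = X @ rng.uniform(0, 5, size=20)            # synthetic response
coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)  # fit linear model
pred = X @ coef
```

Because every return contributes to some height bin, no structural information is discarded by sub-sampling, which is the stated advantage over volumetric models.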
Accurate modeling and evaluation of microstructures in complex materials
NASA Astrophysics Data System (ADS)
Tahmasebi, Pejman
2018-02-01
Accurate characterization of heterogeneous materials is of great importance for different fields of science and engineering. Such a goal can be achieved through imaging. Acquiring three- or two-dimensional images under different conditions is not, however, always plausible. On the other hand, accurate characterization of complex and multiphase materials requires various digital images (I) under different conditions. An ensemble method is presented that can take a single image (or a set of images) and stochastically produce several similar models of the given disordered material. The method is based on successive calculation of a conditional probability by which the initial stochastic models are produced. Then, a graph formulation is utilized to remove unrealistic structures. For images with highly connected microstructure and long-range features, a distance transform function is considered, which results in a new, more informative image. Reproduction of the image is also considered through a histogram matching approach in an iterative framework. Such an iterative algorithm avoids the reproduction of unrealistic structures. Furthermore, a multiscale approach, based on pyramid representation of the large images, is presented that can produce materials with millions of pixels in a matter of seconds. Finally, the nonstationary systems—those for which the distribution of data varies spatially—are studied using two different methods. The method is tested on several complex and large examples of microstructures. The produced results are all in excellent agreement with the utilized images, and the similarities are quantified using various correlation functions.
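One ingredient named above, histogram matching, remaps a reconstructed image's grey levels so its intensity distribution matches the reference image's. A self-contained sketch of the standard CDF-mapping form (the paper's iterative embedding of it is not reproduced here):

```python
import numpy as np

def match_histogram(image, reference):
    """Remap grey levels of `image` so its histogram matches `reference`."""
    src, src_idx, src_counts = np.unique(
        image.ravel(), return_inverse=True, return_counts=True)
    ref, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / image.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # map each source quantile to the reference grey level at that quantile
    mapped = np.interp(src_cdf, ref_cdf, ref)
    return mapped[src_idx].reshape(image.shape)

img = np.arange(16, dtype=float).reshape(4, 4)           # toy grey-level image
ref = np.sqrt(np.arange(16, dtype=float)).reshape(4, 4)  # target distribution
matched = match_histogram(img, ref)
```

After matching, the output carries the reference image's grey-level statistics while preserving the spatial ordering of the input.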
Large eddy simulation modeling of particle-laden flows in complex terrain
NASA Astrophysics Data System (ADS)
Salesky, S.; Giometto, M. G.; Chamecki, M.; Lehning, M.; Parlange, M. B.
2017-12-01
The transport, deposition, and erosion of heavy particles over complex terrain in the atmospheric boundary layer is an important process for hydrology, air quality forecasting, biology, and geomorphology. However, in situ observations can be challenging in complex terrain due to spatial heterogeneity. Furthermore, there is a need to develop numerical tools that can accurately represent the physics of these multiphase flows over complex surfaces. We present a new numerical approach to accurately model the transport and deposition of heavy particles in complex terrain using large eddy simulation (LES). Particle transport is represented through solution of the advection-diffusion equation including terms that represent gravitational settling and inertia. The particle conservation equation is discretized in a cut-cell finite volume framework in order to accurately enforce mass conservation. Simulation results will be validated with experimental data, and numerical considerations required to enforce boundary conditions at the surface will be discussed. Applications will be presented in the context of snow deposition and transport, as well as urban dispersion.
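The conservation property that the cut-cell finite-volume discretization enforces can be illustrated on a drastically simplified 1-D column: first-order upwind transport with a settling velocity, where flux through the bottom face becomes deposited mass. This is a toy sketch, not the authors' LES scheme; the grid, velocities, and time step are illustrative assumptions.

```python
import numpy as np

def advect_settle(c, w, ws, dz, dt, nsteps):
    """First-order upwind finite-volume update of particle concentration in
    a 1-D column (cell 0 at the top); w is the fluid velocity and ws the
    gravitational settling velocity, both positive downward. Mass leaving
    the bottom face is accumulated as deposition."""
    c = c.copy()
    v = w + ws                          # net downward transport velocity
    deposited = 0.0
    for _ in range(nsteps):
        flux = v * c                    # flux through each cell's bottom face
        c[1:] += dt / dz * (flux[:-1] - flux[1:])
        c[0] -= dt / dz * flux[0]       # no inflow through the top
        deposited += dt * flux[-1]      # mass deposited at the surface
    return c, deposited

c0 = np.ones(10)                        # uniform initial concentration
c, dep = advect_settle(c0, w=0.5, ws=0.5, dz=1.0, dt=0.1, nsteps=20)
```

The key bookkeeping check is that column mass plus deposited mass is conserved exactly, which is what a finite-volume flux formulation buys over a non-conservative discretization.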
A drone detection with aircraft classification based on a camera array
NASA Astrophysics Data System (ADS)
Liu, Hao; Qu, Fangchao; Liu, Yingjian; Zhao, Wei; Chen, Yitong
2018-03-01
In recent years, the rapid popularity of drones has led many people to operate them, bringing a range of security issues to sensitive areas such as airports and military sites. Fine-grained classification that provides fast and accurate detection of different drone models is an important way to address these problems. The main challenges of fine-grained classification are that: (1) there are various types of drones, and the models are complex and diverse; (2) recognition must be both fast and accurate, and existing methods are not efficient. In this paper, we propose a fine-grained drone detection system based on a high-resolution camera array. The system can quickly and accurately recognize drones at a fine-grained level based on the HD camera array.
Implications of Weak Link Effects on Thermal Characteristics of Transition-Edge Sensors
NASA Technical Reports Server (NTRS)
Bailey, Catherine
2011-01-01
Weak link behavior in transition-edge sensor (TES) devices creates the need for a more careful characterization of a device's thermal characteristics through its transition. This is particularly true for small TESs where a small change in the measurement current results in large changes in temperature. A highly current-dependent transition shape makes accurate thermal characterization of the TES parameters through the transition challenging. To accurately interpret measurements, especially complex impedance, it is crucial to know the temperature-dependent thermal conductance, G(T), and heat capacity, C(T), at each point through the transition. We will present data illustrating these effects and discuss how we overcome the challenges that are present in accurately determining G and T from IV curves. We will also show how these weak link effects vary with TES size.
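A common way to extract G from IV-curve data, which may be what underlies the G determination discussed above, is to fit the bias power at the transition to a power-law bath coupling P = K(Tc^n - Tb^n) and then evaluate G = dP/dT = nKT^(n-1). The sketch below is that generic analysis with synthetic data, not the authors' pipeline; the values of K, n, and Tc are arbitrary illustrative assumptions.

```python
import numpy as np

def conductance(K, n, T):
    """Differential thermal conductance G(T) = dP/dT for a power-law
    bath coupling P = K (T^n - Tb^n)."""
    return n * K * T ** (n - 1)

def fit_power_law(P_J, T_bath, Tc, n_grid=np.linspace(2.0, 5.0, 301)):
    """Fit K and n to Joule power versus bath temperature by scanning n and
    solving the through-origin least-squares slope for each candidate."""
    best = None
    for n in n_grid:
        x = Tc ** n - T_bath ** n
        K = (x @ P_J) / (x @ x)            # least-squares slope through origin
        resid = ((P_J - K * x) ** 2).sum()
        if best is None or resid < best[2]:
            best = (K, n, resid)
    return best[0], best[1]

# synthetic data: K = 2.0, n = 3.2 (arbitrary units), Tc = 0.1 K
T_bath = np.linspace(0.05, 0.09, 9)
P_J = 2.0 * (0.1 ** 3.2 - T_bath ** 3.2)
K_fit, n_fit = fit_power_law(P_J, T_bath, Tc=0.1)
```

The weak-link point of the abstract is precisely that a single (K, n) pair may not suffice through the transition, so G would have to be tracked point by point.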
Ross, James D.; Cullen, D. Kacy; Harris, James P.; LaPlaca, Michelle C.; DeWeerth, Stephen P.
2015-01-01
Three-dimensional (3-D) image analysis techniques provide a powerful means to rapidly and accurately assess complex morphological and functional interactions between neural cells. Current software-based identification methods of neural cells generally fall into two applications: (1) segmentation of cell nuclei in high-density constructs or (2) tracing of cell neurites in single cell investigations. We have developed novel methodologies to permit the systematic identification of populations of neuronal somata possessing rich morphological detail and dense neurite arborization throughout thick tissue or 3-D in vitro constructs. The image analysis incorporates several novel automated features for the discrimination of neurites and somata by initially classifying features in 2-D and merging these classifications into 3-D objects; the 3-D reconstructions automatically identify and adjust for over and under segmentation errors. Additionally, the platform provides for software-assisted error corrections to further minimize error. These features attain very accurate cell boundary identifications to handle a wide range of morphological complexities. We validated these tools using confocal z-stacks from thick 3-D neural constructs where neuronal somata had varying degrees of neurite arborization and complexity, achieving an accuracy of ≥95%. We demonstrated the robustness of these algorithms in a more complex arena through the automated segmentation of neural cells in ex vivo brain slices. These novel methods surpass previous techniques by improving the robustness and accuracy by: (1) the ability to process neurites and somata, (2) bidirectional segmentation correction, and (3) validation via software-assisted user input. This 3-D image analysis platform provides valuable tools for the unbiased analysis of neural tissue or tissue surrogates within a 3-D context, appropriate for the study of multi-dimensional cell-cell and cell-extracellular matrix interactions. PMID:26257609
Gray, J W; Milner, P J; Edwards, E H; Daniels, J P; Khan, K S
2012-07-01
Point-of-care testing (POCT) is one of the fastest growing sectors of laboratory diagnostics. Most tests in routine use are haematology or biochemistry tests that are of low complexity. Microbiology POCT has been constrained by a lack of tests that are both accurate and of low complexity. We describe our experience of the practical issues around using more complex POCT for detection of Group B streptococci (GBS) in swabs from labouring women. We evaluated two tests for their feasibility in POCT: an optical immunoassay (Biostar OIA Strep B, Inverness Medical, Princeton, NJ) and a PCR (IDI-Strep B, Cepheid, Sunnyvale, CA), which have been categorised as being of moderate and high complexity, respectively. A total of 12 unqualified midwifery assistants (MAs) were trained to undertake testing on the delivery suite. A systematic approach to the introduction and management of POC testing was used. Modelling showed that the probability of test results being available within a clinically useful timescale was high. However, in the clinical setting, we found it impossible to maintain reliable availability of trained testers. Implementation of more complex POC testing is technically feasible, but it is expensive and may be difficult to achieve in a busy delivery suite.
NASA Technical Reports Server (NTRS)
Loh, Ching Y.; Jorgenson, Philip C. E.
2007-01-01
A time-accurate, upwind, finite volume method for computing compressible flows on unstructured grids is presented. The method is second order accurate in space and time and yields high resolution in the presence of discontinuities. For efficiency, the Roe approximate Riemann solver with an entropy correction is employed. In the basic Euler/Navier-Stokes scheme, many concepts of high order upwind schemes are adopted: the surface flux integrals are carefully treated, a Cauchy-Kowalewski time-stepping scheme is used in the time-marching stage, and a multidimensional limiter is applied in the reconstruction stage. However even with these up-to-date improvements, the basic upwind scheme is still plagued by the so-called "pathological behaviors," e.g., the carbuncle phenomenon, the expansion shock, etc. A solution to these limitations is presented which uses a very simple dissipation model while still preserving second order accuracy. This scheme is referred to as the enhanced time-accurate upwind (ETAU) scheme in this paper. The unstructured grid capability renders flexibility for use in complex geometry; and the present ETAU Euler/Navier-Stokes scheme is capable of handling a broad spectrum of flow regimes from high supersonic to subsonic at very low Mach number, appropriate for both CFD (computational fluid dynamics) and CAA (computational aeroacoustics). Numerous examples are included to demonstrate the robustness of the methods.
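The entropy correction mentioned above is easiest to see in a scalar analogue: the plain Roe flux can sustain a non-physical expansion shock at a sonic point, and a Harten-type fix bounds the wave speed away from zero to restore dissipation there. The sketch below uses Burgers' equation rather than the Euler system of the paper, and the `eps` threshold is an illustrative choice.

```python
import numpy as np

def roe_flux_burgers(uL, uR, eps=0.1):
    """Roe numerical flux for Burgers' equation f(u) = u^2/2, with a
    Harten-type entropy fix bounding the wave speed away from zero."""
    a = 0.5 * (uL + uR)                 # Roe-averaged wave speed
    absa = np.abs(a)
    # entropy fix: smooth parabola replaces |a| when |a| < eps
    absa = np.where(absa < eps, (a * a + eps * eps) / (2.0 * eps), absa)
    fL, fR = 0.5 * uL**2, 0.5 * uR**2
    return 0.5 * (fL + fR) - 0.5 * absa * (uR - uL)

f_shock = roe_flux_burgers(2.0, 0.0)    # right-moving shock: upwind flux f(uL)
f_fan = roe_flux_burgers(-1.0, 1.0)     # transonic rarefaction: a = 0
```

Without the fix, the transonic case would return the central flux 0.5 with zero dissipation and the expansion shock would persist; the fix lowers the flux below 0.5, breaking up the discontinuity.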
ERIC Educational Resources Information Center
Almeida, Renita A.; Dickinson, J. Edwin; Maybery, Murray T.; Badcock, Johanna C.; Badcock, David R.
2010-01-01
The Embedded Figures Test (EFT) requires detecting a shape within a complex background and individuals with autism or high Autism-spectrum Quotient (AQ) scores are faster and more accurate on this task than controls. This research aimed to uncover the visual processes producing this difference. Previously we developed a search task using radial…
High spatial resolution spectral unmixing for mapping ash species across a complex urban environment
Jennifer Pontius; Ryan P. Hanavan; Richard A. Hallett; Bruce D. Cook; Lawrence A. Corp
2017-01-01
Ash (Fraxinus L.) species are currently threatened by the emerald ash borer (EAB; Agrilus planipennis Fairmaire) across a growing area in the eastern US. Accurate mapping of ash species is required to monitor the host resource, predict EAB spread and better understand the short- and long-term effects of EAB on the ash resource...
Fully Flexible Docking of Medium Sized Ligand Libraries with RosettaLigand
DeLuca, Samuel; Khar, Karen; Meiler, Jens
2015-01-01
RosettaLigand has been successfully used to predict binding poses in protein-small molecule complexes. However, the RosettaLigand docking protocol is comparatively slow in identifying an initial starting pose for the small molecule (ligand) making it unfeasible for use in virtual High Throughput Screening (vHTS). To overcome this limitation, we developed a new sampling approach for placing the ligand in the protein binding site during the initial ‘low-resolution’ docking step. It combines the translational and rotational adjustments to the ligand pose in a single transformation step. The new algorithm is both more accurate and more time-efficient. The docking success rate is improved by 10–15% in a benchmark set of 43 protein/ligand complexes, reducing the number of models that typically need to be generated from 1000 to 150. The average time to generate a model is reduced from 50 seconds to 10 seconds. As a result we observe an effective 30-fold speed increase, making RosettaLigand appropriate for docking medium sized ligand libraries. We demonstrate that this improved initial placement of the ligand is critical for successful prediction of an accurate binding position in the ‘high-resolution’ full atom refinement step. PMID:26207742
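The key change described above is sampling translation and rotation of the ligand in a single combined transformation. A minimal sketch of such a move, assuming a uniform random rotation drawn from a normalised quaternion and a bounded random translation; this is an illustration of the idea, not RosettaLigand's actual code.

```python
import numpy as np

def random_rigid_transform(rng, max_trans=5.0):
    """Sample one combined rigid-body move: a uniform random rotation
    (from a normalised quaternion) plus a random translation."""
    q = rng.normal(size=4)
    q /= np.linalg.norm(q)
    w, x, y, z = q
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    t = rng.uniform(-max_trans, max_trans, size=3)
    return R, t

def apply_to_ligand(coords, R, t):
    """Rotate the ligand about its centroid and translate it in one step."""
    centroid = coords.mean(axis=0)
    return (coords - centroid) @ R.T + centroid + t

rng = np.random.default_rng(7)
R, t = random_rigid_transform(rng)
coords = rng.normal(size=(12, 3))       # toy "ligand" heavy-atom coordinates
moved = apply_to_ligand(coords, R, t)
```

Because rotation and translation are applied together, each low-resolution sample is one pose draw rather than two sequential adjustments, which is where the reported time savings come from.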
EBIC: an evolutionary-based parallel biclustering algorithm for pattern discovery.
Orzechowski, Patryk; Sipper, Moshe; Huang, Xiuzhen; Moore, Jason H
2018-05-22
Biclustering algorithms are commonly used for gene expression data analysis. However, accurate identification of meaningful structures is very challenging and state-of-the-art methods are incapable of discovering with high accuracy different patterns of high biological relevance. In this paper a novel biclustering algorithm based on evolutionary computation, a subfield of artificial intelligence (AI), is introduced. The method called EBIC aims to detect order-preserving patterns in complex data. EBIC is capable of discovering multiple complex patterns with unprecedented accuracy in real gene expression datasets. It is also one of the very few biclustering methods designed for parallel environments with multiple graphics processing units (GPUs). We demonstrate that EBIC greatly outperforms state-of-the-art biclustering methods, in terms of recovery and relevance, on both synthetic and genetic datasets. EBIC also yields results over 12 times faster than the most accurate reference algorithms. EBIC source code is available on GitHub at https://github.com/EpistasisLab/ebic. Correspondence and requests for materials should be addressed to P.O. (email: patryk.orzechowski@gmail.com) and J.H.M. (email: jhmoore@upenn.edu). Supplementary Data with results of analyses and additional information on the method is available at Bioinformatics online.
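The order-preserving patterns EBIC targets have a simple defining test: every selected row must rank the selected columns in the same order. The predicate below sketches that check on a toy expression matrix; it is an illustration of the pattern definition, not EBIC's evolutionary fitness function.

```python
import numpy as np

def is_order_preserving(data, rows, cols):
    """True if every selected row ranks the selected columns identically."""
    sub = data[np.ix_(rows, cols)]
    order = np.argsort(sub[0])          # column order induced by the first row
    return all(np.array_equal(np.argsort(r), order) for r in sub)

# toy expression matrix: rows 0 and 1 agree on columns 0-2, row 2 does not
expr = np.array([[1., 3., 5., 0.],
                 [2., 4., 9., 7.],
                 [9., 1., 2., 3.]])
```

In EBIC, candidate (rows, cols) subsets are evolved and scored, with checks of this kind parallelised across GPUs.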
NASA Astrophysics Data System (ADS)
Verechagin, V.; Kris, R.; Schwarzband, I.; Milstein, A.; Cohen, B.; Shkalim, A.; Levy, S.; Price, D.; Bal, E.
2018-03-01
Over the years, dispositioning of mask and wafer defects has become an increasingly challenging and time-consuming task. With design rules getting smaller, OPC getting more complex and scanner illumination taking on free-form shapes, the probability of a user performing accurate and repeatable classification of defects detected by mask inspection tools into pass/fail bins is decreasing. The critical challenges of mask defect metrology for small nodes (<30 nm) were reviewed in [1]. While Critical Dimension (CD) variation measurement is still the method of choice for determining a mask defect's future impact on the wafer, the high complexity of OPCs combined with high variability in pattern shapes poses a challenge for any automated CD variation measurement method. In this study, a novel approach for measurement generalization is presented. CD variation assessment performance is evaluated on multiple different complex-shape patterns and is benchmarked against an existing qualified measurement methodology.
Bordiga, Matteo; Travaglia, Fabiano; Meyrand, Mickael; German, J Bruce; Lebrilla, Carlito B; Coïsson, Jean Daniel; Arlorio, Marco; Barile, Daniela
2012-04-11
Over forty-five complex free oligosaccharides (of which several are novel) have been isolated and chemically characterized by gas chromatography and high resolution and high mass accuracy matrix-assisted laser desorption/ionization mass spectrometry (MALDI-FTICR MS) in red and white wines, Grignolino and Chardonnay, respectively. Oligosaccharides with a degree of polymerization between 3 and 14 were separated from simple monosaccharides and disaccharides by solid-phase extraction. The concentrations of free oligosaccharides were over 100 mg/L in both red and white wines. The free oligosaccharides-characterized for the first time in the present study-include hexose-oligosaccharides, xyloglucans, and arabinogalactans and may be the natural byproduct of the degradation of cell wall polysaccharides. The coupled gas chromatography and accurate mass spectrometry approach revealed an effective method to characterize and quantify complex functional oligosaccharides in both red and white wine.
Bordiga, Matteo; Travaglia, Fabiano; Meyrand, Mickael; German, J. Bruce; Lebrilla, Carlito B.; Coïsson, Jean Daniel; Arlorio, Marco; Barile, Daniela
2012-01-01
Over forty-five complex free oligosaccharides (of which several are novel) have been isolated and chemically characterized by gas chromatography and high resolution and high mass accuracy matrix-assisted laser desorption/ionization mass spectrometry (MALDI-FTICR MS) in red and white wines, Grignolino and Chardonnay, respectively. Oligosaccharides with a degree of polymerization between 3 and 14 were separated from simple monosaccharides and disaccharides by solid-phase extraction. The concentrations free oligosaccharides were over 100 mg/L in both red and white wines. The free oligosaccharides—characterized for the first time in the present study include hexose-oligosaccharides, xyloglucans and arabinogalactans, and may be the natural by-products of the degradation of cell wall polysaccharides. The coupled gas chromatography and accurate mass spectrometry approach revealed an effective method to characterize and quantify complex functional oligosaccharides in both red and white wine. PMID:22429017
Chen, Nan; Majda, Andrew J
2017-12-05
Solving the Fokker-Planck equation for high-dimensional complex dynamical systems is an important issue. Recently, the authors developed efficient statistically accurate algorithms for solving the Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures, which contain many strong non-Gaussian features such as intermittency and fat-tailed probability density functions (PDFs). The algorithms involve a hybrid strategy with a small number of samples [Formula: see text], where a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious Gaussian kernel density estimation in the remaining low-dimensional subspace. In this article, two effective strategies are developed and incorporated into these algorithms. The first strategy involves a judicious block decomposition of the conditional covariance matrix such that the evolutions of different blocks have no interactions, which allows an extremely efficient parallel computation due to the small size of each individual block. The second strategy exploits statistical symmetry for a further reduction of [Formula: see text] The resulting algorithms can efficiently solve the Fokker-Planck equation with strongly non-Gaussian PDFs in much higher dimensions even with orders in the millions and thus beat the curse of dimension. The algorithms are applied to a [Formula: see text]-dimensional stochastic coupled FitzHugh-Nagumo model for excitable media. An accurate recovery of both the transient and equilibrium non-Gaussian PDFs requires only [Formula: see text] samples! In addition, the block decomposition facilitates the algorithms to efficiently capture the distinct non-Gaussian features at different locations in a [Formula: see text]-dimensional two-layer inhomogeneous Lorenz 96 model, using only [Formula: see text] samples. Copyright © 2017 the Author(s). Published by PNAS.
NASA Astrophysics Data System (ADS)
Khalili, Ashkan
Wave propagation analysis in 1-D and 2-D composite structures is performed efficiently and accurately through the formulation of a User-Defined Element (UEL) based on the wavelet spectral finite element (WSFE) method. The WSFE method is based on the first-order shear deformation theory, which yields accurate results for wave motion at high frequencies. The wave equations are reduced to ordinary differential equations using Daubechies compactly supported, orthonormal, wavelet scaling functions for approximations in time and one spatial dimension. The 1-D and 2-D WSFE models are highly efficient computationally and provide a direct relationship between system input and output in the frequency domain. The UEL is formulated and implemented in Abaqus for wave propagation analysis in composite structures with structural complexities. Frequency domain formulation of WSFE leads to complex-valued parameters, which are decoupled into real and imaginary parts and presented to Abaqus as real values. The final solution is obtained by forming a complex value from the real number solutions given by Abaqus. Several numerical examples are presented here for 1-D and 2-D composite waveguides. Wave motions predicted by the developed UEL correlate very well with Abaqus simulations using shear flexible elements. The results also show that the UEL largely retains the computational efficiency of the WSFE method and extends its ability to model complex features. An enhanced cross-correlation method (ECCM) is developed in order to accurately predict damage location in plates. Three major modifications are proposed to the widely used cross-correlation method (CCM) to improve damage localization capabilities, namely the actuator-sensor configuration, the signal pre-processing method, and the signal post-processing method. The ECCM is investigated numerically (FEM simulation) and experimentally. Experimental investigations for damage detection employ a PZT transducer as actuator and a laser Doppler vibrometer as sensor.
Both numerical and experimental results show that the developed method is capable of damage localization with high precision. Further, ECCM is used to detect and localize debonding in a composite material skin-stiffener joint. The UEL is used to represent the healthy case whereas the damaged case is simulated using Abaqus. It is shown that the ECCM successfully detects the location of the debond in the skin-stiffener joint.
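The cross-correlation method that ECCM builds on localizes damage from arrival-time differences: the lag maximising the cross-correlation between a reference waveform and a received signal gives the delay. A basic sketch of that step follows; the sampling rate, toneburst parameters, and echo position are illustrative assumptions, and the ECCM's pre/post-processing modifications are not reproduced.

```python
import numpy as np

def estimate_delay(ref, sig, fs):
    """Estimate the arrival time of a known waveform in a received signal
    from the lag that maximises the cross-correlation (basic CCM step)."""
    xcorr = np.correlate(sig, ref, mode="full")
    lag = int(np.argmax(xcorr)) - (len(ref) - 1)
    return lag / fs

fs = 1_000_000                              # 1 MHz sampling rate (assumed)
t = np.arange(200) / fs
burst = np.sin(2 * np.pi * 100e3 * t) * np.hanning(len(t))  # 100 kHz toneburst
sig = np.zeros(2048)
sig[300:300 + len(burst)] += burst          # echo arriving 300 samples late
delay = estimate_delay(burst, sig, fs)
```

With delays from several actuator-sensor paths, the damage location follows by triangulation against the known wave speed in the plate.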
NASA Astrophysics Data System (ADS)
Guo, L.; Yin, Y.; Deng, M.; Guo, L.; Yan, J.
2017-12-01
At present, most magnetotelluric (MT) forward modelling and inversion codes are based on the finite difference method, but its structured mesh gridding is not well suited to conditions with arbitrary topography or complex tectonic structures. By contrast, the finite element method is more accurate in calculating complex and irregular 3-D regions and has lower requirements on function smoothness. However, the complexity of mesh gridding and limitations of computer capacity have restricted its application. COMSOL Multiphysics is a cross-platform finite element analysis, solver and multiphysics full-coupling simulation software package. It achieves highly accurate numerical simulations with high computational performance and outstanding multi-field bi-directional coupling analysis capability. In addition, its AC/DC and RF modules can be used to easily calculate the electromagnetic responses of complex geological structures. Using an adaptive unstructured grid, the calculation is much faster. In order to improve the discretization of the computational domain, we combine Matlab and COMSOL Multiphysics to establish a general procedure for calculating the MT responses of arbitrary resistivity models. The calculated responses include the surface electric and magnetic field components, impedance components, magnetic transfer functions and phase tensors. The reliability of this procedure is then verified by 1-D, 2-D, 3-D and anisotropic forward modeling tests. Finally, we establish a 3-D lithospheric resistivity model for the Proterozoic Wutai-Hengshan Mts. within the North China Craton by fitting the real MT data collected there. The reliability of the model is also verified by induction vectors and phase tensors. Our model shows more details and better resolution compared with the previously published 3-D model based on the finite difference method.
In conclusion, the COMSOL Multiphysics package is suitable for modeling 3-D lithospheric resistivity structures under complex tectonic deformation backgrounds, and could be a good complement to existing finite-difference inversion algorithms.
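The standard sanity check for such procedures is the analytic 1-D layered-earth response: the surface impedance from the recursive tanh formula, converted to apparent resistivity and phase. The sketch below is that textbook 1-D verification target, not the 3-D finite-element workflow itself; over a uniform half-space the result must be the half-space resistivity with a 45-degree phase.

```python
import numpy as np

def mt_1d(rho, h, freq):
    """Surface impedance of a 1-D layered earth via the recursive tanh
    formula; returns apparent resistivity (ohm-m) and phase (degrees)."""
    mu0 = 4e-7 * np.pi
    w = 2 * np.pi * freq
    Z = np.sqrt(1j * w * mu0 * rho[-1])           # bottom half-space impedance
    for rho_j, h_j in zip(rho[-2::-1], h[::-1]):  # propagate up layer by layer
        k = np.sqrt(1j * w * mu0 / rho_j)         # layer propagation constant
        Z0 = np.sqrt(1j * w * mu0 * rho_j)        # layer intrinsic impedance
        th = np.tanh(k * h_j)
        Z = Z0 * (Z + Z0 * th) / (Z0 + Z * th)
    rho_a = abs(Z) ** 2 / (w * mu0)
    phase = np.degrees(np.angle(Z))
    return rho_a, phase

rho_a, phase = mt_1d([100.0], [], 1.0)                  # uniform half-space
rho_a2, phase2 = mt_1d([100.0, 100.0], [500.0], 1.0)    # trivially layered
```

Matching a numerical code against these closed-form responses at several frequencies is the 1-D test referred to in the abstract.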
Cluster-Expansion Model for Complex Quinary Alloys: Application to Alnico Permanent Magnets
NASA Astrophysics Data System (ADS)
Nguyen, Manh Cuong; Zhou, Lin; Tang, Wei; Kramer, Matthew J.; Anderson, Iver E.; Wang, Cai-Zhuang; Ho, Kai-Ming
2017-11-01
An accurate and transferable cluster-expansion model for complex quinary alloys is developed. Lattice Monte Carlo simulation enabled by this cluster-expansion model is used to investigate the temperature-dependent atomic structure of alnico alloys, which are considered promising high-performance non-rare-earth permanent-magnet materials for high-temperature applications. The results of the Monte Carlo simulations are consistent with available experimental data and provide useful insights into phase decomposition, selection, and chemical ordering in alnico. The simulations also reveal a previously unrecognized D03 alloy phase. This phase is very rich in Ni and exhibits very weak magnetization. Manipulating the size and location of this phase provides a possible route to improving the magnetic properties of alnico, especially coercivity.
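A cluster expansion writes the configurational energy as a sum of effective cluster interactions, and lattice Monte Carlo then samples site occupations with Metropolis moves against that energy. The sketch below shows the two ingredients for a binary toy lattice with a single pair term; the real quinary model involves five species and many multi-site clusters, and the ring topology, coupling, and temperature here are illustrative assumptions.

```python
import numpy as np

def ce_energy(spins, J_pair, neighbors):
    """Cluster-expansion energy truncated at pair terms: effective pair
    interactions summed over a fixed neighbour list (toy binary lattice)."""
    E = 0.0
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            if j > i:                      # count each pair once
                E += J_pair * spins[i] * spins[j]
    return E

def metropolis_sweep(spins, J_pair, neighbors, T, rng):
    """One lattice Monte Carlo sweep of single-site flips at temperature T
    (energy units absorb Boltzmann's constant)."""
    for i in range(len(spins)):
        dE = -2.0 * J_pair * spins[i] * sum(spins[j] for j in neighbors[i])
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i] *= -1
    return spins

# toy setting: a 10-site ring with a single attractive pair interaction
N = 10
neighbors = [[(i - 1) % N, (i + 1) % N] for i in range(N)]
spins = np.ones(N)
E0 = ce_energy(spins, -1.0, neighbors)      # ordered (aligned) configuration
rng = np.random.default_rng(3)
spins = metropolis_sweep(spins, -1.0, neighbors, T=0.001, rng=rng)
```

At low temperature the ordered configuration is stable under sweeps, which is the mechanism by which such simulations resolve temperature-dependent chemical ordering.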
Mesoscale simulations of atmospheric flow and tracer transport in Phoenix, Arizona
NASA Astrophysics Data System (ADS)
Wang, Ge; Ostoja-Starzewski, Martin
2006-09-01
Large urban centres located within confining rugged or complex terrain can frequently experience episodes of high concentrations of lower-atmospheric pollution. Metropolitan Phoenix, Arizona (United States), is a good example, as the general population is occasionally subjected to high levels of lower-atmospheric ozone, carbon monoxide and suspended particulate matter. As a result of the dramatic and continuous increase in population, the accompanying environmental stresses, and the local atmospheric circulation that dominates the background flow, accurate simulation of mesoscale pollutant transport across Phoenix and similar urban areas is becoming increasingly important. This is particularly the case in an airshed, such as that of Phoenix, where the local atmospheric circulation is complicated by the complex terrain of the area.
Calculating High Speed Centrifugal Compressor Performance from Averaged Measurements
NASA Astrophysics Data System (ADS)
Lou, Fangyuan; Fleming, Ryan; Key, Nicole L.
2012-12-01
To improve the understanding of high performance centrifugal compressors found in modern aircraft engines, the aerodynamics through these machines must be experimentally studied. To accurately capture the complex flow phenomena through these devices, research facilities that can accurately simulate these flows are necessary. One such facility has been recently developed, and it is used in this paper to explore the effects of averaging total pressure and total temperature measurements to calculate compressor performance. Different averaging techniques (including area averaging, mass averaging, and work averaging) have been applied to the data. Results show that there is a negligible difference in both the calculated total pressure ratio and efficiency for the different techniques employed. However, the uncertainty in the performance parameters calculated with the different averaging techniques is significantly different, with area averaging providing the least uncertainty.
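The averaging techniques compared in the paper weight the local probe measurements differently: area averaging weights each sample by its share of the survey area, while mass averaging weights by the local mass flux. A minimal sketch (work averaging, which weights by ideal compression work, is omitted; variable names are ours, not the facility's) is:

```python
import numpy as np

def area_average(q, dA):
    # Weight each local sample q_i by its area element dA_i
    return np.sum(q * dA) / np.sum(dA)

def mass_average(q, rho, u, dA):
    # Weight each local sample by the local mass flux rho*u*dA through its element
    mdot = rho * u * dA
    return np.sum(q * mdot) / np.sum(mdot)
```

For a uniform flow the two averages coincide; with nonuniform velocity, mass averaging emphasizes the high-throughflow regions, which is why the choice matters for total pressure and temperature surveys even when the mean values look similar.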
Shen, Zhitao; Ma, Haitao; Zhang, Chunfang; Fu, Mingkai; Wu, Yanan; Bian, Wensheng; Cao, Jianwei
2017-01-01
Encouraged by recent advances in revealing significant effects of van der Waals wells on reaction dynamics, many people assume that van der Waals wells are inevitable in chemical reactions. Here we find that the weak long-range forces cause van der Waals saddles in the prototypical C(1D)+D2 complex-forming reaction that have very different dynamical effects from van der Waals wells at low collision energies. Accurate quantum dynamics calculations on our highly accurate ab initio potential energy surfaces with van der Waals saddles yield cross-sections in close agreement with crossed-beam experiments, whereas the same calculations on an earlier surface with van der Waals wells produce much smaller cross-sections at low energies. Further trajectory calculations reveal that the van der Waals saddle leads to a torsion then sideways insertion reaction mechanism, whereas the well suppresses reactivity. Quantum diffraction oscillations and sharp resonances are also predicted based on our ground- and excited-state potential energy surfaces. PMID:28094253
The Voronoi Implicit Interface Method for computing multiphase physics
Saye, Robert I.; Sethian, James A.
2011-01-01
We introduce a numerical framework, the Voronoi Implicit Interface Method for tracking multiple interacting and evolving regions (phases) whose motion is determined by complex physics (fluids, mechanics, elasticity, etc.), intricate jump conditions, internal constraints, and boundary conditions. The method works in two and three dimensions, handles tens of thousands of interfaces and separate phases, and easily and automatically handles multiple junctions, triple points, and quadruple points in two dimensions, as well as triple lines, etc., in higher dimensions. Topological changes occur naturally, with no surgery required. The method is first-order accurate at junction points/lines, and of arbitrarily high-order accuracy away from such degeneracies. The method uses a single function to describe all phases simultaneously, represented on a fixed Eulerian mesh. We test the method’s accuracy through convergence tests, and demonstrate its applications to geometric flows, accurate prediction of von Neumann’s law for multiphase curvature flow, and robustness under complex fluid flow with surface tension and large shearing forces. PMID:22106269
Elucidating Proteoform Families from Proteoform Intact-Mass and Lysine-Count Measurements
2016-01-01
Proteomics is presently dominated by the “bottom-up” strategy, in which proteins are enzymatically digested into peptides for mass spectrometric identification. Although this approach is highly effective at identifying large numbers of proteins present in complex samples, the digestion into peptides renders it impossible to identify the proteoforms from which they were derived. We present here a powerful new strategy for the identification of proteoforms and the elucidation of proteoform families (groups of related proteoforms) from the experimental determination of the accurate proteoform mass and number of lysine residues contained. Accurate proteoform masses are determined by standard LC–MS analysis of undigested protein mixtures in an Orbitrap mass spectrometer, and the lysine count is determined using the NeuCode isotopic tagging method. We demonstrate the approach in analysis of the yeast proteome, revealing 8637 unique proteoforms and 1178 proteoform families. The elucidation of proteoforms and proteoform families afforded here provides an unprecedented new perspective upon proteome complexity and dynamics. PMID:26941048
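The identification principle described above, matching each experimental (intact mass, NeuCode lysine count) pair against a database of candidate proteoforms, can be sketched as a simple constrained lookup (the tolerance and all names are illustrative, not the authors' software):

```python
def match_proteoform(obs_mass, obs_lys, candidates, tol=0.01):
    """Return the names of candidate proteoforms consistent with an observation.

    candidates: {name: (theoretical_mass_Da, lysine_count)}.
    A candidate matches if the lysine counts agree exactly and the masses
    agree within `tol` Da. Requiring both constraints to hold is what lets
    the lysine count disambiguate proteoforms of nearly identical mass.
    """
    return [name for name, (mass, lys) in candidates.items()
            if lys == obs_lys and abs(mass - obs_mass) <= tol]
```

Grouping the matched identifications by shared gene or base sequence would then yield the proteoform families discussed in the abstract.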
First results of the delayed fluorescence velocimetry as applied to diesel spray diagnostics
NASA Astrophysics Data System (ADS)
Megahed, M.; Roosen, P.
1993-08-01
One of the main parameters governing diesel spray formation is the fuel's velocity just beneath the nozzle. The high density of the injected liquid within the first few millimeters below the injector prohibits accurate measurement of this velocity. The liquid's velocity in this region has mainly been measured using intrusive methods, or calculated numerically without considering the complex flow fields in the nozzle. A new optical method based on laser-induced delayed fluorescence that allows measurement of the fuel's velocity close to the nozzle is reported. The results are accurate to about 14% and represent the velocities of heavy oils within the first 2-5 mm beneath the nozzle. The development of the velocity over the injection period showed a drastic deceleration of the fuel within the first 3 mm beneath the nozzle. This is assumed to be due to the complex interaction of cavitation in the injection hole and pressure waves in the injection system, which causes the start of atomization in the nozzle hole.
Gymnastic judges benefit from their own motor experience as gymnasts.
Pizzera, Alexandra
2012-12-01
Gymnastic judges have the difficult task of evaluating highly complex skills. My purpose in the current study was to examine evidence that judges use their sensorimotor experiences to enhance their perceptual judgments. In a video test, 58 judges rated 31 gymnasts performing a balance beam skill. I compared decision quality between judges who could perform the skill themselves on the balance beam (specific motor experience, SME) and those who could not. Those with SME showed better judging performance than those without SME. These data suggest that judges use their personal motor experience as information to accurately assess complex gymnastic skills.
Thermooptic two-mode interference device for reconfigurable quantum optic circuits
NASA Astrophysics Data System (ADS)
Sahu, Partha Pratim
2018-06-01
Reconfigurable large-scale integrated quantum optic circuits require compact components capable of accurate manipulation of quantum entanglement for quantum communication and information processing applications. Here, a thermooptic two-mode interference (TMI) coupler is introduced as a compact component for the generation of reconfigurable complex multi-photon quantum interference. Both theoretical and experimental approaches are used to demonstrate two-photon and four-photon quantum entanglement manipulated via the thermooptic phase change in the TMI region. Our results demonstrate complex multi-photon quantum interference with high fabrication tolerance and quantum fidelity in a smaller dimension than previous thermooptic Mach-Zehnder implementations.
A High Resolution Clinical PET with Breast and Whole Body Configurations
2005-04-01
The authors are with the University of Texas M. D. Anderson Cancer Center. HOTPET detectors are highly pixelated (crystal pitch 2.6 mm), requiring accurate placement of the detector modules.
Automated, per pixel Cloud Detection from High-Resolution VNIR Data
NASA Technical Reports Server (NTRS)
Varlyguin, Dmitry L.
2007-01-01
CASA is a fully automated software program for the per-pixel detection of clouds and cloud shadows from medium- (e.g., Landsat, SPOT, AWiFS) and high- (e.g., IKONOS, QuickBird, OrbView) resolution imagery without the use of thermal data. CASA is an object-based feature extraction program which utilizes a complex combination of spectral, spatial, and contextual information available in the imagery and the hierarchical self-learning logic for accurate detection of clouds and their shadows.
Higher order solution of the Euler equations on unstructured grids using quadratic reconstruction
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Frederickson, Paul O.
1990-01-01
High order accurate finite-volume schemes for solving the Euler equations of gasdynamics are developed. Central to the development of these methods are the construction of a k-exact reconstruction operator given cell-averaged quantities and the use of high order flux quadrature formulas. General polygonal control volumes (with curved boundary edges) are considered. The formulations presented make no explicit assumption as to complexity or convexity of control volumes. Numerical examples are presented for Ringleb flow to validate the methodology.
NASA Astrophysics Data System (ADS)
Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu; Zhu, Feng
2017-10-01
Accurate material parameters are critical for constructing high-biofidelity finite element (FE) models. However, it is hard to obtain brain tissue parameters accurately because of irregular geometry and uncertain boundary conditions. Considering the complexity of material testing and the uncertainty of the friction coefficient, a computational inverse method for identifying the viscoelastic material parameters of brain tissue is presented based on interval analysis. First, intervals are used to quantify the friction coefficient in the boundary condition. The inverse problem of material parameter identification under an uncertain friction coefficient is then transformed into two deterministic inverse problems. Finally, an intelligent optimization algorithm is used to solve the two deterministic inverse problems quickly and accurately, and the range of the material parameters can be acquired easily without requiring a large number of samples. The efficiency and convergence of this method are demonstrated by identifying the material parameters of the thalamus. The proposed method provides a potentially effective tool for building high-biofidelity human finite element models in the study of traffic accident injury.
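The two-step idea above, replacing the uncertain friction coefficient by its interval endpoints and then solving a deterministic inverse problem at each endpoint, can be sketched with a toy forward model. Here a brute-force grid search stands in for the paper's intelligent optimization algorithm, and the forward model and all names are illustrative assumptions:

```python
import numpy as np

def identify_interval(measured, forward, mu_interval, param_grid):
    # One deterministic inverse problem per endpoint of the friction-coefficient
    # interval; the two point estimates bound the identified material parameter.
    estimates = []
    for mu in mu_interval:
        errors = [abs(forward(p, mu) - measured) for p in param_grid]
        estimates.append(param_grid[int(np.argmin(errors))])
    return min(estimates), max(estimates)

# Toy forward model: the simulated response grows with the material parameter
# and weakly with the friction coefficient (purely illustrative)
forward = lambda p, mu: p * (1.0 + 0.1 * mu)
```

With a synthetic measurement generated at a "true" parameter inside the grid, the returned interval brackets the true value, which is the essential property of the interval-analysis formulation.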
Highly accurate potential energy surface for the He-H2 dimer
NASA Astrophysics Data System (ADS)
Bakr, Brandon W.; Smith, Daniel G. A.; Patkowski, Konrad
2013-10-01
A new highly accurate interaction potential is constructed for the He-H2 van der Waals complex. This potential is fitted to 1900 ab initio energies computed at the very large-basis coupled-cluster level and augmented by corrections for higher-order excitations (up to full configuration interaction level) and the diagonal Born-Oppenheimer correction. At the vibrationally averaged H-H bond length of 1.448736 bohrs, the well depth of our potential, 15.870 ± 0.065 K, is nearly 1 K larger than the most accurate previous studies have indicated. In addition to constructing our own three-dimensional potential in the van der Waals region, we present a reparameterization of the Boothroyd-Martin-Peterson potential surface [A. I. Boothroyd, P. G. Martin, and M. R. Peterson, J. Chem. Phys. 119, 3187 (2003)] that is suitable for all configurations of the triatomic system. Finally, we use the newly developed potentials to compute the properties of the lone bound states of 4He-H2 and 3He-H2 and the interaction second virial coefficient of the hydrogen-helium mixture.
NASA Astrophysics Data System (ADS)
Zhang, Jiang; Loo, Rachel R. Ogorzalek; Loo, Joseph A.
2017-09-01
Native mass spectrometry (MS) with electrospray ionization (ESI) has evolved into an invaluable tool for the characterization of intact native proteins and non-covalently bound protein complexes. Here we report the structural characterization by high-resolution native top-down MS of human thrombin and its complex with the Bock thrombin binding aptamer (TBA), a 15-nucleotide DNA with high specificity and affinity for thrombin. Accurate mass measurements revealed that the predominant form of native human α-thrombin contains a glycosylation mass of 2205 Da, corresponding to a sialylated symmetric biantennary oligosaccharide structure without fucosylation. Native MS showed that thrombin and TBA predominantly form a 1:1 complex under near-physiological conditions (pH 6.8, 200 mM NH4OAc), but the binding stoichiometry is influenced by the solution ionic strength. In 20 mM ammonium acetate solution, up to two TBAs were bound to thrombin, whereas increasing the solution ionic strength destabilized the thrombin-TBA complex and 1 M NH4OAc nearly completely dissociated the complex. This observation is consistent with the mediation of thrombin-aptamer binding through electrostatic interactions, and further consistent with the human thrombin structure, which contains two anion binding sites on the surface. Electron capture dissociation (ECD) top-down MS of the thrombin-TBA complex, performed with a high-resolution 15 Tesla Fourier transform ion cyclotron resonance (FTICR) mass spectrometer, showed the primary binding site to be at exosite I, located near the N-terminal sequence of the heavy chain, consistent with crystallographic data. High-resolution native top-down MS is complementary to traditional structural biology methods for structurally characterizing native proteins and protein-DNA complexes.
NASA Astrophysics Data System (ADS)
Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.
2014-12-01
Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated far away from fault zones, to seismic stations and remote areas. Reliable and efficient numerical simulations therefore require provably stable and high-order accurate numerical methods. We present a numerical method for: (a) enforcing nonlinear friction laws in a consistent and provably stable manner, suitable for efficient explicit time integration; (b) dynamic propagation of earthquake ruptures along rough faults; (c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first-order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th-order accurate in the interior and 3rd-order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates, we prove numerical stability. Time stepping is performed with a 4th-order accurate explicit low-storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults, revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.
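The provable stability rests on the summation-by-parts (SBP) property: with a diagonal norm matrix H, the matrix Q = HD satisfies Q + Qᵀ = B = diag(−1, 0, …, 0, 1), which mimics integration by parts at the discrete level and is what makes the energy estimates go through. A minimal second-order example (the paper's operators are 6th-order in the interior and 3rd-order at the boundaries; this simpler operator just illustrates the property) is:

```python
import numpy as np

def sbp_operators(n, h):
    # Second-order SBP first-derivative operator: central differences in the
    # interior, one-sided differences in the two boundary rows.
    D = np.zeros((n, n))
    D[0, :2] = [-1.0, 1.0]
    D[-1, -2:] = [-1.0, 1.0]
    for i in range(1, n - 1):
        D[i, i - 1], D[i, i + 1] = -0.5, 0.5
    D /= h
    # Diagonal norm (discrete quadrature): trapezoidal weights
    H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])
    return D, H
```

Checking that Q + Qᵀ reduces to the boundary matrix B, and that D differentiates linear functions exactly, verifies the two defining properties of an SBP pair.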
NASA Astrophysics Data System (ADS)
Eaton, M.; Pearson, M.; Lee, W.; Pullin, R.
2015-07-01
The ability to accurately locate damage in any given structure is a highly desirable attribute for an effective structural health monitoring (SHM) system and could help to reduce operating costs and improve safety. This becomes a far greater challenge in complex geometries and materials, such as modern composite airframes. The poor translation of promising laboratory-based SHM demonstrators to industrial environments forms a barrier to commercial uptake of the technology. The acoustic emission (AE) technique is a passive NDT method that detects the elastic stress waves released by the growth of damage. It offers very sensitive damage detection, using a sparse array of sensors to detect and globally locate damage within a structure. However, its application to complex structures commonly yields poor accuracy due to anisotropic wave propagation and the interruption of wave propagation by structural features such as holes and thickness changes. This work adopts an empirical mapping technique for AE location, known as Delta T Mapping, which uses experimental training data to account for such structural complexities. The technique is applied to a complex-geometry composite aerospace structure undergoing certification testing. The component consists of a carbon fibre composite tube with varying wall thickness and multiple holes, loaded under bending. The damage location was validated using X-ray CT scanning, and the Delta T Mapping technique was shown to improve location accuracy compared with commercial algorithms. The onset and progression of damage were monitored throughout the test and used to inform future design iterations.
Huang, Wei; Ravikumar, Krishnakumar M; Parisien, Marc; Yang, Sichun
2016-12-01
Structural determination of protein-protein complexes such as multidomain nuclear receptors has been challenging for high-resolution structural techniques. Here, we present a combined use of multiple biophysical methods, termed iSPOT, an integration of shape information from small-angle X-ray scattering (SAXS), protection factors probed by hydroxyl radical footprinting, and a large series of computationally docked conformations from rigid-body or molecular dynamics (MD) simulations. Specifically tested on two model systems, the power of iSPOT is demonstrated by accurately predicting the structures of a large protein-protein complex (TGFβ-FKBP12) and a multidomain nuclear receptor homodimer (HNF-4α) from the structures of the individual components of the complexes. Although neither SAXS nor footprinting alone can yield an unambiguous picture for each complex, the combination of both, seamlessly integrated in iSPOT, narrows down the best-fit structures to about 3.2 Å and 4.2 Å in RMSD from the corresponding crystal structures, respectively. Furthermore, this proof-of-principle study, based on data synthetically derived from available crystal structures, shows that iSPOT, using either rigid-body or MD-based flexible docking, is capable of overcoming the shortcomings of standalone computational methods, especially for HNF-4α. By taking advantage of the integration of SAXS-based shape information and footprinting-based protection/accessibility as well as computational docking, the iSPOT platform is set to be a powerful approach towards accurate integrated modeling of many challenging multiprotein complexes.
Introducing GAMER: A fast and accurate method for ray-tracing galaxies using procedural noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groeneboom, N. E.; Dahle, H., E-mail: nicolaag@astro.uio.no
2014-03-10
We developed a novel approach for fast and accurate ray-tracing of galaxies using procedural noise fields. Our method allows for efficient and realistic rendering of synthetic galaxy morphologies, where individual components such as the bulge, disk, stars, and dust can be synthesized in different wavelengths. These components follow empirically motivated overall intensity profiles but contain an additional procedural noise component that gives rise to complex natural patterns that mimic interstellar dust and star-forming regions. These patterns produce more realistic-looking galaxy images than using analytical expressions alone. The method is fully parallelized and creates accurate high- and low-resolution images that can be used, for example, in codes simulating strong and weak gravitational lensing. In addition to having a user-friendly graphical user interface, the C++ software package GAMER is easy to implement into an existing code.
Accurate van der Waals coefficients from density functional theory
Tao, Jianmin; Perdew, John P.; Ruzsinszky, Adrienn
2012-01-01
The van der Waals interaction is a weak, long-range correlation, arising from quantum electronic charge fluctuations. This interaction affects many properties of materials. A simple and yet accurate estimate of this effect will facilitate computer simulation of complex molecular materials and drug design. Here we develop a fast approach for accurate evaluation of dynamic multipole polarizabilities and van der Waals (vdW) coefficients of all orders from the electron density and static multipole polarizabilities of each atom or other spherical object, without empirical fitting. Our dynamic polarizabilities (dipole, quadrupole, octupole, etc.) are exact in the zero- and high-frequency limits, and exact at all frequencies for a metallic sphere of uniform density. Our theory predicts dynamic multipole polarizabilities in excellent agreement with more expensive many-body methods, and yields therefrom vdW coefficients C6, C8, C10 for atom pairs with a mean absolute relative error of only 3%. PMID:22205765
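At dipole order, the machinery described here reduces to the Casimir-Polder integral C6 = (3/π)∫₀^∞ α_A(iω) α_B(iω) dω over the dynamic polarizabilities at imaginary frequency. A numerical sketch using textbook single-oscillator (London) model polarizabilities, which admit the closed form C6 = (3/2) α_A α_B ω_A ω_B/(ω_A + ω_B) to check against (this is a standard model, not the authors' density-functional scheme), is:

```python
import numpy as np

def c6_london(a0_a, w_a, a0_b, w_b, wmax=200.0, n=400001):
    # Casimir-Polder: C6 = (3/pi) * int_0^inf alphaA(i w) * alphaB(i w) dw,
    # with London-model dynamic polarizability alpha(i w) = a0 / (1 + (w/w0)^2).
    # The integrand decays as w^-4, so a finite cutoff wmax suffices.
    w = np.linspace(0.0, wmax, n)
    f = (a0_a / (1.0 + (w / w_a) ** 2)) * (a0_b / (1.0 + (w / w_b) ** 2))
    dw = w[1] - w[0]
    trapz = (0.5 * (f[0] + f[-1]) + f[1:-1].sum()) * dw  # trapezoidal rule
    return 3.0 / np.pi * trapz
```

For identical oscillators with α₀ = ω₀ = 1 (atomic units) the closed form gives C6 = 3/4, so the quadrature can be validated directly.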
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakhleh, Luay
I proposed to develop computationally efficient tools for accurate detection and reconstruction of microbes' complex evolutionary mechanisms, thus enabling rapid and accurate annotation, analysis and understanding of their genomes. To achieve this goal, I proposed to address three aspects. (1) Mathematical modeling. A major challenge facing the accurate detection of horizontal gene transfer (HGT) is that of distinguishing between these events on the one hand and other events that have similar "effects." I proposed to develop a novel mathematical approach for distinguishing among these events. Further, I proposed to develop a set of novel optimization criteria for the evolutionary analysis of microbial genomes in the presence of these complex evolutionary events. (2) Algorithm design. In this aspect of the project, I proposed to develop an array of efficient and accurate algorithms for analyzing microbial genomes based on the formulated optimization criteria. Further, I proposed to test the viability of the criteria and the accuracy of the algorithms in an experimental setting using both synthetic and biological data. (3) Software development. I proposed the final outcome to be a suite of software tools which implements the mathematical models as well as the algorithms developed.
Tang, Yat T; Marshall, Garland R
2011-02-28
Binding affinity prediction is one of the most critical components of computer-aided structure-based drug design. Despite advances in first-principle methods for predicting binding affinity, empirical scoring functions that are fast but only relatively accurate are still widely used in structure-based drug design. With the increasing availability of X-ray crystallographic structures in the Protein Data Bank and the continuing application of biophysical methods such as isothermal titration calorimetry to measure the thermodynamic parameters contributing to binding free energy, sufficient experimental data now exist that scoring functions can be derived by separating the enthalpic (ΔH) and entropic (TΔS) contributions to binding free energy (ΔG). PHOENIX, a scoring function to predict binding affinities of protein-ligand complexes, exploits these data through the following: model training and testing on high-resolution crystallographic data to minimize structural noise; independent models of enthalpic and entropic contributions fitted to thermodynamic parameters to calculate binding free energy; and shape and volume descriptors to better capture entropic contributions. A set of 42 descriptors and 112 protein-ligand complexes were used to derive functions using partial least-squares for the change of enthalpy (ΔH) and change of entropy (TΔS), and thence the change of binding free energy (ΔG), resulting in a predictive r² (r_pred²) of 0.55 and a standard error (SE) of 1.34 kcal/mol. External validation using the 2009 version of the PDBbind "refined set" (n = 1612) resulted in a Pearson correlation coefficient (R_p) of 0.575 and a mean error (ME) of 1.41 pK_d. Enthalpy and entropy predictions were of limited accuracy individually; however, their difference yielded a relatively accurate binding free energy.
While the development of an accurate and applicable scoring function was an objective of this study, the main focus was evaluation of the use of high-resolution X-ray crystal structures with high-quality thermodynamic parameters from isothermal titration calorimetry for scoring function development. With the increasing application of structure-based methods in molecular design, this study suggests that using high-resolution crystal structures, separating enthalpy and entropy contributions to binding free energy, and including descriptors to better capture entropic contributions may prove to be effective strategies toward rapid and accurate calculation of binding affinity.
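The core design, fitting ΔH and TΔS independently and combining them as ΔG = ΔH − TΔS, can be sketched with ordinary least squares standing in for the paper's partial least-squares regression (descriptors and data below are synthetic; this is not the PHOENIX implementation):

```python
import numpy as np

def fit(X, y):
    # Linear model with intercept via least squares (stand-in for PLS)
    A = np.c_[X, np.ones(len(X))]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(X, coef):
    return np.c_[X, np.ones(len(X))] @ coef

def predict_dG(X, coef_dH, coef_TdS):
    # Free energy from the two independently fitted component models:
    # dG = dH - T*dS
    return predict(X, coef_dH) - predict(X, coef_TdS)
```

Even when the two component models carry independent errors, systematic errors common to both cancel in the difference, which is one plausible reading of why ΔG came out more accurate than ΔH or TΔS individually.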
SWOT Oceanography and Hydrology Data Product Simulators
NASA Technical Reports Server (NTRS)
Peral, Eva; Rodriguez, Ernesto; Fernandez, Daniel Esteban; Johnson, Michael P.; Blumstein, Denis
2013-01-01
The proposed Surface Water and Ocean Topography (SWOT) mission would demonstrate a new measurement technique using radar interferometry to obtain wide-swath measurements of water elevation at high resolution over ocean and land, addressing the needs of both the hydrology and oceanography science communities. To accurately evaluate the performance of the proposed SWOT mission, we have developed several data product simulators at different levels of fidelity and complexity.
Where's water? The many binding sites of hydantoin.
Gruet, Sébastien; Pérez, Cristóbal; Steber, Amanda L; Schnell, Melanie
2018-02-21
Prebiotic hydantoin and its complexes with one and two water molecules are investigated using high-resolution broadband rotational spectroscopy in the 2-8 GHz frequency range. The hyperfine structure due to the nuclear quadrupole coupling of the two ¹⁴N atoms is analysed for the monomer and the complexes. This characteristic hyperfine structure will support a definitive assignment from low-frequency radioastronomy data. Experiments with H₂¹⁸O provide accurate experimental information on the preferred binding sites of water, which are compared with quantum-chemically calculated coordinates. In the two-water complexes, the water molecules bind to hydantoin as a dimer rather than individually, indicating strong water-water interactions. This information provides first insight into how hydantoin interacts with water at the molecular level.
Electron-Atom Ionization Calculations using Propagating Exterior Complex Scaling
NASA Astrophysics Data System (ADS)
Bartlett, Philip
2007-10-01
The exterior complex scaling method (Science 286 (1999) 2474), pioneered by Rescigno, McCurdy and coworkers, provided highly accurate ab initio solutions for electron-hydrogen collisions by directly solving the time-independent Schrödinger equation in coordinate space. An extension of this method, propagating exterior complex scaling (PECS), was developed by Bartlett and Stelbovics (J. Phys. B 37 (2004) L69, J. Phys. B 39 (2006) R379) and has been demonstrated to provide computationally efficient and accurate calculations of ionization and scattering cross sections over a large range of energies below, above and near the ionization threshold. An overview of the PECS method for three-body collisions and the computational advantages of its propagation and iterative coupling techniques will be presented along with results of: (1) near-threshold ionization of electron-hydrogen collisions and the Wannier threshold laws, (2) scattering cross section resonances below the ionization threshold, and (3) total and differential cross sections for electron collisions with excited targets and hydrogenic ions from low through to high energies. Recently, the PECS method has been extended to solve four-body collisions using time-independent methods in coordinate space and has initially been applied to the s-wave model for electron-helium collisions. A description of the extensions made to the PECS method to facilitate these significantly more computationally demanding calculations will be given, and results will be presented for elastic, single-excitation, double-excitation, single-ionization and double-ionization collisions.
Self-similar slip distributions on irregular shaped faults
NASA Astrophysics Data System (ADS)
Herrero, A.; Murphy, S.
2018-06-01
We propose a strategy to place a self-similar slip distribution on a complex fault surface represented by an unstructured mesh. We do so by applying a strategy based on the composite source model, in which a hierarchical set of asperities is superposed, each with its own slip function that depends on the distance from the asperity centre. Central to this technique is the efficient, accurate computation of the distance between two points on the fault surface, known as the geodesic distance problem. We propose a method to compute the distance across complex non-planar surfaces based on a corollary of Huygens' principle. What distinguishes this method from the sample-based algorithms that precede it is the use of a curved front at the local level to calculate the distance. This technique produces a highly accurate computation of the distance because the curvature of the front is linked to the distance from the source. Our local scheme is based on a sequence of two trilaterations, producing a robust and highly precise algorithm. We test the strategy on a planar surface in order to assess its ability to preserve the self-similarity properties of a slip distribution. We also present a synthetic self-similar slip distribution on a real slab topography for a M8.5 event. This method for computing distance may be extended to the estimation of first arrival times on complex 3D surfaces or in 3D volumes.
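The composite-source idea described above, hierarchical asperities each contributing a distance-dependent slip function, can be sketched on a simple planar grid. This is a hypothetical toy with Euclidean rather than geodesic distances and invented level/radius scalings, not the authors' unstructured-mesh implementation:

```python
import numpy as np

def composite_slip(nx=64, ny=64, n_levels=4, seed=0):
    """Toy composite-source slip distribution: superpose hierarchical
    circular asperities whose radii halve at each level while their
    number quadruples, giving an approximately self-similar slip field
    on a planar fault (assumed scalings, for illustration only)."""
    rng = np.random.default_rng(seed)
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    slip = np.zeros((ny, nx))
    for level in range(n_levels):
        radius = max(nx, ny) / 2 ** (level + 1)   # radii shrink per level
        for _ in range(4 ** level):                # numbers grow per level
            cx, cy = rng.uniform(0, nx), rng.uniform(0, ny)
            d = np.hypot(x - cx, y - cy)           # distance to asperity centre
            # crack-like slip function: zero outside the asperity radius
            slip += np.sqrt(np.clip(1.0 - (d / radius) ** 2, 0.0, None))
    return slip
```

On a real fault mesh, `np.hypot` would be replaced by the trilateration-based geodesic distance the paper develops.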
Transforming Multidisciplinary Customer Requirements to Product Design Specifications
NASA Astrophysics Data System (ADS)
Ma, Xiao-Jie; Ding, Guo-Fu; Qin, Sheng-Feng; Li, Rong; Yan, Kai-Yin; Xiao, Shou-Ne; Yang, Guang-Wu
2017-09-01
With the increasing complexity of mechatronic products, it is necessary to involve multidisciplinary design teams; the traditional customer requirements modeling for a single-discipline team thus becomes difficult to apply in a multidisciplinary team and project, since team members with various disciplinary backgrounds may interpret the customers' requirements differently. A new synthesized multidisciplinary customer requirements modeling method is provided for obtaining and describing a common understanding of customer requirements (CRs) and, more importantly, transferring them into detailed and accurate product design specifications (PDS) so that different team members can interact effectively. A case study of designing a high-speed train verifies the rationality and feasibility of the proposed multidisciplinary requirement modeling method for complex mechatronic product development. This research offers instruction toward realizing customer-driven personalized customization of complex mechatronic products.
NASA Astrophysics Data System (ADS)
Xu, Kaixuan; Wang, Jun
2017-02-01
In this paper, the recently introduced permutation entropy and sample entropy are extended to fractional cases: weighted fractional permutation entropy (WFPE) and fractional sample entropy (FSE). The fractional-order generalization of information entropy is utilized in these two complexity measures to detect the statistical characteristics of fractional-order information in complex systems. Analysis of the proposed methods on synthetic and real-world data reveals that tuning the fractional order allows higher sensitivity and more accurate characterization of the signal evolution, which is useful in describing the dynamics of complex systems. Moreover, nonlinear complexity behaviors are compared numerically between the return series of the Potts financial model and actual stock markets, and the empirical results confirm the feasibility of the proposed model.
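As a rough illustration of the kind of measure discussed, here is a sketch of a weighted permutation entropy with a fractional-order (Ubriaco-style) generalization. The exact definitions in the paper may differ; the variance weighting, the `(-ln p)**q` form, and the normalisation are assumptions:

```python
import numpy as np
from math import factorial

def wfpe(x, m=3, delay=1, q=1.0):
    """Sketch of weighted fractional permutation entropy: ordinal
    patterns of embedding dimension m, each occurrence weighted by the
    local variance of its embedding vector; the fractional order q uses
    an Ubriaco-style form sum_p p * (-ln p)**q (q=1 recovers ordinary
    weighted permutation entropy). Normalised by ln(m!)."""
    x = np.asarray(x, float)
    n = len(x) - (m - 1) * delay
    patterns = {}
    for i in range(n):
        v = x[i:i + m * delay:delay]
        key = tuple(np.argsort(v))             # ordinal pattern
        patterns[key] = patterns.get(key, 0.0) + v.var()  # variance weight
    total = sum(patterns.values())
    if total == 0:                             # constant signal: no information
        return 0.0
    p = np.array([c / total for c in patterns.values()])
    h = float(np.sum(p * (-np.log(p)) ** q))
    return h / np.log(factorial(m))            # normalise to [0, 1]
```

A monotone ramp yields a single ordinal pattern and entropy 0, while white noise approaches 1, so tuning `q` reshapes how rare patterns contribute.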
Low-Complexity Noncoherent Signal Detection for Nanoscale Molecular Communications.
Li, Bin; Sun, Mengwei; Wang, Siyi; Guo, Weisi; Zhao, Chenglin
2016-01-01
Nanoscale molecular communication is a viable way of exchanging information between nanomachines. In this investigation, a low-complexity, noncoherent signal detection technique is proposed to mitigate inter-symbol interference (ISI) and additive noise. In contrast to existing coherent detection methods of high complexity, the proposed noncoherent signal detector is more practical when the channel conditions are hard to acquire accurately or are hidden from the receiver. The proposed scheme employs the molecular concentration difference to detect ISI-corrupted signals, and we demonstrate that it can suppress ISI effectively. The difference in molecular concentration is a stable characteristic, irrespective of the diffusion channel conditions. In terms of complexity, by excluding matrix operations and likelihood calculations, the new detection scheme is particularly suitable for nanoscale molecular communication systems with a small energy budget or limited computational resources.
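A minimal sketch of the concentration-difference idea follows (a hypothetical simplification with one concentration sample per symbol slot, not the authors' detector):

```python
import numpy as np

def detect_bits(samples, threshold=0.0):
    """Illustrative noncoherent detection by concentration difference:
    instead of comparing each symbol-interval concentration to an
    absolute, channel-dependent threshold, compare it to the previous
    interval. A rise is read as '1', a fall as '0', which requires no
    channel state information and tolerates a slowly varying ISI floor.
    The first slot has no predecessor and defaults to '0'."""
    samples = np.asarray(samples, float)
    diff = np.diff(samples, prepend=samples[0])
    return (diff > threshold).astype(int)
```

For example, the sample sequence 1, 3, 2, 5 decodes from its rises and falls alone, with no knowledge of the diffusion channel.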
NASA Astrophysics Data System (ADS)
Wang, Wei; Yao, Xinfeng; Ji, Minhe
2016-01-01
Despite recent rapid advancement in remote sensing technology, accurate mapping of the urban landscape in China still faces a great challenge due to unusually high spectral complexity in many big cities. Much of this complication comes from severe spectral confusion of impervious surfaces with polluted water bodies and bright bare soils. This paper proposes a two-step land cover decomposition method, which combines optical and thermal spectra from different seasons to cope with the issue of urban spectral complexity. First, a linear spectral mixture analysis was employed to generate fraction images for three preliminary endmembers (high albedo, low albedo, and vegetation). Seasonal change analysis on land surface temperature induced from thermal infrared spectra and coarse component fractions obtained from the first step was then used to reduce the confusion between impervious surfaces and nonimpervious materials. This method was tested with two-date Landsat multispectral data in Shanghai, one of China's megacities. The results showed that the method was capable of consistently estimating impervious surfaces in highly complex urban environments with an accuracy of R² greater than 0.70 and both root mean square error and mean average error less than 0.20 for all test sites. This strategy seemed very promising for landscape mapping of complex urban areas.
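The first step, linear spectral mixture analysis, amounts to a small least-squares problem per pixel. A minimal sketch, with a crude clip-and-renormalise standing in for a fully constrained solver (an assumption on our part):

```python
import numpy as np

def unmix(pixel, endmembers):
    """Sketch of linear spectral mixture analysis: solve
    pixel ≈ endmembers @ fractions in the least-squares sense, then
    clip negatives and renormalise so the fractions sum to one.
    `endmembers` is a (bands x k) matrix whose columns are endmember
    spectra, e.g. high albedo, low albedo and vegetation."""
    f, *_ = np.linalg.lstsq(endmembers, np.asarray(pixel, float), rcond=None)
    f = np.clip(f, 0.0, None)       # enforce non-negativity (crudely)
    s = f.sum()
    return f / s if s > 0 else f    # enforce sum-to-one (crudely)
```

For a pixel that is an exact 30/70 mixture of two endmember spectra, the recovered fractions are 0.3 and 0.7.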
NASA Astrophysics Data System (ADS)
Berthon, Beatrice; Dansette, Pierre-Marc; Tanter, Mickaël; Pernot, Mathieu; Provost, Jean
2017-07-01
Direct imaging of the electrical activation of the heart is crucial to better understand and diagnose diseases linked to arrhythmias. This work presents an ultrafast acoustoelectric imaging (UAI) system for direct, non-invasive ultrafast mapping of propagating current densities. Acoustoelectric imaging is based on the acoustoelectric effect: the modulation of the medium's electrical impedance by a propagating ultrasonic wave. UAI triggers this effect with plane wave emissions to image current densities. An ultrasound research platform was fitted with electrodes connected to high common-mode rejection ratio amplifiers and sampled by up to 128 independent channels. The sequences developed allow both real-time display of acoustoelectric maps and long ultrafast acquisitions with fast off-line processing. The system was evaluated by injecting controlled currents into a saline pool via copper wire electrodes. Sensitivities to low current and low acoustic pressure were measured independently. Contrast and spatial resolution were measured for varying numbers of plane waves and compared to line-by-line acoustoelectric imaging with focused beams at equivalent peak pressure. Temporal resolution was assessed by measuring time-varying current densities associated with sinusoidal currents. Complex intensity distributions were also imaged in 3D. Electrical current densities were detected for injected currents as low as 0.56 mA. UAI outperformed conventional focused acoustoelectric imaging in terms of contrast and spatial resolution when using 3 and 13 plane waves or more, respectively. Neighboring sinusoidal currents with opposed phases were accurately imaged and separated. Time-varying currents were mapped and their frequency accurately measured for imaging frame rates up to 500 Hz. Finally, a 3D image of a complex intensity distribution was obtained. The results demonstrate the high sensitivity of the proposed UAI system.
The plane wave based approach provides a highly flexible trade-off between frame rate, resolution and contrast. In conclusion, the UAI system shows promise for non-invasive, direct and accurate real-time imaging of electrical activation in vivo.
Liu, E-Hu; Qi, Lian-Wen; Li, Bin; Peng, Yong-Bo; Li, Ping; Li, Chang-Yin; Cao, Jun
2009-01-01
A fast high-performance liquid chromatography (HPLC) method coupled with diode-array detection (DAD) and electrospray ionization time-of-flight mass spectrometry (ESI-TOFMS) has been developed for rapid separation and sensitive identification of the major constituents in Radix Paeoniae Rubra (RPR). The total analysis time on a short column packed with 1.8-μm porous particles was about 20 min without a loss in resolution, six times faster than a conventional column analysis (115 min). The MS fragmentation behavior and structural characterization of the major compounds in RPR were investigated here for the first time. The targets were rapidly screened from the RPR matrix using a narrow mass window of 0.01 Da to reconstruct extracted ion chromatograms. Accurate mass measurements (less than 5 ppm error) for both the deprotonated molecule and characteristic fragment ions represent reliable identification criteria for these compounds in complex matrices, with similar if not better performance compared with tandem mass spectrometry. A total of 26 components were screened and identified in RPR, including 11 monoterpene glycosides, 11 galloyl glucoses and 4 other phenolic compounds. In terms of time savings, resolving power, accurate mass measurement capability and full spectral sensitivity, the established fast HPLC/DAD/TOFMS method proves to be a highly useful technique for identifying constituents in complex herbal medicines. (c) 2008 John Wiley & Sons, Ltd.
The Scirtothrips dorsalis Species Complex: Endemism and Invasion in a Global Pest
Dickey, Aaron M.; Kumar, Vivek; Hoddle, Mark S.; Funderburk, Joe E.; Morgan, J. Kent; Jara-Cavieres, Antonella; Shatters, Robert G. Jr.; Osborne, Lance S.; McKenzie, Cindy L.
2015-01-01
Invasive arthropods pose unique management challenges in various environments, the first of which is correct identification. This apparently mundane task is particularly difficult when multiple species are morphologically indistinguishable, but accurate identification can be achieved with DNA barcoding, provided an adequate reference set is available. Scirtothrips dorsalis is a highly polyphagous plant pest with a rapidly expanding global distribution, and this species, as currently recognized, may comprise cryptic species. Here we report the development of a comprehensive DNA barcode library for S. dorsalis and seven nuclear markers, obtained via next-generation sequencing, for identification use within the complex. We also report the delimitation of nine cryptic species and two morphologically distinguishable species comprising the S. dorsalis species complex, using histogram analysis of DNA barcodes, Bayesian phylogenetics, and the multi-species coalescent. One member of the complex, here designated the South Asia 1 cryptic species, is highly invasive, polyphagous, and likely the species implicated in tospovirus transmission. Two other species, South Asia 2 and East Asia 1, are also highly polyphagous and appear to be at an earlier stage of global invasion. The remaining members of the complex are regionally endemic, varying in their pest status and degree of polyphagy. In addition to patterns of invasion and endemism, our results provide a framework both for identifying members of the complex based on their DNA barcode and for future species-delimiting efforts. PMID:25893251
Zhu, Linzhao; Zhao, Zhiyong; Zhang, Xiongzhi; Zhang, Haijun; Liang, Feng; Liu, Simin
2018-04-18
Amantadine (AMA) and its derivatives are illicit veterinary drugs that are hard to detect at very low concentrations, so a fast, simple and highly sensitive method for the detection of AMA is in high demand. Here, we designed an anthracyclic compound (ABAM) that binds to a cucurbit[7]uril (CB[7]) host with a high association constant of up to 8.7 × 10⁸ M⁻¹. The host-guest complex was then used as a fluorescent probe for the detection of AMA. Competition by AMA for occupying the cavity of CB[7] releases ABAM from the CB[7]-ABAM complex, causing significant fluorescence quenching of ABAM (indicator displacement assay, IDA). The linear range of the method is 0.000188-0.375 μg/mL, and the detection limit can be as low as 6.5 × 10⁻⁵ μg/mL (0.35 nM). Most importantly, due to the high binding affinity between CB[7] and ABAM, this fluorescent host-guest system shows great anti-interference capacity. Thus, we are able to accurately determine the concentration of AMA in various samples, including pharmaceutical formulations.
Krivov, Sergei V
2011-07-01
Dimensionality reduction is ubiquitous in the analysis of complex dynamics. The conventional dimensionality reduction techniques, however, focus on reproducing the underlying configuration space, rather than the dynamics itself. The constructed low-dimensional space does not provide a complete and accurate description of the dynamics. Here I describe how to perform dimensionality reduction while preserving the essential properties of the dynamics. The approach is illustrated by analyzing the chess game--the archetype of complex dynamics. A variable that provides complete and accurate description of chess dynamics is constructed. The winning probability is predicted by describing the game as a random walk on the free-energy landscape associated with the variable. The approach suggests a possible way of obtaining a simple yet accurate description of many important complex phenomena. The analysis of the chess game shows that the approach can quantitatively describe the dynamics of processes where human decision-making plays a central role, e.g., financial and social dynamics.
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.; Nielsen, Eric J.; Nishikawa, Hiroaki; White, Jeffery A.
2009-01-01
Discretizations of the viscous terms in current finite-volume unstructured-grid schemes are compared using node-centered and cell-centered approaches in two dimensions. Accuracy and efficiency are studied for six nominally second-order accurate schemes: a node-centered scheme, cell-centered node-averaging schemes with and without clipping, and cell-centered schemes with unweighted, weighted, and approximately mapped least-square face gradient reconstruction. The grids considered range from structured (regular) grids to irregular grids composed of arbitrary mixtures of triangles and quadrilaterals, including random perturbations of the grid points to bring out the worst possible behavior of the solution. Two classes of tests are considered. The first class involves smooth manufactured solutions on both isotropic and highly anisotropic grids with discontinuous metrics, typical of those encountered in grid adaptation. The second class concerns solutions and grids varying strongly anisotropically over a curved body, typical of those encountered in high-Reynolds-number turbulent flow simulations. Results from the first class indicate that the face least-square methods, the node-averaging method without clipping, and the node-centered method demonstrate second-order convergence of discretization errors with very similar accuracies per degree of freedom. The second class of tests is more discriminating. The node-centered scheme is always second order, with an accuracy and complexity in linearization comparable to the best of the cell-centered schemes. In comparison, the cell-centered node-averaging schemes are less accurate, have a higher complexity in linearization, and can fail to converge to the exact solution when clipping of the node-averaged values is used. The cell-centered schemes using least-square face gradient reconstruction have more compact stencils with a complexity similar to that of the node-centered scheme.
For simulations on highly anisotropic curved grids, the least-square methods have to be amended either by introducing a local mapping of the surface anisotropy or modifying the scheme stencil to reflect the direction of strong coupling.
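The least-square gradient reconstruction at the heart of the face-gradient schemes can be sketched in a few lines. This is a generic, unweighted-by-default formulation, not the specific stencils compared in the paper:

```python
import numpy as np

def ls_gradient(x0, u0, neighbors, values, weights=None):
    """Sketch of (optionally weighted) least-squares gradient
    reconstruction: find the gradient g minimising
    sum_i w_i * (u0 + g . (x_i - x0) - u_i)^2 over neighbouring
    points x_i with solution values u_i. This is the generic building
    block behind least-square face-gradient schemes; the weights w_i
    (e.g. inverse-distance) are left to the caller."""
    dx = np.asarray(neighbors, float) - np.asarray(x0, float)
    du = np.asarray(values, float) - float(u0)
    if weights is not None:
        w = np.sqrt(np.asarray(weights, float))
        dx, du = dx * w[:, None], du * w    # scale rows by sqrt(weights)
    g, *_ = np.linalg.lstsq(dx, du, rcond=None)
    return g
```

By construction the reconstruction is exact for linear fields, which is why such schemes are nominally second-order accurate; the anisotropic-grid difficulties described above arise when the `dx` matrix becomes badly conditioned.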
NASA Astrophysics Data System (ADS)
Soltanzadeh, Iman; Bonnardot, Valérie; Sturman, Andrew; Quénol, Hervé; Zawar-Reza, Peyman
2017-08-01
Global warming has implications for thermal stress on grapevines during ripening, so wine producers need to adapt their viticultural practices to ensure optimum physiological response to environmental conditions in order to maintain wine quality. The aim of this paper is to assess the ability of the Weather Research and Forecasting (WRF) model to accurately represent atmospheric processes at high resolution (500 m) during two events in the grapevine ripening period in the Stellenbosch Wine of Origin district of South Africa. Two case studies were selected to identify areas of potentially high daytime heat stress when grapevine photosynthesis and grape composition were expected to be affected. The results of high-resolution atmospheric model simulations were compared to observations obtained from an automatic weather station (AWS) network in the vineyard region. Statistical analysis was performed to assess the ability of the WRF model to reproduce spatial and temporal variations of meteorological parameters at 500-m resolution. The model represented the spatial and temporal variation of meteorological variables very well, with an average air temperature bias of 0.1 °C, a relative humidity bias of -5.0 %, and a wind speed bias of 0.6 m s⁻¹. Model performance varied between AWSs and with time of day, as WRF was not always able to accurately represent the effects of nocturnal cooling within the complex terrain. Variations in performance between the two case studies resulted from effects of atmospheric boundary layer processes in complex terrain under the influence of the different synoptic conditions prevailing during the two periods.
Data Mining for Efficient and Accurate Large Scale Retrieval of Geophysical Parameters
NASA Astrophysics Data System (ADS)
Obradovic, Z.; Vucetic, S.; Peng, K.; Han, B.
2004-12-01
Our effort is devoted to developing data mining technology for improving the efficiency and accuracy of geophysical parameter retrievals by learning a mapping from observation attributes to the corresponding parameters within the framework of classification and regression. We will describe a method for efficient learning of neural-network-based classification and regression models from high-volume data streams. The proposed procedure automatically learns a series of neural networks of different complexities on smaller data stream chunks and then combines them into an ensemble predictor through averaging. Based on the idea of progressive sampling, the proposed approach starts with a very simple network trained on a very small chunk and then gradually increases the model complexity and the chunk size until the learning performance no longer improves. Our empirical study on aerosol retrievals from data obtained with the MISR instrument aboard the Terra satellite suggests that the proposed method is successful in learning complex concepts from large data streams with near-optimal computational effort. We will also report on a method that complements deterministic retrievals by constructing accurate predictive algorithms and applying them to appropriately selected subsets of observed data. The method is based on developing more accurate predictors aimed at capturing the global and local properties synthesized in a region. The procedure starts by learning the global properties of data sampled over the entire space, and continues by constructing specialized models on selected localized regions. The global and local models are integrated through an automated procedure that determines the optimal trade-off between the two components, with the objective of minimizing the overall mean square error over a specific region. Our experimental results on MISR data showed that the combined model can increase the retrieval accuracy significantly.
The preliminary results on various large heterogeneous spatial-temporal datasets provide evidence that the benefits of the proposed methodology for efficient and accurate learning exist beyond the area of retrieval of geophysical parameters.
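The progressive-sampling idea — grow the chunk, add an ensemble member, stop when validation error stalls — can be illustrated with a deliberately simple linear model standing in for the neural networks. Function names, the doubling schedule and the stopping rule are our assumptions, not the authors' procedure:

```python
import numpy as np

def progressive_ensemble(X, y, X_val, y_val, start=32, grow=2.0, tol=1e-3):
    """Illustrative progressive sampling with ensemble averaging:
    fit a least-squares linear model on a small chunk, keep growing
    the chunk, average the predictions of all members so far, and
    stop once validation error no longer improves by `tol`."""
    members, size, best = [], start, np.inf
    while size <= len(X):
        A = np.c_[X[:size], np.ones(size)]          # linear model + bias
        coef, *_ = np.linalg.lstsq(A, y[:size], rcond=None)
        members.append(coef)
        Av = np.c_[X_val, np.ones(len(X_val))]
        pred = np.mean([Av @ c for c in members], axis=0)  # ensemble average
        err = float(np.mean((pred - y_val) ** 2))
        if best - err < tol:                        # improvement stalled
            break
        best, size = err, int(size * grow)
    return members, best
```

In the paper's setting each member would be a neural network of growing complexity rather than a fixed linear model, but the control flow (chunk growth plus averaging plus a stalling criterion) is the same.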
An evaluation of the accuracy and speed of metagenome analysis tools
Lindgreen, Stinus; Adair, Karen L.; Gardner, Paul P.
2016-01-01
Metagenome studies are becoming increasingly widespread, yielding important insights into microbial communities covering diverse environments from terrestrial and aquatic ecosystems to human skin and gut. With the advent of high-throughput sequencing platforms, the use of large scale shotgun sequencing approaches is now commonplace. However, a thorough independent benchmark comparing state-of-the-art metagenome analysis tools is lacking. Here, we present a benchmark where the most widely used tools are tested on complex, realistic data sets. Our results clearly show that the most widely used tools are not necessarily the most accurate, that the most accurate tool is not necessarily the most time consuming, and that there is a high degree of variability between available tools. These findings are important as the conclusions of any metagenomics study are affected by errors in the predicted community composition and functional capacity. Data sets and results are freely available from http://www.ucbioinformatics.org/metabenchmark.html PMID:26778510
New Equation of State Models for Hydrodynamic Applications
NASA Astrophysics Data System (ADS)
Young, David A.; Barbee, Troy W., III; Rogers, Forrest J.
1997-07-01
Accurate models of the equation of state of matter at high pressures and temperatures are increasingly required for hydrodynamic simulations. We have developed two new approaches to accurate EOS modeling: 1) ab initio phonons from electron band structure theory for condensed matter and 2) the ACTEX dense plasma model for ultrahigh pressure shocks. We have studied the diamond and high pressure phases of carbon with the ab initio model and find good agreement between theory and experiment for shock Hugoniots, isotherms, and isobars. The theory also predicts a comprehensive phase diagram for carbon. For ultrahigh pressure shock states, we have studied the comparison of ACTEX theory with experiments for deuterium, beryllium, polystyrene, water, aluminum, and silicon dioxide. The agreement is good, showing that complex multispecies plasmas are treated adequately by the theory. These models will be useful in improving the numerical EOS tables used by hydrodynamic codes.
MR arthrography in glenohumeral instability.
Van der Woude, H J; Vanhoenacker, F M
2007-01-01
The impact of accurate imaging in the work-up of patients with glenohumeral instability is high. Imaging results may directly influence the surgeon's decision to perform arthroscopic or open treatment for (recurrent) instability. Magnetic resonance (MR) imaging, and MR arthrography in particular, is the optimal technique to detect, localize and characterize injuries of the capsular-labrum complex. Besides T1-weighted sequences with fat suppression in axial, oblique sagittal and coronal directions, an additional series in the abduction and exorotation (ABER) position is highly advocated. This ABER series optimally depicts abnormalities of the inferior capsular-labrum complex and partial undersurface tears of the supra- and infraspinatus tendons. Knowledge of the anatomical variants that may mimic labral tears and of variants of the classic Bankart lesion is useful in the analysis of shoulder MR arthrograms in patients with glenohumeral instability.
Lee, Bumshik; Kim, Munchurl
2016-08-01
In this paper, a low-complexity coding unit (CU)-level rate and distortion estimation scheme is proposed for hardware-friendly implementation of High Efficiency Video Coding (HEVC), in which a Walsh-Hadamard transform (WHT)-based low-complexity integer discrete cosine transform (DCT) is employed for distortion estimation. Since HEVC adopts quadtree structures of coding blocks with hierarchical coding depths, it becomes difficult to estimate accurate rate and distortion values without actually performing transform, quantization, inverse transform, de-quantization, and entropy coding. Furthermore, the DCT for rate-distortion optimization (RDO) is computationally expensive, because it requires many multiplication and addition operations for the various transform block sizes of order 4, 8, 16, and 32, and requires recursive computations to decide the optimal depths of the CU or transform unit. Full RDO-based encoding is therefore highly complex, especially for low-power implementation of HEVC encoders. In this paper, a CU-level rate and distortion estimation scheme is proposed based on a low-complexity integer DCT that can be computed in terms of the WHT, whose coefficients are produced in the prediction stages. For CU-level rate and distortion estimation, two orthogonal 4×4 and 8×8 matrices, newly designed in a butterfly structure using only addition and shift operations, are applied to the WHT. By applying the integer DCT based on the WHT and the newly designed transforms in each CU block, the texture rate can be precisely estimated after quantization using the number of non-zero quantized coefficients, and the distortion can also be precisely estimated in the transform domain without the de-quantization and inverse transform otherwise required. In addition, a non-texture rate estimation is proposed that uses a pseudoentropy code to obtain accurate total rate estimates.
The proposed rate and distortion estimation scheme can effectively be used for hardware-friendly implementation of HEVC encoders, with a 9.8% loss relative to HEVC full RDO, which is much less than the 20.3% and 30.2% losses of a conventional approach and a Hadamard-only scheme, respectively.
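The multiplier-free butterfly structure mentioned above can be illustrated with a plain Walsh-Hadamard transform, computed with additions and subtractions only. This is a generic WHT sketch, not the paper's specific 4×4/8×8 matrix designs:

```python
import numpy as np

def wht(x):
    """Unnormalised N-point Walsh-Hadamard transform (N a power of
    two, Hadamard order), evaluated in a butterfly structure: log2(N)
    stages of paired additions and subtractions, no multiplications,
    which is what makes WHT-based estimation hardware-friendly."""
    x = np.asarray(x, float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b     # butterfly: add/subtract
        h *= 2
    return x
```

For the input [1, 2, 3, 4] the result matches the 4×4 Hadamard matrix product, [10, -2, -4, 0], at a cost of N·log2(N) additions.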
Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis.
Abbasi, Mahdi
2014-01-01
The planar D-bar integral equation is one of the inverse scattering solution methods for complex problems, including the inverse conductivity problem arising in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the second suffers from a low convergence rate (CR). In this paper, a novel high-speed method of moments using the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using sinc basis functions. The orthogonal properties of their products then dissolve the integral operator of the D-bar equation and yield a discrete convolution equation. That is, the new method of moments leads to the equation's solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure and solved using the fast Fourier transform. This allows us to reduce the computational complexity to as low as O(N² log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR.
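The computational payoff claimed above rests on evaluating the discrete convolution through the FFT. A minimal one-dimensional illustration of that underlying idea (the paper's equation is two-dimensional, so this is only the kernel of the trick):

```python
import numpy as np

def conv_fft(a, b):
    """Length-N circular convolution via the convolution theorem:
    pointwise multiplication in the Fourier domain, costing
    O(N log N) instead of the O(N^2) of direct summation. This is
    why recasting the D-bar equation as a discrete convolution makes
    it cheap to solve."""
    n = len(a)
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b, n)))
```

In two dimensions the same theorem applies with `np.fft.fft2`, giving the O(N² log N) cost quoted in the abstract for an N×N grid.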
NASA Technical Reports Server (NTRS)
Mizell, Carolyn; Malone, Linda
2007-01-01
It is very difficult for project managers to develop accurate cost and schedule estimates for large, complex software development projects. None of the approaches or tools available today can estimate the true cost of software with a high degree of accuracy early in a project. This paper provides an approach that utilizes a software development process simulation model that considers and conveys the level of uncertainty that exists when developing an initial estimate. A NASA project is analyzed using simulation and data from the Software Engineering Laboratory to show the benefits of such an approach.
Fetal anterior abdominal wall defects: prenatal imaging by magnetic resonance imaging.
Victoria, Teresa; Andronikou, Savvas; Bowen, Diana; Laje, Pablo; Weiss, Dana A; Johnson, Ann M; Peranteau, William H; Canning, Douglas A; Adzick, N Scott
2018-04-01
Abdominal wall defects range from the mild umbilical cord hernia to the highly complex limb-body wall syndrome. The most common defects are gastroschisis and omphalocele; the rarer ones include the exstrophy complex, pentalogy of Cantrell and limb-body wall syndrome. Although all share the common feature of visceral herniation through a defect in the anterior body wall, their imaging features and, more importantly, their postnatal management differ widely. Correct diagnosis of each entity is imperative in order to achieve appropriate and accurate prenatal counseling and postnatal management. In this paper, we discuss fetal abdominal wall defects and present diagnostic pearls to aid with diagnosis.
An improvement of vehicle detection under shadow regions in satellite imagery
NASA Astrophysics Data System (ADS)
Karim, Shahid; Zhang, Ye; Ali, Saad; Asif, Muhammad Rizwan
2018-04-01
The processing of satellite imagery depends on image quality. Because of low resolution, it is difficult to extract accurate information to meet application requirements. For vehicle detection under shadow regions, we use the histogram of oriented gradients (HOG) for feature extraction and a support vector machine (SVM) for classification; HOG has proven a worthwhile tool for complex environments. Shadowed images are particularly difficult for detection, yielding very low detection rates, so we focus on improving the detection rate in shadow regions through appropriate preprocessing. Vehicles in non-shadow regions are detected precisely, with a higher detection rate than in shadow regions.
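The feature-extraction step can be illustrated with a minimal orientation-histogram sketch in the spirit of HOG (this is not the authors' pipeline; the real method uses full block-normalized HOG descriptors, a trained SVM classifier, and shadow preprocessing):

```python
import numpy as np

# Hedged sketch: a single-cell orientation histogram, the core idea behind
# HOG features. Gradient magnitudes vote into unsigned-orientation bins.
def orientation_histogram(patch, n_bins=9):
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    hist = np.zeros(n_bins)
    for m, a in zip(mag.ravel(), ang.ravel()):
        hist[int(a // (180.0 / n_bins)) % n_bins] += m
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# A synthetic vertical edge concentrates energy in one orientation bin.
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0
h = orientation_histogram(patch)
print(h.argmax())
```

A real detector would concatenate such histograms over a dense grid of cells and feed the resulting descriptor to a trained SVM.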
Single Nucleobase Identification Using Biophysical Signatures from Nanoelectronic Quantum Tunneling.
Korshoj, Lee E; Afsari, Sepideh; Khan, Sajida; Chatterjee, Anushree; Nagpal, Prashant
2017-03-01
Nanoelectronic DNA sequencing can provide an important alternative to sequencing-by-synthesis by reducing sample preparation time, cost, and complexity as a high-throughput next-generation technique with accurate single-molecule identification. However, sample noise and signature overlap continue to prevent high-resolution and accurate sequencing results. Probing the molecular orbitals of chemically distinct DNA nucleobases offers a path for facile sequence identification, but molecular entropy (from nucleotide conformations) makes such identification difficult when relying only on the energies of the lowest-unoccupied and highest-occupied molecular orbitals (LUMO and HOMO). Here, nine biophysical parameters are developed to better characterize the molecular orbitals of individual nucleobases, intended for single-molecule DNA sequencing using quantum tunneling of charges. For this analysis, theoretical models for quantum tunneling are combined with transition voltage spectroscopy to obtain measurable parameters unique to the molecule within an electronic junction. Scanning tunneling spectroscopy is then used to measure these nine biophysical parameters for DNA nucleotides, and a modified machine learning algorithm identifies the nucleobases. The new parameters significantly improve base calling over merely using the LUMO and HOMO frontier orbital energies. Furthermore, high accuracies for identifying DNA nucleobases were observed under different pH conditions. These results have significant implications for developing a robust and accurate high-throughput nanoelectronic DNA sequencing technique. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
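The classification step can be caricatured with synthetic data (everything below is hypothetical: the feature values are random stand-ins for the nine measured parameters, and a simple nearest-centroid rule replaces the paper's modified machine-learning algorithm):

```python
import numpy as np

# Hedged illustration: 9-dimensional "biophysical" feature vectors for four
# bases are classified with a nearest-centroid rule on synthetic data.
rng = np.random.default_rng(42)
bases = ['A', 'C', 'G', 'T']
# Well-separated synthetic centroids, one 9-vector per base (assumed data).
centroids = {b: rng.normal(loc=3 * i, scale=1.0, size=9)
             for i, b in enumerate(bases)}

def classify(x):
    """Assign the base whose centroid is nearest in feature space."""
    return min(bases, key=lambda b: float(np.linalg.norm(x - centroids[b])))

# A noisy measurement near the 'G' centroid is recovered correctly.
noisy = centroids['G'] + rng.normal(scale=0.3, size=9)
print(classify(noisy))
```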
Yu, Haiqing; Lu, Joann J.; Rao, Wei
2016-01-01
Density gradient centrifugation is widely utilized for various high-purity sample preparations, and density gradient ultracentrifugation (DGU) is often used for more resolution-demanding purification of organelles and protein complexes. Accurately locating different isopycnic layers and precisely extracting solutions from these layers play a critical role in achieving high-resolution DGU separations. In this technical note, we develop a DGU procedure by freezing the solution rapidly (but gently) after centrifugation to fix the resolved layers and by slicing the frozen solution to fractionate the sample. Because the thickness of each slice can be controlled to be as thin as 10 micrometers, we retain virtually all the resolution produced by DGU. To demonstrate the effectiveness of this method, we fractionate complex V from HeLa mitochondria using a conventional technique and this freezing-slicing (F-S) method. The comparison indicates that our F-S method can reduce complex V layer thicknesses by ~40%. After fractionation, we analyze complex V proteins directly on a matrix-assisted laser desorption/ionization time-of-flight mass spectrometer. Twelve out of fifteen subunits of complex V are positively identified. Our method provides a practical protocol to identify proteins from complexes, which is useful for investigating biomolecular complexes and pathways in various conditions and cell types. PMID:27668122
Pointing and tracking space mechanism for laser communication
NASA Technical Reports Server (NTRS)
Brunschvig, A.; Deboisanger, M.
1994-01-01
Space optical communication is considered a promising technology owing to its high data rate and confidentiality capabilities. However, it currently requires complex satellite systems involving highly accurate mechanisms. This paper highlights the stringent requirements which had to be fulfilled for such a mechanism, the way an existing design was adapted to meet these requirements, and the main technical difficulties which were overcome thanks to extensive development tests throughout the C/D phase initiated in 1991. The expected on-orbit performance of this mechanism is also presented.
An adaptive discontinuous Galerkin solver for aerodynamic flows
NASA Astrophysics Data System (ADS)
Burgess, Nicholas K.
This work considers the accuracy, efficiency, and robustness of an unstructured high-order accurate discontinuous Galerkin (DG) solver for computational fluid dynamics (CFD). Recently, there has been a drive to reduce the discretization error of CFD simulations using high-order methods on unstructured grids. However, high-order methods are often criticized for lacking robustness and having high computational cost. The goal of this work is to investigate methods that enhance the robustness of high-order DG methods on unstructured meshes, while maintaining low computational cost and high accuracy of the numerical solutions. This work investigates robustness enhancement of high-order methods by examining effective non-linear solvers, shock capturing methods, turbulence model discretizations and adaptive refinement techniques. The goal is to develop an all-encompassing solver that can simulate a large range of physical phenomena, where all aspects of the solver work together to achieve a robust, efficient and accurate solution strategy. The components and framework for a robust high-order accurate solver that is capable of solving viscous, Reynolds-Averaged Navier-Stokes (RANS) and shocked flows are presented. In particular, this work discusses robust discretizations of the turbulence model equation used to close the RANS equations, as well as stable shock capturing strategies that are applicable across a wide range of discretization orders and to very strong shock waves. Furthermore, refinement techniques are considered as both efficiency and robustness enhancement strategies. Additionally, efficient non-linear solvers based on multigrid and Krylov subspace methods are presented. The accuracy, efficiency, and robustness of the solver are demonstrated using a variety of challenging aerodynamic test problems, which include turbulent high-lift and viscous hypersonic flows.
Adaptive mesh refinement was found to play a critical role in obtaining a robust and efficient high-order accurate flow solver. A goal-oriented error estimation technique has been developed to estimate the discretization error of simulation outputs. For high-order discretizations, it is shown that functional output error super-convergence can be obtained, provided the discretization satisfies a property known as dual consistency. The dual consistency of the DG methods developed in this work is shown via mathematical analysis and numerical experimentation. Goal-oriented error estimation is also used to drive an hp-adaptive mesh refinement strategy, where a combination of mesh (h-) refinement and order (p-) enrichment is employed based on the smoothness of the solution. The results demonstrate that the combination of goal-oriented error estimation and hp-adaptation yields superior accuracy, as well as enhanced robustness and efficiency, for a variety of aerodynamic flows including flows with strong shock waves. This work demonstrates that DG discretizations can be the basis of an accurate, efficient, and robust CFD solver. Furthermore, enhancing the robustness of DG methods does not adversely impact the accuracy or efficiency of the solver for challenging and complex flow problems. In particular, when considering the computation of shocked flows, this work demonstrates that the available shock capturing techniques are sufficiently accurate and robust, particularly when used in conjunction with adaptive mesh refinement. This work also demonstrates that robust solutions of the Reynolds-Averaged Navier-Stokes (RANS) and turbulence model equations can be obtained for complex and challenging aerodynamic flows. In this context, the most robust strategy was determined to be a low-order turbulence model discretization coupled to a high-order discretization of the RANS equations.
Although RANS solutions using high-order accurate discretizations of the turbulence model were obtained, the behavior of current-day RANS turbulence models discretized to high order was found to be problematic, leading to solver robustness issues. This suggests that future work is warranted in the area of turbulence model formulation for use with high-order discretizations. Alternatively, the use of Large-Eddy Simulation (LES) subgrid-scale models with high-order DG methods offers the potential to leverage the high accuracy of these methods for very high fidelity turbulent simulations. This thesis has developed the algorithmic improvements that lay the foundation for a three-dimensional high-order flow solution strategy that can serve as the basis for future LES simulations.
Beach, Connor A; Krumm, Christoph; Spanjers, Charles S; Maduskar, Saurabh; Jones, Andrew J; Dauenhauer, Paul J
2016-03-07
Analysis of trace compounds, such as pesticides and other contaminants, within consumer products, fuels, and the environment requires quantification of increasingly complex mixtures of difficult-to-quantify compounds. Many compounds of interest are non-volatile and exhibit poor response in current gas chromatography and flame ionization systems. Here we show the conversion of trimethylsilylated chemical analytes to methane using a quantitative carbon detector (QCD; the Polyarc™ reactor) within a gas chromatograph (GC), thereby enabling enhanced detection (up to 10×) of highly functionalized compounds including carbohydrates, acids, drugs, flavorants, and pesticides. Analysis of a complex mixture of compounds shows that the GC-QCD method enables faster and more accurate analysis of complex mixtures commonly encountered in everyday products and the environment.
On the nature of Ni···Ni interaction in a model dimeric Ni complex.
Kamiński, Radosław; Herbaczyńska, Beata; Srebro, Monika; Pietrzykowski, Antoni; Michalak, Artur; Jerzykiewicz, Lucjan B; Woźniak, Krzysztof
2011-06-07
A new dinuclear complex (NiC5H4SiMe2CHCH2)2 (2) was prepared by reacting the nickelocene derivative [(C5H4SiMe2CH=CH2)2Ni] (1) with methyllithium (MeLi). Good-quality crystals were subjected to a high-resolution X-ray measurement. Subsequent multipole refinement yielded an accurate description of the electron density distribution. Detailed inspection of the experimental electron density in the Ni···Ni contact revealed that the nickel atoms are bonded and that the significant deformation of the metal valence shell is related to different populations of the d-orbitals. The existence of the Ni···Ni bond path explains the lack of unpaired electrons in the complex through a possible exchange channel.
Landrum, Peter F; Chapman, Peter M; Neff, Jerry; Page, David S
2012-04-01
Experimental designs for evaluating complex mixture toxicity in aquatic environments can be highly variable and, if not appropriate, can produce and have produced data that are difficult or impossible to interpret accurately. We build on and synthesize recent critical reviews of mixture toxicity using lessons learned from 4 case studies, ranging from binary to more complex mixtures of primarily polycyclic aromatic hydrocarbons and petroleum hydrocarbons, to provide guidance for evaluating the aquatic toxicity of complex mixtures of organic chemicals. Two fundamental requirements include establishing a dose-response relationship and determining the causative agent (or agents) of any observed toxicity. Meeting these 2 requirements involves ensuring appropriate exposure conditions and measurement endpoints, considering modifying factors (e.g., test conditions, test organism life stages and feeding behavior, chemical transformations, mixture dilutions, sorbing phases), and correctly interpreting dose-response relationships. Specific recommendations are provided. Copyright © 2011 SETAC.
Xue, Min; Pan, Shilong; Zhao, Yongjiu
2015-02-15
A novel optical vector network analyzer (OVNA) based on optical single-sideband (OSSB) modulation and balanced photodetection is proposed and experimentally demonstrated, which can eliminate the measurement error induced by the high-order sidebands in the OSSB signal. According to the analytical model of the conventional OSSB-based OVNA, if the optical carrier in the OSSB signal is fully suppressed, the measurement result is exactly the high-order-sideband-induced measurement error. By splitting the OSSB signal after the optical device-under-test (ODUT) into two paths, removing the optical carrier in one path, and then detecting the two signals in the two paths using a balanced photodetector (BPD), high-order-sideband-induced measurement error can be ideally eliminated. As a result, accurate responses of the ODUT can be achieved without complex post-signal processing. A proof-of-concept experiment is carried out. The magnitude and phase responses of a fiber Bragg grating (FBG) measured by the proposed OVNA with different modulation indices are superimposed, showing that the high-order-sideband-induced measurement error is effectively removed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruebel, Oliver
2009-11-20
Knowledge discovery from large and complex collections of today's scientific datasets is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for data analysis and effective data exploration methods and tools. Researchers are overwhelmed with data, and standard tools are often insufficient to enable effective data analysis and knowledge discovery. The main objective of this thesis is to provide important new capabilities to accelerate scientific knowledge discovery from large, complex, and multivariate scientific data. The research covered in this thesis addresses these scientific challenges using a combination of scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management. The effectiveness of the proposed analysis methods is demonstrated via applications in two distinct scientific research fields, namely developmental biology and high-energy physics. Advances in microscopy, image analysis, and embryo registration enable for the first time measurement of gene expression at cellular resolution for entire organisms. Analysis of high-dimensional spatial gene expression datasets is a challenging task. By integrating data clustering and visualization, analysis of complex, time-varying, spatial gene expression patterns and their formation becomes possible. The analysis framework has been integrated with MATLAB and with the visualization, making advanced analysis tools accessible to biologists and enabling bioinformatics researchers to directly integrate their analysis with the visualization. Laser wakefield particle accelerators (LWFAs) promise to be a new compact source of high-energy particles and radiation, with wide applications ranging from medicine to physics.
To gain insight into the complex physical processes of particle acceleration, physicists model LWFAs computationally. The datasets produced by LWFA simulations are (i) extremely large, (ii) of varying spatial and temporal resolution, (iii) heterogeneous, and (iv) high-dimensional, making analysis and knowledge discovery from complex LWFA simulation data a challenging task. To address these challenges, this thesis describes the integration of the visualization system VisIt and the state-of-the-art index/query system FastBit, enabling interactive visual exploration of extremely large three-dimensional particle datasets. Researchers are especially interested in beams of high-energy particles formed during the course of a simulation. This thesis describes novel methods for automatic detection and analysis of particle beams, enabling a more accurate and efficient data analysis process. By integrating these automated analysis methods with visualization, this research enables more accurate, efficient, and effective analysis of LWFA simulation data than previously possible.
Flavel, Richard J; Guppy, Chris N; Rabbi, Sheikh M R; Young, Iain M
2017-01-01
The objective of this study was to develop a flexible and free image processing and analysis solution, based on the public-domain ImageJ platform, for the segmentation and analysis of complex biological plant root systems in soil from X-ray tomography 3D images. Contrasting root architectures from wheat, barley and chickpea root systems were grown in soil and scanned using a high-resolution micro-tomography system. A macro (Root1) was developed that reliably identified complex root systems with good to high accuracy (10% overestimation for chickpea, 1% underestimation for wheat, 8% underestimation for barley) and provided analysis of root length and angle. In-built flexibility allowed the user to (a) amend any aspect of the macro to account for specific user preferences, and (b) take account of computational limitations of the platform. The platform is free, flexible and accurate in analysing root system metrics.
NASA Astrophysics Data System (ADS)
Niebuhr, Cole
2018-04-01
Papers published in the astronomical community, particularly in the field of double star research, often contain plots that display the positions of the component stars relative to each other on a Cartesian coordinate plane. Because of the complexity of projecting a three-dimensional orbit onto a two-dimensional image, it is often difficult to include an accurate reproduction of the orbit for comparison purposes. Methods to circumvent this obstacle do exist; however, many of these protocols produce low-quality blurred images or require specific and often expensive software. Here, a method using Microsoft Paint and Microsoft Excel is reported that produces high-quality images with an accurate reproduction of a partial orbit.
NASA Astrophysics Data System (ADS)
Roeth, O.; Zaum, D.; Brenner, C.
2017-05-01
Highly automated driving (HAD) requires maps not only of high spatial precision but also of yet unprecedented actuality. Traditionally, small, highly specialized fleets of measurement vehicles are used to generate such maps. Nevertheless, to achieve city-wide or even nation-wide coverage, automated map-update mechanisms based on very large vehicle fleet data gain importance, since highly frequent measurements can only be obtained with such an approach. Furthermore, the processing of imprecise mass data, in contrast to a few dedicated highly accurate measurements, calls for a high degree of automation. We present a method for the generation of lane-accurate road network maps from vehicle trajectory data (GPS or better). Our approach therefore allows today's connected vehicle fleets to be exploited for the generation of HAD maps. The presented algorithm is based on elementary building blocks, which guarantees useful lane models, and uses a Reversible Jump Markov chain Monte Carlo method to explore the model parameters in order to reconstruct the one most likely to have emitted the input data. The approach is applied to a challenging urban real-world scenario with different trajectory accuracy levels and is evaluated against a LIDAR-based ground-truth map.
Ojanperä, Ilkka; Kolmonen, Marjo; Pelander, Anna
2012-05-01
Clinical and forensic toxicology and doping control deal with hundreds or thousands of drugs that may cause poisoning or are abused, are illicit, or are prohibited in sports. Rapid and reliable screening for all these compounds of different chemical and pharmaceutical nature, preferably in a single analytical method, is a substantial effort for analytical toxicologists. Combined chromatography-mass spectrometry techniques with standardised reference libraries have been most commonly used for the purpose. In the last ten years, the focus has shifted from gas chromatography-mass spectrometry to liquid chromatography-mass spectrometry, because of progress in instrument technology and partly because of the polarity and low volatility of many new relevant substances. High-resolution mass spectrometry (HRMS), which enables accurate mass measurement at high resolving power, has recently evolved to a stage at which it is rapidly causing a shift away from unit-resolution, quadrupole-dominated instrumentation. The main HRMS techniques today are time-of-flight mass spectrometry and Orbitrap Fourier-transform mass spectrometry. Both techniques enable a range of different drug-screening strategies that essentially rely on measuring a compound's or a fragment's mass with sufficiently high accuracy that its elemental composition can be determined directly. Accurate mass and isotopic pattern act as a filter for confirming the identity of a compound or even identifying an unknown. High mass resolution is essential for improving confidence in accurate mass results in the analysis of complex biological samples. This review discusses recent applications of HRMS in analytical toxicology.
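The accurate-mass filtering idea can be sketched as follows (the monoisotopic masses are standard values; the candidate formulas and measured mass are illustrative):

```python
# Hedged sketch of accurate-mass filtering: a measured mass is matched
# against candidate elemental compositions within a ppm tolerance.
# Monoisotopic masses below are standard values.
MONO = {'C': 12.0, 'H': 1.00782503, 'N': 14.00307401, 'O': 15.99491462}

def monoisotopic_mass(formula):
    """Theoretical monoisotopic mass; formula as {'C': 8, 'H': 10, ...}."""
    return sum(MONO[el] * n for el, n in formula.items())

def matches(measured, formula, tol_ppm=5.0):
    """True when the measured mass agrees with the formula within tol_ppm."""
    theo = monoisotopic_mass(formula)
    return abs(measured - theo) / theo * 1e6 <= tol_ppm

caffeine = {'C': 8, 'H': 10, 'N': 4, 'O': 2}          # illustrative candidate
other = {'C': 7, 'H': 8, 'N': 4, 'O': 2}              # competing candidate
measured = 194.0804                                    # hypothetical neutral mass
print(matches(measured, caffeine), matches(measured, other))
```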
Localization of optic disc and fovea in retinal images using intensity based line scanning analysis.
Kamble, Ravi; Kokare, Manesh; Deshmukh, Girish; Hussin, Fawnizu Azmadi; Mériaudeau, Fabrice
2017-08-01
Accurate detection of diabetic retinopathy (DR) mainly depends on identification of retinal landmarks such as the optic disc and fovea. Present methods suffer from challenges such as limited accuracy and high computational complexity. To address this issue, this paper presents a novel approach for fast and accurate localization of the optic disc (OD) and fovea using one-dimensional scanned intensity profile analysis. The proposed method utilizes both time and frequency domain information effectively for localization of the OD. The final OD center is located using signal peak-valley detection in the time domain and discontinuity detection in the frequency domain. The fovea center is then located using signal valley analysis guided by the detected OD location. Experiments were conducted on the MESSIDOR dataset, where the OD was successfully located in 1197 out of 1200 images (99.75%) and the fovea in 1196 out of 1200 images (99.66%), with an average computation time of 0.52 s. A large-scale evaluation has been carried out extensively on nine publicly available databases. The proposed method is highly efficient at quickly and accurately localizing the OD and fovea together compared with other state-of-the-art methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
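The core of the intensity-profile idea can be sketched on a synthetic scan line (the profile below is artificial, and the paper's frequency-domain discontinuity analysis is not reproduced): along a horizontal line through the fundus image, the optic disc appears as a bright peak and the fovea as a dark valley.

```python
import numpy as np

# Hedged sketch: a synthetic 1-D intensity profile with a bright bump
# (optic disc) and a dark dip (fovea) on a flat background.
x = np.arange(200)
profile = (100
           + 60 * np.exp(-((x - 50) ** 2) / (2 * 8 ** 2))     # bright disc
           - 40 * np.exp(-((x - 140) ** 2) / (2 * 10 ** 2)))  # dark fovea

disc_pos = int(np.argmax(profile))    # strongest peak -> candidate OD centre
fovea_pos = int(np.argmin(profile))   # deepest valley -> candidate fovea
print(disc_pos, fovea_pos)
```

The published method scans many such lines in both directions and combines time- and frequency-domain evidence before committing to the final centres.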
Accurate and efficient seismic data interpolation in the principal frequency wavenumber domain
NASA Astrophysics Data System (ADS)
Wang, Benfeng; Lu, Wenkai
2017-12-01
Seismic data irregularity, caused by economic limitations, acquisition environmental constraints or bad-trace elimination, can degrade the performance of downstream multi-channel algorithms such as surface-related multiple elimination (SRME), though some of these can overcome irregularity defects. Accurate interpolation to provide the necessary complete data is therefore a prerequisite, but its wide application is constrained by the large computational burden for huge data volumes, especially in 3D exploration. For accurate and efficient interpolation, the curvelet transform (CT)-based projection onto convex sets (POCS) method in the principal frequency wavenumber (PFK) domain is introduced. The complex-valued principal frequency (PF) components characterize their original signal with high accuracy while being roughly half its size, which provides a reasonable efficiency improvement. The irregularity of the observed data is transformed into incoherent noise in the PFK domain, and the curvelet coefficients may be sparser when the CT is performed on PFK-domain data, enhancing interpolation accuracy. The performance of the POCS-based algorithms using the complex-valued CT in the time-space (TX), principal frequency space, and PFK domains is compared. Numerical examples on synthetic and field data demonstrate the validity and effectiveness of the proposed method. With less computational burden, the proposed method achieves a better interpolation result, and it can easily be extended to higher dimensions.
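The POCS iteration itself can be sketched in one dimension, using plain FFT thresholding as a hedged stand-in for the curvelet transform (the signal and sampling mask are synthetic): each iteration sparsifies the current estimate in the transform domain with a decreasing threshold, then re-inserts the known traces.

```python
import numpy as np

# Hedged 1-D POCS sketch with an FFT sparsity operator standing in for
# the curvelet transform used in the paper.
rng = np.random.default_rng(1)
n = 128
t = np.arange(n)
signal = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)

mask = rng.random(n) > 0.4          # ~60% of traces observed at random
observed = signal * mask

x = observed.copy()
for it in range(100):
    coeff = np.fft.fft(x)
    thresh = 0.9 ** it * np.abs(coeff).max()   # exponentially decreasing threshold
    coeff[np.abs(coeff) < thresh] = 0          # projection onto the sparse set
    x = np.real(np.fft.ifft(coeff))
    x[mask] = signal[mask]                     # projection onto the data-fit set

err = float(np.linalg.norm(x - signal) / np.linalg.norm(signal))
print(err)
```

In the paper this loop runs with curvelet coefficients of PFK-domain data, where the missing-trace artifacts appear as incoherent noise that the thresholding suppresses.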
Water Planetary and Cometary Atmospheres: H2O/HDO Transmittance and Fluorescence Models
NASA Technical Reports Server (NTRS)
Villanueva, G. L.; Mumma, M. J.; Bonev, B. P.; Novak, R. E.; Barber, R. J.; DiSanti, M. A.
2012-01-01
We developed a modern methodology to retrieve water (H2O) and deuterated water (HDO) in planetary and cometary atmospheres, and constructed an accurate spectral database that combines theoretical and empirical results. Based on a greatly expanded set of spectroscopic parameters, we built a full non-resonance cascade fluorescence model and computed fluorescence efficiencies for H2O (500 million lines) and HDO (700 million lines). The new line list was also integrated into an advanced terrestrial radiative transfer code (LBLRTM) and adapted to the CO2-rich atmosphere of Mars, for which we adopted the complex Robert-Bonamy formalism for line shapes. We then retrieved water and D/H in the atmospheres of Mars, comet C/2007 W1, and Earth by applying the new formalism to spectra obtained with the high-resolution spectrograph NIRSPEC/Keck II atop Mauna Kea (Hawaii). The new model accurately describes the complex morphology of the water bands and greatly increases the accuracy of the retrieved abundances (and the D/H ratio in water) with respect to previously available models. The new model provides improved agreement of predicted and measured intensities for many H2O lines already identified in comets, and it identifies several unassigned cometary emission lines as new emission lines of H2O. The improved spectral accuracy permits retrieval of more accurate rotational temperatures and production rates for cometary water.
Moment expansion for ionospheric range error
NASA Technical Reports Server (NTRS)
Mallinckrodt, A.; Reich, R.; Parker, H.; Berbert, J.
1972-01-01
On a plane earth, the ionospheric or tropospheric range error depends only on the total refractivity content, or zeroth moment, of the refracting layer and on the elevation angle. On a spherical earth, however, the dependence is more complex, so for more accurate results it has been necessary to resort to complex ray-tracing calculations. A simple, high-accuracy alternative to the ray-tracing calculation is presented. By appropriate expansion of the angular dependence in the ray-tracing integral in a power series in height, an expression is obtained for the range error in terms of a simple function of the elevation angle, E, at the expansion height and of the m-th moment of the refractivity (N) distribution about the expansion height. The rapidity of convergence depends heavily on the choice of expansion height. For expansion heights in the neighborhood of the centroid of the layer (300-490 km), the expansion to m = 2 (three terms) gives results accurate to about 0.4% at E = 10 deg. As an analytic tool, the expansion affords some insight into the influence of layer shape on range errors in special problems.
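The plane-earth limit stated above can be sketched directly (the total-content value is an assumed illustrative number): the range error is just the zeroth moment of refractivity, i.e. the zenith delay, divided by the sine of the elevation angle.

```python
import math

# Hedged sketch of the plane-earth limit: range error = I0 / sin(E),
# where I0 is the total refractivity content expressed as metres of
# zenith delay (the value below is illustrative, not from the paper).
def plane_earth_range_error(zenith_delay_m, elevation_deg):
    """Zeroth-moment range error on a flat earth."""
    return zenith_delay_m / math.sin(math.radians(elevation_deg))

I0 = 2.0  # assumed zenith delay in metres
print(plane_earth_range_error(I0, 90.0))  # at zenith: just I0
print(plane_earth_range_error(I0, 30.0))  # 1/sin(30 deg) doubles the error
```

The paper's contribution is the spherical-earth correction to this flat-earth mapping, expressed through the higher moments of the layer.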
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
NASA Technical Reports Server (NTRS)
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
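The SROM idea can be sketched with a toy one-dimensional input (the sample grid and moment-matching criterion are simplified illustrations; practical SROMs also match CDFs and enforce non-negative weights):

```python
import numpy as np

# Hedged SROM sketch: a continuous random input is replaced by a few
# samples with weights chosen so the reduced model matches target
# statistics; each deterministic model run is then weighted accordingly.
samples = np.array([-1.5, -0.5, 0.5, 1.5])      # assumed fixed sample grid
target = np.array([1.0, 0.0, 1.0])              # total prob., mean, variance of N(0,1)

# Least-squares weights matching sum(w)=1, E[x]=0, E[x^2]=1.
A = np.vstack([np.ones_like(samples), samples, samples ** 2])
w, *_ = np.linalg.lstsq(A, target, rcond=None)

# Propagate uncertainty through a deterministic model g(x) = x^2
# using only four deterministic runs.
g = samples ** 2
print(float(w @ g))   # SROM estimate of E[g(X)]
```

The appeal for topology optimization is exactly this structure: the stochastic problem reduces to a handful of independent deterministic model calls, instead of the thousands required by Monte Carlo.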
Dudley, Peter N; Bonazza, Riccardo; Jones, T Todd; Wyneken, Jeanette; Porter, Warren P
2014-01-01
As global temperatures increase throughout the coming decades, species ranges will shift. New combinations of abiotic conditions will make predicting these range shifts difficult. Biophysical mechanistic niche modeling places bounds on an animal's niche by analyzing the animal's physical interactions with the environment, and it is flexible enough to accommodate these new combinations of abiotic conditions. However, this approach is difficult to implement for aquatic species because of complex interactions among thrust, metabolic rate and heat transfer. We use contemporary computational fluid dynamics techniques to overcome these difficulties. We model the complex 3D motion of a swimming neonate and juvenile leatherback sea turtle to find power and heat transfer rates during the stroke. We combine the results from these simulations with a numerical model to accurately predict the core temperature of a swimming leatherback. These results are the first steps in developing a highly accurate mechanistic niche model, which can assist paleontologists in understanding biogeographic shifts as well as aid contemporary species managers in anticipating potential range shifts over the coming decades.
Causality, apparent ``superluminality,'' and reshaping in barrier penetration
NASA Astrophysics Data System (ADS)
Sokolovski, D.
2010-04-01
We consider tunneling of a nonrelativistic particle across a potential barrier. It is shown that the barrier acts as an effective beam splitter which builds up the transmitted pulse from copies of the initial envelope shifted in coordinate space backward relative to free propagation. Although causality is explicitly obeyed along each pathway, in special cases reshaping can result in an overall reduction of the initial envelope, accompanied by an arbitrary coordinate shift. In the case of a high barrier, the delay amplitude distribution (DAD) mimics a Dirac δ function, the transmission amplitude is superoscillatory for finite momenta, and tunneling leads to an accurate advancement of the (reduced) initial envelope by the barrier width. In the case of a wide barrier, the initial envelope is accurately translated into the complex coordinate plane. The complex shift, given by the first moment of the DAD, accounts for both the displacement of the maximum of the transmitted probability density and the increase in its velocity. It is argued that analyzing apparent “superluminality” in terms of spatial displacements helps avoid the contradictions associated with time parameters such as the phase time.
CAMERA: An integrated strategy for compound spectra extraction and annotation of LC/MS data sets
Kuhl, Carsten; Tautenhahn, Ralf; Böttcher, Christoph; Larson, Tony R.; Neumann, Steffen
2013-01-01
Liquid chromatography coupled to mass spectrometry is routinely used for metabolomics experiments. In contrast to the fairly routine and automated data acquisition steps, subsequent compound annotation and identification require extensive manual analysis and thus form a major bottleneck in data interpretation. Here we present CAMERA, a Bioconductor package integrating algorithms to extract compound spectra, annotate isotope and adduct peaks, and propose the accurate compound mass even in highly complex data. To evaluate the algorithms, we compared the annotation of CAMERA against a manually defined annotation for a mixture of known compounds spiked into a complex matrix at different concentrations. CAMERA successfully extracted accurate masses for 89.7% and 90.3% of the annotatable compounds in positive and negative ion mode, respectively. Furthermore, we present a novel annotation approach that combines spectral information of data acquired in opposite ion modes to further improve the annotation rate. We demonstrate the utility of CAMERA in two different, easily adoptable plant metabolomics experiments, where the application of CAMERA drastically reduced the amount of manual analysis. PMID:22111785
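The core idea of isotope-peak annotation can be illustrated with a toy sketch. This is NOT CAMERA's actual algorithm; the function name, the single-charge assumption, and the pairing rule are ours, chosen only to show the m/z-difference test at the heart of such annotation:

```python
def annotate_isotopes(peaks, ppm_tol=10.0):
    """Toy isotope annotation: pair peaks whose m/z values differ by the
    13C-12C mass difference within a ppm tolerance, assuming singly
    charged ions. peaks: list of (mz, intensity) tuples."""
    C13_DELTA = 1.003355            # mass difference 13C - 12C, in Da
    pairs = []
    for i, (mz1, _inten1) in enumerate(peaks):
        for mz2, _inten2 in peaks[i + 1:]:
            expected = mz1 + C13_DELTA
            # ppm deviation between the observed and expected isotope mass
            if abs(mz2 - expected) / expected * 1e6 <= ppm_tol:
                pairs.append((mz1, mz2))
    return pairs
```

A real implementation would additionally check intensity ratios against expected isotope patterns and handle multiple charge states.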
Experimental validation of numerical simulations on a cerebral aneurysm phantom model
Seshadhri, Santhosh; Janiga, Gábor; Skalej, Martin; Thévenin, Dominique
2012-01-01
The treatment of cerebral aneurysms, found in roughly 5% of the population and associated, in case of rupture, with a high mortality rate, is a major challenge for neurosurgery and neuroradiology due to the complexity of the intervention and the resulting high hazard ratio. Improvements are possible but require a better understanding of the associated unsteady blood flow patterns in complex 3D geometries. It would be very useful to carry out such studies using suitable numerical models, provided it is proven that they reproduce the real conditions accurately enough. This validation step is classically based on comparisons with measured data. Since in vivo measurements are extremely difficult and therefore of limited accuracy, complementary model-based investigations considering realistic configurations are essential. In the present study, simulations based on computational fluid dynamics (CFD) have been compared with in situ laser-Doppler velocimetry (LDV) measurements in the phantom model of a cerebral aneurysm. The employed 1:1 model is made from transparent silicone. A liquid mixture composed of water, glycerin, xanthan gum and sodium chloride has been specifically adapted for the present investigation. It shows physical flow properties similar to real blood and leads to a refraction index perfectly matched to that of the silicone model, allowing accurate optical measurements of the flow velocity. For both experiments and simulations, complex pulsatile flow waveforms and flow rates were accounted for. This finally allows a direct, quantitative comparison between measurements and simulations. In this manner, the accuracy of the employed computational model can be checked. PMID:24265876
Architecture and Flexibility of the Yeast Ndc80 Kinetochore Complex
Wang, Hong-Wei; Long, Sydney; Ciferri, Claudio; Westermann, Stefan; Drubin, David; Barnes, Georjana; Nogales, Eva
2008-01-01
Kinetochores mediate microtubule–chromosome attachment and ensure accurate segregation of sister chromatids. The highly conserved Ndc80 kinetochore complex makes direct contacts with the microtubule and is essential for spindle checkpoint signaling. It contains a long coiled-coil region with globular domains at each end involved in kinetochore localization and microtubule binding, respectively. We have directly visualized the architecture of the yeast Ndc80 complex and found a dramatic kink within the 560-Å coiled-coil rod located about 160 Å from the larger globular head. Comparison of our electron microscopy images to the structure of the human Ndc80 complex allowed us to position the kink proximal to the microtubule-binding end and to define the conformational range of the complex. The position of the kink coincides with a coiled-coil breaking region conserved across eukaryotes. We hypothesize that the kink in Ndc80 is essential for correct kinetochore geometry and could be part of a tension-sensing mechanism at the kinetochore. PMID:18793650
Reynolds, Andrew M.; Lihoreau, Mathieu; Chittka, Lars
2013-01-01
Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model accurately predicts 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules are highly individually different; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform to generate novel hypotheses and refine our understanding about how small-brained insects develop a representation of space and use it to navigate in complex and dynamic environments. PMID:23505353
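The iterative-improvement idea behind such traplining models can be sketched minimally. This is an illustration inspired by, not identical to, the published model (the 2-opt-style variation, the nest at the origin, and all names are our assumptions): the bee keeps a current route, tries a small random variation each foraging bout, and adopts it only if the total travel distance decreases.

```python
import math
import random

def trapline_learning(flowers, bouts=2000, seed=1):
    """Iterative-improvement traplining sketch: improve a nest-to-nest route
    through all flowers by accepting only distance-reducing variations."""
    rng = random.Random(seed)
    nest = (0.0, 0.0)

    def length(route):
        pts = [nest] + [flowers[i] for i in route] + [nest]
        return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

    route = list(range(len(flowers)))
    rng.shuffle(route)
    for _ in range(bouts):
        trial = route[:]
        i, j = sorted(rng.sample(range(len(trial)), 2))
        trial[i:j + 1] = reversed(trial[i:j + 1])   # 2-opt style segment reversal
        if length(trial) < length(route):           # keep only improvements
            route = trial
    return route, length(route)
```

With enough bouts, the accepted route converges to a stable short circuit, mirroring the gradual stabilization of bee traplines.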
NASA Astrophysics Data System (ADS)
Belov, Arseniy M.; Viner, Rosa; Santos, Marcia R.; Horn, David M.; Bern, Marshall; Karger, Barry L.; Ivanov, Alexander R.
2017-12-01
Native mass spectrometry (MS) is a rapidly advancing field in the analysis of proteins, protein complexes, and macromolecular species of various types. The majority of native MS experiments reported to date have been conducted using direct infusion of purified analytes into a mass spectrometer. In this study, capillary zone electrophoresis (CZE) was coupled online to Orbitrap mass spectrometers using a commercial sheathless interface to enable high-performance separation, identification, and structural characterization of limited amounts of purified proteins and protein complexes, the latter with preserved non-covalent associations under native conditions. The performance of both bare-fused silica and polyacrylamide-coated capillaries was assessed using mixtures of protein standards known to form non-covalent protein-protein and protein-ligand complexes. High-efficiency separation of native complexes is demonstrated using both capillary types, while the polyacrylamide neutral-coated capillary showed better reproducibility and higher efficiency for more complex samples. The platform was then evaluated for the determination of monoclonal antibody aggregation and for analysis of proteomes of limited complexity using a ribosomal isolate from E. coli. Native CZE-MS, using accurate single stage and tandem-MS measurements, enabled identification of proteoforms and non-covalent complexes at femtomole levels. This study demonstrates that native CZE-MS can serve as an orthogonal and complementary technique to conventional native MS methodologies with the advantages of low sample consumption, minimal sample processing and losses, and high throughput and sensitivity. This study presents a novel platform for analysis of ribosomes and other macromolecular complexes and organelles, with the potential for discovery of novel structural features defining cellular phenotypes (e.g., specialized ribosomes).
Improved Phase Characterization of Far-Regional Body Wave Arrivals in Central Asia
2008-09-30
developing array-based methods that can more accurately characterize far-regional (14°-29°) seismic wavefield structure. Far-regional (14°-29°) seismograms...arrivals with the primary arrivals. These complexities can be region and earthquake specific. The regional seismic arrays that have been built in the last...fifteen years should be a rich data source for the study of far-regional phase behavior. The arrays are composed of high-quality borehole seismometers
System-level simulation of liquid filling in microfluidic chips.
Song, Hongjun; Wang, Yi; Pant, Kapil
2011-06-01
Liquid filling in microfluidic channels is a complex process that depends on a variety of geometric, operating, and material parameters such as microchannel geometry, flow velocity/pressure, liquid surface tension, and contact angle of the channel surface. Accurate analysis of the filling process can provide key insights into the filling time, air bubble trapping, and dead zone formation, help evaluate trade-offs among the various design parameters, and lead to optimal chip design. However, efficient modeling of liquid filling in complex microfluidic networks continues to be a significant challenge. High-fidelity computational methods, such as the volume of fluid method, are prohibitively expensive from a computational standpoint. Analytical models, on the other hand, are primarily applicable to idealized geometries and, hence, are unable to accurately capture chip-level behavior of complex microfluidic systems. This paper presents a parametrized dynamic model for the system-level analysis of liquid filling in three-dimensional (3D) microfluidic networks. In our approach, a complex microfluidic network is deconstructed into a set of commonly used components, such as reservoirs, microchannels, and junctions. The components are then assembled according to their spatial layout and operating rationale to achieve a rapid system-level model. A dynamic model based on the transient momentum equation is developed to track the liquid front in the microchannels. The principle of mass conservation at the junction is used to link the fluidic parameters in the microchannels emanating from the junction. Assembly of these component models yields a set of differential and algebraic equations, which upon integration provides temporal information of the liquid filling process, particularly liquid front propagation (i.e., the arrival time).
The models are used to simulate the transient liquid filling process in a variety of microfluidic constructs and in a multiplexer, representing a complex microfluidic network. The accuracy (relative error less than 7%) and orders-of-magnitude speedup (30,000× to 4,000,000×) of our system-level models are verified by comparison against 3D high-fidelity numerical studies. Our findings clearly establish the utility of our models and simulation methodology for fast, reliable analysis of liquid filling to guide the design optimization of complex microfluidic networks.
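The front-tracking idea for a single channel can be sketched with a strongly simplified reduced-order model. This is our illustration, not the paper's component model: inertia is neglected (Washburn-type balance of capillary plus applied pressure against viscous drag over the already-filled length), the cross-section is rectangular, and all parameter names are assumptions.

```python
import math

def fill_time(L, w, h, mu, sigma, theta_deg, p_applied=0.0, dt=1e-6):
    """Forward-Euler tracking of the liquid front in one rectangular
    microchannel; returns the time for the front to reach length L (SI units)."""
    theta = math.radians(theta_deg)
    # Young-Laplace capillary pressure for a rectangular cross-section
    p_cap = 2.0 * sigma * math.cos(theta) * (1.0 / w + 1.0 / h)
    x, t = 1e-6, 0.0                     # seed the front just inside the inlet
    while x < L:
        # Poiseuille-type viscous resistance grows with the filled length x
        dxdt = (p_cap + p_applied) * h ** 2 / (12.0 * mu * x)
        x += dxdt * dt
        t += dt
    return t
```

A network-level model would couple many such channel equations through mass conservation at each junction, as the abstract describes.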
Microfluidics to Mimic Blood Flow in Health and Disease
NASA Astrophysics Data System (ADS)
Sebastian, Bernhard; Dittrich, Petra S.
2018-01-01
Throughout history, capillary systems have aided the establishment of the fundamental laws of blood flow and its non-Newtonian properties. The advent of microfluidics technology in the 1990s propelled the development of highly integrated lab-on-a-chip platforms that allow highly accurate replication of vascular systems' dimensions, mechanical properties, and biological complexity. Applications include the detection of pathological changes to red blood cells, white blood cells, and platelets at unparalleled sensitivity and the efficacy assessment of drug treatment. Recent efforts have aimed at the development of microfluidics-based tests usable in a clinical environment or the replication of more complex diseases such as thrombosis. These microfluidic disease models enable the study of onset and progression of disease as well as the identification of key players and risk factors, which have led to a spectrum of clinically relevant findings.
NASA Astrophysics Data System (ADS)
Ismail, I.; Guillemin, R.; Marchenko, T.; Travnikova, O.; Ablett, J. M.; Rueff, J.-P.; Piancastelli, M.-N.; Simon, M.; Journel, L.
2018-06-01
A new setup has been designed and built to study organometallic complexes in the gas phase at third-generation synchrotron radiation sources. This setup consists of a new homemade computer-controlled gas cell that allows us to sublimate solid samples by accurately controlling the temperature. This cell has been developed to be part of the high-resolution X-ray emission spectrometer permanently installed at the GALAXIES beamline of the French National Synchrotron Facility SOLEIL. To illustrate the capabilities of the setup, the cell has been successfully used to record high-resolution Kα emission spectra of gas-phase ferrocene Fe(C5H5)2 and to characterize their dependence on the excitation energy. This will allow resonant X-ray emission to be extended to other organometallic molecules.
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2010-01-01
Cell-centered and node-centered approaches have been compared for unstructured finite-volume discretization of inviscid fluxes. The grids range from regular grids to irregular grids, including mixed-element grids and grids with random perturbations of nodes. Accuracy, complexity, and convergence rates of defect-correction iterations are studied for eight nominally second-order accurate schemes: two node-centered schemes with weighted and unweighted least-squares (LSQ) methods for gradient reconstruction, and six cell-centered schemes, namely two node-averaging schemes (with and without clipping) and four schemes that employ different stencils for LSQ gradient reconstruction. The cell-centered nearest-neighbor (CC-NN) scheme has the lowest complexity; a version of the scheme that involves smart augmentation of the LSQ stencil (CC-SA) has only a marginal complexity increase. All other schemes have larger complexity; the complexity of node-centered (NC) schemes is somewhat lower than that of the cell-centered node-averaging (CC-NA) and full-augmentation (CC-FA) schemes. On highly anisotropic grids typical of those encountered in grid adaptation, discretization errors of five of the six cell-centered schemes converge with second order on all tested grids; the CC-NA scheme with clipping degrades solution accuracy to first order. The NC schemes converge with second order on regular and/or triangular grids and with first order on perturbed quadrilaterals and mixed-element grids. All schemes may produce large relative errors in gradient reconstruction on grids with perturbed nodes. Defect-correction iterations for schemes employing weighted least-squares gradient reconstruction diverge on perturbed stretched grids. Overall, the CC-NN and CC-SA schemes offer the best options of the lowest complexity and second-order discretization errors.
On anisotropic grids over a curved body typical of turbulent flow simulations, the discretization errors converge with second order and are small for the CC-NN, CC-SA, and CC-FA schemes on all grids and for NC schemes on triangular grids; the discretization errors of the CC-NA scheme without clipping do not converge on irregular grids. Accurate gradient reconstruction can be achieved by introducing a local approximate mapping; without approximate mapping, only the NC scheme with weighted LSQ method provides accurate gradients. Defect-correction iterations for the CC-NA scheme without clipping diverge; for the NC scheme with weighted LSQ method, the iterations either diverge or converge very slowly. The best option in curved geometries is the CC-SA scheme, which offers low complexity, second-order discretization errors, and fast convergence.
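The LSQ gradient reconstruction the abstract repeatedly refers to reduces, in 2D, to solving small normal equations per cell. The following sketch (our own minimal version, not NASA's implementation; names and the inverse-distance weight are assumptions) recovers the gradient of the solution from neighbor values:

```python
def lsq_gradient(xc, uc, neighbors, weighted=True):
    """Least-squares gradient reconstruction at one cell in 2D.
    xc: cell-center (x, y); uc: solution value there;
    neighbors: list of ((x, y), u) for stencil cells.
    Solves the 2x2 normal equations for (du/dx, du/dy)."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (xn, yn), un in neighbors:
        dx, dy, du = xn - xc[0], yn - xc[1], un - uc
        # Inverse-distance-squared weighting emphasizes nearby cells
        w = 1.0 / (dx * dx + dy * dy) if weighted else 1.0
        a11 += w * dx * dx
        a12 += w * dx * dy
        a22 += w * dy * dy
        b1 += w * dx * du
        b2 += w * dy * du
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

For an exactly linear field, both weighted and unweighted variants recover the gradient exactly; the differences the paper studies appear on perturbed and highly anisotropic stencils.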
Efficient and accurate approach to modeling the microstructure and defect properties of LaCoO3
NASA Astrophysics Data System (ADS)
Buckeridge, J.; Taylor, F. H.; Catlow, C. R. A.
2016-04-01
Complex perovskite oxides are promising materials for cathode layers in solid oxide fuel cells. Such materials have intricate electronic, magnetic, and crystalline structures that prove challenging to model accurately. We analyze a wide range of standard density functional theory approaches to modeling a highly promising system, the perovskite LaCoO3, focusing on optimizing the Hubbard U parameter to treat the self-interaction of the B-site cation's d states, in order to determine the most appropriate method to study defect formation and the effect of spin on local structure. By calculating structural and electronic properties for different magnetic states we determine that U = 4 eV for Co in LaCoO3 agrees best with available experiments. We demonstrate that the generalized gradient approximation (PBEsol+U) is most appropriate for studying structure versus spin state, while the local density approximation (LDA+U) is most appropriate for determining accurate energetics for defect properties.
Automated selected reaction monitoring software for accurate label-free protein quantification.
Teleman, Johan; Karlsson, Christofer; Waldemarson, Sofia; Hansson, Karin; James, Peter; Malmström, Johan; Levander, Fredrik
2012-07-06
Selected reaction monitoring (SRM) is a mass spectrometry method with documented ability to quantify proteins accurately and reproducibly using labeled reference peptides. However, the use of labeled reference peptides becomes impractical if large numbers of peptides are targeted and when high flexibility is desired when selecting peptides. We have developed a label-free quantitative SRM workflow that relies on a new automated algorithm, Anubis, for accurate peak detection. Anubis efficiently removes interfering signals from contaminating peptides to estimate the true signal of the targeted peptides. We evaluated the algorithm on a published multisite data set and achieved results in line with manual data analysis. In complex peptide mixtures from whole proteome digests of Streptococcus pyogenes we achieved a technical variability across the entire proteome abundance range of 6.5-19.2%, which was considerably below the total variation across biological samples. Our results show that the label-free SRM workflow with automated data analysis is feasible for large-scale biological studies, opening up new possibilities for quantitative proteomics and systems biology.
Swanson, Jon; Audie, Joseph
2018-01-01
A fundamental and unsolved problem in biophysical chemistry is the development of a computationally simple, physically intuitive, and generally applicable method for accurately predicting and physically explaining protein-protein binding affinities from protein-protein interaction (PPI) complex coordinates. Here, we propose that the simplification of a previously described six-term PPI scoring function to a four-term function results in a simple expression of all physically and statistically meaningful terms that can be used to accurately predict and explain binding affinities for a well-defined subset of PPIs that are characterized by (1) crystallographic coordinates, (2) rigid-body association, (3) normal interface size, hydrophobicity, and hydrophilicity, and (4) high quality experimental binding affinity measurements. We further propose that the four-term scoring function could be regarded as a core expression for future development into a more general PPI scoring function. Our work has clear implications for PPI modeling and structure-based drug design.
NASA Astrophysics Data System (ADS)
Hong, Pengyu; Sun, Hui; Sha, Long; Pu, Yi; Khatri, Kshitij; Yu, Xiang; Tang, Yang; Lin, Cheng
2017-08-01
A major challenge in glycomics is the characterization of complex glycan structures that are essential for understanding their diverse roles in many biological processes. We present a novel efficient computational approach, named GlycoDeNovo, for accurate elucidation of glycan topologies from their tandem mass spectra. Given a spectrum, GlycoDeNovo first builds an interpretation-graph specifying how to interpret each peak using preceding interpreted peaks. It then reconstructs the topologies of peaks that contribute to interpreting the precursor ion. We theoretically prove that GlycoDeNovo is highly efficient. A major innovative feature added to GlycoDeNovo is a data-driven IonClassifier which can be used to effectively rank candidate topologies. IonClassifier is automatically learned from experimental spectra of known glycans to distinguish B- and C-type ions from all other ion types. Our results showed that GlycoDeNovo is robust and accurate for topology reconstruction of glycans from their tandem mass spectra.
Small-time Scale Network Traffic Prediction Based on Complex-valued Neural Network
NASA Astrophysics Data System (ADS)
Yang, Bin
2017-07-01
Accurate models play an important role in capturing the significant characteristics of network traffic, analyzing network dynamics, and improving forecasting accuracy for system dynamics. In this study, a complex-valued neural network (CVNN) model is proposed to further improve the accuracy of small-time scale network traffic forecasting. An artificial bee colony (ABC) algorithm is proposed to optimize the complex-valued and real-valued parameters of the CVNN model. Small-time scale traffic measurement data, namely TCP traffic data, are used to test the performance of the CVNN model. Experimental results reveal that the CVNN model forecasts the small-time scale network traffic measurement data very accurately.
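The distinguishing ingredient of a CVNN is that weights, biases, and pre-activations are complex numbers. The sketch below (our minimal illustration, not the paper's architecture; the 'split' tanh activation is one common but assumed choice) shows a single-layer forward pass:

```python
import math

def cvnn_forward(x, W, b):
    """Forward pass of one complex-valued neural network (CVNN) layer.
    x: list of complex inputs; W: rows of complex weights; b: complex biases.
    A 'split' activation applies tanh separately to real and imaginary parts."""
    out = []
    for w_row, bias in zip(W, b):
        z = sum(wi * xi for wi, xi in zip(w_row, x)) + bias  # complex pre-activation
        out.append(complex(math.tanh(z.real), math.tanh(z.imag)))
    return out
```

In the paper's setting, a derivative-free optimizer such as ABC would search over the complex and real parameters to minimize forecasting error, which is convenient because the split activation need not be holomorphic.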
NASA Astrophysics Data System (ADS)
Stampanoni, M.; Reichold, J.; Weber, B.; Haberthür, D.; Schittny, J.; Eller, J.; Büchi, F. N.; Marone, F.
2010-09-01
Nowadays, thanks to the high brilliance available at modern, third generation synchrotron facilities and recent developments in detector technology, it is possible to record volumetric information at the micrometer scale within a few minutes. High signal-to-noise ratio, quantitative information on very complex structures like the brain microvessel architecture, lung airways or fuel cells can be obtained thanks to the combination of dedicated sample preparation protocols, in-situ acquisition schemes and cutting-edge imaging analysis instruments. In this work we report on recent experiments carried out at the TOMCAT beamline of the Swiss Light Source [1] where synchrotron-based tomographic microscopy has been successfully used to obtain fundamental information on preliminary models for cerebral fluid flow [2], to provide an accurate mesh for 3D finite-element simulation of the alveolar structure of the pulmonary acinus [3] and to investigate the complex functional mechanism of fuel cells [4]. Further, we introduce preliminary results on the combination of absorption and phase contrast microscopy for the visualization of high-Z nanoparticles in soft tissues, fundamental information when designing modern drug delivery systems [5]. As an outlook we briefly discuss the new possibilities offered by high sensitivity, high resolution grating interferometry as well as Zernike phase contrast nanotomography [6].
Highly sensitive pseudo-differential ac-nanocalorimeter for the study of the glass transition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laarraj, Mohcine; University Grenoble Alpes, Institut NEEL, F-38042 Grenoble; Laboratoire d’Ingénierie et des Matériaux
2015-11-15
We present a nanocalorimeter designed for the measurement of the dynamic heat capacity of thin films. The microfabricated sensor, the thermal conditioning of the sensor, as well as the highly stable and low noise electronic chain allow measurements of the real and imaginary parts of the complex specific heat with a resolution ΔC/C of about 10⁻⁵. The performance of this quasi-differential nanocalorimeter was tested on a model polymeric glass-former, polyvinyl acetate (PVAc). The high stability and low noise of the device are essential for accurate studies of non-equilibrium slow relaxing systems such as glasses.
An accurate, fast, and scalable solver for high-frequency wave propagation
NASA Astrophysics Data System (ADS)
Zepeda-Núñez, L.; Taus, M.; Hewett, R.; Demanet, L.
2017-12-01
In many science and engineering applications, solving time-harmonic high-frequency wave propagation problems quickly and accurately is of paramount importance. For example, in geophysics, particularly in oil exploration, such problems can be the forward problem in an iterative process for solving the inverse problem of subsurface inversion. It is important to solve these wave propagation problems accurately in order to efficiently obtain meaningful solutions of the inverse problems: low order forward modeling can hinder convergence. Additionally, due to the volume of data and the iterative nature of most optimization algorithms, the forward problem must be solved many times. Therefore, a fast solver is necessary to make solving the inverse problem feasible. For time-harmonic high-frequency wave propagation, obtaining both speed and accuracy is historically challenging. Recently, there have been many advances in the development of fast solvers for such problems, including methods which have linear complexity with respect to the number of degrees of freedom. While most methods scale optimally only in the context of low-order discretizations and smooth wave speed distributions, the method of polarized traces has been shown to retain optimal scaling for high-order discretizations, such as hybridizable discontinuous Galerkin methods, and for highly heterogeneous (and even discontinuous) wave speeds. The resulting fast and accurate solver is consequently highly attractive for geophysical applications. To date, this method relies on a layered domain decomposition together with a preconditioner applied in a sweeping fashion, which has limited straightforward parallelization. In this work, we introduce a new version of the method of polarized traces which reveals more parallel structure than previous versions while preserving all of its other advantages.
We achieve this by further decomposing each layer and applying the preconditioner to these new components separately and in parallel. We demonstrate that this produces an even more effective and parallelizable preconditioner for a single right-hand side. As before, additional speed can be gained by pipelining several right-hand-sides.
NASA Astrophysics Data System (ADS)
Fan, Qiang; Huang, Zhenyu; Zhang, Bing; Chen, Dayue
2013-02-01
Properties of discontinuities, such as bolt joints and cracks in waveguide structures, are difficult to evaluate by either analytical or numerical methods due to the complexity and uncertainty of the discontinuities. In this paper, the discontinuity in a Timoshenko beam is modeled with high-order parameters and these parameters are then identified by using reflection coefficients at the discontinuity. The high-order model is composed of several one-order sub-models in series, and each sub-model consists of inertia, stiffness and damping components in parallel. The order of the discontinuity model is determined based on the characteristics of the reflection coefficient curve and the accuracy requirement of the dynamic modeling. The model parameters are identified through a least-squares fitting iteration method, in which the undetermined model parameters are updated iteratively to fit the dynamic reflection coefficient curve with the wave-based one. By using the spectral super-element method (SSEM), simulation cases, including one-order discontinuities on infinite and finite beams and a two-order discontinuity on an infinite beam, were employed to evaluate both the accuracy of the discontinuity model and the effectiveness of the identification method. For practical considerations, effects of measurement noise on the discontinuity parameter identification are investigated by adding different levels of noise to the simulated data. The simulation results were then validated by the corresponding experiments. Both the simulation and experimental results show that (1) the one-order discontinuities can be identified accurately with maximum errors of 6.8% and 8.7%, respectively; (2) the high-order discontinuities can be identified with maximum errors of 15.8% and 16.2%, respectively; and (3) the high-order model can predict the complex discontinuity much more accurately than the one-order discontinuity model.
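The least-squares fitting iteration at the core of such identification can be illustrated generically with a Gauss-Newton update. This stand-in example fits a two-parameter model y = a*exp(b*t) rather than the paper's reflection-coefficient model; the function, model, and starting values are all our assumptions, chosen only to show the iterative parameter-update step:

```python
import math

def gauss_newton_fit(ts, ys, a0, b0, iters=50):
    """Gauss-Newton least-squares iteration for y = a*exp(b*t)."""
    a, b = a0, b0
    for _ in range(iters):
        j11 = j12 = j22 = g1 = g2 = 0.0
        for t, y in zip(ts, ys):
            f = a * math.exp(b * t)
            r = y - f                       # residual at current parameters
            da = math.exp(b * t)            # df/da
            db = a * t * math.exp(b * t)    # df/db
            j11 += da * da; j12 += da * db; j22 += db * db
            g1 += da * r;   g2 += db * r
        det = j11 * j22 - j12 * j12
        if abs(det) < 1e-30:
            break
        # Solve the 2x2 normal equations for the parameter update
        a += (j22 * g1 - j12 * g2) / det
        b += (j11 * g2 - j12 * g1) / det
    return a, b
```

In the paper's setting, the "parameters" are the inertia, stiffness and damping values of each sub-model, and the "data" are the measured reflection coefficients across frequency.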
Complex optimization for big computational and experimental neutron datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Feng; Oak Ridge National Lab.; Archibald, Richard
2016-11-07
Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of fitting big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase the solution accuracy of the optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.
Unger, Bertram J; Kraut, Jay; Rhodes, Charlotte; Hochman, Jordan
2014-01-01
Physical models of complex bony structures can be used for surgical skills training. Current models focus on surface rendering but suffer from a lack of internal accuracy due to limitations in the manufacturing process. We describe a technique for generating internally accurate rapid-prototyped anatomical models with solid and hollow structures from clinical and microCT data using a 3D printer. In a face validation experiment, otolaryngology residents drilled a cadaveric bone and its corresponding printed model. The printed bone models were deemed highly realistic representations across all measured parameters and the educational value of the models was strongly appreciated.
NASA Astrophysics Data System (ADS)
Kozak, J.; Gulbinowicz, D.; Gulbinowicz, Z.
2009-05-01
The need for complex and accurate three dimensional (3-D) microcomponents is increasing rapidly for many industrial and consumer products. Electrochemical machining process (ECM) has the potential of generating desired crack-free and stress-free surfaces of microcomponents. This paper reports a study of pulse electrochemical micromachining (PECMM) using ultrashort (nanoseconds) pulses for generating complex 3-D microstructures of high accuracy. A mathematical model of the microshaping process with taking into consideration unsteady phenomena in electrical double layer has been developed. The software for computer simulation of PECM has been developed and the effects of machining parameters on anodic localization and final shape of machined surface are presented.
Outlier-resilient complexity analysis of heartbeat dynamics
NASA Astrophysics Data System (ADS)
Lo, Men-Tzung; Chang, Yi-Chung; Lin, Chen; Young, Hsu-Wen Vincent; Lin, Yen-Hung; Ho, Yi-Lwun; Peng, Chung-Kang; Hu, Kun
2015-03-01
Complexity in physiological outputs is believed to be a hallmark of healthy physiological control. How to accurately quantify the degree of complexity in physiological signals with outliers remains a major barrier for translating this novel concept of nonlinear dynamic theory to clinical practice. Here we propose a new approach to estimate the complexity in a signal by analyzing the irregularity of the sign time series of its coarse-grained time series at different time scales. Using surrogate data, we show that the method can reliably assess the complexity in noisy data while being highly resilient to outliers. We further apply this method to the analysis of human heartbeat recordings. Without removing any outliers due to ectopic beats, the method is able to detect a degradation of cardiac control in patients with congestive heart failure and a further degradation in critically ill patients whose survival relies on an extracorporeal membrane oxygenator (ECMO). Moreover, the derived complexity measures can predict the mortality of ECMO patients. These results indicate that the proposed method may serve as a promising tool for monitoring cardiac function of patients in clinical settings.
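The core idea — coarse-grain the series, keep only the signs of its increments, and measure the irregularity of the resulting sign patterns — can be sketched as follows. This is a simplified reading of the approach (Shannon entropy of short sign words is an assumed irregularity measure, not necessarily the authors' exact statistic):

```python
import numpy as np
from collections import Counter

def sign_word_entropy(x, scale, m=3):
    """Complexity proxy: entropy of length-m sign patterns of the increments
    of the coarse-grained series at the given scale (simplified sketch)."""
    n = len(x) // scale
    cg = x[:n * scale].reshape(n, scale).mean(axis=1)  # coarse-grain
    signs = np.sign(np.diff(cg)).astype(int)           # sign time series
    words = [tuple(signs[i:i + m]) for i in range(len(signs) - m + 1)]
    p = np.array(list(Counter(words).values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(1)
noise = rng.normal(size=4000)                 # irregular sign patterns
trend = np.sin(np.linspace(0, 20, 4000))      # long monotone runs of signs
print(sign_word_entropy(noise, scale=2) > sign_word_entropy(trend, scale=2))
```

Because only the signs of increments enter the statistic, a single extreme outlier perturbs at most a few sign words, which is the intuition behind the method's resilience.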
Spacecraft Complexity Subfactors and Implications on Future Cost Growth
NASA Technical Reports Server (NTRS)
Leising, Charles J.; Wessen, Randii; Ellyin, Ray; Rosenberg, Leigh; Leising, Adam
2013-01-01
During the last ten years the Jet Propulsion Laboratory has used a set of cost-risk subfactors to independently estimate the magnitude of development risks that may not be covered in the high level cost models employed during early concept development. Within the last several years the Laboratory has also developed a scale of Concept Maturity Levels with associated criteria to quantitatively assess a concept's maturity. This latter effort has been helpful in determining whether a concept is mature enough for accurate costing, but it does not provide any quantitative estimate of cost risk. Unfortunately, today's missions are significantly more complex than when the original cost-risk subfactors were first formulated. Risks associated with complex missions are therefore not being adequately evaluated, future cost growth is being underestimated, and the risk subfactor process needed to be updated.
Shah, Jai L.; Tandon, Neeraj; Keshavan, Matcheri S.
2016-01-01
Aim Accurate prediction of which individuals will go on to develop psychosis would assist early intervention and prevention paradigms. We sought to review investigations of prospective psychosis prediction based on markers and variables examined in longitudinal familial high-risk (FHR) studies. Methods We performed literature searches in MedLine, PubMed and PsycINFO for articles assessing performance characteristics of predictive clinical tests in FHR studies of psychosis. Studies were included if they reported one or more predictive variables in subjects at FHR for psychosis. We complemented this search strategy with references drawn from articles, reviews, book chapters and monographs. Results Across generations of familial high-risk projects, predictive studies have investigated behavioral, cognitive, psychometric, clinical, neuroimaging, and other markers. Recent analyses have incorporated multivariate and multi-domain approaches to risk ascertainment, although with still generally modest results. Conclusions While a broad range of risk factors has been identified, no individual marker or combination of markers can at this time enable accurate prospective prediction of emerging psychosis for individuals at FHR. We outline the complex and multi-level nature of psychotic illness, the myriad of factors influencing its development, and methodological hurdles to accurate and reliable prediction. Prospects and challenges for future generations of FHR studies are discussed in the context of early detection and intervention strategies. PMID:23693118
Anastasi, Giuseppe; Cutroneo, Giuseppina; Bruschetta, Daniele; Trimarchi, Fabio; Ielitro, Giuseppe; Cammaroto, Simona; Duca, Antonio; Bramanti, Placido; Favaloro, Angelo; Vaccarino, Gianluigi; Milardi, Demetrio
2009-11-01
We have applied high-quality medical imaging techniques to study the structure of the human ankle. Direct volume rendering, using specific algorithms, transforms conventional two-dimensional (2D) magnetic resonance image (MRI) series into 3D volume datasets. This tool allows high-definition visualization of single or multiple structures for diagnostic, research, and teaching purposes. No other image reformatting technique so accurately highlights each anatomic relationship and preserves soft tissue definition. Here, we used this method to study the structure of the human ankle to analyze tendon-bone-muscle relationships. We compared ankle MRI and computerized tomography (CT) images from 17 healthy volunteers, aged 18-30 years (mean 23 years). An additional subject had a partial rupture of the Achilles tendon. The MRI images demonstrated superiority in overall quality of detail compared to the CT images. The MRI series accurately rendered soft tissue and bone in simultaneous image acquisition, whereas CT required several window-reformatting algorithms, with loss of image data quality. We obtained high-quality digital images of the human ankle that were sufficiently accurate for surgical and clinical intervention planning, as well as for teaching human anatomy. Our approach demonstrates that complex anatomical structures such as the ankle, which is rich in articular facets and ligaments, can be easily studied non-invasively using MRI data.
Desktop aligner for fabrication of multilayer microfluidic devices.
Li, Xiang; Yu, Zeta Tak For; Geraldo, Dalton; Weng, Shinuo; Alve, Nitesh; Dun, Wu; Kini, Akshay; Patel, Karan; Shu, Roberto; Zhang, Feng; Li, Gang; Jin, Qinghui; Fu, Jianping
2015-07-01
Multilayer assembly is a commonly used technique to construct multilayer polydimethylsiloxane (PDMS)-based microfluidic devices with complex 3D architecture and connectivity for large-scale microfluidic integration. Accurate alignment of structure features on different PDMS layers before their permanent bonding is critical in determining the yield and quality of assembled multilayer microfluidic devices. Herein, we report a custom-built desktop aligner capable of both local and global alignments of PDMS layers covering a broad size range. Two digital microscopes were incorporated into the aligner design to allow accurate global alignment of PDMS structures up to 4 in. in diameter. Both local and global alignment accuracies of the desktop aligner were determined to be about 20 μm cm(-1). To demonstrate its utility for fabrication of integrated multilayer PDMS microfluidic devices, we applied the desktop aligner to achieve accurate alignment of different functional PDMS layers in multilayer microfluidics including an organs-on-chips device as well as a microfluidic device integrated with vertical passages connecting channels located in different PDMS layers. Owing to its convenient operation, high accuracy, low cost, light weight, and portability, the desktop aligner is useful for microfluidic researchers to achieve rapid and accurate alignment for generating multilayer PDMS microfluidic devices.
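The reported accuracy of about 20 μm cm⁻¹ scales with feature separation, so a quick back-of-envelope check shows what it implies across the largest supported substrate (assuming, as seems intended, linear scaling of misalignment with distance):

```python
# Worst-case misalignment implied by the reported 20 um/cm accuracy when
# aligning features at opposite edges of a full 4-inch PDMS layer.
# (Assumed interpretation: the per-cm figure accumulates linearly.)
ACCURACY_UM_PER_CM = 20.0
wafer_diameter_cm = 4 * 2.54                      # 4 inches in cm
worst_case_um = ACCURACY_UM_PER_CM * wafer_diameter_cm
print(round(worst_case_um, 1))                    # 203.2
```

That is roughly 200 μm edge-to-edge, which is why the design combines local alignment (for fine features) with the two-microscope global alignment.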
Fast and accurate automated cell boundary determination for fluorescence microscopy
NASA Astrophysics Data System (ADS)
Arce, Stephen Hugo; Wu, Pei-Hsun; Tseng, Yiider
2013-07-01
Detailed measurement of cell phenotype information from digital fluorescence images has the potential to greatly advance biomedicine in various disciplines such as patient diagnostics or drug screening. Yet, the complexity of cell conformations presents a major barrier preventing effective determination of cell boundaries, and introduces measurement error that propagates throughout subsequent assessment of cellular parameters and statistical analysis. State-of-the-art image segmentation techniques that require user-interaction, prolonged computation time and specialized training cannot adequately provide the support for high content platforms, which often sacrifice resolution to foster the speedy collection of massive amounts of cellular data. This work introduces a strategy that allows us to rapidly obtain accurate cell boundaries from digital fluorescent images in an automated format. Hence, this new method has broad applicability to promote biotechnology.
Ga(+) Basicity and Affinity Scales Based on High-Level Ab Initio Calculations.
Brea, Oriana; Mó, Otilia; Yáñez, Manuel
2015-10-26
The structure, relative stability and bonding of complexes formed by the interaction between Ga(+) and a large set of compounds, including hydrocarbons, aromatic systems, and oxygen-, nitrogen-, fluorine- and sulfur-containing Lewis bases, have been investigated through the use of the high-level composite ab initio Gaussian-4 theory. This allowed us to establish rather accurate Ga(+) cation affinity (GaCA) and Ga(+) cation basicity (GaCB) scales. The bonding analysis of the complexes under scrutiny shows that, even though one of the main ingredients of the Ga(+)-base interaction is electrostatic, it exhibits a non-negligible covalent character triggered by the presence of the low-lying empty 4p orbital of Ga(+), which favors a charge donation from occupied orbitals of the base to the metal ion. This partial covalent character, also observed in AlCA scales, is behind the dissimilarities observed when GaCA values are compared with Li(+) cation affinities, where these covalent contributions are practically nonexistent. Quite unexpectedly, there are some dissimilarities between several Ga(+) complexes and the corresponding Al(+) analogues, mainly affecting the relative stability of π-complexes involving aromatic compounds.
Bartschat, Klaus; Kushner, Mark J.
2016-01-01
Electron collisions with atoms, ions, molecules, and surfaces are critically important to the understanding and modeling of low-temperature plasmas (LTPs), and thus to the development of technologies based on LTPs. Recent progress in obtaining experimental benchmark data and the development of highly sophisticated computational methods are highlighted. With the cesium-based diode-pumped alkali laser and remote plasma etching of Si3N4 as examples, we demonstrate how accurate and comprehensive datasets for electron collisions enable complex modeling of plasma-based technologies that empower our high-technology society. PMID:27317740
NASA Technical Reports Server (NTRS)
Leser, Patrick E.; Hochhalter, Jacob D.; Newman, John A.; Leser, William P.; Warner, James E.; Wawrzynek, Paul A.; Yuan, Fuh-Gwo
2015-01-01
Utilizing inverse uncertainty quantification techniques, structural health monitoring can be integrated with damage progression models to form probabilistic predictions of a structure's remaining useful life. However, damage evolution in realistic structures is physically complex. Accurately representing this behavior requires high-fidelity models which are typically computationally prohibitive. In the present work, a high-fidelity finite element model is represented by a surrogate model, reducing computation times. The new approach is used with damage diagnosis data to form a probabilistic prediction of remaining useful life for a test specimen under mixed-mode conditions.
NASA Astrophysics Data System (ADS)
Hantry, Francois; Papazoglou, Mike; van den Heuvel, Willem-Jan; Haque, Rafique; Whelan, Eoin; Carroll, Noel; Karastoyanova, Dimka; Leymann, Frank; Nikolaou, Christos; Lammersdorf, Winfried; Hacid, Mohand-Said
Business process management is one of the core drivers of business innovation; it is based on strategic technology and is capable of creating and successfully executing end-to-end business processes. The trend will be to move from relatively stable, organization-specific applications to more dynamic, high-value ones, where business process interactions and trends are examined closely to understand an application's requirements more accurately. Such collaborative, complex end-to-end service interactions give rise to the concept of Service Networks (SNs).
A Multigrid Approach to Embedded-Grid Solvers
1992-08-01
Previously presented as a Master's thesis at the University of Florida. The report applies domain decomposition techniques to accurately model the aerodynamics of complex geometries. Quantities subscripted by ∞ denote reference values in the undisturbed gas, where a∞ = (γ p∞/ρ∞)^(1/2) is the speed of sound.
Optimization of interplanetary trajectories with unpowered planetary swingbys
NASA Technical Reports Server (NTRS)
Sauer, Carl G., Jr.
1988-01-01
A method is presented for calculating and optimizing unpowered planetary swingby trajectories using a patched conic trajectory generator. Examples of unpowered swingby trajectories are given to demonstrate the method. The method, which uses primer vector theory, is not highly accurate, but provides projections for preliminary mission definition studies. Advantages to using a patched conic trajectory simulation for preliminary studies which examine many different and complex missions include calculation speed and adaptability to changes or additions to the formulation.
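In the patched conic approximation underlying such studies, an unpowered swingby leaves the hyperbolic excess speed v∞ unchanged and simply rotates the velocity vector by the turn angle δ = 2 arcsin(1/(1 + r_p v∞²/μ)). A minimal sketch (the planetary numbers below are illustrative assumptions, not values from the paper):

```python
import math

# Patched-conic unpowered swingby: v_inf magnitude is preserved; the
# deflection depends only on periapsis radius r_p, v_inf, and the
# planet's gravitational parameter mu.
def turn_angle(mu, r_p, v_inf):
    """Deflection (radians) of the hyperbolic excess velocity."""
    return 2.0 * math.asin(1.0 / (1.0 + r_p * v_inf**2 / mu))

# Illustrative Jupiter flyby (assumed numbers):
mu_jupiter = 1.26687e8      # km^3/s^2
r_p = 2.0e5                 # periapsis radius, km
v_inf = 6.0                 # hyperbolic excess speed, km/s
delta = math.degrees(turn_angle(mu_jupiter, r_p, v_inf))
print(round(delta, 1))      # deflection in degrees (roughly 142 here)
```

The deep bending available for modest periapsis radii is what makes calculations like this fast enough for scanning many candidate mission geometries in preliminary studies.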
Wang, Xiupin; Peng, Qingzhi; Li, Peiwu; Zhang, Qi; Ding, Xiaoxia; Zhang, Wen; Zhang, Liangxiao
2016-10-12
High complexity of identification for non-target triacylglycerols (TAGs) is a major challenge in lipidomics analysis. To identify non-target TAGs, accurate MS(n) spectrometry, which generates so-called ion trees, is a powerful tool. In this paper, we present a technique for efficient structural elucidation of TAGs from MS(n) spectral trees produced by LTQ Orbitrap MS(n), implemented as an open source software package, TIT. The TIT software supports automatic annotation of non-target TAGs on MS(n) ion trees from a self-built fragment ion database. This database includes 19,108 simulated TAG molecules from a random combination of fatty acids and the corresponding 500,582 self-built multistage fragment ions (MS ≤ 3). The software identifies TAGs using a "stage-by-stage elimination" strategy. By utilizing the MS(1) accurate mass and referenced RKMD, the TIT software can discriminate unique elemental composition candidates. The regiospecific isomers of fatty acyl chains are then distinguished using MS(2) and MS(3) fragment spectra. We applied the algorithm to a selection of 45 TAG standards and demonstrated that the molecular ions could be 100% correctly assigned. The TIT software can therefore be applied to TAG identification in complex biological samples such as mouse plasma extracts.
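The first elimination stage — keeping only candidate elemental compositions whose exact mass matches the measured MS(1) m/z within a tight tolerance — can be sketched in a few lines. The candidate names and adduct masses below are illustrative assumptions, not entries from the TIT database:

```python
# Accurate-mass candidate filtering: retain compositions whose exact mass
# agrees with the measured m/z within a ppm tolerance (first stage of a
# stage-by-stage elimination; later stages would use MS2/MS3 fragments).
def match_by_accurate_mass(measured_mz, candidates, tol_ppm=5.0):
    """candidates: dict mapping name -> exact (monoisotopic) mass."""
    return [name for name, mass in candidates.items()
            if abs(measured_mz - mass) / mass * 1e6 <= tol_ppm]

candidates = {                     # illustrative adduct masses (assumed)
    "TAG(52:2)+NH4": 876.8015,
    "TAG(52:3)+NH4": 874.7859,
    "TAG(54:6)+NH4": 896.7702,
}
print(match_by_accurate_mass(876.8020, candidates))  # ['TAG(52:2)+NH4']
```

At sub-ppm mass accuracy a single candidate typically survives this stage, which is why the Orbitrap's accurate MS(1) mass does most of the pruning before fragment spectra are consulted.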
High order solution of Poisson problems with piecewise constant coefficients and interface jumps
NASA Astrophysics Data System (ADS)
Marques, Alexandre Noll; Nave, Jean-Christophe; Rosales, Rodolfo Ruben
2017-04-01
We present a fast and accurate algorithm to solve Poisson problems in complex geometries, using regular Cartesian grids. We consider a variety of configurations, including Poisson problems with interfaces across which the solution is discontinuous (of the type arising in multi-fluid flows). The algorithm is based on a combination of the Correction Function Method (CFM) and Boundary Integral Methods (BIM). Interface and boundary conditions can be treated in a fast and accurate manner using boundary integral equations, and the associated BIM. Unfortunately, BIM can be costly when the solution is needed everywhere in a grid, e.g. fluid flow problems. We use the CFM to circumvent this issue. The solution from the BIM is used to rewrite the problem as a series of Poisson problems in rectangular domains, which requires the BIM solution at interfaces/boundaries only. These Poisson problems involve discontinuities at interfaces, of the type that the CFM can handle. Hence we use the CFM to solve them (to high order of accuracy) with finite differences and a Fast Fourier Transform based fast Poisson solver. We present 2-D examples of the algorithm applied to Poisson problems involving complex geometries, including cases in which the solution is discontinuous. We show that the algorithm produces solutions that converge with either 3rd or 4th order of accuracy, depending on the type of boundary condition and solution discontinuity.
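The FFT-based fast Poisson solver used as a building block here is standard and easy to sketch for the periodic case: transform the right-hand side, divide each Fourier mode by −(kx² + ky²), and transform back. This is a generic sketch, not the CFM/BIM algorithm itself:

```python
import numpy as np

# Fast Poisson solver: solve u_xx + u_yy = f with periodic boundary
# conditions on a regular grid by dividing FFT(f) by -(kx^2 + ky^2).
def fft_poisson_periodic(f, Lx=2 * np.pi, Ly=2 * np.pi):
    ny, nx = f.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)   # angular wavenumbers
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=Ly / ny)
    KX, KY = np.meshgrid(kx, ky)
    denom = -(KX**2 + KY**2)
    denom[0, 0] = 1.0                 # zero mode: avoid division by zero
    u_hat = np.fft.fft2(f) / denom
    u_hat[0, 0] = 0.0                 # select the zero-mean solution
    return np.real(np.fft.ifft2(u_hat))

# Manufactured solution u = sin(x)cos(y), so f = -2 sin(x)cos(y).
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x)
u_exact = np.sin(X) * np.cos(Y)
u = fft_poisson_periodic(-2 * np.sin(X) * np.cos(Y))
print(np.max(np.abs(u - u_exact)) < 1e-10)  # True: exact for this mode
```

Solvers of this type cost O(N log N) per solve, which is what makes it affordable to re-solve the rectangular subproblems produced by the CFM decomposition.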
Localizing Submarine Earthquakes by Listening to the Water Reverberations
NASA Astrophysics Data System (ADS)
Castillo, J.; Zhan, Z.; Wu, W.
2017-12-01
Mid-Ocean Ridge (MOR) earthquakes generally occur far from any land-based station and are of moderate magnitude, making them difficult to detect and, in most cases, to locate accurately. This limits our understanding of how MOR normal and transform faults move and the manner in which they slip. Unlike continental events, seismic records from earthquakes occurring beneath the ocean floor show complex reverberations caused by P-wave energy trapped in the water column, which are highly dependent on the source location and on how efficiently energy propagates to the near-source surface. These later arrivals are commonly considered to be only a nuisance, as they can interfere with the primary arrivals. However, in this study, we take advantage of the wavefield's high sensitivity to small changes in the seafloor topography and the present-day availability of worldwide multi-beam bathymetry to relocate submarine earthquakes by modeling these water column reverberations in teleseismic signals. Using a three-dimensional hybrid method for modeling body wave arrivals, we demonstrate that an accurate hypocentral location of a submarine earthquake (<5 km) can be achieved if the structural complexities near the source region are appropriately accounted for. This presents a novel way of studying earthquake source properties and will serve as a means to explore the influence of physical fault structure on the seismic behavior of transform faults.
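The basic constraint these reverberations carry is simple: water-column multiples arrive at multiples of the two-way vertical travel time t = 2d/v, so their spacing encodes the water depth above the source. A back-of-envelope sketch with a nominal seawater P speed (an assumed value, not from the study):

```python
# Water-column reverberations repeat at the two-way vertical travel time
# t = 2 * d / v_water, so the observed spacing constrains the water depth
# (and hence the epicentral position) near the source.
V_WATER_KM_S = 1.5      # nominal P-wave speed in seawater (assumed)

def depth_from_reverb_spacing(dt_s, v=V_WATER_KM_S):
    """Water depth (km) implied by a reverberation spacing dt (seconds)."""
    return v * dt_s / 2.0

print(depth_from_reverb_spacing(4.0))   # 4 s spacing -> 3.0 km depth
```

Matching this depth against multi-beam bathymetry is what ties the reverberation pattern to a specific location on the seafloor; the full method models the 3-D wavefield rather than this vertical-incidence approximation.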
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vlahović, Filip; Perić, Marko; Zlatar, Matija, E-mail: matijaz@chem.bg.ac.rs
2015-06-07
Herein, we present a systematic, comparative computational study of the d-d transitions in a series of first row transition metal hexaaqua complexes, [M(H2O)6](n+) (M(2+/3+) = V(2+/3+), Cr(2+/3+), Mn(2+/3+), Fe(2+/3+), Co(2+/3+), Ni(2+)), by means of Time-Dependent Density Functional Theory (TD-DFT) and Ligand Field Density Functional Theory (LF-DFT). The influence of various exchange-correlation (XC) approximations has been studied, and results have been compared to the experimental transition energies as well as to previous high-level ab initio calculations. TD-DFT gives satisfactory results in the cases of d(2), d(4), and low-spin d(6) complexes, but fails in the cases when transitions depend only on the ligand field splitting, and for states with strong double-excitation character. LF-DFT, as a non-empirical approach to ligand field theory, takes into account in a balanced way both dynamic and non-dynamic correlation effects and hence accurately describes the multiplets of transition metal complexes, even in difficult cases such as the sextet-quartet splitting in d(5) complexes. Use of XC functionals designed for an accurate description of spin-state splitting, e.g., OPBE, OPBE0, or SSB-D, is found to be crucial for proper prediction of spin-forbidden excitations by LF-DFT. It is shown that LF-DFT is a valuable alternative to both TD-DFT and ab initio methods.
Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.
Huynh, Linh; Tagkopoulos, Ilias
2015-08-21
In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
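The two-step idea — prune the design space with a cheap model, then evaluate the expensive model only on the survivors — can be sketched with a toy objective. The scoring functions below are assumptions for illustration; the actual tool scores genetic circuit designs against characterized part models:

```python
import math

# Hierarchical model switching (sketch): a low-fidelity surrogate ranks
# all candidates and keeps a shortlist; the high-fidelity model is then
# evaluated only on that reduced solution space.
def cheap_score(x):          # low-fidelity proxy model (assumed)
    return -(x - 3.1) ** 2

def expensive_score(x):      # high-fidelity model (pretend it is costly)
    return -(x - 3.0) ** 2 + 0.1 * math.sin(5 * x)

candidates = [i * 0.5 for i in range(21)]            # design grid 0..10
shortlist = sorted(candidates, key=cheap_score, reverse=True)[:4]
best = max(shortlist, key=expensive_score)           # fine-grained pass
print(best, "-", len(shortlist), "of", len(candidates), "costly evaluations")
```

The speed-up comes entirely from the ratio of shortlist size to candidate count; the risk, as in any surrogate-guided search, is that the cheap model prunes the true optimum, which is why the surrogate must be a faithful (if coarse) approximation.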
Fast and Robust Stem Reconstruction in Complex Environments Using Terrestrial Laser Scanning
NASA Astrophysics Data System (ADS)
Wang, D.; Hollaus, M.; Puttonen, E.; Pfeifer, N.
2016-06-01
Terrestrial Laser Scanning (TLS) is an effective tool in forest research and management. However, accurate estimation of tree parameters still remains challenging in complex forests. In this paper, we present a novel algorithm for stem modeling in complex environments. This method does not require accurate delineation of stem points from the original point cloud. The stem reconstruction features a self-adaptive cylinder growing scheme. This algorithm is tested for a landslide region in the federal state of Vorarlberg, Austria. The algorithm results are compared with field reference data, which show that our algorithm is able to accurately retrieve the diameter at breast height (DBH) with a root mean square error (RMSE) of ~1.9 cm. The algorithm is further accelerated by applying an advanced sampling technique. Different sampling rates are applied and tested. It is found that a sampling rate of 7.5% retains the stem fitting quality while reducing the computation time by ~88%.
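The core measurement behind a DBH estimate — fitting a circle (a cylinder cross-section at breast height) to a horizontal slice of stem points — can be sketched with an algebraic least-squares fit. This uses the simple Kåsa circle fit as an assumed stand-in for the paper's self-adaptive cylinder growing:

```python
import numpy as np

# Kasa circle fit: write x^2 + y^2 = 2a x + 2b y + c and solve the linear
# least-squares system; then center = (a, b) and r = sqrt(c + a^2 + b^2).
def fit_circle(xy):
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    return (a, b), np.sqrt(c + a**2 + b**2)

# Synthetic stem slice: 30 cm DBH tree with ~2 mm scanner noise (assumed).
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 200)
r_true = 0.15
pts = np.column_stack([0.5 + r_true * np.cos(theta),
                       1.2 + r_true * np.sin(theta)])
pts += rng.normal(scale=0.002, size=pts.shape)
center, r = fit_circle(pts)
print(abs(2 * r - 0.30) < 0.005)    # True: DBH recovered within 5 mm
```

Real TLS slices are only partially covered (occlusion, one-sided scans) and contaminated by branch and understory points, which is exactly why robust, growing-cylinder schemes outperform a single naive fit in complex stands.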
Nuclear Magnetic Resonance Spectroscopy-Based Identification of Yeast.
Himmelreich, Uwe; Sorrell, Tania C; Daniel, Heide-Marie
2017-01-01
Rapid and robust high-throughput identification of environmental, industrial, or clinical yeast isolates is important whenever relatively large numbers of samples need to be processed in a cost-efficient way. Nuclear magnetic resonance (NMR) spectroscopy generates complex data based on metabolite profiles, chemical composition and possibly medium consumption, which can be used not only for the assessment of metabolic pathways but also for accurate identification of yeast down to the subspecies level. Initial results on NMR-based yeast identification were comparable with conventional and DNA-based identification. Potential advantages of NMR spectroscopy in mycological laboratories include not only accurate identification but also the potential for automated sample delivery, automated analysis using computer-based methods, rapid turnaround time, high throughput, and low running costs. We describe here the sample preparation, data acquisition and analysis for NMR-based yeast identification. In addition, a roadmap for the development of classification strategies is given that will result in the acquisition of a database and analysis algorithms for yeast identification in different environments.
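The computer-based identification step reduces, at its simplest, to matching a query spectrum against reference profiles in a database. A minimal nearest-centroid sketch (cosine similarity and the species labels are illustrative assumptions, not the chapter's actual classifier):

```python
import numpy as np

# Nearest-centroid spectral matching: assign the query to the species
# whose mean reference spectrum it most resembles (cosine similarity).
def classify(spectrum, centroids):
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(centroids, key=lambda name: cosine(spectrum, centroids[name]))

# Toy database of mean spectra (64 bins, assumed):
rng = np.random.default_rng(4)
centroids = {"C. albicans": rng.random(64), "C. glabrata": rng.random(64)}
query = centroids["C. albicans"] + 0.05 * rng.random(64)  # noisy replicate
print(classify(query, centroids))
```

Real pipelines add binning, normalization, and multivariate models trained on replicate spectra, but the database-lookup structure is the same, which is what makes the approach automatable and cheap per sample.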
Preliminary analysis of turbochargers rotors dynamic behaviour
NASA Astrophysics Data System (ADS)
Monoranu, R.; Ştirbu, C.; Bujoreanu, C.
2016-08-01
Turbocharger rotors for spark and compression ignition engines are manufactured from heat-resistant steels in order to withstand exhaust gas temperatures exceeding 1200 K. The mechanical stress is not large, as the power handled by these systems is only up to 10 kW, but the operating speeds are high, ranging between 30,000 and 250,000 rpm. Therefore, correct turbocharger functioning requires, even from the design stage, accurate evaluation of the temperature effects, of the turbine torque due to the engine exhaust gases, and of the vibration behaviour caused by very high operating speeds. In addition, the turbocharger lubrication complicates the model, because the classical hydrodynamic theory cannot be applied to evaluate the floating bush bearings. The paper proposes a FEM study using the CATIA environment, both as modeling medium and as tool for the numerical analysis, in order to highlight the turbocharger's complex behaviour. An accurate design may prevent major issues which can occur during operation.
New atom probe approaches to studying segregation in nanocrystalline materials.
Samudrala, S K; Felfer, P J; Araullo-Peters, V J; Cao, Y; Liao, X Z; Cairney, J M
2013-09-01
Atom probe is a technique that is highly suited to the study of nanocrystalline materials. It can provide accurate atomic-scale information about the composition of grain boundaries in three dimensions. In this paper we have analysed the microstructure of a nanocrystalline super-duplex stainless steel prepared by high pressure torsion (HPT). Not all of the grain boundaries in this alloy display obvious segregation, making visualisation of the microstructure challenging. In addition, the grain boundaries present in the atom probe data acquired from this alloy have complex shapes that are curved at the scale of the dataset, and the interfacial excess varies considerably over the boundaries, making accurate characterisation of the solute distribution challenging with existing analysis techniques. In this paper we present two new data treatment methods that allow the visualisation of boundaries with little or no segregation, the delineation of boundaries for further analysis, and the quantitative analysis of Gibbsian interfacial excess at boundaries, including the capability of excess mapping.
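The quantity being mapped, the Gibbsian interfacial excess, is the number of solute atoms at the boundary in excess of the matrix level, per unit interface area. A minimal sketch (the detector-efficiency correction and all numbers are illustrative assumptions, not values from the paper):

```python
# Gibbsian interfacial excess (atoms/nm^2): solute atoms counted in a
# boundary region, minus the count the matrix composition would predict,
# divided by interface area. Atom probe counts are usually corrected for
# the instrument's detection efficiency (value below is assumed).
def interfacial_excess(n_solute, n_matrix_level, area_nm2, detection_eff=0.57):
    return (n_solute - n_matrix_level) / (area_nm2 * detection_eff)

# Illustrative patch: 450 solute atoms observed where the matrix level
# predicts 150, over a 100 nm^2 interface patch.
print(round(interfacial_excess(450, 150, 100.0), 2))  # 5.26
```

Excess mapping repeats this calculation over many small patches of a meshed boundary surface, which is how the spatial variation of segregation over a curved boundary becomes visible.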
Retegan, Marius; Collomb, Marie-Noëlle; Neese, Frank; Duboc, Carole
2013-01-07
The electronic and magnetic properties of polynuclear complexes, in particular the magnetic anisotropy (zero field splitting, ZFS), the leading term of the spin Hamiltonian (SH), are commonly analyzed in a global manner and no attempt is usually made to understand the various contributions to the anisotropy at the atomic scale. This is especially true in weakly magnetically coupled systems. The present study addresses this problem and investigates the local SH parameters using a methodology based on experimental measurements and theoretical calculations. This work focuses on the challenging mono μ-oxo bis μ-acetato dinuclear Mn(III) complex: [Mn(2)(III)(μ-O)(μ-OAc)(2)L(2)](PF(6))(2) (with L = trispyrrolidine-1,4,7-triazacyclononane) (1), which is particularly difficult for EPR spectroscopy because of its large magnetic anisotropy and the weak ferromagnetic interaction between the two Mn(III) ions. High field (up to 12 T) and high frequency (190-345 GHz) EPR experiments have been recorded for 1 between 5 and 50 K. These data have been analyzed by employing a complex Hamiltonian, which encompasses terms describing the local and inter-site interactions. Density functional theory and multireference correlated ab initio calculations have been used to estimate the ZFS of the Mn(III) ions (D(Mn) = +4.29 cm(-1), E(Mn)/D(Mn) = 0.19) and the Euler angles reflecting the relative orientation of the ZFS tensor for each Mn(III) (α = -52°, β = 28°, γ = 3°). This analysis allowed the accurate determination of the local parameters: D(Mn) = +4.50 cm(-1), E(Mn)/D(Mn) = 0.07, α = -35°, β = 23°, γ = 2°. The spin ladder approach has also been applied, but only the parameters of the ground spin state of 1 have been accurately determined (D(4) = +1.540 cm(-1), E(4)/D(4) = 0.107). This is not sufficient to allow for the determination of the local parameters. The validity and practical performance of both approaches have been discussed.
NASA Astrophysics Data System (ADS)
Chen, Hudong
2001-06-01
There have been considerable advances in Lattice Boltzmann (LB) based methods in the last decade. By now, the fundamental concept of using the approach as an alternative tool for computational fluid dynamics (CFD) has been substantially appreciated and validated in mainstream scientific research and in industrial engineering communities. Lattice Boltzmann based methods possess several major advantages: a) less numerical dissipation due to the linear Lagrange type advection operator in the Boltzmann equation; b) local dynamic interactions suitable for highly parallel processing; c) physical handling of boundary conditions for complicated geometries and accurate control of fluxes; d) microscopically consistent modeling of thermodynamics and of interface properties in complex multiphase flows. This provides a great opportunity to apply the method to practical engineering problems encountered in a wide range of industries from automotive, aerospace to chemical, biomedical, petroleum, nuclear, and others. One of the key challenges is to extend the applicability of this alternative approach to regimes of highly turbulent flows commonly encountered in practical engineering situations involving high Reynolds numbers. Over the past ten years, significant efforts have been made on this front at Exa Corporation in developing lattice Boltzmann based commercial CFD software, PowerFLOW. It has become a useful computational tool for the simulation of turbulent aerodynamics in practical engineering problems involving extremely complex geometries and flow situations, such as in new automotive vehicle designs worldwide. In this talk, we present an overall LB based algorithm concept along with certain key extensions in order to accurately handle turbulent flows involving extremely complex geometries. To demonstrate the accuracy of turbulent flow simulations, we provide a set of validation results for some well known academic benchmarks.
These include straight channels, backward-facing steps, flows over a curved hill and typical NACA airfoils at various angles of attack including prediction of stall angle. We further provide numerous engineering cases, ranging from external aerodynamics around various car bodies to internal flows involved in various industrial devices. We conclude with a discussion of certain future extensions for complex fluids.
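The collide-and-stream structure underlying lattice Boltzmann solvers such as the one described above can be sketched in a few lines. The following is a generic D2Q9 BGK scheme on a periodic grid, not the PowerFLOW implementation; the grid size, relaxation time, and initial perturbation are arbitrary illustrative choices.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and quadrature weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    # Second-order Maxwell-Boltzmann expansion (lattice sound speed^2 = 1/3)
    cu = np.einsum('qd,xyd->qxy', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f, tau=0.8):
    # BGK collision (relaxation toward local equilibrium), then streaming
    rho = f.sum(axis=0)
    u = np.einsum('qd,qxy->xyd', c, f) / rho[..., None]
    f = f - (f - equilibrium(rho, u)) / tau
    for q in range(9):
        f[q] = np.roll(f[q], tuple(c[q]), axis=(0, 1))
    return f

# Tiny demo: relax a density bump on an 8x8 periodic grid
nx = ny = 8
rho0 = np.ones((nx, ny))
rho0[4, 4] = 1.1
f = equilibrium(rho0, np.zeros((nx, ny, 2)))
for _ in range(10):
    f = lbm_step(f)
```

Mass is conserved exactly by both the collision and the streaming step, which is one reason such schemes exhibit low numerical dissipation.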
Freeman, Robin; Dean, Ben; Kirk, Holly; Leonard, Kerry; Phillips, Richard A.; Perrins, Chris M.; Guilford, Tim
2013-01-01
Understanding the behaviour of animals in the wild is fundamental to conservation efforts. Advances in bio-logging technologies have offered insights into the behaviour of animals during foraging, migration and social interaction. However, broader application of these systems has been limited by device mass, cost and longevity. Here, we use information from multiple logger types to predict individual behaviour in a highly pelagic, migratory seabird, the Manx Shearwater (Puffinus puffinus). Using behavioural states resolved from GPS tracking of foraging during the breeding season, we demonstrate that individual behaviours can be accurately predicted during multi-year migrations from low cost, lightweight, salt-water immersion devices. This reveals a complex pattern of migratory stopovers: some involving high proportions of foraging, and others of rest behaviour. We use this technique to examine three consecutive years of global migrations, revealing the prominence of foraging behaviour during migration and the importance of highly productive waters during migratory stopover. PMID:23635496
Real-time dual-comb spectroscopy with a free-running bidirectionally mode-locked fiber laser
NASA Astrophysics Data System (ADS)
Mehravar, S.; Norwood, R. A.; Peyghambarian, N.; Kieu, K.
2016-06-01
The dual-comb technique has enabled exciting applications in high resolution spectroscopy, precision distance measurements, and 3D imaging. Major advantages over traditional methods can be achieved with the dual-comb technique. For example, dual-comb spectroscopy provides orders of magnitude improvement in acquisition speed over standard Fourier-transform spectroscopy while still preserving the high resolution capability. Wider adoption of the technique has, however, been hindered by the need for complex and expensive ultrafast laser systems. Here, we present a simple and robust dual-comb system that employs a free-running bidirectionally mode-locked fiber laser operating at telecommunication wavelengths. Two femtosecond frequency combs (with a small difference in repetition rates) are generated from a single laser cavity to ensure mutually coherent properties and common noise cancellation. As a result, we have achieved real-time absorption spectroscopy measurements with accurate frequency referencing and a relatively high signal-to-noise ratio, without the need for complex servo locking.
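The quoted speed advantage rests on a simple frequency-mapping relation: heterodyning two combs whose repetition rates differ by a small Δf_rep compresses optical frequency intervals into the radio-frequency domain by the factor f_rep/Δf_rep. A small illustrative calculation (the numbers are hypothetical, not taken from this system):

```python
def rf_to_optical(f_rf, f_rep, delta_f_rep):
    # Map an RF beat-note frequency back to the optical-domain frequency
    # offset it encodes; the compression factor is f_rep / delta_f_rep.
    return f_rf * (f_rep / delta_f_rep)

# Hypothetical 100 MHz combs whose repetition rates differ by 1 kHz:
compression = 100e6 / 1e3                          # 1e5-fold compression
optical_offset = rf_to_optical(1.0e6, 100e6, 1e3)  # 1 MHz beat in the RF comb
```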
NASA Astrophysics Data System (ADS)
Li, Hechao
Accurate knowledge of the complex microstructure of a heterogeneous material is crucial for establishing quantitative structure-property relations and for predicting and optimizing its performance. X-ray tomography has provided a non-destructive means for microstructure characterization in both 3D and 4D (i.e., structural evolution over time). Traditional reconstruction algorithms such as the filtered-back-projection (FBP) method or algebraic reconstruction techniques (ART) require a huge number of tomographic projections and a segmentation process before microstructural quantification can be carried out. This can be quite time consuming and computationally intensive. In this thesis, a novel procedure is first presented that allows one to directly extract key structural information in the form of spatial correlation functions from limited x-ray tomography data. The key component of the procedure is the computation of a "probability map", which provides the probability that an arbitrary point in the material system belongs to a specific phase. The correlation functions of interest are then readily computed from the probability map. Using effective medium theory, accurate predictions of physical properties (e.g., elastic moduli) can be obtained. Secondly, a stochastic optimization procedure is presented that enables one to accurately reconstruct material microstructure from a small number of x-ray tomographic projections (e.g., 20 - 40). Moreover, a stochastic procedure for multi-modal data fusion is proposed, in which both X-ray projections and correlation functions computed from limited 2D optical images are fused to accurately reconstruct complex heterogeneous materials in 3D. This multi-modal reconstruction algorithm is shown to integrate the complementary data effectively, indicating its high efficiency in using limited structural information.
Finally, the accuracy of the stochastic reconstruction procedure using limited X-ray projection data is ascertained by analyzing the microstructural degeneracy and the roughness of energy landscape associated with different number of projections. Ground-state degeneracy of a microstructure is found to decrease with increasing number of projections, which indicates a higher probability that the reconstructed configurations match the actual microstructure. The roughness of energy landscape can also provide information about the complexity and convergence behavior of the reconstruction for given microstructures and projection number.
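The step from a reconstructed (or probabilistic) image to spatial correlation functions can be illustrated with the standard two-point correlation function S2, computed here for a binary toy image via the FFT autocorrelation theorem. This is a generic calculation assuming periodic boundaries, not the thesis' exact pipeline:

```python
import numpy as np

def two_point_correlation(img):
    # S2(r): probability that two points separated by r both fall in
    # phase 1, via the autocorrelation (Wiener-Khinchin) theorem.
    F = np.fft.fftn(img.astype(float))
    return np.fft.ifftn(F * np.conj(F)).real / img.size

img = np.zeros((16, 16))
img[4:8, 4:8] = 1.0          # toy single-phase inclusion
s2 = two_point_correlation(img)
# s2[0, 0] equals the volume fraction of phase 1
```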
Estimation of Dynamic Systems for Gene Regulatory Networks from Dependent Time-Course Data.
Kim, Yoonji; Kim, Jaejik
2018-06-15
A dynamic system consisting of ordinary differential equations (ODEs) is a well-known tool for describing the dynamic nature of gene regulatory networks (GRNs), and the dynamic features of GRNs are usually captured through time-course gene expression data. Owing to high-throughput technologies, time-course gene expression data have complex structures such as heteroscedasticity, correlations between genes, and time dependence. Since gene experiments typically yield highly noisy data with small sample sizes, these complex structures should be taken into account in ODE models for a more accurate prediction of the dynamics. Hence, this study proposes an ODE model that accounts for such data structures, along with a fast and stable estimation method for the ODE parameters based on the generalized profiling approach with data smoothing techniques. The proposed method also provides statistical inference for the ODE estimator, and it is applied to a zebrafish retina cell network.
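A minimal sketch of the kind of ODE dynamics such models fit: two mutually repressing genes integrated with explicit Euler steps. The Hill-type kinetics and parameter values are illustrative assumptions, not the model of this study (which additionally handles heteroscedastic, correlated data):

```python
def simulate_grn(x0, y0, dt=0.01, steps=2000):
    # Toy two-gene network with mutual repression (Hill kinetics):
    #   dx/dt = a / (1 + y**n) - b * x
    #   dy/dt = a / (1 + x**n) - b * y
    # integrated with explicit Euler steps.
    a, b, n = 2.0, 1.0, 2
    x, y = x0, y0
    for _ in range(steps):
        dx = a / (1 + y**n) - b * x
        dy = a / (1 + x**n) - b * y
        x, y = x + dt * dx, y + dt * dy
    return x, y

# From a symmetric start the pair settles at the fixed point x = y = 1,
# where production a / (1 + x**n) exactly balances decay b * x.
x_final, y_final = simulate_grn(0.5, 0.5)
```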
OpenMS: a flexible open-source software platform for mass spectrometry data analysis.
Röst, Hannes L; Sachsenberg, Timo; Aiche, Stephan; Bielow, Chris; Weisser, Hendrik; Aicheler, Fabian; Andreotti, Sandro; Ehrlich, Hans-Christian; Gutenbrunner, Petra; Kenar, Erhan; Liang, Xiao; Nahnsen, Sven; Nilse, Lars; Pfeuffer, Julianus; Rosenberger, George; Rurik, Marc; Schmitt, Uwe; Veit, Johannes; Walzer, Mathias; Wojnar, David; Wolski, Witold E; Schilling, Oliver; Choudhary, Jyoti S; Malmström, Lars; Aebersold, Ruedi; Reinert, Knut; Kohlbacher, Oliver
2016-08-30
High-resolution mass spectrometry (MS) has become an important tool in the life sciences, contributing to the diagnosis and understanding of human diseases, elucidating biomolecular structural information and characterizing cellular signaling networks. However, the rapid growth in the volume and complexity of MS data makes transparent, accurate and reproducible analysis difficult. We present OpenMS 2.0 (http://www.openms.de), a robust, open-source, cross-platform software specifically designed for the flexible and reproducible analysis of high-throughput MS data. The extensible OpenMS software implements common mass spectrometric data processing tasks through a well-defined application programming interface in C++ and Python and through standardized open data formats. OpenMS additionally provides a set of 185 tools and ready-made workflows for common mass spectrometric data processing tasks, which enable users to perform complex quantitative mass spectrometric analyses with ease.
NASA Astrophysics Data System (ADS)
Broglia, Riccardo; Durante, Danilo
2017-11-01
This paper focuses on the analysis of a challenging free surface flow problem involving a surface vessel moving at high speeds, or planing. The investigation is performed using a general purpose high-Reynolds-number free surface solver developed at CNR-INSEAN. The methodology is based on a second order finite volume discretization of the unsteady Reynolds-averaged Navier-Stokes equations (Di Mascio et al. in A second order Godunov-type scheme for naval hydrodynamics, Kluwer Academic/Plenum Publishers, Dordrecht, pp 253-261, 2001; Proceedings of 16th international offshore and polar engineering conference, San Francisco, CA, USA, 2006; J Mar Sci Technol 14:19-29, 2009); air/water interface dynamics is accurately modeled by a non-standard level set approach (Di Mascio et al. in Comput Fluids 36(5):868-886, 2007a), known as the single-phase level set method. In this algorithm the governing equations are solved only in the water phase, whereas the numerical domain in the air phase is used for a suitable extension of the fluid dynamic variables. The level set function is used to track the free surface evolution; dynamic boundary conditions are enforced directly on the interface. This approach makes it possible to accurately predict the evolution of the free surface even in the presence of violent wave-breaking phenomena, keeping the interface sharp without any need to smear out the fluid properties across the two phases. This paper is aimed at the prediction of the complex free-surface flow field generated by a deep-V planing boat at medium and high Froude numbers (from 0.6 up to 1.2). In the present work, the planing hull is treated as a two-degree-of-freedom rigid object. The flow field is characterized by the presence of thin water sheets, several energetic breaking waves and plunging events.
The computational results include convergence of the trim angle, sinkage and resistance under grid refinement; high-quality experimental data are used for validation, allowing comparison of the hydrodynamic forces and the attitudes assumed at different velocities. Very good agreement between numerical and experimental results demonstrates the reliability of the single-phase level set approach for the prediction of high-Froude-number flows.
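The single-phase level set method referred to above tracks the free surface as the zero iso-level of a scalar field φ advected by the flow; with the convention (an assumption here) that φ < 0 in the water, the governing equations are solved only on that side and φ is smoothly extended into the air region:

```latex
\frac{\partial \varphi}{\partial t} + \mathbf{u}\cdot\nabla\varphi = 0,
\qquad
\Gamma_{\mathrm{fs}}(t) = \{\mathbf{x} : \varphi(\mathbf{x},t) = 0\}
```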
A procedure to estimate proximate analysis of mixed organic wastes.
Zaher, U; Buffiere, P; Steyer, J P; Chen, S
2009-04-01
In waste materials, proximate analysis, which measures the total carbohydrate, protein, and lipid content of solid wastes, is challenging as a result of their heterogeneous and solid nature. This paper presents a new procedure that was developed to estimate such complex chemical composition of the waste using conventional practical measurements, such as chemical oxygen demand (COD) and total organic carbon. The procedure is based on the mass balance of macronutrient elements (carbon, hydrogen, nitrogen, oxygen, and phosphorus [CHNOP]) (i.e., elemental continuity), in addition to the balance of COD and charge intensity that are applied in mathematical modeling of biological processes. Knowing the composition of such a complex substrate is crucial for studying solid waste anaerobic degradation. The procedure was formulated to generate the detailed input required for the International Water Association (London, United Kingdom) Anaerobic Digestion Model number 1 (IWA-ADM1). The complex particulate composition estimated by the procedure was validated with several types of food wastes and animal manures. To make proximate analysis feasible for validation, the wastes were classified into 19 types to allow accurate extraction and proximate analysis. The estimated carbohydrate, protein, lipid, and inert concentrations were highly correlated with the proximate analysis; correlation coefficients were 0.94, 0.88, 0.99, and 0.96, respectively. For most of the wastes, carbohydrate was the highest fraction and was estimated accurately by the procedure over an extended range with high linearity. For wastes that are rich in protein and fiber, the procedure was even more consistent compared with the proximate analysis. The new procedure can be used for waste characterization in solid waste treatment design and optimization.
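The core of such a procedure is a set of elemental balances solved for the unknown fractions. A toy version, using illustrative elemental compositions derived from the generic formulas C6H10O5 (carbohydrate), C5H7NO2 (protein) and C57H104O6 (lipid) rather than the paper's calibrated values, recovers mass fractions from bulk C/H/O measurements:

```python
import numpy as np

# Mass of each element per gram of fraction (illustrative values from the
# generic formulas above -- assumptions, not the paper's compositions).
A = np.array([
    #  carb    protein  lipid
    [0.4445, 0.5309, 0.7732],   # carbon
    [0.0622, 0.0624, 0.1184],   # hydrogen
    [0.4933, 0.2829, 0.1084],   # oxygen
])

def estimate_fractions(measured_cho):
    # Solve A @ x = measured elemental content for the three mass fractions
    return np.linalg.solve(A, measured_cho)

true_fractions = np.array([0.5, 0.3, 0.2])   # carb, protein, lipid
measured = A @ true_fractions                # synthetic "measurement"
estimated = estimate_fractions(measured)
```

The real procedure adds N, P, COD and charge balances and uses nonlinear constraints, but the linear-algebra core is the same.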
Garcia, Jair E; Greentree, Andrew D; Shrestha, Mani; Dorin, Alan; Dyer, Adrian G
2014-01-01
The study of the signal-receiver relationship between flowering plants and pollinators requires a capacity to accurately map both the spectral and spatial components of a signal in relation to the perceptual abilities of potential pollinators. Spectrophotometers can typically recover high resolution spectral data, but the spatial component is difficult to record simultaneously. A technique allowing for an accurate measurement of the spatial component in addition to the spectral factor of the signal is highly desirable. Consumer-level digital cameras potentially provide access to both colour and spatial information, but they are constrained by their non-linear response. We present a robust methodology for recovering linear values from two different camera models: one sensitive to ultraviolet (UV) radiation and another to visible wavelengths. We test responses by imaging eight different plant species varying in shape, size and in the amount of energy reflected across the UV and visible regions of the spectrum, and compare the recovery of spectral data to spectrophotometer measurements. There is often a good agreement of spectral data, although when the pattern on a flower surface is complex a spectrophotometer may underestimate the variability of the signal as would be viewed by an animal visual system. Digital imaging presents a significant new opportunity to reliably map flower colours to understand the complexity of these signals as perceived by potential pollinators. Compared to spectrophotometer measurements, digital images can better represent the spatio-chromatic signal variability that would likely be perceived by the visual system of an animal, and should expand the possibilities for data collection in complex, natural conditions. However, and in spite of its advantages, the accuracy of the spectral information recovered from camera responses is subject to variations in the uncertainty levels, with larger uncertainties associated with low radiance levels.
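Recovering linear camera values amounts to inverting the sensor's nonlinear response. A minimal sketch, assuming a single power-law response with a hypothetical gamma of 2.2 (the study instead fits the response of each real camera empirically, per channel):

```python
def linearize(v, gamma=2.2):
    # Invert an assumed power-law response: a normalized pixel value v
    # in [0, 1] is mapped to a value proportional to scene radiance.
    return v ** gamma

# Normalized pixel values from a hypothetical 8-bit image
pixels = [p / 255.0 for p in (0, 64, 128, 255)]
linear = [linearize(p) for p in pixels]
```

Real response curves are rarely a pure power law, which is one source of the radiance-dependent uncertainty noted in the abstract.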
NASA Astrophysics Data System (ADS)
Hearst, Anthony A.
Complex planting schemes are common in experimental crop fields and can make it difficult to extract plots of interest from high-resolution imagery of the fields gathered by Unmanned Aircraft Systems (UAS). This prevents UAS imagery from being applied in High-Throughput Precision Phenotyping and other areas of agricultural research. If the imagery is accurately geo-registered, then it may be possible to extract plots from the imagery based on their map coordinates. To test this approach, a UAS was used to acquire visual imagery of 5 ha of soybean fields containing 6.0 m2 plots in a complex planting scheme. Sixteen artificial targets were set up in the fields before flights, and different spatial configurations of 0 to 6 targets were used as Ground Control Points (GCPs) for geo-registration, resulting in a total of 175 geo-registered image mosaics with a broad range of geo-registration accuracies. Geo-registration accuracy was quantified based on the horizontal Root Mean Squared Error (RMSE) of targets used as checkpoints. Twenty test plots were extracted from the geo-registered imagery. Plot extraction accuracy was quantified based on the percentage of the desired plot area that was extracted. It was found that using 4 GCPs along the perimeter of the field minimized the horizontal RMSE and enabled a plot extraction accuracy of at least 70%, with a mean plot extraction accuracy of 92%. Future work will focus on further enhancing the plot extraction accuracy through additional image processing techniques so that it becomes sufficiently accurate for all practical purposes in agricultural research and potentially other areas of research.
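The horizontal RMSE metric used above to quantify geo-registration accuracy is simply the root mean squared planimetric distance between image-derived and surveyed checkpoint coordinates. A generic sketch (the coordinates below are made up):

```python
import math

def horizontal_rmse(predicted, surveyed):
    # Root mean squared planimetric distance between image-derived and
    # surveyed checkpoint coordinates (x, y in metres).
    sq = [(px - sx)**2 + (py - sy)**2
          for (px, py), (sx, sy) in zip(predicted, surveyed)]
    return math.sqrt(sum(sq) / len(sq))

pred = [(100.2, 50.1), (200.0, 80.3)]   # checkpoint positions in the mosaic
surv = [(100.0, 50.0), (200.0, 80.0)]   # surveyed ground-truth positions
rmse = horizontal_rmse(pred, surv)
```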
Patel, Trushar R; Chojnowski, Grzegorz; Astha; Koul, Amit; McKenna, Sean A; Bujnicki, Janusz M
2017-04-15
The diverse functional cellular roles played by ribonucleic acids (RNA) have emphasized the need to develop rapid and accurate methodologies to elucidate the relationship between the structure and function of RNA. Structural biology tools such as X-ray crystallography and Nuclear Magnetic Resonance are highly useful methods to obtain atomic-level resolution models of macromolecules. However, both methods have sample, time, and technical limitations that prevent their application to a number of macromolecules of interest. An emerging alternative to high-resolution structural techniques is to employ a hybrid approach that combines low-resolution shape information about macromolecules and their complexes from experimental hydrodynamic (e.g. analytical ultracentrifugation) and solution scattering measurements (e.g., solution X-ray or neutron scattering), with computational modeling to obtain atomic-level models. While promising, scattering methods rely on aggregation-free, monodispersed preparations and therefore the careful development of a quality control pipeline is fundamental to an unbiased and reliable structural determination. This review article describes hydrodynamic techniques that are highly valuable for homogeneity studies, scattering techniques useful to study the low-resolution shape, and strategies for computational modeling to obtain high-resolution 3D structural models of RNAs, proteins, and RNA-protein complexes. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
Accurate structure prediction of peptide–MHC complexes for identifying highly immunogenic antigens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Min-Sun; Park, Sung Yong; Miller, Keith R.
2013-11-01
Designing an optimal HIV-1 vaccine faces the challenge of identifying antigens that induce a broad immune capacity. One factor that controls the breadth of T cell responses is the surface morphology of a peptide–MHC complex. Here, we present an in silico protocol for predicting peptide–MHC structure. A robust signature of a conformational transition was identified during all-atom molecular dynamics, which results in a model with high accuracy. A large test set was used in constructing our protocol, and we then went a step further with a blind test using a wild-type peptide and two highly immunogenic mutants, which predicted substantial conformational changes in both mutants. The center residues at position five of the analogs were configured to be accessible to solvent, forming a prominent surface, while the corresponding residue of the wild-type peptide pointed laterally toward the side of the binding cleft. We then experimentally determined the structures of the blind test set using high-resolution X-ray crystallography, which verified the predicted conformational changes. Our observation strongly supports a positive association between the surface morphology of a peptide–MHC complex and its immunogenicity. Our study offers the prospect of enhancing the immunogenicity of vaccines by identifying MHC binding immunogens.
NASA Technical Reports Server (NTRS)
Lee, Timothy J.
1989-01-01
HF, H2O, CN- and their hydrogen-bonded complexes were studied using state-of-the-art ab initio quantum mechanical methods. A large Gaussian one particle basis set consisting of triple zeta plus double polarization plus diffuse s and p functions (TZ2P + diffuse) was used. The theoretical methods employed include self consistent field, second order Moller-Plesset perturbation theory, singles and doubles configuration interaction theory and the singles and doubles coupled cluster approach. The FH-CN- and FH-NC- and H2O-CN-, H2O-NC- pairs of complexes are found to be essentially isoenergetic. The first pair of complexes are predicted to be bound by approx. 24 kcal/mole and the latter pair bound by approximately 15 kcal/mole. The ab initio binding energies are in good agreement with the experimental values. The two being shorter than the analogous C-N hydrogen bond. The infrared (IR) spectra of the two pairs of complexes are also very similar, though a severe perturbation of the potential energy surface by proton exchange means that the accurate prediction of the band center of the most intense IR mode requires a high level of electronic structure theory as well as a complete treatment of anharmonic effects. The bonding of anionic hydrogen-bonded complexes is discussed and contrasted with that of neutral hydrogen-bonded complexes.
NASA Astrophysics Data System (ADS)
Marsh, C.; Pomeroy, J. W.; Wheater, H. S.
2017-12-01
Accurate management of water resources is necessary for social, economic, and environmental sustainability worldwide. In locations with seasonal snowcovers, the accurate prediction of these water resources is further complicated due to frozen soils, solid-phase precipitation, blowing snow transport, and snowcover-vegetation-atmosphere interactions. Complex process interactions and feedbacks are a key feature of hydrological systems and may result in emergent phenomena, i.e., the arising of novel and unexpected properties within a complex system. One example is the feedback associated with blowing snow redistribution, which can lead to drifts that cause locally-increased soil moisture, thus increasing plant growth that in turn subsequently impacts snow redistribution, creating larger drifts. Attempting to simulate these emergent behaviours is a significant challenge, however, and there is concern that process conceptualizations within current models are too incomplete to represent the needed interactions. An improved understanding of the role of emergence in hydrological systems often requires high resolution distributed numerical hydrological models that incorporate the relevant process dynamics. The Canadian Hydrological Model (CHM) provides a novel tool for examining cold region hydrological systems. Key features include efficient terrain representation, allowing simulations at various spatial scales, reduced computational overhead, and a modular process representation allowing for an alternative-hypothesis framework. Using both physics-based and conceptual process representations sourced from long term process studies and the current cold regions literature allows for comparison of process representations and, importantly, their ability to produce emergent behaviours. Examining the system in a holistic, process-based manner will hopefully yield important insights and aid in the development of improved process representations.
Simultaneous EEG and MEG source reconstruction in sparse electromagnetic source imaging.
Ding, Lei; Yuan, Han
2013-04-01
Electroencephalography (EEG) and magnetoencephalography (MEG) have different sensitivities to differently configured brain activations, making them complementary in providing independent information for better detection and inverse reconstruction of brain sources. In the present study, we developed an integrative approach, which integrates a novel sparse electromagnetic source imaging method, i.e., variation-based cortical current density (VB-SCCD), together with the combined use of EEG and MEG data in reconstructing complex brain activity. To perform simultaneous analysis of multimodal data, we proposed to normalize EEG and MEG signals according to their individual noise levels to create unit-free measures. Our Monte Carlo simulations demonstrated that this integrative approach is capable of reconstructing complex cortical brain activations (up to 10 simultaneously activated and randomly located sources). Results from experimental data showed that complex brain activations evoked in a face recognition task were successfully reconstructed using the integrative approach; these were consistent with other research findings and validated by independent data from functional magnetic resonance imaging using the same stimulus protocol. Reconstructed cortical brain activations from both simulations and experimental data provided precise source localizations as well as accurate spatial extents of localized sources. In comparison with studies using EEG or MEG alone, the performance of cortical source reconstructions using combined EEG and MEG was significantly improved. We demonstrated that this new sparse ESI methodology with integrated analysis of EEG and MEG data can accurately probe the spatiotemporal processes of complex human brain activations. This is promising for noninvasively studying large-scale brain networks of high clinical and scientific significance. Copyright © 2011 Wiley Periodicals, Inc.
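The unit-matching step described above can be sketched as scaling each channel by an estimate of its own noise standard deviation, so that EEG (microvolts) and MEG (femtotesla) enter the inverse problem as comparable, unit-free quantities. This is a simplified scalar version; full noise whitening would use the channel noise covariance matrix:

```python
def noise_normalize(channels, noise_sd):
    # Divide each channel's samples by that channel's noise standard
    # deviation, yielding unit-free, signal-to-noise-scaled measures.
    return [[s / sd for s in chan] for chan, sd in zip(channels, noise_sd)]

eeg = [2.0, 4.0]     # microvolts, noise sd = 2.0
meg = [30.0, 90.0]   # femtotesla, noise sd = 30.0
normalized = noise_normalize([eeg, meg], [2.0, 30.0])
```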
NASA Astrophysics Data System (ADS)
Tsogbayar, Tsednee; Yeager, Danny L.
2017-01-01
We further apply the complex scaled multiconfigurational spin-tensor electron propagator method (CMCSTEP) to the theoretical determination of resonance parameters for electron-atom systems, including open-shell and highly correlated (non-dynamical correlation) atoms and molecules. The multiconfigurational spin-tensor electron propagator method (MCSTEP), developed and implemented by Yeager and his coworkers in real space, gives very accurate and reliable ionization potentials and electron affinities. CMCSTEP uses a complex scaled multiconfigurational self-consistent field (CMCSCF) state as an initial state along with a dilated Hamiltonian in which all of the electronic coordinates are scaled by a complex factor. CMCSTEP is designed for determining resonances. We apply CMCSTEP to obtain the lowest 2P (Be-, Mg-) and 2D (Mg-, Ca-) shape resonances using several different basis sets, each with several complete active spaces. Many of the basis sets we employ have been used by others with different methods, allowing direct comparison of results across methods using the same basis sets.
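The dilation referred to above is the standard complex-scaling transformation: all electronic coordinates are multiplied by a complex phase, which rotates the continuum spectrum and exposes each resonance as a discrete, square-integrable eigenstate whose complex eigenvalue gives the resonance position and width (textbook relations, not specific to this paper):

```latex
\mathbf{r}_i \rightarrow \mathbf{r}_i\, e^{i\theta},
\qquad
E_{\mathrm{res}} = E_r - \tfrac{i}{2}\,\Gamma
```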
Advances in Rotor Performance and Turbulent Wake Simulation Using DES and Adaptive Mesh Refinement
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.
2012-01-01
Time-dependent Navier-Stokes simulations have been carried out for a rigid V22 rotor in hover, and a flexible UH-60A rotor in forward flight. Emphasis is placed on understanding and characterizing the effects of high-order spatial differencing, grid resolution, and Spalart-Allmaras (SA) detached eddy simulation (DES) in predicting the rotor figure of merit (FM) and resolving the turbulent rotor wake. The FM was accurately predicted within experimental error using SA-DES. Moreover, a new adaptive mesh refinement (AMR) procedure revealed a complex and more realistic turbulent rotor wake, including the formation of turbulent structures resembling vortical worms. Time-dependent flow visualization played a crucial role in understanding the physical mechanisms involved in these complex viscous flows. The predicted vortex core growth with wake age was in good agreement with experiment. High-resolution wakes for the UH-60A in forward flight exhibited complex turbulent interactions and turbulent worms, similar to the V22. The normal force and pitching moment coefficients were in good agreement with flight-test data.
Introduction to State Estimation of High-Rate System Dynamics.
Hong, Jonathan; Laflamme, Simon; Dodson, Jacob; Joyce, Bryan
2018-01-13
Engineering systems experiencing high-rate dynamic events, including airbags, debris detection, and active blast protection systems, could benefit from real-time observability for enhanced performance. However, the task of high-rate state estimation is challenging, in particular for real-time applications where the rate of the observer's convergence needs to be in the microsecond range. This paper identifies the challenges of state estimation of high-rate systems and discusses the fundamental characteristics of high-rate systems. A survey of applications and methods for estimators that have the potential to produce accurate estimations for a complex system experiencing highly dynamic events is presented. It is argued that adaptive observers are important to this research. In particular, adaptive data-driven observers are advantageous due to their adaptability and lack of dependence on the system model.
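As a concrete instance of the adaptive, data-driven estimators the survey favors, here is a recursive least-squares (RLS) parameter update. Its fixed, small per-step cost is the kind of property that matters when an observer must converge on microsecond timescales. The linear regression model and the data stream below are illustrative assumptions:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    # One recursive least-squares step: update parameter estimate theta
    # from regressor phi and measurement y (lam = forgetting factor).
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam
    return theta, P

# Identify y = 2*x1 - 3*x2 from streaming (noiseless) data
rng = np.random.default_rng(0)
theta = np.zeros(2)
P = np.eye(2) * 1e3          # large initial covariance = weak prior
for _ in range(200):
    phi = rng.standard_normal(2)
    y = phi @ np.array([2.0, -3.0])
    theta, P = rls_update(theta, P, phi, y)
```

A forgetting factor lam < 1 lets the same recursion track time-varying parameters, the adaptivity highlighted in the abstract.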
A rapid and accurate approach for prediction of interactomes from co-elution data (PrInCE).
Stacey, R Greg; Skinnider, Michael A; Scott, Nichollas E; Foster, Leonard J
2017-10-23
An organism's protein interactome, or complete network of protein-protein interactions, defines the protein complexes that drive cellular processes. Techniques for studying protein complexes have traditionally applied targeted strategies such as yeast two-hybrid or affinity purification-mass spectrometry to assess protein interactions. However, given the vast number of protein complexes, more scalable methods are necessary to accelerate interaction discovery and to construct whole interactomes. We recently developed a complementary technique based on the use of protein correlation profiling (PCP) and stable isotope labeling by amino acids in cell culture (SILAC) to assess chromatographic co-elution as evidence of interacting proteins. Importantly, PCP-SILAC is also capable of measuring protein interactions simultaneously under multiple biological conditions, allowing the detection of treatment-specific changes to an interactome. Given the uniqueness and high dimensionality of co-elution data, new tools are needed to compare protein elution profiles, control false discovery rates, and construct an accurate interactome. Here we describe a freely available bioinformatics pipeline, PrInCE, for the analysis of co-elution data. PrInCE is a modular, open-source library that is computationally inexpensive, able to use label and label-free data, and capable of detecting tens of thousands of protein-protein interactions. Using a machine learning approach, PrInCE offers greatly reduced run time, more predicted interactions at the same stringency, prediction of protein complexes, and greater ease of use over previous bioinformatics tools for co-elution data. PrInCE is implemented in Matlab (version R2017a). Source code and standalone executable programs for Windows and Mac OSX are available at https://github.com/fosterlab/PrInCE , where usage instructions can be found. An example dataset and output are also provided for testing purposes.
PrInCE is the first fast and easy-to-use data analysis pipeline that predicts interactomes and protein complexes from co-elution data. PrInCE allows researchers without bioinformatics expertise to analyze high-throughput co-elution datasets.
Adaptive Microwave Staring Correlated Imaging for Targets Appearing in Discrete Clusters.
Tian, Chao; Jiang, Zheng; Chen, Weidong; Wang, Dongjin
2017-10-21
Microwave staring correlated imaging (MSCI) can achieve ultra-high resolution in real aperture staring radar imaging using the correlated imaging process (CIP) under all-weather and all-day circumstances. The CIP must combine the received echo signal with the temporal-spatial stochastic radiation field. However, a precondition of the CIP is that the continuous imaging region must be discretized to a fine grid, and the measurement matrix must be accurately computed, which makes the imaging process highly complex when the MSCI system observes a wide area. This paper proposes an adaptive imaging approach for targets in discrete clusters that reduces the complexity of the CIP. The approach is divided into two main stages. First, as discrete clustered targets are distributed in different range strips in the imaging region, the transmitters of the MSCI system emit narrow-pulse waveforms to separate the echoes of targets in different strips in the time domain; using spectral entropy, a modified method robust against noise is put forward to detect the echoes of the discrete clustered targets, based on which the strips containing targets can be adaptively located. Second, within a strip containing targets, a matched filter reconstruction algorithm is used to locate the regions with targets, and only the regions of interest are discretized to a fine grid; sparse recovery is then applied, with band exclusion used to maintain the incoherence of the dictionary. Simulation results demonstrate that the proposed approach can accurately and adaptively locate the regions with targets and obtain high-quality reconstructed images.
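The spectral-entropy detection step in the first stage can be sketched as follows. This is a minimal illustration, not the authors' modified method: the function names, the normalization to [0, 1], and the detection threshold are all assumptions. The idea is that a strip containing target echoes produces a structured (low-entropy) spectrum, while an empty strip returns noise with a nearly flat (high-entropy) spectrum.

```python
import numpy as np

def spectral_entropy(signal, n_fft=256):
    """Normalized Shannon entropy of the signal's power spectrum:
    close to 1 for noise-like echoes, low for structured target echoes."""
    psd = np.abs(np.fft.rfft(signal, n=n_fft)) ** 2
    p = psd / psd.sum()               # normalize to a probability distribution
    p = p[p > 0]                      # avoid log(0)
    entropy = -np.sum(p * np.log2(p))
    return entropy / np.log2(len(psd))

def detect_strips(strip_echoes, threshold=0.8):
    """Flag range strips whose echo entropy falls below the threshold."""
    return [i for i, echo in enumerate(strip_echoes)
            if spectral_entropy(echo) < threshold]
```

A structured echo (e.g. a windowed tone) scores well below the threshold, while white noise scores near 1, so only strips with targets are passed on to the fine-grid reconstruction stage.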
Stimulation of dopamine D₁ receptor improves learning capacity in cooperating cleaner fish.
Messias, João P M; Santos, Teresa P; Pinto, Maria; Soares, Marta C
2016-01-27
Accurate contextual decision-making strategies are important in social environments. Specific areas in the brain are tasked to process these complex interactions and generate correct follow-up responses. The dorsolateral and dorsomedial parts of the telencephalon in the teleost fish brain are neural substrates modulated by the neurotransmitter dopamine (DA), and are part of an important neural circuitry that drives animal behaviour, from the most basic actions such as learning to search for food, to properly choosing partners and managing decisions based on context. The Indo-Pacific cleaner wrasse Labroides dimidiatus is a highly social teleost fish species with a complex network of interactions with its 'client' reef fish. We asked whether changes in DA signalling would affect individual learning ability by presenting cleaner fish with two ecologically different tasks that simulated a natural situation requiring accurate decision-making. We demonstrate that there is an involvement of the DA system and D1 receptor pathways in cleaners' natural abilities to learn both tasks. Our results add significantly to the growing literature on the physiological mechanisms that underlie and facilitate the expression of cooperative abilities. © 2016 The Author(s).
Interference effects in phased beam tracing using exact half-space solutions.
Boucher, Matthew A; Pluymers, Bert; Desmet, Wim
2016-12-01
Geometrical acoustics provides a correct solution to the wave equation for rectangular rooms with rigid boundaries and is an accurate approximation at high frequencies with nearly hard walls. When interference effects are important, phased geometrical acoustics is employed in order to account for phase shifts due to propagation and reflection. Error increases, however, with more absorption, complex impedance values, grazing incidence, smaller volumes and lower frequencies. Replacing the plane wave reflection coefficient with a spherical one reduces the error but results in slower convergence. Frequency-dependent stopping criteria are then applied to avoid calculating higher order reflections for frequencies that have already converged. Exact half-space solutions are used to derive two additional spherical wave reflection coefficients: (i) the Sommerfeld integral, consisting of a plane wave decomposition of a point source and (ii) a line of image sources located at complex coordinates. Phased beam tracing using exact half-space solutions agrees well with the finite element method for rectangular rooms with absorbing boundaries, at low frequencies and for rooms with different aspect ratios. Results are accurate even for long source-to-receiver distances. Finally, the crossover frequency between the plane and spherical wave reflection coefficients is discussed.
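The plane wave reflection coefficient that the abstract contrasts with its spherical replacements has a standard closed form for a locally reacting surface. The sketch below shows that baseline expression only; the two exact half-space corrections described in the abstract (Sommerfeld integral, complex image sources) are not reproduced here.

```python
import cmath

def plane_wave_R(zeta, theta):
    """Plane-wave reflection coefficient for a locally reacting surface of
    normalized specific impedance `zeta`, at incidence angle `theta` (radians
    from the normal). The spherical-wave coefficients discussed above correct
    this expression for the wavefront curvature of a point source."""
    c = cmath.cos(theta)
    return (zeta * c - 1) / (zeta * c + 1)
```

For a rigid wall (|zeta| → ∞) the coefficient tends to 1, recovering the image-source model of classical geometrical acoustics; errors grow for absorbing, complex-impedance boundaries and grazing incidence, which is where the spherical coefficients are needed.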
Chattopadhyay, Pratip K.; Melenhorst, J. Joseph; Ladell, Kristin; Gostick, Emma; Scheinberg, Philip; Barrett, A. John; Wooldridge, Linda; Roederer, Mario; Sewell, Andrew K.; Price, David A.
2008-01-01
The ability to quantify and characterize antigen-specific CD8+ T cells irrespective of functional readouts using fluorochrome-conjugated tetrameric peptide-MHC class I (pMHCI) complexes in conjunction with flow cytometry has transformed our understanding of cellular immune responses over the past decade. In the case of prevalent CD8+ T cell populations that engage cognate pMHCI tetramers with high avidities, direct ex vivo identification and subsequent data interpretation is relatively straightforward. However, the accurate identification of low frequency antigen-specific CD8+ T cell populations can be complicated, especially in situations where TCR-mediated tetramer binding occurs at low avidities. Here, we highlight a few simple techniques that can be employed to improve the visual resolution, and hence the accurate quantification, of tetramer-binding CD8+ T cell populations by flow cytometry. These methodological modifications enhance signal intensity, especially in the case of specific CD8+ T cell populations that bind cognate antigen with low avidity, minimize background noise and enable improved discrimination of true pMHCI tetramer binding events from nonspecific uptake. PMID:18836993
Ringer, Ashley L.; Senenko, Anastasia; Sherrill, C. David
2007-01-01
S/π interactions are prevalent in biochemistry and play an important role in protein folding and stabilization. Geometries of cysteine/aromatic interactions found in crystal structures from the Brookhaven Protein Data Bank (PDB) are analyzed and compared with the equilibrium configurations predicted by high-level quantum mechanical results for the H2S–benzene complex. A correlation is observed between the energetically favorable configurations on the quantum mechanical potential energy surface of the H2S–benzene model and the cysteine/aromatic configurations most frequently found in crystal structures of the PDB. In contrast to some previous PDB analyses, configurations with the sulfur over the aromatic ring are found to be the most important. Our results suggest that accurate quantum computations on models of noncovalent interactions may be helpful in understanding the structures of proteins and other complex systems. PMID:17766371
Wang, Bing; Westerhoff, Lance M.; Merz, Kenneth M.
2008-01-01
We have generated docking poses for the FKBP-GPI complex using eight docking programs, and compared their scoring functions with scoring based on NMR chemical shift perturbations (NMRScore). Because the chemical shift perturbation (CSP) is exquisitely sensitive on the orientation of ligand inside the binding pocket, NMRScore offers an accurate and straightforward approach to score different poses. All scoring functions were inspected by their abilities to highly rank the native-like structures and separate them from decoy poses generated for a protein-ligand complex. The overall performance of NMRScore is much better than that of energy-based scoring functions associated with docking programs in both aspects. In summary, we find that the combination of docking programs with NMRScore results in an approach that can robustly determine the binding site structure for a protein-ligand complex, thereby, providing a new tool facilitating the structure-based drug discovery process. PMID:17867664
Wetland mapping from digitized aerial photography. [Sheboygan Marsh, Sheboygan County, Wisconsin]
NASA Technical Reports Server (NTRS)
Scarpace, F. L.; Quirk, B. K.; Kiefer, R. W.; Wynn, S. L.
1981-01-01
Computer assisted interpretation of small scale aerial imagery was found to be a cost effective and accurate method of mapping complex vegetation patterns if high resolution information is desired. This type of technique is suited for problems such as monitoring changes in species composition due to environmental factors and is a feasible method of monitoring and mapping large areas of wetlands. The technique has the added advantage of being in a computer compatible form which can be transformed into any georeference system of interest.
Application of furniture images selection based on neural network
NASA Astrophysics Data System (ADS)
Wang, Yong; Gao, Wenwen; Wang, Ying
2018-05-01
In the construction of a furniture image database of 2 million images, the low quality of the raw collection is a central problem. A method combining a convolutional neural network (CNN) with metric learning is proposed, which makes it possible to quickly and accurately remove duplicate and irrelevant samples from the furniture image database. This addresses the shortcomings of existing image screening methods, which are complex, insufficiently accurate, and time-consuming. With data quality improved, the deep learning algorithm achieves excellent image matching ability in practical furniture retrieval applications.
Optical radiation measurements: instrumentation and sources of error.
Landry, R J; Andersen, F A
1982-07-01
Accurate measurement of optical radiation is required when sources of this radiation are used in biological research. The most difficult measurements of broadband noncoherent optical radiations usually must be performed by a highly trained specialist using sophisticated, complex, and expensive instruments. Presentation of the results of such measurement requires correct use of quantities and units with which many biological researchers are unfamiliar. The measurement process, physical quantities and units, measurement systems with instruments, and sources of error and uncertainties associated with optical radiation measurements are reviewed.
Real-time, haptics-enabled simulator for probing ex vivo liver tissue.
Lister, Kevin; Gao, Zhan; Desai, Jaydev P
2009-01-01
The advent of complex surgical procedures has driven the need for realistic surgical training simulators. Comprehensive simulators that provide realistic visual and haptic feedback during surgical tasks are required to familiarize surgeons with the procedures they are to perform. Complex organ geometry inherent to biological tissues and intricate material properties drive the need for finite element methods to assure accurate tissue displacement and force calculations. Advances in real-time finite element methods have not reached the state where they are applicable to soft tissue surgical simulation. Therefore a real-time, haptics-enabled simulator for probing of soft tissue has been developed which utilizes preprocessed finite element data (derived from accurate constitutive model of the soft-tissue obtained from carefully collected experimental data) to accurately replicate the probing task in real-time.
Microfluidic 3D cell culture: potential application for tissue-based bioassays
Li, XiuJun (James); Valadez, Alejandra V.; Zuo, Peng; Nie, Zhihong
2014-01-01
Current fundamental investigations of human biology and the development of therapeutic drugs commonly rely on two-dimensional (2D) monolayer cell culture systems. However, 2D cell culture systems do not accurately recapitulate the structure, function, and physiology of living tissues, nor the highly complex and dynamic three-dimensional (3D) environments found in vivo. Microfluidic technology can provide micro-scale complex structures and well-controlled parameters to mimic the in vivo environment of cells. The combination of microfluidic technology with 3D cell culture offers great potential for in vivo-like tissue-based applications, such as the emerging organ-on-a-chip systems. This article will review recent advances in microfluidic technology for 3D cell culture and their biological applications. PMID:22793034
Romi, Wahengbam; Keisam, Santosh; Ahmed, Giasuddin; Jeyaram, Kumaraswamy
2014-02-28
Meyerozyma guilliermondii (anamorph Candida guilliermondii) and Meyerozyma caribbica (anamorph Candida fermentati) are closely related species of the genetically heterogeneous M. guilliermondii complex. Conventional phenotypic methods frequently misidentify the species within this complex and also with other species of the Saccharomycotina CTG clade. Even the long-established sequencing of the large subunit (LSU) rRNA gene remains ambiguous. We faced a similar problem during the identification of yeast isolates of the M. guilliermondii complex from indigenous bamboo shoot fermentation in North East India. There is a need to develop reliable and accurate identification methods for these closely related species because of their increasing importance as emerging infectious yeasts and their associated biotechnological attributes. We targeted the highly variable internal transcribed spacer (ITS) region (ITS1-5.8S-ITS2) and identified seven restriction enzymes through in silico analysis for differentiating M. guilliermondii from M. caribbica. Fifty-five isolates of the M. guilliermondii complex which could not be delineated into species-specific taxonomic ranks by API 20 C AUX and LSU rRNA gene D1/D2 sequencing were subjected to ITS-restriction fragment length polymorphism (ITS-RFLP) analysis. TaqI ITS-RFLP distinctly differentiated the isolates into M. guilliermondii (47 isolates) and M. caribbica (8 isolates) with reproducible species-specific patterns similar to the in silico prediction. The reliability of this method was validated by ITS1-5.8S-ITS2 sequencing, mitochondrial DNA RFLP and electrophoretic karyotyping. We herein describe a reliable ITS-RFLP method for distinct differentiation of the frequently misidentified M. guilliermondii from M. caribbica. Even though in silico analysis differentiated other closely related species of the M. guilliermondii complex from the above two species, this remains to be confirmed by in vitro analysis using reference strains.
This method can be used as a reliable tool for rapid and accurate identification of closely related species of M. guilliermondii complex and for differentiating emerging infectious yeasts of the Saccharomycotina CTG clade.
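The in silico digestion underlying an ITS-RFLP prediction can be sketched in a few lines. This is an illustration only, not the authors' analysis pipeline: the function name is hypothetical, and it assumes complete digestion of the given strand. TaqI recognizes the palindromic site TCGA and cuts between T and C (T^CGA); species are distinguished by the resulting fragment-length pattern.

```python
def taqi_fragments(seq):
    """In silico TaqI digestion of an ITS amplicon sequence.
    TaqI recognizes TCGA and cuts T^CGA; returns the list of fragment
    lengths, whose pattern on a gel differentiates the species."""
    site, cut_offset = "TCGA", 1   # cut falls one base into the site
    cuts, pos = [], seq.find(site)
    while pos != -1:
        cuts.append(pos + cut_offset)
        pos = seq.find(site, pos + 1)
    bounds = [0] + cuts + [len(seq)]
    return [bounds[i + 1] - bounds[i] for i in range(len(bounds) - 1)]
```

Running such a digestion on the ITS1-5.8S-ITS2 sequences of two candidate species predicts the species-specific band patterns that the gel-based assay then confirms.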
Human eyeball model reconstruction and quantitative analysis.
Xing, Qi; Wei, Qi
2014-01-01
Determining the shape of the eyeball is important for diagnosing eye diseases such as myopia. In this paper, we present an automatic approach to precisely reconstruct the three-dimensional geometric shape of the eyeball from MR images. The model development pipeline involved image segmentation, registration, B-spline surface fitting and subdivision surface fitting, none of which required manual interaction. From the resulting high-resolution models, geometric characteristics of the eyeball can be accurately quantified and analyzed. In addition to the eight metrics commonly used by existing studies, we proposed two novel metrics, Gaussian Curvature Analysis and Sphere Distance Deviation, to quantify the cornea shape and the whole eyeball surface respectively. The experiment results showed that the reconstructed eyeball models accurately represent the complex morphology of the eye. The ten metrics parameterize the eyeball among different subjects, which can potentially be used for eye disease diagnosis.
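A sphere-deviation metric of the kind named above can be sketched as follows. This is one plausible reading, not the paper's definition: fit a sphere to the reconstructed surface points by linear least squares, then report the RMS radial deviation from that best-fit sphere (zero for a perfectly spherical eyeball, growing with axial elongation).

```python
import numpy as np

def sphere_distance_deviation(points):
    """Illustrative sketch of a sphere-deviation metric (the paper's exact
    definition may differ). Fits a sphere |p - c|^2 = r^2 via the linear
    system 2 p.c + (r^2 - |c|^2) = |p|^2, then returns the RMS radial
    deviation of the points from that sphere."""
    P = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = np.sqrt(k + center @ center)
    r = np.linalg.norm(P - center, axis=1)
    return float(np.sqrt(np.mean((r - radius) ** 2)))
```

Applied to surface vertices of the reconstructed eyeball model, the metric returns a single number per subject that can be compared across a cohort.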
NASA Astrophysics Data System (ADS)
Gurrala, Praveen; Downs, Andrew; Chen, Kun; Song, Jiming; Roberts, Ron
2018-04-01
Full wave scattering models for ultrasonic waves are necessary for the accurate prediction of voltage signals received from complex defects/flaws in practical nondestructive evaluation (NDE) measurements. We propose the high-order Nyström method accelerated by the multilevel fast multipole algorithm (MLFMA) as an improvement to the state-of-the-art full-wave scattering models that are based on boundary integral equations. We present numerical results demonstrating improvements in simulation time and memory requirement. Particularly, we demonstrate the need for higher order geometry and field approximation in modeling NDE measurements. Also, we illustrate the importance of full-wave scattering models using experimental pulse-echo data from a spherical inclusion in a solid, which cannot be modeled accurately by approximation-based scattering models such as the Kirchhoff approximation.
A vector scanning processing technique for pulsed laser velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Edwards, Robert V.
1989-01-01
Pulsed-laser-sheet velocimetry yields two-dimensional velocity vectors across an extended planar region of a flow. Current processing techniques offer high-precision (1-percent) velocity estimates, but can require hours of processing time on specialized array processors. Sometimes, however, a less accurate (about 5 percent) data-reduction technique which also gives unambiguous velocity vector information is acceptable. Here, a direct space-domain processing technique is described and shown to be far superior to previous methods in achieving these objectives. It uses a novel data coding and reduction technique and has no 180-deg directional ambiguity. A complex convection vortex flow was recorded and completely processed in under 2 min on an 80386-based PC, producing a two-dimensional velocity-vector map of the flowfield. Pulsed-laser velocimetry data can thus be reduced quickly and reasonably accurately, without specialized array processing hardware.
Eves, E Eugene; Murphy, Ethan K; Yakovlev, Vadim V
2007-01-01
The paper discusses characteristics of a new modeling-based technique for determining dielectric properties of materials. Complex permittivity is found with an optimization algorithm designed to match complex S-parameters obtained from measurements and from 3D FDTD simulation. The method is developed on a two-port (waveguide-type) fixture and deals with complex reflection and transmission characteristics at the frequency of interest. A computational part is constructed as an inverse-RBF-network-based procedure that reconstructs the dielectric constant and the loss factor of the sample from the FDTD modeling data sets and the measured reflection and transmission coefficients. As such, it is applicable to samples and cavities of arbitrary configurations provided that the geometry of the experimental setup is adequately represented by the FDTD model. The practical implementation of the method considered in this paper is a section of a WR975 waveguide containing a sample of a liquid in a cylindrical cutout of a rectangular Teflon cup. The method is run in two stages and employs two databases: the first, built on a sparse grid over the complex permittivity plane, locates a domain containing the anticipated solution; the second, a denser grid covering the determined domain, finds the exact location of the complex permittivity point. Numerical tests demonstrate that the computational part of the method is highly accurate even when the modeling data is represented by relatively small data sets. When working with reflection and transmission coefficients measured in an actual experimental fixture and reconstructing a low dielectric constant and loss factor, the technique may be less accurate. It is shown that the employed neural network is capable of finding the complex permittivity of the sample even when the experimental data on the reflection and transmission coefficients are numerically dispersive (noise-contaminated).
A special modeling test is proposed for validating the results; it confirms that the values of complex permittivity for several liquids (including salt water, acetone, and three types of alcohol) at 915 MHz are reconstructed with satisfactory accuracy.
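The two-stage, sparse-then-dense search over the complex permittivity plane can be sketched as below. This is a hedged illustration, not the paper's RBF-network procedure: `simulate(eps1, eps2)` stands in for the forward model (FDTD or its RBF surrogate) that predicts a complex S-parameter, and the grid spacings and refinement window are arbitrary.

```python
import numpy as np

def two_stage_search(s_measured, simulate, coarse_grids,
                     refine_steps=21, window=1.0):
    """Stage 1: a sparse grid over (dielectric constant, loss factor)
    locates the domain of the solution. Stage 2: a denser grid over that
    domain pins down the complex permittivity point."""
    def best(grid1, grid2):
        err = [[abs(simulate(a, b) - s_measured) for b in grid2]
               for a in grid1]
        i, j = np.unravel_index(np.argmin(err), (len(grid1), len(grid2)))
        return grid1[i], grid2[j]
    e1, e2 = best(*coarse_grids)                       # sparse stage
    d1 = np.linspace(e1 - window, e1 + window, refine_steps)
    d2 = np.linspace(max(e2 - window, 0.0), e2 + window, refine_steps)
    return best(d1, d2)                                # dense stage
```

With a real forward model, each `simulate` call is expensive, which is why the paper precomputes the two databases of modeled S-parameters and interpolates them with an RBF network rather than searching the FDTD model directly.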
A multiscale red blood cell model with accurate mechanics, rheology, and dynamics.
Fedosov, Dmitry A; Caswell, Bruce; Karniadakis, George Em
2010-05-19
Red blood cells (RBCs) have highly deformable viscoelastic membranes exhibiting complex rheological response and rich hydrodynamic behavior governed by special elastic and bending properties and by the external/internal fluid and membrane viscosities. We present a multiscale RBC model that is able to predict RBC mechanics, rheology, and dynamics in agreement with experiments. Based on an analytic theory, the modeled membrane properties can be uniquely related to the experimentally established RBC macroscopic properties without any adjustment of parameters. The RBC linear and nonlinear elastic deformations match those obtained in optical-tweezers experiments. The rheological properties of the membrane are compared with those obtained in optical magnetic twisting cytometry, membrane thermal fluctuations, and creep followed by cell recovery. The dynamics of RBCs in shear and Poiseuille flows is tested against experiments and theoretical predictions, and the applicability of the latter is discussed. Our findings clearly indicate that a purely elastic model for the membrane cannot accurately represent the RBC's rheological properties and its dynamics, and therefore accurate modeling of a viscoelastic membrane is necessary. Copyright 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.
The accurate assessment of small-angle X-ray scattering data
Grant, Thomas D.; Luft, Joseph R.; Carter, Lester G.; ...
2015-01-23
Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. Studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.
White, Alec F.; Epifanovsky, Evgeny; McCurdy, C. William; ...
2017-06-21
The method of complex basis functions is applied to molecular resonances at correlated levels of theory. Møller-Plesset perturbation theory at second order and equation-of-motion electron attachment coupled-cluster singles and doubles (EOM-EA-CCSD) methods based on a non-Hermitian self-consistent-field reference are used to compute accurate Siegert energies for shape resonances in small molecules including N₂⁻, CO⁻, CO₂⁻, and CH₂O⁻. Analytic continuation of complex θ-trajectories is used to compute Siegert energies, and the θ-trajectories of energy differences are found to yield more consistent results than those of total energies. Furthermore, the ability of such methods to accurately compute complex potential energy surfaces is investigated, and the possibility of using EOM-EA-CCSD for Feshbach resonances is explored in the context of e-helium scattering.
Pain management in the geriatric population.
Borsheski, Robert; Johnson, Quinn L
2014-01-01
Pain is a highly prevalent and clinically important problem in the elderly. Unfortunately, due to difficulties in assessing pain in geriatric patients, the complexities of multiple comorbidities, and the high prevalence of polypharmacy, many practitioners are reluctant to treat pain aggressively in this unique patient population. Safe and effective treatment, therefore, requires a working knowledge of the physiologic changes associated with aging, the challenges of accurately assessing pain, and the unique effects of common therapeutic agents on the elderly, as well as the importance of adjunctive therapies. The following review is intended to provide the practitioner with practical knowledge for safer and more effective treatment of pain.
Retinal Connectomics: Towards Complete, Accurate Networks
Marc, Robert E.; Jones, Bryan W.; Watt, Carl B.; Anderson, James R.; Sigulinsky, Crystal; Lauritzen, Scott
2013-01-01
Connectomics is a strategy for mapping complex neural networks based on high-speed automated electron optical imaging, computational assembly of neural data volumes, web-based navigational tools to explore 10¹²–10¹⁵-byte (terabyte to petabyte) image volumes, and annotation and markup tools to convert images into rich networks with cellular metadata. These collections of network data and associated metadata, analyzed using tools from graph theory and classification theory, can be merged with classical systems theory, giving a more completely parameterized view of how biologic information processing systems are implemented in retina and brain. Networks have two separable features: topology and connection attributes. The first findings from connectomics strongly validate the idea that the topologies of complete retinal networks are far more complex than the simple schematics that emerged from classical anatomy. In particular, connectomics has permitted an aggressive refactoring of the retinal inner plexiform layer, demonstrating that network function cannot be simply inferred from stratification; exposing the complex geometric rules for inserting different cells into a shared network; revealing unexpected bidirectional signaling pathways between mammalian rod and cone systems; documenting selective feedforward systems, novel candidate signaling architectures, new coupling motifs, and the highly complex architecture of the mammalian AII amacrine cell. This is but the beginning, as the underlying principles of connectomics are readily transferrable to non-neural cell complexes and provide new contexts for assessing intercellular communication. PMID:24016532
NASA Astrophysics Data System (ADS)
Taneja, Ankur; Higdon, Jonathan
2018-01-01
A high-order spectral element discontinuous Galerkin method is presented for simulating immiscible two-phase flow in petroleum reservoirs. The governing equations involve a coupled system of strongly nonlinear partial differential equations for the pressure and fluid saturation in the reservoir. A fully implicit method is used with a high-order accurate time integration using an implicit Rosenbrock method. Numerical tests give the first demonstration of high order hp spatial convergence results for multiphase flow in petroleum reservoirs with industry standard relative permeability models. High order convergence is shown formally for spectral elements with up to 8th order polynomials for both homogeneous and heterogeneous permeability fields. Numerical results are presented for multiphase fluid flow in heterogeneous reservoirs with complex geometric or geologic features using up to 11th order polynomials. Robust, stable simulations are presented for heterogeneous geologic features, including globally heterogeneous permeability fields, anisotropic permeability tensors, broad regions of low-permeability, high-permeability channels, thin shale barriers and thin high-permeability fractures. A major result of this paper is the demonstration that the resolution of the high order spectral element method may be exploited to achieve accurate results utilizing a simple cartesian mesh for non-conforming geological features. Eliminating the need to mesh to the boundaries of geological features greatly simplifies the workflow for petroleum engineers testing multiple scenarios in the face of uncertainty in the subsurface geology.
A general method for bead-enhanced quantitation by flow cytometry
Montes, Martin; Jaensson, Elin A.; Orozco, Aaron F.; Lewis, Dorothy E.; Corry, David B.
2009-01-01
Flow cytometry provides accurate relative cellular quantitation (percent abundance) of cells from diverse samples, but technical limitations of most flow cytometers preclude accurate absolute quantitation. Several quantitation standards are now commercially available which, when added to samples, permit absolute quantitation of CD4+ T cells. However, these reagents are limited by their cost, technical complexity, requirement for additional software and/or limited applicability. Moreover, few studies have validated the use of such reagents in complex biological samples, especially for quantitation of non-T cells. Here we show that addition to samples of known quantities of polystyrene fluorescence standardization beads permits accurate quantitation of CD4+ T cells from complex cell samples. This procedure, here termed single bead-enhanced cytofluorimetry (SBEC), was equally capable of enumerating eosinophils as well as subcellular fragments of apoptotic cells, moieties with very different optical and fluorescent characteristics. Relative to other proprietary products, SBEC is simple, inexpensive and requires no special software, suggesting that the method is suitable for the routine quantitation of most cells and other particles by flow cytometry. PMID:17067632
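The bead-based arithmetic behind such absolute quantitation is simple enough to sketch. This is an illustrative calculation, not the authors' protocol: the function name and parameters are assumptions. The ratio of gated cell events to gated bead events scales the known number of beads spiked into the sample up to an absolute cell concentration.

```python
def absolute_count(cell_events, bead_events, beads_added, sample_volume_ul):
    """Absolute cell concentration (cells/µL) from a bead-spiked sample:
    (cell events / bead events) gives cells per bead detected, which scales
    the known number of beads added to a total cell count."""
    cells_in_sample = cell_events / bead_events * beads_added
    return cells_in_sample / sample_volume_ul

# e.g. 5,000 CD4+ events and 2,500 bead events with 50,000 beads spiked
# into 100 µL gives 100,000 cells total, i.e. 1,000 cells/µL
```

Because only event ratios enter the calculation, the result is independent of how much of the sample is actually acquired on the cytometer, which is what makes the approach robust to instrument sampling limitations.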
Accurate Structural Correlations from Maximum Likelihood Superpositions
Theobald, Douglas L; Wuttke, Deborah S
2008-01-01
The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology. PMID:18282091
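As a rough illustration of the PCA step described above, the sketch below extracts the dominant modes of a correlation matrix by eigendecomposition. The paper's maximum likelihood estimator of the correlation matrix is not reproduced here; the toy ensemble and all names are purely illustrative.

```python
import numpy as np

def principal_modes(corr, n_modes=3):
    """Return the dominant modes of a symmetric correlation matrix.

    corr: (N, N) correlation matrix, e.g. a maximum likelihood estimate
          computed from a structural ensemble (that estimator is not shown).
    """
    eigvals, eigvecs = np.linalg.eigh(corr)       # eigh returns ascending order
    order = np.argsort(eigvals)[::-1][:n_modes]   # keep the largest modes
    return eigvals[order], eigvecs[:, order]

# Toy ensemble: 100 "structures" of 4 coordinates with one built-in correlation
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 4))
x[:, 1] = x[:, 0] + 0.1 * rng.normal(size=100)    # coords 0 and 1 co-vary
corr = np.corrcoef(x, rowvar=False)
vals, vecs = principal_modes(corr, n_modes=2)
print(vals[0] > vals[1])  # the dominant mode captures the correlated pair
```

In the paper's setting, the leading eigenvectors would then be color-coded onto the structure to produce the "PCA plots" described above.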
Lauritzen, Ted
1982-01-01
A measuring system is disclosed for surveying and very accurately positioning objects with respect to a reference line. A principal use of this surveying system is for accurately aligning the electromagnets which direct a particle beam emitted from a particle accelerator. Prior art surveying systems require highly skilled surveyors. Prior art systems include, for example, optical surveying systems which are susceptible to operator reading errors, and celestial navigation-type surveying systems, with their inherent complexities. The present invention provides an automatic readout micrometer which can very accurately measure distances. The invention has a simplicity of operation which practically eliminates the possibilities of operator optical reading error, owing to the elimination of traditional optical alignments for making measurements. The invention has an extendable arm which carries a laser surveying target. The extendable arm can be continuously positioned over its entire length of travel by either a coarse or fine adjustment without having the fine adjustment outrun the coarse adjustment until a reference laser beam is centered on the target as indicated by a digital readout. The length of the micrometer can then be accurately and automatically read by a computer and compared with a standardized set of alignment measurements. Due to its construction, the micrometer eliminates any errors due to temperature changes when the system is operated within a standard operating temperature range.
Liu, Xiao; Xu, Yinyin; Liang, Dequan; Gao, Peng; Sun, Yepeng; Gifford, Benjamin; D’Ascenzo, Mark; Liu, Xiaomin; Tellier, Laurent C. A. M.; Yang, Fang; Tong, Xin; Chen, Dan; Zheng, Jing; Li, Weiyang; Richmond, Todd; Xu, Xun; Wang, Jun; Li, Yingrui
2013-01-01
The major histocompatibility complex (MHC) is one of the most variable and gene-dense regions of the human genome. Most studies of the MHC, and associated regions, focus on minor variants and HLA typing, many of which have been demonstrated to be associated with human disease susceptibility and metabolic pathways. However, the detection of variants in the MHC region, and diagnostic HLA typing, still lacks a coherent, standardized, cost-effective and high-coverage protocol of clinical quality and reliability. In this paper, we present such a method for the accurate detection of minor variants and HLA types in the human MHC region, using high-throughput, high-coverage sequencing of target regions. A probe set was designed based upon the 8 annotated human MHC haplotypes and encompasses the 5 megabases (Mb) of the extended MHC region. We deployed our probes upon three genetically diverse human samples for probe set evaluation, and sequencing data show that ∼97% of the MHC region, and over 99% of the genes in the MHC region, are covered with sufficient depth and good evenness. 98% of genotypes called by this capture sequencing prove consistent with established HapMap genotypes. We have concurrently developed a one-step pipeline for calling any HLA type referenced in the IMGT/HLA database from this target capture sequencing data, which shows over 96% typing accuracy when deployed at 4-digit resolution. This cost-effective and highly accurate approach for variant detection and HLA typing in the MHC region may lend further insight into immune-mediated disease studies, and may find clinical utility in transplantation medicine research. This one-step pipeline is released for general evaluation and use by the scientific community. PMID:23894464
Velarde, Luis; Wang, Hong-Fei
2013-12-14
The lack of understanding of the temporal effects and the restricted ability to control experimental conditions in order to obtain intrinsic spectral lineshapes in surface sum-frequency generation vibrational spectroscopy (SFG-VS) have limited its applications in surface and interfacial studies. The emergence of high-resolution broadband sum-frequency generation vibrational spectroscopy (HR-BB-SFG-VS) with sub-wavenumber resolution [Velarde et al., J. Chem. Phys., 2011, 135, 241102] offers new opportunities for obtaining and understanding the spectral lineshapes and temporal effects in SFG-VS. Particularly, the high accuracy of the HR-BB-SFG-VS experimental lineshape provides detailed information on the complex coherent vibrational dynamics through direct spectral measurements. Here we present a unified formalism for the theoretical and experimental routes for obtaining an accurate lineshape of the SFG response. Then, we present a detailed analysis of a cholesterol monolayer at the air/water interface with higher and lower resolution SFG spectra along with their temporal response. With higher spectral resolution and accurate vibrational spectral lineshapes, it is shown that the parameters of the experimental SFG spectra can be used both to understand and to quantitatively reproduce the temporal effects in lower resolution SFG measurements. This perspective provides not only a unified picture but also a novel experimental approach to measuring and understanding the frequency-domain and time-domain SFG response of a complex molecular interface.
Roussis, S G
2001-08-01
The automated acquisition of the product ion spectra of all precursor ions in a selected mass range by using a magnetic sector/orthogonal acceleration time-of-flight (oa-TOF) tandem mass spectrometer for the characterization of complex petroleum mixtures is reported. Product ion spectra are obtained by rapid oa-TOF data acquisition and simultaneous scanning of the magnet. An analog signal generator is used for the scanning of the magnet. Slow magnet scanning rates permit the accurate profiling of precursor ion peaks and the acquisition of product ion spectra for all isobaric ion species. The ability of the instrument to perform both high- and low-energy collisional activation experiments provides access to a large number of dissociation pathways useful for the characterization of precursor ions. Examples are given that illustrate the capability of the method for the characterization of representative petroleum mixtures. The structural information obtained by the automated MS/MS experiment is used in combination with high-resolution accurate mass measurement results to characterize unknown components in a polar extract of a refinery product. The exhaustive mapping of all precursor ions in representative naphtha and middle-distillate fractions is presented. Sets of isobaric ion species are separated and their structures are identified by interpretation from first principles or by comparison with standard 70-eV EI libraries of spectra. The utility of the method increases with the complexity of the samples.
Taoka, Masato; Yamauchi, Yoshio; Nobe, Yuko; Masaki, Shunpei; Nakayama, Hiroshi; Ishikawa, Hideaki; Takahashi, Nobuhiro; Isobe, Toshiaki
2009-11-01
We describe here a mass spectrometry (MS)-based analytical platform of RNA, which combines direct nano-flow reversed-phase liquid chromatography (RPLC) on a spray tip column and a high-resolution LTQ-Orbitrap mass spectrometer. Operating RPLC under a very low flow rate with volatile solvents and MS in the negative mode, we could estimate highly accurate mass values sufficient to predict the nucleotide composition of an approximately 21-nucleotide small interfering RNA, detect post-transcriptional modifications in yeast tRNA, and perform collision-induced dissociation/tandem MS-based structural analysis of nucleolytic fragments of RNA at a sub-femtomole level. Importantly, the method allowed the identification and chemical analysis of small RNAs in a ribonucleoprotein (RNP) complex, such as the pre-spliceosomal RNP complex, which was pulled down from cultured cells with a tagged protein cofactor as bait. We have recently developed a unique genome-oriented database search engine, Ariadne, which allows tandem MS-based identification of RNAs in biological samples. Thus, the method presented here has broad potential for automated analysis of RNA; it complements conventional molecular biology-based techniques and is particularly suited for simultaneous analysis of the composition, structure, interaction, and dynamics of RNA and protein components in various cellular RNP complexes.
Delparte, D; Gates, RD; Takabayashi, M
2015-01-01
The structural complexity of coral reefs plays a major role in the biodiversity, productivity, and overall functionality of reef ecosystems. Conventional metrics with 2-dimensional properties are inadequate for characterization of reef structural complexity. A 3-dimensional (3D) approach can better quantify topography, rugosity and other structural characteristics that play an important role in the ecology of coral reef communities. Structure-from-Motion (SfM) is an emerging low-cost photogrammetric method for high-resolution 3D topographic reconstruction. This study utilized SfM 3D reconstruction software tools to create textured mesh models of a reef at French Frigate Shoals, an atoll in the Northwestern Hawaiian Islands. The reconstructed orthophoto and digital elevation model were then integrated with geospatial software in order to quantify metrics pertaining to 3D complexity. The resulting data provided high-resolution physical properties of coral colonies that were then combined with live cover to accurately characterize the reef as a living structure. The 3D reconstruction of reef structure and complexity can be integrated with other physiological and ecological parameters in future research to develop reliable ecosystem models and improve capacity to monitor changes in the health and function of coral reef ecosystems. PMID:26207190
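A common 3D complexity metric computable from such SfM meshes is rugosity, the ratio of 3D surface area to its planar footprint area. The sketch below is our illustration of that metric on a triangulated mesh, not the study's actual geospatial pipeline; all names are assumptions.

```python
import numpy as np

def rugosity(vertices, faces):
    """Surface rugosity: 3D mesh area divided by its planar (x, y) footprint.

    vertices: (N, 3) array of x, y, z coordinates
    faces:    (M, 3) integer array indexing triangles into `vertices`
    """
    tri = vertices[faces]                      # (M, 3, 3) triangle corners
    a = tri[:, 1] - tri[:, 0]
    b = tri[:, 2] - tri[:, 0]
    area3d = 0.5 * np.linalg.norm(np.cross(a, b), axis=1).sum()
    # Planar area: the same triangles with their z-extent flattened to zero
    a2 = a.copy(); b2 = b.copy()
    a2[:, 2] = 0.0; b2[:, 2] = 0.0
    area2d = 0.5 * np.linalg.norm(np.cross(a2, b2), axis=1).sum()
    return area3d / area2d

# A flat triangle has rugosity exactly 1; any vertical relief raises it above 1
flat = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float)
print(rugosity(flat, np.array([[0, 1, 2]])))  # 1.0
```

On a real reef model the faces would come from the reconstructed textured mesh, and the metric can be computed per coral colony as well as reef-wide.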
Comparison of alternative designs for reducing complex neurons to equivalent cables.
Burke, R E
2000-01-01
Reduction of the morphological complexity of actual neurons into accurate, computationally efficient surrogate models is an important problem in computational neuroscience. The present work explores the use of two morphoelectrotonic transformations, somatofugal voltage attenuation (AT cables) and signal propagation delay (DL cables), as bases for construction of electrotonically equivalent cable models of neurons. In theory, the AT and DL cables should provide more accurate lumping of membrane regions that have the same transmembrane potential than the familiar equivalent cables that are based only on somatofugal electrotonic distance (LM cables). In practice, AT and DL cables indeed provided more accurate simulations of the somatic transient responses produced by fully branched neuron models than LM cables. This was the case in the presence of a somatic shunt as well as when membrane resistivity was uniform.
NASA Astrophysics Data System (ADS)
Bolte, Stephanie E.; Ooms, Kristopher J.; Polenova, Tatyana; Baruah, Bharat; Crans, Debbie C.; Smee, Jason J.
2008-02-01
51V solid-state NMR and density functional theory (DFT) investigations are reported for a series of pentacoordinate dioxovanadium(V)-dipicolinate [V(V)O2-dipicolinate] and heptacoordinate aquahydroxylamidooxovanadium(V)-dipicolinate [V(V)O-dipicolinate] complexes. These compounds are of interest because of their potency as phosphatase inhibitors as well as their insulin-enhancing properties and potential for the treatment of diabetes. Experimental solid-state NMR results show that the electric field gradient tensors in the V(V)O2-dipicolinate derivatives are affected significantly by substitution on the dipicolinate ring and range from 5.8 to 8.3 MHz. The chemical shift anisotropies show less dramatic variations with respect to the ligand changes and range between -550 and -600 ppm. To gain insights on the origins of the NMR parameters, DFT calculations were conducted for an extensive series of the V(V)O2- and V(V)O-dipicolinate complexes. To assess the level of theory required for the accurate calculation of the 51V NMR parameters, different functionals, basis sets, and structural models were explored in the DFT study. It is shown that the original x-ray crystallographic geometries, including all counterions and solvation water molecules within 5 Å of the vanadium, lead to the most accurate results. The choice of the functional and the basis set at a high level of theory has a relatively minor impact on the outcome of the chemical shift anisotropy calculations; however, the use of large basis sets is necessary for accurate calculations of the quadrupole coupling constants for several compounds of the V(V)O2 series. These studies demonstrate that even though the vanadium compounds under investigation exhibit distorted trigonal bipyramidal coordination geometry, they have a "perfect" trigonal bipyramidal electronic environment. This observation could potentially explain why vanadate and vanadium(V) adducts are often recognized as potent transition state analogs.
Knöpfel, Thomas; Leech, Robert
2018-01-01
Local perturbations within complex dynamical systems can trigger cascade-like events that spread across significant portions of the system. Cascades of this type have been observed across a broad range of scales in the brain. Studies of these cascades, known as neuronal avalanches, usually report the statistics of large numbers of avalanches, without probing the characteristic patterns produced by the avalanches themselves. This is partly due to limitations in the extent or spatiotemporal resolution of commonly used neuroimaging techniques. In this study, we overcome these limitations by using optical voltage imaging (genetically encoded voltage indicators). This allows us to record cortical activity in vivo across an entire cortical hemisphere, at both high spatial (~30 μm) and temporal (~20 ms) resolution, in mice that are either in an anesthetized or awake state. We then use artificial neural networks to identify the characteristic patterns created by neuronal avalanches in our data. The avalanches in the anesthetized cortex are most accurately classified by an artificial neural network architecture that simultaneously connects spatial and temporal information. This is in contrast with the awake cortex, in which avalanches are most accurately classified by an architecture that treats spatial and temporal information separately, due to the increased levels of spatiotemporal complexity. This is in keeping with reports of higher levels of spatiotemporal complexity in the awake brain coinciding with features of a dynamical system operating close to criticality. PMID:29795654
DOE Office of Scientific and Technical Information (OSTI.GOV)
Al-Hamdani, Yasmine S.; Alfè, Dario; von Lilienfeld, O. Anatole
Density functional theory (DFT) studies of weakly interacting complexes have recently focused on the importance of van der Waals dispersion forces, whereas the role of exchange has received far less attention. Here, by exploiting the subtle binding between water and a boron and nitrogen doped benzene derivative (1,2-azaborine) we show how exact exchange can alter the binding conformation within a complex. Benchmark values have been calculated for three orientations of the water monomer on 1,2-azaborine from explicitly correlated quantum chemical methods, and we have also used diffusion quantum Monte Carlo. For a host of popular DFT exchange-correlation functionals we show that the lack of exact exchange leads to the wrong lowest energy orientation of water on 1,2-azaborine. As such, we suggest that a high proportion of exact exchange and the associated improvement in the electronic structure could be needed for the accurate prediction of physisorption sites on doped surfaces and in complex organic molecules. Meanwhile, to predict correct absolute interaction energies, an accurate description of exchange needs to be augmented by dispersion inclusive functionals, and certain non-local van der Waals functionals (optB88- and optB86b-vdW) perform very well for absolute interaction energies. Through a comparison with water on benzene and borazine (B₃N₃H₆) we show that these results could have implications for the interaction of water with doped graphene surfaces, and suggest a possible way of tuning the interaction energy.
A review of models and micrometeorological methods used to estimate wetland evapotranspiration
Drexler, J.Z.; Snyder, R.L.; Spano, D.; Paw, U.K.T.
2004-01-01
Within the past decade or so, the accuracy of evapotranspiration (ET) estimates has improved due to new and increasingly sophisticated methods. Yet despite a plethora of choices concerning methods, estimation of wetland ET remains insufficiently characterized due to the complexity of surface characteristics and the diversity of wetland types. In this review, we present models and micrometeorological methods that have been used to estimate wetland ET and discuss their suitability for particular wetland types. Hydrological, soil monitoring and lysimetric methods to determine ET are not discussed. Our review shows that, due to the variability and complexity of wetlands, there is no single approach that is the best for estimating wetland ET. Furthermore, there is no single foolproof method to obtain an accurate, independent measure of wetland ET. Because all of the methods reviewed, with the exception of eddy covariance and LIDAR, require measurements of net radiation (Rn) and soil heat flux (G), highly accurate measurements of these energy components are key to improving measurements of wetland ET. Many of the major methods used to determine ET can be applied successfully to wetlands of uniform vegetation and adequate fetch; however, certain caveats apply. For example, with accurate Rn and G data and small Bowen ratio (β) values, the Bowen ratio energy balance method can give accurate estimates of wetland ET. However, large errors in latent heat flux density can occur near sunrise and sunset when the Bowen ratio approaches -1.0. The eddy covariance method provides a direct measurement of latent heat flux density (λE) and sensible heat flux density (H), yet this method requires considerable expertise and expensive instrumentation to implement. A clear advantage of using the eddy covariance method is that λE can be compared with Rn - G - H, thereby allowing for an independent test of accuracy.
The surface renewal method is inexpensive to replicate and, therefore, shows particular promise for characterizing variability in ET as a result of spatial heterogeneity. LIDAR is another method that has special utility in a heterogeneous wetland environment, because it provides an integrated value for ET from a surface. The main drawback of LIDAR is the high cost of equipment and the need for an independent ET measure to assess accuracy. If Rn and G are measured accurately, the Priestley-Taylor equation can be used successfully with site-specific calibration factors to estimate wetland ET. The 'crop' cover coefficient (Kc) method can provide accurate wetland ET estimates if calibrated for the environmental and climatic characteristics of a particular area. More complicated equations such as the Penman and Penman-Monteith equations also can be used to estimate wetland ET, but surface variability and lack of information on aerodynamic and surface resistances make use of such equations somewhat questionable. © 2004 John Wiley and Sons, Ltd.
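The Bowen ratio partitioning discussed above reduces to λE = (Rn - G)/(1 + β). A minimal sketch follows, with the singularity as β approaches -1 (the sunrise/sunset failure mode noted in the review) made explicit; the function name and example numbers are illustrative.

```python
def bowen_ratio_latent_heat(rn, g, beta):
    """Latent heat flux density from the Bowen ratio energy balance (W m^-2).

    rn:   net radiation (W m^-2)
    g:    soil heat flux (W m^-2)
    beta: Bowen ratio H / lambdaE (dimensionless)

    Available energy Rn - G is partitioned as lambdaE = (Rn - G) / (1 + beta).
    The method breaks down as beta approaches -1, which is why large errors
    occur near sunrise and sunset.
    """
    if abs(1.0 + beta) < 1e-6:
        raise ValueError("Bowen ratio near -1: energy balance method breaks down")
    return (rn - g) / (1.0 + beta)

# Illustrative midday wetland values: Rn = 400, G = 50 W m^-2, beta = 0.25
print(bowen_ratio_latent_heat(400.0, 50.0, 0.25))  # 280.0 W m^-2
```

The eddy covariance closure check mentioned above is the complementary test: a directly measured λE should approximately equal Rn - G - H.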
Julian, Timothy R; Bustos, Carla; Kwong, Laura H; Badilla, Alejandro D; Lee, Julia; Bischel, Heather N; Canales, Robert A
2018-05-08
Quantitative data on human-environment interactions are needed to fully understand infectious disease transmission processes and conduct accurate risk assessments. Interaction events occur during an individual's movement through, and contact with, the environment, and can be quantified using diverse methodologies. Methods that utilize videography, coupled with specialized software, can provide a permanent record of events, collect detailed interactions in high resolution, be reviewed for accuracy, capture events difficult to observe in real-time, and gather multiple concurrent phenomena. In the accompanying video, the use of specialized software to capture human-environment interactions for human exposure and disease transmission is highlighted. Use of videography, combined with specialized software, allows for the collection of accurate quantitative representations of human-environment interactions in high resolution. Two specialized programs include the Virtual Timing Device for the Personal Computer, which collects sequential microlevel activity time series of contact events and interactions, and LiveTrak, which is optimized to facilitate annotation of events in real-time. Opportunities to annotate behaviors at high resolution using these tools are promising, permitting detailed records that can be summarized to gain information on infectious disease transmission and incorporated into more complex models of human exposure and risk.
NASA Astrophysics Data System (ADS)
Kaiser, Bryan E.; Poroseva, Svetlana V.; Canfield, Jesse M.; Sauer, Jeremy A.; Linn, Rodman R.
2013-11-01
The High Gradient hydrodynamics (HIGRAD) code is an atmospheric computational fluid dynamics code created by Los Alamos National Laboratory to accurately represent flows characterized by sharp gradients in velocity, concentration, and temperature. HIGRAD uses a fully compressible finite-volume formulation for explicit Large Eddy Simulation (LES) and features an advection scheme that is second-order accurate in time and space. In the current study, boundary conditions implemented in HIGRAD are varied to find those that best reproduce the reduced physics of a flat-plate boundary layer, as a step toward the complex physics of the atmospheric boundary layer. Numerical predictions are compared with available DNS, experimental, and LES data obtained by other researchers. High-order turbulence statistics are collected. The Reynolds number based on the free-stream velocity and the momentum thickness is 120 at the inflow and the Mach number for the flow is 0.2. Results are compared at Reynolds numbers of 670 and 1410. A part of the material is based upon work supported by NASA under award NNX12AJ61A and by the Junior Faculty UNM-LANL Collaborative Research Grant.
Risk Prediction Models for Acute Kidney Injury in Critically Ill Patients: Opus in Progressu.
Neyra, Javier A; Leaf, David E
2018-05-31
Acute kidney injury (AKI) is a complex systemic syndrome associated with high morbidity and mortality. Among critically ill patients admitted to intensive care units (ICUs), the incidence of AKI is as high as 50% and is associated with dismal outcomes. Thus, the development and validation of clinical risk prediction tools that accurately identify patients at high risk for AKI in the ICU is of paramount importance. We provide a comprehensive review of 3 clinical risk prediction tools that have been developed for incident AKI occurring in the first few hours or days following admission to the ICU. We found substantial heterogeneity among the clinical variables that were examined and included as significant predictors of AKI in the final models. The area under the receiver operating characteristic curves was ∼0.8 for all 3 models, indicating satisfactory model performance, though positive predictive values ranged from only 23 to 38%. Hence, further research is needed to develop more accurate and reproducible clinical risk prediction tools. Strategies for improved assessment of AKI susceptibility in the ICU include the incorporation of dynamic (time-varying) clinical parameters, as well as biomarker, functional, imaging, and genomic data. © 2018 S. Karger AG, Basel.
Introduction to State Estimation of High-Rate System Dynamics
Dodson, Jacob; Joyce, Bryan
2018-01-01
Engineering systems experiencing high-rate dynamic events, including airbags, debris detection, and active blast protection systems, could benefit from real-time observability for enhanced performance. However, the task of high-rate state estimation is challenging, in particular for real-time applications where the rate of the observer’s convergence needs to be in the microsecond range. This paper identifies the challenges of state estimation of high-rate systems and discusses the fundamental characteristics of high-rate systems. A survey of applications and methods for estimators that have the potential to produce accurate estimations for a complex system experiencing highly dynamic events is presented. It is argued that adaptive observers are important to this research. In particular, adaptive data-driven observers are advantageous due to their adaptability and lack of dependence on the system model. PMID:29342855
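As a baseline for the observer concepts surveyed above, a minimal discrete-time Luenberger observer is sketched below. This is a textbook construction, not one of the adaptive, data-driven observers the paper advocates; the model, gain, and step count are illustrative assumptions.

```python
import numpy as np

# Minimal discrete-time Luenberger observer for x[k+1] = A x[k], y[k] = C x[k].
A = np.array([[1.0, 0.01],
              [0.0, 1.0]])       # position/velocity model, time step dt = 0.01
C = np.array([[1.0, 0.0]])       # only position is measured
L = np.array([[0.5], [5.0]])     # observer gain, chosen by hand so that the
                                 # error dynamics (A - L C) are stable

x_true = np.array([[0.0], [1.0]])   # true state: zero position, unit velocity
x_hat = np.array([[0.0], [0.0]])    # observer starts with the wrong velocity

for _ in range(2000):
    y = C @ x_true                               # measurement of the true state
    x_hat = A @ x_hat + L @ (y - C @ x_hat)      # predict + innovation correction
    x_true = A @ x_true                          # true system evolves

print(abs(x_hat[1, 0] - x_true[1, 0]) < 1e-3)    # unmeasured velocity recovered
```

For high-rate applications the interest is in how few steps such a correction loop needs to converge; adaptive gains and data-driven models, as discussed above, relax the dependence on knowing A exactly.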
Andrews, Kristin
2017-01-01
I suggest that the Stereotype Rationality Hypothesis (Jussim 2012) is only partially right. I agree it is rational to rely on stereotypes, but in the complexity of real world social interactions, most of our individuating information invokes additional stereotypes. Despite assumptions to the contrary, there is reason to think theory of mind is not accurate, and social psychology's denial of stereotype accuracy led us toward mindreading/theory of mind - a less accurate account of how we understand other people.
NASA Astrophysics Data System (ADS)
Saide, Pablo E.; Carmichael, Gregory R.; Spak, Scott N.; Gallardo, Laura; Osses, Axel E.; Mena-Carrasco, Marcelo A.; Pagowski, Mariusz
2011-05-01
This study presents a system to predict high pollution events that develop in connection with enhanced subsidence due to coastal lows, particularly in winter over Santiago de Chile. An accurate forecast of these episodes is of interest since the local government is entitled by law to take actions in advance to prevent public exposure to PM10 concentrations in excess of 150 μg m⁻³ (24 h running averages). The forecasting system is based on accurately simulating carbon monoxide (CO) as a PM10/PM2.5 surrogate, since during episodes and within the city there is a high correlation (over 0.95) among these pollutants. Thus, by accurately forecasting CO, which behaves closely to a tracer on this scale, a PM estimate can be made without involving aerosol-chemistry modeling. Nevertheless, the very stable nocturnal conditions over steep topography associated with maxima in concentrations are hard to represent in models. Here we propose a forecast system based on the WRF-Chem model with optimum settings, determined through extensive testing, that best describe the available meteorological and air quality measurements. Some of the important configuration choices involve the boundary layer (PBL) scheme, model grid resolution (both vertical and horizontal), meteorological initial and boundary conditions and spatial and temporal distribution of the emissions. A forecast for the 2008 winter is performed showing that this forecasting system is able to perform similarly to the authority decision for PM10 and better than persistence when forecasting PM10 and PM2.5 high pollution episodes. Problems regarding false alarm predictions could be related to different uncertainties in the model such as day to day emission variability, inability of the model to completely resolve the complex topography and inaccuracy in meteorological initial and boundary conditions.
Finally, according to our simulations, emissions from previous days dominate episode concentrations, which highlights the need for the 48 h forecasts that the system presented here can deliver. This is in fact the largest advantage of the proposed system.
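As an illustration of the tracer-surrogate idea, one could calibrate a linear CO-to-PM map on historical co-located observations and apply it to forecast CO fields. This is a minimal sketch under that assumption; the function names and data are hypothetical, and the operational system obtains CO from full WRF-Chem runs rather than a toy calibration:

```python
import numpy as np

def fit_co_surrogate(co_obs, pm_obs):
    # Least-squares linear map (slope, intercept) from observed CO to PM,
    # justified by the high CO-PM correlation during episodes.
    A = np.column_stack([co_obs, np.ones_like(co_obs)])
    (slope, intercept), *_ = np.linalg.lstsq(A, pm_obs, rcond=None)
    return slope, intercept

def pm_from_co_forecast(co_forecast, slope, intercept):
    # Translate a forecast CO value (or field) into a PM estimate,
    # avoiding explicit aerosol-chemistry modeling.
    return slope * co_forecast + intercept
```
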
Tetrahedral-Mesh Simulation of Turbulent Flows with the Space-Time Conservative Schemes
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji; Cheng, Gary C.
2015-01-01
Direct numerical simulations of turbulent flows are predominantly carried out using structured, hexahedral meshes despite decades of development in unstructured mesh methods. Tetrahedral meshes offer ease of mesh generation around complex geometries and the potential of an orientation-free grid that would provide unbiased small-scale dissipation and more accurate intermediate-scale solutions. However, due to the lack of consistent multi-dimensional numerical formulations in conventional schemes for triangular and tetrahedral meshes at the cell interfaces, numerical issues arise when flow discontinuities or stagnation regions are present. The space-time conservative conservation element solution element (CESE) method, owing to its Riemann-solver-free shock-capturing capabilities, non-dissipative baseline schemes, and flux conservation in time as well as space, has the potential to simulate turbulent flows more accurately using unstructured tetrahedral meshes. To pave the way towards accurate simulation of shock/turbulent boundary-layer interaction, a series of wave and shock interaction benchmark problems of increasing complexity are computed in this paper with triangular/tetrahedral meshes. Preliminary computations for the normal shock/turbulence interactions are carried out with a mesh that is relatively coarse by direct numerical simulation standards, in order to assess other effects such as boundary conditions and the necessity of a buffer domain. The results indicate that qualitative agreement with previous studies can be obtained for flows where strong shocks coexist with unsteady waves spanning a broad range of scales, using a relatively compact computational domain and less stringent requirements for grid clustering near the shock. With the space-time conservation properties, stable solutions free of spurious wave reflections can be obtained without buffer domains near the outflow/farfield boundaries.
Computational results for isotropic turbulent flow decay at a relatively high turbulent Mach number show a well-behaved spectral decay rate for medium to high wave numbers. The high-order CESE schemes offer very robust solutions even in the presence of strong shocks or widespread shocklets. The explicit formulation, in conjunction with a theoretical upper Courant number bound close to unity, has the potential to offer an efficient numerical framework for general compressible turbulent flow simulations with unstructured meshes.
A Novel Hyperbolization Procedure for The Two-Phase Six-Equation Flow Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samet Y. Kadioglu; Robert Nourgaliev; Nam Dinh
2011-10-01
We introduce a novel approach for the hyperbolization of the well-known two-phase six-equation flow model. The six-equation model has been frequently used in many two-phase flow applications, such as bubbly fluid flows in nuclear reactors. One major drawback of this model is that it can be arbitrarily non-hyperbolic, resulting in difficulties such as numerical instability. Non-hyperbolic behavior is associated with complex eigenvalues of the system's characteristic matrix. Complex eigenvalues are often due to certain flow parameter choices, such as the definition of the interfacial pressure terms. In our method, we prevent the characteristic matrix from acquiring complex eigenvalues by fine-tuning the interfacial pressure terms with an iterative procedure. In this way, the characteristic matrix possesses all real eigenvalues, meaning that the characteristic wave speeds are all real and the overall two-phase flow model therefore becomes hyperbolic. The main advantage is that one can apply less diffusive, highly accurate, high-resolution numerical schemes that often rely on explicit calculations of real eigenvalues. We note that existing non-hyperbolic models are discretized mainly with low-order, highly dissipative numerical techniques in order to avoid stability issues.
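The iterative eigenvalue check can be illustrated with a toy 2x2 stand-in for the characteristic matrix, in which a single interfacial-pressure coefficient controls whether the eigenvalues are real. This is a schematic sketch of the idea, not the six-equation model itself; the matrix and parameter names are invented for illustration:

```python
import numpy as np

def characteristic_matrix(p_int):
    # Toy 2x2 stand-in: eigenvalues are +/- sqrt(p_int - 1),
    # which are complex whenever p_int < 1.
    return np.array([[0.0, 1.0],
                     [p_int - 1.0, 0.0]])

def hyperbolize(p_int, step=0.05, max_iter=200):
    # Nudge the interfacial-pressure coefficient upward until every
    # eigenvalue of the characteristic matrix is (numerically) real,
    # i.e. until the toy system is hyperbolic.
    for _ in range(max_iter):
        eig = np.linalg.eigvals(characteristic_matrix(p_int))
        if np.max(np.abs(eig.imag)) < 1e-6:
            return p_int, eig.real
        p_int += step
    raise RuntimeError("no hyperbolic parameter found within max_iter steps")
```
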
Reliability Driven Space Logistics Demand Analysis
NASA Technical Reports Server (NTRS)
Knezevic, J.
1995-01-01
Accurate selection of the quantity of logistics support resources strongly influences mission success, system availability, and the cost of ownership. At the same time, accurate prediction of these resources depends on accurate prediction of the reliability measures of the items involved. This paper presents a method for the advanced and accurate calculation of the reliability measures of complex space systems, which are the basis for determining the logistics resource demands during the operational life or mission of space systems. The applicability of the method is demonstrated through several examples.
NNLOPS accurate associated HW production
NASA Astrophysics Data System (ADS)
Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia
2016-06-01
We present a next-to-next-to-leading order (NNLO) accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO-accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO-accurate Born distributions. Since the Born kinematics is more complex than in cases treated previously, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.
NASA Technical Reports Server (NTRS)
George, K.; Wu, H.; Willingham, V.; Furusawa, Y.; Kawata, T.; Cucinotta, F. A.; Dicello, J. F. (Principal Investigator)
2001-01-01
PURPOSE: To investigate how cell-cycle delays in human peripheral lymphocytes affect the expression of complex chromosome damage in metaphase following high- and low-LET radiation exposure. MATERIALS AND METHODS: Whole blood was irradiated in vitro with a low and a high dose of 1 GeV u(-1) iron particles, 400 MeV u(-1) neon particles or gamma-rays. Lymphocytes were cultured and metaphase cells were collected at different time points after 48-84 h in culture. Interphase chromosomes were prematurely condensed using calyculin-A, either 48 or 72 h after exposure to iron particles or gamma-rays. Cells in first division were analysed using a combination of FISH whole-chromosome painting and DAPI/Hoechst 33258 harlequin staining. RESULTS: There was a delay in expression of chromosome damage in metaphase that was LET- and dose-dependent. This delay was mostly related to the late emergence of complex-type damage into metaphase. Yields of damage in PCC collected 48 h after irradiation with iron particles were similar to values obtained from cells undergoing mitosis after prolonged incubation. CONCLUSION: The yield of high-LET radiation-induced complex chromosome damage could be underestimated when analysing metaphase cells collected at a single time point after irradiation. Chemically induced PCC is a more accurate technique since problems with complicated cell-cycle delays are avoided.
NASA Astrophysics Data System (ADS)
Smith, J. A.; Peter, D. B.; Tromp, J.; Komatitsch, D.; Lefebvre, M. P.
2015-12-01
We present both SPECFEM3D_Cartesian and SPECFEM3D_GLOBE open-source codes, representing high-performance numerical wave solvers simulating seismic wave propagation for local-, regional-, and global-scale applications. These codes are suitable for both forward propagation in complex media and tomographic imaging. Both solvers compute highly accurate seismic wave fields using the continuous Galerkin spectral-element method on unstructured meshes. Lateral variations in compressional- and shear-wave speeds, density, as well as 3D attenuation Q models, topography and fluid-solid coupling are all readily included in both codes. For global simulations, effects due to rotation, ellipticity, the oceans, 3D crustal models, and self-gravitation are additionally included. Both packages provide forward and adjoint functionality suitable for adjoint tomography on high-performance computing architectures. We highlight the most recent release of the global version, which includes improved performance, simultaneous MPI runs, OpenCL and CUDA support via an automatic source-to-source transformation library (BOAST), parallel I/O readers and writers for databases using ADIOS, and seismograms using the recently developed Adaptable Seismic Data Format (ASDF) with built-in provenance. This makes our spectral-element solvers current state-of-the-art, open-source community codes for high-performance seismic wave propagation on arbitrarily complex 3D models. Together with these solvers, we provide full-waveform inversion tools to image the Earth's interior at unprecedented resolution.
Broadband ion mobility deconvolution for rapid analysis of complex mixtures.
Pettit, Michael E; Brantley, Matthew R; Donnarumma, Fabrizio; Murray, Kermit K; Solouki, Touradj
2018-05-04
High resolving power ion mobility (IM) allows for accurate characterization of complex mixtures in high-throughput IM mass spectrometry (IM-MS) experiments. We previously demonstrated that pure component IM-MS data can be extracted from IM unresolved post-IM/collision-induced dissociation (CID) MS data using automated ion mobility deconvolution (AIMD) software [Matthew Brantley, Behrooz Zekavat, Brett Harper, Rachel Mason, and Touradj Solouki, J. Am. Soc. Mass Spectrom., 2014, 25, 1810-1819]. In our previous reports, we utilized a quadrupole ion filter for m/z-isolation of IM unresolved monoisotopic species prior to post-IM/CID MS. Here, we utilize a broadband IM-MS deconvolution strategy to remove the m/z-isolation requirement for successful deconvolution of IM unresolved peaks. Broadband data collection has throughput and multiplexing advantages; hence, elimination of the ion isolation step reduces experimental run times and thus expands the applicability of AIMD to high-throughput bottom-up proteomics. We demonstrate broadband IM-MS deconvolution of two separate and unrelated pairs of IM unresolved isomers (viz., a pair of isomeric hexapeptides and a pair of isomeric trisaccharides) in a simulated complex mixture. Moreover, we show that broadband IM-MS deconvolution improves high-throughput bottom-up characterization of a proteolytic digest of rat brain tissue. To our knowledge, this manuscript is the first to report successful deconvolution of pure component IM and MS data from an IM-assisted data-independent analysis (DIA) or HDMSE dataset.
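One simplified way to view the deconvolution of an IM-unresolved peak, assuming pure-component reference spectra are available, is as a nonnegative least-squares unmixing of the composite spectrum. This sketch is not the AIMD algorithm itself; the function names and data are illustrative:

```python
import numpy as np
from scipy.optimize import nnls

def deconvolve_components(composite, pure_profiles):
    # composite: observed (m/z-binned) spectrum of an IM-unresolved peak.
    # pure_profiles: columns are pure-component reference spectra.
    # Nonnegative least squares recovers the component weights.
    weights, residual = nnls(pure_profiles, composite)
    return weights, residual
```
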
How genome complexity can explain the difficulty of aligning reads to genomes.
Phan, Vinhthuy; Gao, Shanshan; Tran, Quang; Vo, Nam S
2015-01-01
Although it is frequently observed that aligning short reads to genomes becomes harder if they contain complex repeat patterns, there has not been much effort to quantify the relationship between the complexity of genomes and the difficulty of short-read alignment. Existing measures of sequence complexity seem unsuitable for understanding and quantifying this relationship. We investigated several measures of complexity and found that length-sensitive measures had the highest correlation with alignment accuracy. In particular, the rate of distinct substrings of length k, where k is similar to the read length, correlated very highly with alignment performance in terms of precision and recall. We showed how to compute this measure efficiently in linear time, making it useful in practice for quickly estimating the difficulty of alignment for new genomes without having to align reads to them first. We showed how the length-sensitive measures could provide additional information for choosing aligners that would perform consistently and accurately on new genomes. We formally established a connection between genome complexity and the accuracy of short-read aligners. The relationship between genome complexity and alignment accuracy provides additional useful information for selecting suitable aligners for new genomes. Further, this work suggests that the complexity of genomes sometimes should be thought of in terms of specific computational problems, such as the alignment of short reads to genomes.
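The distinct-substring rate is straightforward to sketch with a hash set. Note that this simple version costs O(nk) because of string hashing; the linear-time computation mentioned in the abstract would instead use a suffix structure such as a suffix automaton:

```python
def distinct_kmer_rate(genome, k):
    # Rate of distinct substrings of length k: the number of distinct
    # k-mers divided by the number of k-mer positions. Values near 1
    # indicate low repetitiveness (easier alignment), values near 0
    # indicate heavy repetition (harder alignment).
    n = len(genome)
    if n < k:
        return 0.0
    seen = {genome[i:i + k] for i in range(n - k + 1)}
    return len(seen) / (n - k + 1)
```
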
Evaluating fuel complexes for fire hazard mitigation planning in the southeastern United States
Anne G. Andreu; Dan Shea; Bernard R. Parresol; Roger D. Ottmar
2012-01-01
Fire hazard mitigation planning requires an accurate accounting of fuel complexes to predict potential fire behavior and effects of treatment alternatives. In the southeastern United States, rapid vegetation growth coupled with complex land use history and forest management options requires a dynamic approach to fuel characterization. In this study we assessed...
Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.
Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela
2016-12-01
Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate; that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.
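A one-dimensional caricature of RIM detection, assumed here to be binning a single projected context feature and flagging bins whose mean execution error is anomalously high, can be sketched as follows. The real method searches many projections and fits parametric regions; everything named here is illustrative:

```python
import numpy as np

def find_rim_1d(feature, errors, n_bins=10, threshold=2.0):
    # Flag feature intervals whose mean execution error exceeds
    # `threshold` times the global mean error: a crude 1-D stand-in
    # for a Region of Inaccurate Modeling.
    bins = np.linspace(feature.min(), feature.max(), n_bins + 1)
    idx = np.clip(np.digitize(feature, bins) - 1, 0, n_bins - 1)
    global_mean = errors.mean()
    flagged = []
    for b in range(n_bins):
        mask = idx == b
        if mask.any() and errors[mask].mean() > threshold * global_mean:
            flagged.append((bins[b], bins[b + 1]))
    return flagged
```
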
High Performance Computing Modeling Advances Accelerator Science for High-Energy Physics
Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis
2014-07-28
The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space, and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing are essential for accurately modeling them. In the past decade, the US Department of Energy's SciDAC program has produced accelerator-modeling tools that have been employed to tackle some of the most difficult accelerator science problems. The authors discuss the Synergia framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. The authors present Synergia's design principles and its performance on HPC platforms.
Gonzalez, Aroa Garcia; Taraba, Lukáš; Hraníček, Jakub; Kozlík, Petr; Coufal, Pavel
2017-01-01
Dasatinib is a novel oral prescription drug proposed for treating adult patients with chronic myeloid leukemia. Three analytical methods, namely ultra high performance liquid chromatography, capillary zone electrophoresis, and sequential injection analysis, were developed, validated, and compared for determination of the drug in the tablet dosage form. The total analysis time of the optimized ultra high performance liquid chromatography and capillary zone electrophoresis methods was 2.0 and 2.2 min, respectively. Direct ultraviolet detection with a detection wavelength of 322 nm was employed in both cases. The optimized sequential injection analysis method was based on spectrophotometric detection of dasatinib after a simple colorimetric reaction with Folin-Ciocalteu reagent, forming a blue-colored complex with an absorbance maximum at 745 nm. The total analysis time was 2.5 min. The ultra high performance liquid chromatography method provided the lowest detection and quantitation limits and the most precise and accurate results. All three newly developed methods were demonstrated to be specific, linear, sensitive, precise, and accurate, providing results that satisfactorily meet the requirements of the pharmaceutical industry, and can be employed for the routine determination of the active pharmaceutical ingredient in the tablet dosage form. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Quantitative Live-Cell Confocal Imaging of 3D Spheroids in a High-Throughput Format.
Leary, Elizabeth; Rhee, Claire; Wilks, Benjamin T; Morgan, Jeffrey R
2018-06-01
Accurately predicting the human response to new compounds is critical to a wide variety of industries. Standard screening pipelines (including both in vitro and in vivo models) often lack predictive power. Three-dimensional (3D) culture systems of human cells, a more physiologically relevant platform, could provide a high-throughput, automated means to test the efficacy and/or toxicity of novel substances. However, the challenge of obtaining high-magnification, confocal z stacks of 3D spheroids and understanding their respective quantitative limitations must be overcome first. To address this challenge, we developed a method to form spheroids of reproducible size at precise spatial locations across a 96-well plate. Spheroids of variable radii were labeled with four different fluorescent dyes and imaged with a high-throughput confocal microscope. 3D renderings of the spheroid had a complex bowl-like appearance. We systematically analyzed these confocal z stacks to determine the depth of imaging and the effect of spheroid size and dyes on quantitation. Furthermore, we have shown that this loss of fluorescence can be addressed through the use of ratio imaging. Overall, understanding both the limitations of confocal imaging and the tools to correct for these limits is critical for developing accurate quantitative assays using 3D spheroids.
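The ratio-imaging correction mentioned above rests on the observation that depth-dependent signal loss multiplies co-localized channels by the same attenuation factor, so dividing one channel by a reference channel cancels it. A minimal sketch, with hypothetical channel arrays standing in for confocal z-stack slices:

```python
import numpy as np

def ratio_image(signal_ch, reference_ch, eps=1e-9):
    # Depth-dependent attenuation multiplies both channels by the same
    # factor, so their pixelwise ratio cancels it; eps guards against
    # division by zero in background pixels.
    return signal_ch / (reference_ch + eps)
```
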
Parameters estimation for reactive transport: A way to test the validity of a reactive model
NASA Astrophysics Data System (ADS)
Aggarwal, Mohit; Cheikh Anta Ndiaye, Mame; Carrayrou, Jérôme
The chemical parameters used in reactive transport models are not known accurately due to the complexity and heterogeneous conditions of a real domain. We present an efficient algorithm to estimate the chemical parameters using a Monte-Carlo method. Monte-Carlo methods are very robust for the optimisation of the highly non-linear mathematical models describing reactive transport. Reactive transport of tributyltin (TBT) through natural quartz sand at seven different pHs is taken as the test case. Our algorithm is used to estimate the chemical parameters of the sorption of TBT onto the natural quartz sand. By testing and comparing three models of surface complexation, we show that the proposed adsorption model cannot explain the experimental data.
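A minimal Monte-Carlo parameter search of the kind described, reduced here to a plain random search minimizing a sum-of-squares misfit between model and data, might look like the following sketch. The function names are illustrative, and the actual algorithm and surface-complexation chemistry are far more elaborate:

```python
import numpy as np

def monte_carlo_fit(model, param_bounds, x, y_obs, n_draws=2000, rng=None):
    # Draw random parameter sets within the given bounds and keep the
    # one that minimizes the sum-of-squares misfit to the observations.
    rng = np.random.default_rng(rng)
    best, best_err = None, np.inf
    for _ in range(n_draws):
        p = [rng.uniform(lo, hi) for lo, hi in param_bounds]
        err = np.sum((model(x, *p) - y_obs) ** 2)
        if err < best_err:
            best, best_err = p, err
    return best, best_err
```
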
NASA Astrophysics Data System (ADS)
Saleh, F.; Garambois, P. A.; Biancamaria, S.
2017-12-01
Floods are considered among the major natural threats to human societies across all continents. Consequences of floods in highly populated areas are more dramatic, with losses of human lives and substantial property damage. This risk is projected to increase with the effects of climate change, particularly sea-level rise, increasing storm frequencies and intensities, and increasing population and economic assets in urban watersheds. Despite advances in computational resources and modeling techniques, significant gaps exist in predicting complex processes and accurately representing the initial state of the system. Improving flood prediction models and data assimilation chains with satellite data has become an absolute priority for producing accurate flood forecasts with sufficient lead times. The overarching goal of this work is to assess the benefits of the Surface Water and Ocean Topography (SWOT) satellite data from a flood prediction perspective. The near-real-time methodology is based on combining satellite data from a simulator that mimics future SWOT data, numerical models, high-resolution elevation data, and real-time local measurements in the New York/New Jersey area.
Menstrual migraine: a review of current and developing pharmacotherapies for women.
Allais, G; Chiarle, Giulia; Sinigaglia, Silvia; Benedetto, Chiara
2018-02-01
Migraine is one of the most common neurological disorders in the general population. It affects 18% of women and 6% of men. In more than 50% of women migraineurs, the occurrence of migraine attacks correlates strongly with the perimenstrual period. Menstrual migraine is highly debilitating, less responsive to therapy, and its attacks are longer than those not correlated with menses. Menstrual migraine requires accurate evaluation and targeted therapy, which we aim to recommend in this review. Areas covered: This review of the literature provides an overview of currently available pharmacological therapies (especially triptans, anti-inflammatory drugs, and hormonal strategies) and drugs in development (in particular those acting on calcitonin gene-related peptide) for the treatment of acute migraine attacks and the prophylaxis of menstrual migraine. The studies reviewed here were retrieved from the Medline database as of June 2017. Expert opinion: The treatment of menstrual migraine is highly complex. Accurate evaluation of its characteristics is a prerequisite to selecting appropriate therapy. An integrated approach involving neurologists and gynecologists is essential for patient management and for continuous updating on new therapies under development.
Working Memory Capacity and Fluid Intelligence: Maintenance and Disengagement.
Shipstead, Zach; Harrison, Tyler L; Engle, Randall W
2016-11-01
Working memory capacity and fluid intelligence have been demonstrated to be strongly correlated traits. Typically, high working memory capacity is believed to facilitate reasoning through accurate maintenance of relevant information. In this article, we present a proposal reframing this issue, such that tests of working memory capacity and fluid intelligence are seen as measuring complementary processes that facilitate complex cognition. Respectively, these are the ability to maintain access to critical information and the ability to disengage from or block outdated information. In the realm of problem solving, high working memory capacity allows a person to represent and maintain a problem accurately and stably, so that hypothesis testing can be conducted. However, as hypotheses are disproven or become untenable, disengaging from outdated problem solving attempts becomes important so that new hypotheses can be generated and tested. From this perspective, the strong correlation between working memory capacity and fluid intelligence is due not to one ability having a causal influence on the other but to separate attention-demanding mental functions that can be contrary to one another but are organized around top-down processing goals. © The Author(s) 2016.
Reflection full-waveform inversion using a modified phase misfit function
NASA Astrophysics Data System (ADS)
Cui, Chao; Huang, Jian-Ping; Li, Zhen-Chun; Liao, Wen-Yuan; Guan, Zhe
2017-09-01
Reflection full-waveform inversion (RFWI) updates the low- and high-wavenumber components and yields more accurate initial models compared with conventional full-waveform inversion (FWI). However, there is strong nonlinearity in conventional RFWI because of the lack of low-frequency data and the complexity of the amplitude. Separating phase and amplitude information makes RFWI more linear. Traditional phase-calculation methods face severe phase wrapping. To solve this problem, we propose a modified phase-calculation method that uses the phase-envelope data to obtain pseudo-phase information. Then, we establish a pseudo-phase-information-based objective function for RFWI, with the corresponding source and gradient terms. Numerical tests verify that the proposed calculation method using the phase-envelope data guarantees the stability and accuracy of the phase information and the convergence of the objective function. The application to a portion of the Sigsbee2A model, and comparison with inversion results of the improved RFWI and conventional FWI methods, verify that the pseudo-phase-based RFWI produces a highly accurate and efficient velocity model. Moreover, the proposed method is robust to noise and high frequencies.
Highly accurate symplectic element based on two variational principles
NASA Astrophysics Data System (ADS)
Qing, Guanghui; Tian, Jia
2018-02-01
For the stability of numerical results, the mathematical theory of classical mixed methods is relatively complex. However, generalized mixed methods are automatically stable, and their construction is simple and straightforward. In this paper, based on the seminal idea of the generalized mixed methods, a simple, stable, and highly accurate 8-node noncompatible symplectic element (NCSE8) was developed by combining the modified Hellinger-Reissner mixed variational principle and the minimum energy principle. To ensure the accuracy of in-plane stress results, a simultaneous equation approach was also suggested. Numerical experiments show that the stress accuracy of NCSE8 is nearly the same as that of displacement methods, and the results are in good agreement with the exact solutions when the mesh is relatively fine. NCSE8 has the advantages of a clear concept, easy implementation in a finite element computer program, higher accuracy, and wide applicability to various linear elasticity problems with compressible and nearly incompressible materials. NCSE8 may prove even more advantageous for fracture problems due to its better stress accuracy.
Accurate evaluation and analysis of functional genomics data and methods
Greene, Casey S.; Troyanskaya, Olga G.
2016-01-01
The development of technology capable of inexpensively performing large-scale measurements of biological systems has generated a wealth of data. Integrative analysis of these data holds the promise of uncovering gene function, regulation, and, in the longer run, understanding complex disease. However, their analysis has proved very challenging, as it is difficult to quickly and effectively assess the relevance and accuracy of these data for individual biological questions. Here, we identify biases that present challenges for the assessment of functional genomics data and methods. We then discuss evaluation methods that, taken together, begin to address these issues. We also argue that the funding of systematic data-driven experiments and of high-quality curation efforts will further improve evaluation metrics so that they more accurately assess functional genomics data and methods. Such metrics will allow researchers in the field of functional genomics to continue to answer important biological questions in a data-driven manner. PMID:22268703
Recent Progress in Treating Protein-Ligand Interactions with Quantum-Mechanical Methods.
Yilmazer, Nusret Duygu; Korth, Martin
2016-05-16
We review the first successes and failures of a "new wave" of quantum chemistry-based approaches to the treatment of protein/ligand interactions. These approaches share the use of "enhanced", dispersion (D), and/or hydrogen-bond (H) corrected density functional theory (DFT) or semi-empirical quantum mechanical (SQM) methods, in combination with ensemble weighting techniques of some form to capture entropic effects. Benchmark and model system calculations, in comparison to high-level theoretical as well as experimental references, have shown that both DFT-D (dispersion-corrected density functional theory) and SQM-DH (dispersion and hydrogen bond-corrected semi-empirical quantum mechanical) methods perform much more accurately than older DFT and SQM approaches and also standard docking methods. In addition, DFT-D might soon become fast enough, and SQM-DH already is, to compute a large number of binding modes of comparably large protein/ligand complexes, thus allowing for a more accurate assessment of entropic effects.
NASA Astrophysics Data System (ADS)
Park, Jong Ho; Park, Jung Jin; Park, O. Ok; Jin, Chang-Soo; Yang, Jung Hoon
2016-04-01
Because of the rise in renewable energy use, the redox flow battery (RFB) has attracted extensive attention as an energy storage system. Thus, many studies have focused on improving the performance of the felt electrodes used in RFBs. However, existing analysis cells are unsuitable for characterizing felt electrodes because of their complex 3-dimensional structure. Analysis is also greatly affected by the measurement conditions, viz. compression ratio, contact area, and contact strength between the felt and current collector. To address the growing need for practical analytical apparatus, we report a new analysis cell for accurate electrochemical characterization of felt electrodes under various conditions, and compare it with previous ones. In this cell, the measurement conditions can be exhaustively controlled with a compression supporter. The cell showed excellent reproducibility in cyclic voltammetry analysis and the results agreed well with actual RFB charge-discharge performance.
Microscopic 3D measurement of dynamic scene using optimized pulse-width-modulation binary fringe
NASA Astrophysics Data System (ADS)
Hu, Yan; Chen, Qian; Feng, Shijie; Tao, Tianyang; Li, Hui; Zuo, Chao
2017-10-01
Microscopic 3-D shape measurement can provide accurate metrology of delicate and complex MEMS components in final devices to ensure their proper performance. Fringe projection profilometry (FPP) has the advantages of being noncontact and highly accurate, making it widely used in 3-D measurement. Recent advances in electronics have made 3-D measurements more accurate and faster. However, real-time microscopic 3-D measurement is still rarely reported. In this work, we effectively combine an optimized binary structured pattern with a number-theoretical phase unwrapping algorithm to realize real-time 3-D shape measurement. Slightly defocusing our proposed binary patterns considerably reduces the measurement error of phase-shifting FPP, giving the binary patterns performance comparable to ideal sinusoidal patterns. Real-time 3-D measurement at about 120 frames per second (FPS) is achieved, and an experimental result for a vibrating earphone is presented.
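The defocusing idea can be sketched in one dimension: a thresholded sinusoid gives a binary fringe, and a mild Gaussian blur (standing in for projector defocus) suppresses the square wave's higher harmonics so the pattern approaches an ideal sinusoid. The model and parameters below are illustrative, not the paper's optimized pulse-width-modulation patterns:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def binary_fringe(n, period):
    # 1-D binary (0/1) fringe: a thresholded sinusoid.
    x = np.arange(n)
    return (np.sin(2 * np.pi * x / period) >= 0).astype(float)

def defocus(pattern, sigma):
    # Slight projector defocus modeled as a Gaussian blur; it attenuates
    # the square wave's 3rd and higher harmonics much more strongly than
    # the fundamental, leaving a near-sinusoidal fringe.
    return gaussian_filter1d(pattern, sigma, mode="wrap")
```
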
Microtubule-dependent regulation of mitotic protein degradation
Song, Ling; Craney, Allison; Rape, Michael
2014-01-01
Accurate cell division depends on tightly regulated ubiquitylation events catalyzed by the anaphase-promoting complex. Among its many substrates, the APC/C triggers the degradation of proteins that stabilize the mitotic spindle, and loss or accumulation of such spindle assembly factors can result in aneuploidy and cancer. Although critical for cell division, it has remained poorly understood how the timing of spindle assembly factor degradation is established during mitosis. Here, we report that active spindle assembly factors are protected from APC/C-dependent degradation by microtubules. In contrast, those molecules that are not bound to microtubules are highly susceptible to proteolysis and turned over immediately after APC/C-activation. The correct timing of spindle assembly factor degradation, as achieved by this regulatory circuit, is required for accurate spindle structure and function. We propose that the localized stabilization of APC/C-substrates provides a mechanism for the selective disposal of cell cycle regulators that have fulfilled their mitotic roles. PMID:24462202
Vehicle detection and orientation estimation using the radon transform
NASA Astrophysics Data System (ADS)
Pelapur, Rengarajan; Bunyak, Filiz; Palaniappan, Kannappan; Seetharaman, Gunasekaran
2013-05-01
Determining the location and orientation of vehicles in satellite and airborne imagery is a challenging task, given the density of cars and other vehicles and the complexity of the environment in urban scenes almost anywhere in the world. We have developed a robust and accurate method for detecting vehicles using template-based directional chamfer matching, combined with vehicle orientation estimation based on a refined segmentation followed by a Radon transform based profile variance peak analysis. The same algorithm was applied to both high resolution satellite imagery and wide area aerial imagery, and initial results show robustness to illumination changes and geometric appearance distortions. Nearly 80% of the orientation angle estimates for 1585 vehicles across both satellite and aerial imagery were accurate to within 15° of the ground truth. In the case of satellite imagery alone, nearly 90% of the objects had an estimated error within ±1.0° of the ground truth.
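As an illustration of the profile-variance-peak idea, the sketch below projects a point blob onto a rotating axis and picks the angle whose projection histogram is most peaked. The 1° angle grid, unit bin width, and synthetic rectangle are assumptions; a real implementation would operate on the Radon transform of segmented image intensities:

```python
import math

def orientation_by_profile_variance(points, step=1.0):
    """Estimate the dominant orientation of a point blob: project the points
    onto a rotating axis, histogram the projection over a fixed range, and
    take the angle whose profile has the highest variance of bin counts."""
    radius = max(math.hypot(x, y) for x, y in points)
    nbins = int(2 * radius / step) + 1
    best_theta, best_var = 0, -1.0
    for deg in range(180):
        th = math.radians(deg)
        counts = [0] * nbins
        for x, y in points:
            s = x * math.cos(th) + y * math.sin(th)   # projection coordinate
            counts[int((s + radius) / step)] += 1
        mean = len(points) / nbins
        var = sum((c - mean) ** 2 for c in counts) / nbins
        if var > best_var:
            best_theta, best_var = deg, var
    # the profile is narrowest (most peaked) perpendicular to the long axis
    return (best_theta - 90) % 180
```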
Efficient steady-state solver for hierarchical quantum master equations
NASA Astrophysics Data System (ADS)
Zhang, Hou-Dao; Qiao, Qin; Xu, Rui-Xue; Zheng, Xiao; Yan, YiJing
2017-07-01
Steady states play pivotal roles in many equilibrium and non-equilibrium open system studies. Their accurate evaluations call for exact theories with rigorous treatment of system-bath interactions. Therein, the hierarchical equations-of-motion (HEOM) formalism is a nonperturbative and non-Markovian quantum dissipation theory, which can faithfully describe the dissipative dynamics and nonlinear response of open systems. Nevertheless, solving the steady states of open quantum systems via HEOM is often a challenging task, due to the vast number of dynamical quantities involved. In this work, we propose a self-consistent iteration approach that quickly solves the HEOM steady states. We demonstrate its high efficiency with accurate and fast evaluations of low-temperature thermal equilibrium of a model Fenna-Matthews-Olson pigment-protein complex. Numerically exact evaluation of thermal equilibrium Rényi entropies and stationary emission line shapes is presented with detailed discussion.
NASA Astrophysics Data System (ADS)
Xie, Tian; Grossman, Jeffrey C.
2018-04-01
The use of machine learning methods for accelerating the design of crystalline materials usually requires manually constructed feature vectors or complex transformations of atom coordinates to input the crystal structure, which either constrains the model to certain crystal types or makes it difficult to provide chemical insights. Here, we develop a crystal graph convolutional neural network framework to directly learn material properties from the connections of atoms in the crystal, providing a universal and interpretable representation of crystalline materials. Our method provides highly accurate predictions of density functional theory calculated values for eight different properties of crystals with various structure types and compositions after being trained with 10^4 data points. Further, our framework is interpretable because one can extract the contributions from local chemical environments to global properties. Using the example of perovskites, we show how this information can be utilized to discover empirical rules for materials design.
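A single graph-convolution update over atom feature vectors can be sketched as follows. This is a generic mean-aggregation layer for illustration, not the paper's exact gated convolution; the weight matrices and neighbor lists are assumed inputs:

```python
def graph_conv(features, neighbors, w_self, w_nbr, bias):
    """One mean-aggregation graph convolution: each atom's new feature vector
    is ReLU(W_self*h_i + W_nbr*mean(h_j for neighbors j) + b).
    Assumes every atom has at least one neighbor."""
    def matvec(W, v):
        return [sum(w * x for w, x in zip(row, v)) for row in W]
    out = []
    for i, h in enumerate(features):
        nbrs = neighbors[i]
        agg = [sum(features[j][d] for j in nbrs) / len(nbrs) for d in range(len(h))]
        z = [a + b + c for a, b, c in zip(matvec(w_self, h), matvec(w_nbr, agg), bias)]
        out.append([max(0.0, v) for v in z])  # ReLU nonlinearity
    return out
```

Stacking such layers and pooling over atoms yields the crystal-level representation that is regressed against the target property.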
TSaT-MUSIC: a novel algorithm for rapid and accurate ultrasonic 3D localization
NASA Astrophysics Data System (ADS)
Mizutani, Kyohei; Ito, Toshio; Sugimoto, Masanori; Hashizume, Hiromichi
2011-12-01
We describe a fast and accurate indoor localization technique using the multiple signal classification (MUSIC) algorithm. The MUSIC algorithm is known as a high-resolution method for estimating directions of arrival (DOAs) or propagation delays. A critical problem in using the MUSIC algorithm for localization is its computational complexity. Therefore, we devised a novel algorithm called Time Space additional Temporal-MUSIC, which can rapidly and simultaneously identify DOAs and delays of multicarrier ultrasonic waves from transmitters. Computer simulations have proved that the computation time of the proposed algorithm is almost constant in spite of increasing numbers of incoming waves and is faster than that of existing methods based on the MUSIC algorithm. The robustness of the proposed algorithm is discussed through simulations. Experiments in real environments showed that the standard deviation of position estimations in 3D space is less than 10 mm, which is satisfactory for indoor localization.
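For background, the core of a MUSIC spectrum computation can be sketched as follows. This is a generic narrowband DOA version for a uniform linear array, implemented with power iteration and deflation; the array geometry, source count, and angle grid are illustrative assumptions, not the TSaT-MUSIC algorithm itself:

```python
import cmath
import math

def steering(theta_deg, m, d=0.5):
    """Steering vector of an m-element uniform linear array;
    element spacing d is in wavelengths (assumed geometry)."""
    th = math.radians(theta_deg)
    return [cmath.exp(-2j * math.pi * d * i * math.sin(th)) for i in range(m)]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dominant_eig(A, iters=300):
    """Dominant eigenpair of a Hermitian matrix by power iteration."""
    n = len(A)
    v = [complex(1.0, 0.0)] * n
    for _ in range(iters):
        w = matvec(A, v)
        norm = math.sqrt(sum(abs(x) ** 2 for x in w))
        v = [x / norm for x in w]
    w = matvec(A, v)
    lam = sum((v[i].conjugate() * w[i]).real for i in range(n))
    return lam, v

def music_spectrum(R, n_sources, angles, d=0.5):
    """MUSIC pseudospectrum from the covariance matrix R: deflate out the
    signal eigenvectors; the residual of a steering vector after projection
    onto them is its noise-subspace component."""
    m = len(R)
    A = [row[:] for row in R]
    signal = []
    for _ in range(n_sources):
        lam, v = dominant_eig(A)
        signal.append(v)
        for i in range(m):
            for j in range(m):
                A[i][j] -= lam * v[i] * v[j].conjugate()
    spectrum = []
    for th in angles:
        a = steering(th, m, d)
        power = sum(abs(x) ** 2 for x in a)
        for v in signal:
            c = sum(v[i].conjugate() * a[i] for i in range(m))
            power -= abs(c) ** 2
        spectrum.append(1.0 / max(power, 1e-12))
    return spectrum
```

Peaks of the pseudospectrum mark the estimated DOAs; TSaT-MUSIC restructures this computation to keep the cost nearly constant as the number of incoming waves grows.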
RX J1856.5-3754: A Strange Star with Solid Quark Surface?
NASA Technical Reports Server (NTRS)
Zhang, Xiaoling; Xu, Renxin; Zhang, Shuangnan
2003-01-01
The featureless spectra of isolated 'neutron stars' may indicate that they are actually bare strange stars, but a definitive conclusion on the nature of these compact objects cannot be reached until accurate, theoretically calculated spectra of the bare quark surface are known. However, due to the complex nonlinearity of quantum chromodynamics, it is almost impossible to present a definitive and accurate calculation of the density-dominated quark-gluon plasma from first principles. Nevertheless, it has been suggested that cold quark matter with extremely high baryon density could be in a solid state. Within the realms of this possibility, we have fitted the 500 ks Chandra LETG/HRC data for the brightest isolated neutron star, RX J1856.5-3754, with a phenomenological spectral model and found that the electric conductivity of quark matter on the stellar surface is about 1.5 × 10^16 /s.
Blom, Douglas A
2012-01-01
Multislice frozen phonon calculations were performed on a model structure of a complex oxide which has potential use as an ammoxidation catalyst. The structure has 11 cation sites in the framework, several of which exhibit mixed Mo/V substitution. In this paper the sensitivity of high-angle annular dark-field (HAADF) imaging to partial substitution of V for Mo in this structure is reported. While the relationship between the average V content in an atom column and the HAADF image intensity is not independent of thickness, it is a fairly weak function of thickness suggesting that HAADF STEM imaging in certain cases can provide a useful starting point for Rietveld refinements of mixed occupancy in complex materials. The thermal parameters of the various cations and oxygen anions in the model affect the amount of thermal diffuse scattering and therefore the intensity in the HAADF images. For complex materials where the structure has been derived via powder Rietveld refinement, the uncertainty in the thermal parameters may limit the accuracy of HAADF image simulations. With the current interest in quantitative microscopy, simulations need to accurately describe the electron scattering to the very high angles often subtended by a HAADF detector. For this system approximately 15% of the scattering occurs above 200 mrad at 200 kV. To simulate scattering to such high angles, very fine sampling of the projected potential is necessary which increases the computational cost of the simulation. Copyright © 2011 Elsevier B.V. All rights reserved.
Direct and simultaneous estimation of cardiac four chamber volumes by multioutput sparse regression.
Zhen, Xiantong; Zhang, Heye; Islam, Ali; Bhaduri, Mousumi; Chan, Ian; Li, Shuo
2017-02-01
Cardiac four-chamber volume estimation plays a fundamental and crucial role in clinical quantitative analysis of whole heart function. It is a challenging task due to the complexity of the four chambers, including great appearance variations, large shape deformations and interference between chambers. Direct estimation has recently emerged as an effective and convenient tool for cardiac ventricular volume estimation. However, existing direct estimation methods were specifically developed for a single ventricle, i.e., the left ventricle (LV), or for bi-ventricles; they cannot be directly used for four-chamber volume estimation due to the great combinatorial variability and highly complex anatomical interdependency of the four chambers. In this paper, we propose a new, general framework for direct and simultaneous four-chamber volume estimation. We address two key issues, i.e., cardiac image representation and simultaneous four-chamber volume estimation, which together enable accurate and efficient four-chamber volume estimation. We generate compact and discriminative image representations by supervised descriptor learning (SDL), which removes irrelevant information and extracts discriminative features. We achieve direct and simultaneous four-chamber volume estimation by multioutput sparse latent regression (MSLR), which jointly models nonlinear input-output relationships and captures four-chamber interdependence. The proposed method is highly general and independent of imaging modality, providing a regression framework that can be used extensively for clinical data prediction to achieve automated diagnosis. Experiments on both MR and CT images show that our method achieves high performance, with a correlation coefficient of up to 0.921 against ground truth obtained manually by human experts, which is clinically significant and enables more accurate, convenient and comprehensive assessment of cardiac function.
Copyright © 2016 Elsevier B.V. All rights reserved.
Vukovic, Rade; Milenkovic, Tatjana; Stojan, George; Vukovic, Ana; Mitrovic, Katarina; Todorovic, Sladjana; Soldatovic, Ivan
2017-01-01
The dichotomous nature of the current definition of metabolic syndrome (MS) in youth results in loss of information. On the other hand, the calculation of continuous MS scores using standardized residuals in linear regression (Z scores) or factor scores of principal component analysis (PCA) is highly impractical for clinical use. Recently, a novel, easily calculated continuous MS score called the siMS score was developed based on the IDF MS criteria for the adult population. Our aim was to develop a Pediatric siMS score (PsiMS), a modified continuous MS score for use in obese youth, based on the original siMS score, while keeping the score as simple as possible and retaining high correlation with more complex scores. The database consisted of clinical data on 153 obese (BMI ≥95th percentile) children and adolescents. Continuous MS scores were calculated using Z scores and PCA, as well as the original siMS score. Four variants of the PsiMS score were developed in accordance with IDF criteria for MS in youth, and the correlation of these scores with PCA and Z score derived continuous MS scores was assessed. The PsiMS score calculated using the formula (2 × Waist/Height) + (Glucose(mmol/l)/5.6) + (Triglycerides(mmol/l)/1.7) + (Systolic BP/130) - (HDL(mmol/l)/1.02) showed the highest correlation with most of the complex continuous scores (0.792-0.901). The original siMS score also showed high correlation with continuous MS scores. The PsiMS score represents a practical and accurate score for the evaluation of MS in obese youth. The original siMS score should be used when evaluating large cohorts consisting of both adults and children.
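The published formula is simple enough to compute directly. A minimal sketch, with units as stated in the abstract (glucose, triglycerides and HDL in mmol/l, systolic BP in mmHg, waist and height in the same length unit):

```python
def psims(waist, height, glucose_mmol, tg_mmol, sbp_mmhg, hdl_mmol):
    """Pediatric siMS score as given in the abstract:
    2*Waist/Height + Glucose/5.6 + TG/1.7 + SBP/130 - HDL/1.02."""
    return (2 * waist / height
            + glucose_mmol / 5.6
            + tg_mmol / 1.7
            + sbp_mmhg / 130
            - hdl_mmol / 1.02)
```

A child exactly at each IDF cut-off except a waist-to-height ratio of 0.6 would score 2*0.6 + 1 + 1 + 1 - 1 = 3.2.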
High-Density SNP Genotyping to Define β-Globin Locus Haplotypes
Liu, Li; Muralidhar, Shalini; Singh, Manisha; Sylvan, Caprice; Kalra, Inderdeep S.; Quinn, Charles T.; Onyekwere, Onyinye C.; Pace, Betty S.
2014-01-01
Five major β-globin locus haplotypes have been established in individuals with sickle cell disease (SCD) from the Benin, Bantu, Senegal, Cameroon, and Arab-Indian populations. Historically, β-haplotypes were established using restriction fragment length polymorphism (RFLP) analysis across the β-locus, which consists of five functional β-like globin genes located on chromosome 11. Previous attempts to correlate these haplotypes as robust predictors of clinical phenotypes observed in SCD have not been successful. We speculate that the coverage and distribution of the RFLP sites located proximal to or within the globin genes are not sufficiently dense to accurately reflect the complexity of this region. To test our hypothesis, we performed RFLP analysis and high-density single nucleotide polymorphism (SNP) genotyping across the β-locus using DNA samples from either healthy African Americans with normal hemoglobin A (HbAA) or individuals with homozygous SS (HbSS) disease. Using the genotyping data from 88 SNPs and Haploview analysis, we generated a greater number of haplotypes than that observed with RFLP analysis alone. Furthermore, a unique pattern of long-range linkage disequilibrium between the locus control region and the β-like globin genes was observed in the HbSS group. Interestingly, we observed multiple SNPs within the HindIII restriction site located in the Gγ-globin intervening sequence II which produced the same RFLP pattern. These findings illustrated the inability of RFLP analysis to decipher the complexity of sequence variations that impacts genomic structure in this region. Our data suggest that high density SNP mapping may be required to accurately define β-haplotypes that correlate with the different clinical phenotypes observed in SCD. PMID:18829352
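For background, the pairwise linkage disequilibrium that tools such as Haploview report can be computed from haplotype and allele frequencies. A minimal r² sketch using the generic definition, not the Haploview internals:

```python
def ld_r2(p_ab, p_a, p_b):
    """Pairwise LD between two biallelic SNPs:
    D = p(AB) - p(A)*p(B);  r^2 = D^2 / (p(A)(1-p(A)) * p(B)(1-p(B))).
    Frequencies must be strictly between 0 and 1."""
    d = p_ab - p_a * p_b
    return d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))
```

r² = 1 indicates complete LD (the two SNPs carry redundant information), while r² near 0 indicates near-independence, as seen between weakly linked markers across the β-locus.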
Assessment of a fully 3D Monte Carlo reconstruction method for preclinical PET with iodine-124
NASA Astrophysics Data System (ADS)
Moreau, M.; Buvat, I.; Ammour, L.; Chouin, N.; Kraeber-Bodéré, F.; Chérel, M.; Carlier, T.
2015-03-01
Iodine-124 is a radionuclide well suited to the labeling of intact monoclonal antibodies. Yet, accurate quantification in preclinical imaging with I-124 is challenging due to the large positron range and a complex decay scheme including high-energy gammas. The aim of this work was to assess the quantitative performance of a fully 3D Monte Carlo (MC) reconstruction for preclinical I-124 PET. The high-resolution small animal PET Inveon (Siemens) was simulated using GATE 6.1. Three system matrices (SM) of different complexity were calculated in addition to a Siddon-based ray tracing approach for comparison purposes. Each system matrix accounted for a more or less complete description of the physics processes both in the scanned object and in the PET scanner. One homogeneous water phantom and three heterogeneous phantoms including water, lungs and bones were simulated, where hot and cold regions were used to assess activity recovery as well as the trade-off between contrast recovery and noise in different regions. The benefit of accounting for scatter, attenuation, positron range and spurious coincidences occurring in the object when calculating the system matrix used to reconstruct I-124 PET images was highlighted. We found that using an MC SM with a thorough modelling of the detector response and of physical effects in a uniform water-equivalent phantom was sufficient to achieve reasonable quantitative accuracy in both homogeneous and heterogeneous phantoms. Modelling the phantom heterogeneities in the SM did not necessarily yield the most accurate estimate of the activity distribution, due to the high variance affecting many SM elements in the most sophisticated SM.
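The abstract does not name the iterative algorithm, but system-matrix-based PET reconstruction is commonly driven by an MLEM update, which can be sketched as follows (toy dimensions; a real system matrix is huge and sparse, and here MLEM is an assumed stand-in for whatever update the authors used):

```python
def mlem_step(A, y, x, eps=1e-12):
    """One MLEM update x <- (x / s) * A^T (y / A x), where A is the system
    matrix (rows = detector bins, columns = voxels), y the measured counts,
    x the current nonnegative image estimate, and s the column sums of A."""
    nbins, nvox = len(A), len(A[0])
    fwd = [sum(A[i][j] * x[j] for j in range(nvox)) for i in range(nbins)]
    ratio = [y[i] / max(fwd[i], eps) for i in range(nbins)]
    back = [sum(A[i][j] * ratio[i] for i in range(nbins)) for j in range(nvox)]
    s = [sum(A[i][j] for i in range(nbins)) for j in range(nvox)]
    return [x[j] * back[j] / max(s[j], eps) for j in range(nvox)]
```

The quality of the reconstruction then hinges on how completely A models positron range, scatter, attenuation and detector response, which is exactly what the compared system matrices vary.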
NASA Astrophysics Data System (ADS)
Pont, Grégoire; Brenner, Pierre; Cinnella, Paola; Maugars, Bruno; Robinet, Jean-Christophe
2017-12-01
A Godunov-type unstructured finite volume method suitable for highly compressible turbulent scale-resolving simulations around complex geometries is constructed using a successive correction technique. First, a family of k-exact Godunov schemes is developed by recursively correcting the truncation error of the piecewise polynomial representation of the primitive variables. The keystone of the proposed approach is a quasi-Green gradient operator which ensures consistency on general meshes. In addition, a high-order single-point quadrature formula, based on high-order approximations of the successive derivatives of the solution, is developed for flux integration along cell faces. The proposed family of schemes is compact in the algorithmic sense, since it only involves communications between direct neighbors of the mesh cells. The numerical properties of the schemes up to fifth order are investigated, with a focus on their resolvability in terms of the number of mesh points required to resolve a given wavelength accurately. Afterwards, with the aim of achieving the best possible trade-off between accuracy, computational cost and robustness for industrial flow computations, we focus more specifically on the third-order accurate scheme of the family, and locally modify its numerical flux to reduce the amount of numerical dissipation in vortex-dominated regions. This is achieved by switching from the upwind scheme, mostly applied in highly compressible regions, to a fourth-order centered one in vortex-dominated regions. An analytical switch function based on the local grid Reynolds number is adopted to guarantee numerical stability of the recentering process. Numerical applications demonstrate the accuracy and robustness of the proposed methodology for compressible scale-resolving computations.
In particular, supersonic RANS/LES computations of the flow over a cavity are presented to show the capability of the scheme to predict flows with shocks, vortical structures and complex geometries.
Castro Grijalba, Alexander; Martinis, Estefanía M; Wuilloud, Rodolfo G
2017-03-15
A highly sensitive vortex assisted liquid-liquid microextraction (VA-LLME) method was developed for inorganic Se [Se(IV) and Se(VI)] speciation analysis in Allium and Brassica vegetables. The phosphonium ionic liquid (IL) trihexyl(tetradecyl)phosphonium decanoate was applied for the extraction of the Se(IV)-ammonium pyrrolidine dithiocarbamate (APDC) complex, followed by Se determination with electrothermal atomic absorption spectrometry. A complete optimization of the graphite furnace temperature program was developed for accurate determination of Se in the IL-enriched extracts, and multivariate statistical optimization was performed to define the conditions for the highest extraction efficiency. Significant factors of the IL-VA-LLME method were sample volume, extraction pH, extraction time and APDC concentration. High extraction efficiency (90%), a 100-fold preconcentration factor and a detection limit of 5.0 ng/L were achieved. The high sensitivity obtained with preconcentration and the non-chromatographic separation of inorganic Se species in complex matrix samples such as garlic, onion, leek, broccoli and cauliflower are the main advantages of IL-VA-LLME. Copyright © 2016 Elsevier Ltd. All rights reserved.
Characteristics for electrochemical machining with nanoscale voltage pulses.
Lee, E S; Back, S Y; Lee, J T
2009-06-01
Electrochemical machining has traditionally been used in highly specialized fields, such as the aerospace and defense industries. It is now increasingly being applied in other industries, where parts of difficult-to-cut material, complex geometry and tribology, and devices at the nanoscale and microscale are required. Electrical characteristics play the principal role in electrochemical machining, while chemical characteristics play a supporting role. The essential parameters in electrochemical machining can therefore be described as current density, machining time, inter-electrode gap size, electrolyte, electrode shape, etc. Electrochemical machining provides an economical and effective method for machining high strength, high tension and heat-resistant materials into complex shapes such as turbine blades of titanium and aluminum alloys. The application of nanoscale voltage pulses between a tool electrode and a workpiece in an electrochemical environment allows the three-dimensional machining of conducting materials with sub-micrometer precision. In this study, microprobes were developed by electrochemical etching, and microholes were manufactured using these microprobes as tool electrodes. Microholes and microgrooves can be accurately machined using nanoscale voltage pulses.
NASA Astrophysics Data System (ADS)
Grussenmeyer, P.; Alby, E.; Landes, T.; Koehl, M.; Guillemin, S.; Hullo, J. F.; Assali, P.; Smigiel, E.
2012-07-01
Different approaches and tools are required in Cultural Heritage Documentation to deal with the complexity of monuments and sites. The documentation process has changed strongly in the last few years, always driven by technology. Accurate documentation is closely tied to advances in technology (imaging sensors, high speed scanning, automation in recording and processing data) for the purposes of conservation works, management, appraisal, assessment of the structural condition, archiving, publication and research (Patias et al., 2008). In this paper we focus on the recording aspects of cultural heritage documentation, especially the generation of geometric and photorealistic 3D models for accurate reconstruction and visualization purposes. The selected approaches are based on the combination of photogrammetric dense matching and Terrestrial Laser Scanning (TLS) techniques. Both techniques have pros and cons, and recent advances have changed the recording approach. The choice of the best workflow depends on the site configuration, the performance of the sensors, and criteria such as geometry, accuracy, resolution, georeferencing, texture, and of course processing time. TLS techniques (time of flight or phase shift systems) are widely used for recording large and complex objects and sites. Point cloud generation from images by dense stereo or multi-view matching can be used as an alternative or as a complementary method to TLS. Compared to TLS, the photogrammetric solution is a low cost one, as the acquisition system is limited to a high-performance digital camera and a few accessories only. Indeed, the stereo or multi-view matching process offers a cheap, flexible and accurate solution to get 3D point clouds. Moreover, the captured images might also be used for texturing the models. Several software packages are available, whether web-based, open source or commercial.
The main advantage of this photogrammetric or computer vision based technology is to obtain at the same time a point cloud (whose resolution depends on the size of the pixel on the object), and therefore an accurate meshed object with its texture. After the matching and processing steps, the resulting data can be used in much the same way as a TLS point cloud, but with the addition of radiometric information for textures. The discussion in this paper reviews recording and important processing steps such as geo-referencing and data merging, the essential assessment of the results, and examples of deliverables from projects of the Photogrammetry and Geomatics Group (INSA Strasbourg, France).
Stereolithographic Surgical Template: A Review
Dandekeri, Shilpa Sudesh; Sowmya, M.K.; Bhandary, Shruthi
2013-01-01
Implant placement has become a routine modality of dental care. Improvements in surgical reconstructive methods, as well as increased prosthetic demands, require highly accurate diagnosis, planning and placement. Recently, computer-aided design and manufacturing have made it possible to use data from computerised tomography not only to plan implant rehabilitation, but also to transfer this information to the surgery. A review of one such technique, stereolithography, is presented in this article. It permits graphic and complex 3D implant placement and the fabrication of stereolithographic surgical templates, and it offers many significant benefits over traditional procedures. PMID:24179955
Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zabaras, Nicolas J.
2016-11-08
Predictive modeling of multiscale and multiphysics systems requires accurate data-driven characterization of the input uncertainties and an understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low dimensional input models and low-complexity surrogate systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas, including physical and biological processes, from climate modeling to systems biology.
Sensitive detection of C-reactive protein using optical fiber Bragg gratings.
Sridevi, S; Vasu, K S; Asokan, S; Sood, A K
2015-03-15
An accurate and highly sensitive sensor platform has been demonstrated for the detection of C-reactive protein (CRP) using optical fiber Bragg gratings (FBGs). CRP detection is carried out by monitoring the shift in Bragg wavelength (ΔλB) of an etched FBG (eFBG) coated with an anti-CRP antibody (aCRP)-graphene oxide (GO) complex. The complex was characterized by Fourier transform infrared spectroscopy, X-ray photoelectron spectroscopy and atomic force microscopy. A limit of detection of 0.01 mg/L has been achieved, with a linear range of detection from 0.01 mg/L to 100 mg/L, which includes the clinical range of CRP. eFBG sensors coated with aCRP alone (without GO) show much lower sensitivity than those coated with the aCRP-GO complex. The eFBG sensors show high specificity to CRP even in the presence of interfering factors such as urea, creatinine and glucose. An affinity constant of ∼1.1×10^10 M^-1 was extracted from the data of normalized shift (ΔλB/λB) as a function of CRP concentration. Copyright © 2014 Elsevier B.V. All rights reserved.
Reduction of chemical formulas from the isotopic peak distributions of high-resolution mass spectra.
Roussis, Stilianos G; Proulx, Richard
2003-03-15
A method has been developed for the reduction of the chemical formulas of compounds in complex mixtures from the isotopic peak distributions of high-resolution mass spectra. The method is based on the principle that the observed isotopic peak distribution of a mixture of compounds is a linear combination of the isotopic peak distributions of the individual compounds in the mixture. All possible chemical formulas that meet specific criteria (e.g., type and number of atoms in structure, limits of unsaturation, etc.) are enumerated, and theoretical isotopic peak distributions are generated for each formula. The relative amount of each formula is obtained from the accurately measured isotopic peak distribution and the calculated isotopic peak distributions of all candidate formulas. The formulas of compounds in simple spectra, where peak components are fully resolved, are rapidly determined by direct comparison of the calculated and experimental isotopic peak distributions. The singular value decomposition linear algebra method is used to determine the contributions of compounds in complex spectra containing unresolved peak components. The principles of the approach and typical application examples are presented. The method is most useful for the characterization of complex spectra containing partially resolved peaks and structures with multiisotopic elements.
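The linear-combination principle can be illustrated with ordinary least squares for two candidate formulas (2×2 normal equations solved by Cramer's rule; the paper uses singular value decomposition for larger systems with unresolved peak components):

```python
def mixture_amounts(observed, dists):
    """Least-squares relative amounts of two compounds whose theoretical
    isotopic peak distributions (same m/z grid as `observed`) combine
    linearly into the observed isotopic peak distribution."""
    t1, t2 = dists
    # normal equations (T^T T) x = T^T y for the two-column design matrix T
    a11 = sum(x * x for x in t1)
    a12 = sum(x * y for x, y in zip(t1, t2))
    a22 = sum(y * y for y in t2)
    b1 = sum(x * o for x, o in zip(t1, observed))
    b2 = sum(y * o for y, o in zip(t2, observed))
    det = a11 * a22 - a12 * a12
    return ((b1 * a22 - b2 * a12) / det, (b2 * a11 - b1 * a12) / det)
```

With many candidate formulas the same normal equations become ill-conditioned, which is why SVD (with truncation of small singular values) is the method of choice for complex, partially resolved spectra.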
NASA Technical Reports Server (NTRS)
Singleterry, Robert C., Jr.; Walker, Steven A.; Clowdsley, Martha S.
2016-01-01
The mathematical models for Solar Particle Event (SPE) high energy tails are constructed with several different algorithms. Since limited measured data exist above energies around 400 MeV, this paper arbitrarily defines the high energy tail as any proton with an energy above 400 MeV. In order to better understand the importance of accurately modeling the high energy tail for SPE spectra, the contribution of the high energy portions of three different SPE models to astronaut whole body effective dose equivalent has been evaluated. To ensure completeness of this analysis, simple and complex geometries were used. This analysis showed that the high energy tail of certain SPEs can be relevant to astronaut exposure and hence safety. Therefore, models of high energy tails for SPEs should be well analyzed and based on data if possible.
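As a toy illustration of why the tail model matters, assume an integral spectrum J(>E) ∝ exp(-E/E0). This exponential-in-energy form is a hedged simplification; operational SPE models are often exponential in rigidity, Weibull, or Band functions, and the E0 values below are arbitrary:

```python
import math

def fraction_above(e_mev, e0_mev):
    """Fraction of protons above energy e_mev for an integral spectrum
    J(>E) proportional to exp(-E/E0)."""
    return math.exp(-e_mev / e0_mev)

# A soft event (E0 = 50 MeV) versus a hard one (E0 = 150 MeV): the hard
# spectrum carries far more of its protons above the 400 MeV cutoff,
# so the choice of tail model strongly affects the predicted dose.
soft = fraction_above(400.0, 50.0)
hard = fraction_above(400.0, 150.0)
```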
Nunes, Matheus Henrique; Görgens, Eric Bastos
2016-01-01
Tree stem form in native tropical forests is very irregular, posing a challenge to establishing taper equations that can accurately predict the diameter at any height along the stem and subsequently merchantable volume. Artificial intelligence approaches can be useful techniques in minimizing estimation errors within complex variations of vegetation. We evaluated the performance of Random Forest® regression tree and Artificial Neural Network procedures in modelling stem taper. Diameters and volume outside bark were compared to a traditional taper-based equation across a tropical Brazilian savanna, a seasonal semi-deciduous forest and a rainforest. Neural network models were found to be more accurate than the traditional taper equation. Random forest showed trends in the residuals from the diameter prediction and provided the least precise and accurate estimations for all forest types. This study provides insights into the superiority of a neural network, which provided advantages regarding the handling of local effects. PMID:27187074
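A traditional taper equation of the kind the machine-learning models were compared against can be sketched as a simple polynomial in relative height. The functional form and coefficients below are illustrative assumptions, not the paper's fitted model:

```python
def taper_diameter(dbh, total_height, h, b=(1.2, -1.6, 0.4)):
    """Diameter at height h along the stem from a simple polynomial taper
    equation d = DBH * (b1 + b2*(h/H) + b3*(h/H)^2); the coefficients are
    chosen so the diameter is 1.2*DBH at the base and zero at the tip."""
    x = h / total_height
    return dbh * (b[0] + b[1] * x + b[2] * x * x)
```

A neural network replaces this fixed polynomial with a learned mapping from (DBH, H, h, stand covariates) to diameter, which is how it absorbs the local effects the abstract mentions.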
NASA Astrophysics Data System (ADS)
Manzi, Lucio; Barrow, Andrew S.; Scott, Daniel; Layfield, Robert; Wright, Timothy G.; Moses, John E.; Oldham, Neil J.
2016-11-01
Specific interactions between proteins and their binding partners are fundamental to life processes. The ability to detect protein complexes, and map their sites of binding, is crucial to understanding basic biology at the molecular level. Methods that employ sensitive analytical techniques such as mass spectrometry have the potential to provide valuable insights with very little material and on short time scales. Here we present a differential protein footprinting technique employing an efficient photo-activated probe for use with mass spectrometry. Using this methodology the location of a carbohydrate substrate was accurately mapped to the binding cleft of lysozyme, and in a more complex example, the interactions between a 100 kDa, multi-domain deubiquitinating enzyme, USP5 and a diubiquitin substrate were located to different functional domains. The much improved properties of this probe make carbene footprinting a viable method for rapid and accurate identification of protein binding sites utilizing benign, near-UV photoactivation.
Instructional Strategy: Administration of Injury Scripts
ERIC Educational Resources Information Center
Schilling, Jim
2016-01-01
Context: Learning how to form accurate and efficient clinical examinations is a critical factor in becoming a competent athletic training practitioner, and instructional strategies differ for this complex task. Objective: To introduce an instructional strategy consistent with complex learning to encourage improved efficiency by minimizing…
NASA Astrophysics Data System (ADS)
Bowring, S. A.
2010-12-01
Over the past two decades, U-Pb geochronology by ID-TIMS has been refined to achieve internal (analytical) uncertainties on a single grain analysis of ± ~ 0.1-0.2%, and 0.05% or better on weighted mean dates. This level of precision enables unprecedented evaluation of the rates and durations of geological processes, from magma chamber evolution to mass extinctions and recoveries. The increased precision, however, exposes complexity in magmatic/volcanic systems and highlights the importance of corrections related to disequilibrium partitioning of intermediate daughter products, and raises questions as to how best to interpret the complex spectrum of dates characteristic of many volcanic rocks. In addition, the increased precision requires renewed emphasis on the accuracy of U decay constants, the isotopic composition of U, the calibration of isotopic tracers, and the accurate propagation of uncertainties. It is now commonplace in the high precision dating of volcanic ash-beds to analyze 5-20 single grains of zircon in an attempt to resolve the eruption/depositional age. Data sets with dispersion far in excess of analytical uncertainties are interpreted to reflect Pb-loss, inheritance, and protracted crystallization, often supported with zircon chemistry. In most cases, a weighted mean of the youngest reproducible dates is interpreted as the time of eruption/deposition. Crystallization histories of silicic magmatic systems recovered from plutonic rocks may also be protracted, though may not be directly applicable to silicic eruptions; each sample must be evaluated independently. A key to robust interpretations is the integration of high-spatial-resolution zircon trace element geochemistry with high-precision ID-TIMS analyses. The EARTHTIME initiative has focused on many of these issues, and the larger subject of constructing a timeline for earth history using both U-Pb and Ar-Ar chronometers.
Despite continuing improvements in both, comparing dates for the same rock with both chronometers is not straightforward. Outstanding issues include pre-eruptive magma chamber residence, recognition of open-system behavior, accurate correction for disequilibrium amounts of 230Th and 231Pa, precise and accurate dating of fluence monitors for 40Ar/39Ar, and inter-laboratory biases. At present, despite the level of internal precision achievable by each technique, obstacles remain to combining both chronometers.
Zelesky, Veronica; Schneider, Richard; Janiszewski, John; Zamora, Ismael; Ferguson, James; Troutman, Matthew
2013-05-01
The ability to supplement high-throughput metabolic clearance data with structural information defining the site of metabolism should allow design teams to streamline their synthetic decisions. However, broad application of metabolite identification in early drug discovery has been limited, largely due to the time required for data review and structural assignment. The advent of mass defect filtering and its application toward metabolite scouting paved the way for the development of software automation tools capable of rapidly identifying drug-related material in complex biological matrices. Two semi-automated commercial software applications, MetabolitePilot™ and Mass-MetaSite™, were evaluated to assess the relative speed and accuracy of structural assignments using data generated on a high-resolution MS platform. Review of these applications has demonstrated their utility in providing accurate results in a time-efficient manner, leading to acceleration of metabolite identification initiatives while highlighting the continued need for biotransformation expertise in the interpretation of more complex metabolic reactions.
Application of a laser interferometer skin-friction meter in complex flows
NASA Technical Reports Server (NTRS)
Monson, D. J.; Driver, D. M.; Szodruch, J.
1981-01-01
The application of a nonintrusive laser-interferometer skin-friction meter, which measures skin friction with a remotely located laser interferometer that monitors the thickness change of a thin oil film, is extended both experimentally and theoretically to several complex wind-tunnel flows. These include two-dimensional separated and reattached subsonic flows with large pressure and shear gradients, and two- and three-dimensional supersonic flows at high Reynolds number, which include variable wall temperatures and cross-flows. In addition, it is found that the instrument can provide an accurate location of the mean reattachment length for separated flows. Results show that levels up to 120 N/sq m, or 40 times higher than previous tests, can be obtained, despite encountering some limits to the method for very high skin-friction levels. It is concluded that these results establish the utility of this instrument for measuring skin friction in a wide variety of flows of interest in aerodynamic testing.
Sato, Masanao; Tsuda, Kenichi; Wang, Lin; Coller, John; Watanabe, Yuichiro; Glazebrook, Jane; Katagiri, Fumiaki
2010-01-01
Biological signaling processes may be mediated by complex networks in which network components and network sectors interact with each other in complex ways. Studies of complex networks benefit from approaches in which the roles of individual components are considered in the context of the network. The plant immune signaling network, which controls inducible responses to pathogen attack, is such a complex network. We studied the Arabidopsis immune signaling network upon challenge with a strain of the bacterial pathogen Pseudomonas syringae expressing the effector protein AvrRpt2 (Pto DC3000 AvrRpt2). This bacterial strain feeds multiple inputs into the signaling network, allowing many parts of the network to be activated at once. mRNA profiles for 571 immune response genes of 22 Arabidopsis immunity mutants and wild type were collected 6 hours after inoculation with Pto DC3000 AvrRpt2. The mRNA profiles were analyzed as detailed descriptions of changes in the network state resulting from the genetic perturbations. Regulatory relationships among the genes corresponding to the mutations were inferred by recursively applying a non-linear dimensionality reduction procedure to the mRNA profile data. The resulting static network model accurately predicted 23 of 25 regulatory relationships reported in the literature, suggesting that predictions of novel regulatory relationships are also accurate. The network model revealed two striking features: (i) the components of the network are highly interconnected; and (ii) negative regulatory relationships are common between signaling sectors. Complex regulatory relationships, including a novel negative regulatory relationship between the early microbe-associated molecular pattern-triggered signaling sectors and the salicylic acid sector, were further validated. 
We propose that prevalent negative regulatory relationships among the signaling sectors make the plant immune signaling network a “sector-switching” network, which effectively balances two apparently conflicting demands, robustness against pathogenic perturbations and moderation of negative impacts of immune responses on plant fitness. PMID:20661428
NASA Astrophysics Data System (ADS)
Ivan, L.; De Sterck, H.; Susanto, A.; Groth, C. P. T.
2015-02-01
A fourth-order accurate finite-volume scheme for hyperbolic conservation laws on three-dimensional (3D) cubed-sphere grids is described. The approach is based on a central essentially non-oscillatory (CENO) finite-volume method that was recently introduced for two-dimensional compressible flows and is extended to 3D geometries with structured hexahedral grids. Cubed-sphere grids feature hexahedral cells with nonplanar cell surfaces, which are handled with high-order accuracy using trilinear geometry representations in the proposed approach. Varying stencil sizes and slope discontinuities in grid lines occur at the boundaries and corners of the six sectors of the cubed-sphere grid where the grid topology is unstructured, and these difficulties are handled naturally with high-order accuracy by the multidimensional least-squares based 3D CENO reconstruction with overdetermined stencils. A rotation-based mechanism is introduced to automatically select appropriate smaller stencils at degenerate block boundaries, where fewer ghost cells are available and the grid topology changes, requiring stencils to be modified. Combining these building blocks results in a finite-volume discretization for conservation laws on 3D cubed-sphere grids that is uniformly high-order accurate in all three grid directions. While solution-adaptivity is natural in the multi-block setting of our code, high-order accurate adaptive refinement on cubed-sphere grids is not pursued in this paper. The 3D CENO scheme is an accurate and robust solution method for hyperbolic conservation laws on general hexahedral grids that is attractive because it is inherently multidimensional by employing a K-exact overdetermined reconstruction scheme, and it avoids the complexity of considering multiple non-central stencil configurations that characterizes traditional ENO schemes. 
Extensive numerical tests demonstrate fourth-order convergence for stationary and time-dependent Euler and magnetohydrodynamic flows on cubed-sphere grids, and robustness against spurious oscillations at 3D shocks. Performance tests illustrate efficiency gains that can be potentially achieved using fourth-order schemes as compared to second-order methods for the same error level. Applications on extended cubed-sphere grids incorporating a seventh root block that discretizes the interior of the inner sphere demonstrate the versatility of the spatial discretization method.
Contributions of the ARM Program to Radiative Transfer Modeling for Climate and Weather Applications
NASA Technical Reports Server (NTRS)
Mlawer, Eli J.; Iacono, Michael J.; Pincus, Robert; Barker, Howard W.; Oreopoulos, Lazaros; Mitchell, David L.
2016-01-01
Accurate climate and weather simulations must account for all relevant physical processes and their complex interactions. Each of these atmospheric, ocean, and land processes must be considered on an appropriate spatial and temporal scale, which leads these simulations to require a substantial computational burden. One especially critical physical process is the flow of solar and thermal radiant energy through the atmosphere, which controls planetary heating and cooling and drives the large-scale dynamics that moves energy from the tropics toward the poles. Radiation calculations are therefore essential for climate and weather simulations, but are themselves quite complex even without considering the effects of variable and inhomogeneous clouds. Clear-sky radiative transfer calculations have to account for thousands of absorption lines due to water vapor, carbon dioxide, and other gases, which are irregularly distributed across the spectrum and have shapes dependent on pressure and temperature. The line-by-line (LBL) codes that treat these details have a far greater computational cost than can be afforded by global models. Therefore, the crucial requirement for accurate radiation calculations in climate and weather prediction models must be satisfied by fast solar and thermal radiation parameterizations with a high level of accuracy that has been demonstrated through extensive comparisons with LBL codes. See attachment for continuation.
The draft genome of MD-2 pineapple using hybrid error correction of long reads
Redwan, Raimi M.; Saidin, Akzam; Kumar, S. Vijay
2016-01-01
The introduction of the elite pineapple variety MD-2 has caused a significant market shift in the pineapple industry. Better productivity, an overall increase in fruit quality and taste, resilience to chilled storage and resistance to internal browning are among the key advantages of MD-2 over its predecessor, the Smooth Cayenne. Here, we present the genome sequence of the MD-2 pineapple (Ananas comosus (L.) Merr.) obtained using hybrid sequencing technology from two platforms: PacBio long reads and accurate Illumina short reads. Our draft genome achieved 99.6% genome coverage with 27,017 predicted protein-coding genes, while 45.21% of the genome was identified as repetitive elements. Furthermore, differential expression analysis of an RNASeq library of ripening pineapple fruits revealed ethylene-related transcripts believed to be involved in regulating the process of non-climacteric fruit ripening. The MD-2 pineapple draft genome serves as an example of how a complex heterozygous genome is amenable to whole genome sequencing using a hybrid technology that is both economical and accurate. The genome will make genomic applications more feasible as a medium to understand complex biological processes specific to pineapple. PMID:27374615
Fractal Complexity-Based Feature Extraction Algorithm of Communication Signals
NASA Astrophysics Data System (ADS)
Wang, Hui; Li, Jingchao; Guo, Lili; Dou, Zheng; Lin, Yun; Zhou, Ruolin
Detecting, intercepting and locating radiation sources in order to analyze and identify their characteristics and estimate the threat level is a central issue of electronic support in electronic warfare, and communication signal recognition is one of the keys to solving it. To accurately extract the individual characteristics of a radiation source in the increasingly complex communication electromagnetic environment, a novel feature extraction algorithm for individual characteristics of the communication radiation source, based on the fractal complexity of the signal, is proposed. According to the complexity of the received signal and the level of environmental noise, fractal dimension features of different complexity are used to characterize the subtle features of the signal and establish a feature database; different broadcasting stations are then identified using grey relational analysis. The simulation results demonstrate that the algorithm achieves a recognition rate of 94% even at an SNR of -10 dB, providing an important theoretical basis for accurate identification of the subtle features of a signal at low SNR in the field of information confrontation.
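The feature set in this work is built from several fractal dimensions of the received waveform. As a minimal stand-in, the sketch below computes the Katz fractal dimension, one common single-pass waveform complexity measure; it is illustrative only and not necessarily among the exact dimensions the authors use.

```python
import math

def katz_fd(x):
    """Katz fractal dimension of a sampled waveform.

    Treats the signal as the planar curve (i, x[i]); the result is ~1 for
    a straight line and grows toward 2 as the waveform becomes more complex.
    """
    n = len(x)
    steps = [math.hypot(1.0, x[i + 1] - x[i]) for i in range(n - 1)]
    total = sum(steps)                       # curve length L
    mean_step = total / (n - 1)              # average step a
    extent = max(math.hypot(i, x[i] - x[0])  # planar diameter d
                 for i in range(1, n))
    return math.log10(total / mean_step) / math.log10(extent / mean_step)
```

A vector of such complexity measures per intercepted signal would populate the feature database that the grey relational classifier then matches against.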
Analysis of High Order Difference Methods for Multiscale Complex Compressible Flows
NASA Technical Reports Server (NTRS)
Sjoegreen, Bjoern; Yee, H. C.; Tang, Harry (Technical Monitor)
2002-01-01
Accurate numerical simulations of complex multiscale compressible viscous flows, especially high speed turbulence combustion and acoustics, demand high order schemes with adaptive numerical dissipation controls. Standard high resolution shock-capturing methods are too dissipative to capture the small scales and/or long-time wave propagations without extreme grid refinements and small time steps. An integrated approach for the control of numerical dissipation in high order schemes with incremental studies was initiated. Here we further refine the analysis on, and improve the understanding of, the adaptive numerical dissipation control strategy. Basically, the development of these schemes focuses on high order nondissipative schemes and takes advantage of the progress that has been made over the last 30 years in numerical methods for conservation laws, such as techniques for imposing boundary conditions, techniques for stability at shock waves, and techniques for stable and accurate long-time integration. We concentrate on high order centered spatial discretizations and a fourth-order Runge-Kutta temporal discretization as the base scheme. Near the boundaries, the base scheme has stable boundary difference operators. To further enhance stability, the split form of the inviscid flux derivatives is frequently used for smooth flow problems. To enhance nonlinear stability, linear high order numerical dissipations are employed away from discontinuities, and nonlinear filters are employed after each time step in order to suppress spurious oscillations near discontinuities and to minimize the smearing of turbulent fluctuations. Although these schemes are built from many components, each of which is well-known, it is not entirely obvious how the different components are best connected. For example, the nonlinear filter could instead have been built into the spatial discretization, so that it would have been activated at each stage in the Runge-Kutta time stepping.
We could think of a mechanism that activates the split form of the equations only in some parts of the domain. Another issue is how to define good sensors for determining in which parts of the computational domain a certain feature should be filtered by the appropriate numerical dissipation. For the present study we employ a wavelet technique, introduced in earlier work, as sensors. Here, the method is briefly described with selected numerical experiments.
Aiding the Detection of QRS Complex in ECG Signals by Detecting S Peaks Independently.
Sabherwal, Pooja; Singh, Latika; Agrawal, Monika
2018-03-30
In this paper, a novel algorithm for the accurate detection of the QRS complex, combining the independent detection of R and S peaks through a fusion algorithm, is proposed. R peak detection has been extensively studied and is commonly used to detect the QRS complex, whereas S peaks, which are also part of the QRS complex, can be detected independently to aid QRS detection. We suggest a method that first estimates S peaks from the raw ECG signal and then uses them to aid the detection of the QRS complex. Because the amplitude of the S peak in an ECG signal is weaker than that of the corresponding R peak, which is traditionally used for QRS detection, an appropriate digital filter is designed to enhance the S peaks. The enhanced S peaks are then detected by adaptive thresholding. The algorithm is validated on all signals of the MIT-BIH arrhythmia database and the noise stress database from physionet.org, and performs reasonably well even for signals highly corrupted by noise. Its performance is confirmed by a sensitivity and positive predictivity of 99.99% and a detection accuracy of 99.98% for QRS complex detection. The numbers of false positives and false negatives were drastically reduced to 80 and 42, against 98 and 84 for the best results reported so far.
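A stripped-down version of the enhancement-plus-adaptive-thresholding pipeline can be sketched as below. The squared-derivative filter, the threshold update rule, and the synthetic spike-train input are illustrative simplifications, not the paper's actual filter design or MIT-BIH data.

```python
def enhance(sig):
    """Squared first difference emphasizes the steep slopes of QRS deflections."""
    return [0.0] + [(sig[i] - sig[i - 1]) ** 2 for i in range(1, len(sig))]

def detect_qrs(sig, fs, refractory=0.2):
    """Peak picking with an adaptive threshold and a physiological refractory period.

    sig        : raw samples
    fs         : sampling rate (Hz)
    refractory : minimum spacing between beats (s)
    """
    e = enhance(sig)
    thr = 0.5 * max(e)                  # initial threshold from the global maximum
    peaks, last = [], None
    for i, v in enumerate(e):
        if v > thr and (last is None or (i - last) / fs > refractory):
            peaks.append(i)
            last = i
            thr = 0.5 * thr + 0.25 * v  # drift threshold toward recent peak heights
    return peaks
```

In the paper's scheme, detections like these from the R channel and the S channel are then fused into a single QRS decision.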
NASA Astrophysics Data System (ADS)
Mohamed, Gehad G.; Hamed, Maher M.; Zaki, Nadia G.; Abdou, Mohamed M.; Mohamed, Marwa El-Badry; Abdallah, Abanoub Mosaad
2017-07-01
A simple, accurate and fast spectrophotometric method for the quantitative determination of the melatonin (ML) drug in its pure and pharmaceutical forms was developed, based on the formation of its charge transfer complex with 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ) as an electron acceptor. The conditions of the method were carefully optimized. The Lambert-Beer's law was found to be valid over the concentration range of 4-100 μg mL-1 ML. The solid form of the CT complex was structurally characterized by means of different spectral methods. Density functional theory (DFT) and time-dependent density functional theory (TD-DFT) calculations were carried out. The different quantum chemical parameters of the CT complex were calculated. Thermal properties of the CT complex and its kinetic thermodynamic parameters were studied, and its antimicrobial and antifungal activities were investigated. Molecular docking studies were performed to predict the binding modes of the CT complex components towards E. coli bacterial RNA and the receptor of breast cancer mutant oxidoreductase.
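Quantitation under the Beer-Lambert law amounts to a linear calibration of absorbance versus concentration over the validated 4-100 μg mL-1 range. A minimal least-squares sketch, using made-up calibration points rather than the paper's measurements, is:

```python
def fit_calibration(conc, absorb):
    """Least-squares calibration line A = m*c + b through standards."""
    n = len(conc)
    mc = sum(conc) / n
    ma = sum(absorb) / n
    m = (sum((c - mc) * (a - ma) for c, a in zip(conc, absorb))
         / sum((c - mc) ** 2 for c in conc))
    b = ma - m * mc
    return m, b

def concentration(absorbance, m, b):
    """Invert the calibration line to read an unknown concentration."""
    return (absorbance - b) / m
```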
A fast complex integer convolution using a hybrid transform
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1978-01-01
It is shown that the Winograd transform can be combined with a complex integer transform over the Galois field GF(q-squared) to yield a new algorithm for computing the discrete cyclic convolution of sequences of complex numbers. By this means, a fast method for accurately computing cyclic convolutions of long lengths can be obtained. This new hybrid algorithm requires fewer multiplications than previous algorithms.
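The quantity being computed is the discrete cyclic convolution c_i = sum_j a_j * b_((i-j) mod n). A direct O(n^2) reference implementation, useful for checking any fast transform-based method such as the hybrid algorithm described here, is:

```python
def cyclic_convolve(a, b):
    """Direct discrete cyclic convolution of two equal-length complex sequences."""
    n = len(a)
    # index (i - j) wraps around modulo n, which is what makes it cyclic
    return [sum(a[j] * b[(i - j) % n] for j in range(n)) for i in range(n)]
```

A fast algorithm earns its keep by matching this output while performing far fewer multiplications than the n^2 used here.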
Near-field acoustical holography of military jet aircraft noise
NASA Astrophysics Data System (ADS)
Wall, Alan T.; Gee, Kent L.; Neilsen, Tracianne; Krueger, David W.; Sommerfeldt, Scott D.; James, Michael M.
2010-10-01
Noise radiated from high-performance military jet aircraft poses a hearing-loss risk to personnel. Accurate characterization of jet noise can assist in noise prediction and noise reduction techniques. In this work, sound pressure measurements were made in the near field of an F-22 Raptor. With more than 6000 measurement points, this is the most extensive near-field measurement of a high-performance jet to date. A technique called near-field acoustical holography has been used to propagate the complex pressure from a two-dimensional plane to a three-dimensional region in the jet vicinity. Results will be shown and what they reveal about jet noise characteristics will be discussed.
A Comparative Study of Airflow and Odorant Deposition in the Mammalian Nasal Cavity
NASA Astrophysics Data System (ADS)
Richter, Joseph; Rumple, Christopher; Ranslow, Allison; Quigley, Andrew; Pang, Benison; Neuberger, Thomas; Krane, Michael; van Valkenburgh, Blaire; Craven, Brent
2013-11-01
The complex structure of the mammalian nasal cavity provides a tortuous airflow path and a large surface area for respiratory air conditioning, filtering of inspired contaminants, and olfaction. Due to the small and contorted structure of the nasal turbinals, nasal anatomy and function remains poorly understood in most mammals. Here, we utilize high-resolution MRI scans to reconstruct anatomically-accurate models of the mammalian nasal cavity. These data are used to compare the form and function of the mammalian nose. High-fidelity computational fluid dynamics (CFD) simulations of nasal airflow and odorant deposition are presented and used to compare olfactory function across species (primate, rodent, canine, feline, ungulate).
Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling.
Perdikaris, P; Raissi, M; Damianou, A; Lawrence, N D; Karniadakis, G E
2017-02-01
Multi-fidelity modelling enables accurate inference of quantities of interest by synergistically combining realizations of low-cost/low-fidelity models with a small set of high-fidelity observations. This is particularly effective when the low- and high-fidelity models exhibit strong correlations, and can lead to significant computational gains over approaches that solely rely on high-fidelity models. However, in many cases of practical interest, low-fidelity models can only be well correlated to their high-fidelity counterparts for a specific range of input parameters, and potentially return wrong trends and erroneous predictions if probed outside of their validity regime. Here we put forth a probabilistic framework based on Gaussian process regression and nonlinear autoregressive schemes that is capable of learning complex nonlinear and space-dependent cross-correlations between models of variable fidelity, and can effectively safeguard against low-fidelity models that provide wrong trends. This introduces a new class of multi-fidelity information fusion algorithms that provide a fundamental extension to the existing linear autoregressive methodologies, while still maintaining the same algorithmic complexity and overall computational cost. The performance of the proposed methods is tested in several benchmark problems involving both synthetic and real multi-fidelity datasets from computational fluid dynamics simulations.
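The core idea, regressing the high-fidelity quantity on the low-fidelity model's prediction at the same input, can be stripped down to an ordinary least-squares mapping. The sketch below uses a linear map and synthetic sine-based fidelities purely for illustration; the paper's actual scheme replaces this with Gaussian process regression and a nonlinear autoregressive mapping that can also guard against misleading low-fidelity trends.

```python
import math

def fit_comap(xs, f_lo, y_hi):
    """Fit y_hi ~ a + b * f_lo(x) by least squares (linear cross-correlation)."""
    u = [f_lo(x) for x in xs]
    mu = sum(u) / len(u)
    my = sum(y_hi) / len(y_hi)
    b = (sum((ui - mu) * (yi - my) for ui, yi in zip(u, y_hi))
         / sum((ui - mu) ** 2 for ui in u))
    a = my - b * mu
    # surrogate: evaluate the cheap model, then map through the learned relation
    return lambda x: a + b * f_lo(x)

# synthetic fidelities: a cheap model and an expensive one linearly related to it
f_lo = math.sin
f_hi = lambda x: 2.0 * math.sin(x) + 0.5

xs = [0.1 * i for i in range(20)]        # a handful of "expensive" samples
model = fit_comap(xs, f_lo, [f_hi(x) for x in xs])
```

Because the high-fidelity function is exactly linear in the low-fidelity one here, the surrogate reproduces it everywhere; the nonlinear schemes in the paper handle the harder case where that relation varies across the input space.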
NASA Astrophysics Data System (ADS)
Bonnet, M.; Collino, F.; Demaldent, E.; Imperiale, A.; Pesudo, L.
2018-05-01
Ultrasonic Non-Destructive Testing (US NDT) has become widely used in various fields of application to probe media. By exploiting surface measurements of the echoes of incident ultrasonic waves after their propagation through the medium, it makes it possible to detect potential defects (cracks and inhomogeneities) and to characterize the medium. The understanding and interpretation of these experimental measurements is performed with the help of numerical modeling and simulation. However, classical numerical methods can become computationally very expensive for the simulation of wave propagation in the high frequency regime. On the other hand, asymptotic techniques are better suited to model high frequency scattering over large distances but nevertheless do not allow accurate simulation of complex diffraction phenomena. Thus, neither numerical nor asymptotic methods can individually solve high frequency diffraction problems in large media, such as those involved in US NDT inspections, both quickly and accurately, but their advantages and limitations are complementary. Here we propose a hybrid strategy coupling the surface integral equation method and the ray tracing method to simulate high frequency diffraction under speed and accuracy constraints. This strategy is general and applicable to simulate diffraction phenomena in acoustic or elastodynamic media. We provide its implementation and investigate its performance for the 2D acoustic diffraction problem. The main features of this hybrid method are described and results of 2D computational experiments discussed.
Waychunas, G.A.; Fuller, C.C.; Davis, J.A.; Rehr, J.J.
2003-01-01
X-ray absorption near-edge spectroscopy (XANES) analysis of sorption complexes has the advantages of high sensitivity (10- to 20-fold greater than extended X-ray absorption fine structure [EXAFS] analysis) and relative ease and speed of data collection (because of the short k-space range). It is thus a potentially powerful tool for characterization of environmentally significant surface complexes and precipitates at very low surface coverages. However, quantitative analysis has been limited largely to "fingerprint" comparison with model spectra because of the difficulty of obtaining accurate multiple-scattering amplitudes for small clusters with high confidence. In the present work, calculations of the XANES for 50- to 200-atom clusters based on the structures of Zn model compounds using the full multiple-scattering code Feff 8.0 accurately replicate experimental spectra and display features characteristic of specific first-neighbor anion coordination geometry and second-neighbor cation geometry and number. Analogous calculations of the XANES for small molecular clusters indicative of precipitation and sorption geometries for aqueous Zn on ferrihydrite, and suggested by EXAFS analysis, are in good agreement with observed spectral trends with sample composition, with Zn-oxygen coordination and with changes in second-neighbor cation coordination as a function of sorption coverage. Empirical analysis of experimental XANES features further verifies the validity of the calculations. The findings agree well with a complete EXAFS analysis previously reported for the same sample set, namely, that octahedrally coordinated aqueous Zn2+ species sorb as a tetrahedral complex on ferrihydrite with varying local geometry depending on sorption density. At significantly higher densities but below those at which Zn hydroxide is expected to precipitate, a mainly octahedrally coordinated Zn2+ precipitate is observed.
An analysis of the multiple scattering paths contributing to the XANES demonstrates the importance of scattering paths involving the anion sublattice. We also describe the specific advantages of complementary quantitative XANES and EXAFS analysis and estimate limits on the extent of structural information obtainable from XANES analysis. ?? 2003 Elsevier Science Ltd.
Skalski, Matthew R; White, Eric A; Patel, Dakshesh B; Schein, Aaron J; RiveraMelo, Hector; Matcuk, George R
2016-01-01
The triangular fibrocartilage complex (TFCC) plays an important role in wrist biomechanics and is prone to traumatic and degenerative injury, making it a common source of ulnar-sided wrist pain. Because of this, the TFCC is frequently imaged, and a detailed understanding of its anatomy and injury patterns is critical in generating an accurate report to help guide treatment. In this review, we provide a detailed overview of TFCC anatomy, its normal appearance on magnetic resonance imaging, the spectrum of TFCC injuries based on the Palmer classification system, and pitfalls in accurate assessment.
Cerný, Jirí; Hobza, Pavel
2005-04-21
The performance of the recently introduced X3LYP density functional, which was claimed to significantly improve accuracy for H-bonded and van der Waals complexes, was tested for extended H-bonded and stacked complexes (nucleic acid base pairs and amino acid pairs). In the case of planar H-bonded complexes (guanine...cytosine, adenine...thymine) the DFT results agree nicely with accurate correlated ab initio results. For the stacked pairs (uracil dimer, cytosine dimer, adenine...thymine and guanine...cytosine) the DFT fails completely and was not even able to locate any minimum in the stacked subspace of the potential energy surface. The geometry optimization of all these stacked clusters leads systematically to the planar H-bonded pairs. The amino acid pairs were investigated in the crystal geometry. DFT again strongly underestimates the accurate correlated ab initio stabilization energies and usually failed to describe the stabilization of a pair. The X3LYP functional thus behaves similarly to other current functionals. Stacking of nucleic acid bases as well as interaction of amino acids was described satisfactorily by using the tight-binding DFT method, which explicitly covers the London dispersion energy.
The high cost of accurate knowledge.
Sutcliffe, Kathleen M; Weber, Klaus
2003-05-01
Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.
USDA-ARS?s Scientific Manuscript database
Accurate stream topography measurement is important for many ecological applications such as hydraulic modeling and habitat characterization. Habitat complexity measures are often made using total station surveying or visual approximation, which can be subjective and have spatial resolution limitati...
The BioPlex Network: A Systematic Exploration of the Human Interactome.
Huttlin, Edward L; Ting, Lily; Bruckner, Raphael J; Gebreab, Fana; Gygi, Melanie P; Szpyt, John; Tam, Stanley; Zarraga, Gabriela; Colby, Greg; Baltier, Kurt; Dong, Rui; Guarani, Virginia; Vaites, Laura Pontano; Ordureau, Alban; Rad, Ramin; Erickson, Brian K; Wühr, Martin; Chick, Joel; Zhai, Bo; Kolippakkam, Deepak; Mintseris, Julian; Obar, Robert A; Harris, Tim; Artavanis-Tsakonas, Spyros; Sowa, Mathew E; De Camilli, Pietro; Paulo, Joao A; Harper, J Wade; Gygi, Steven P
2015-07-16
Protein interactions form a network whose structure drives cellular function and whose organization informs biological inquiry. Using high-throughput affinity-purification mass spectrometry, we identify interacting partners for 2,594 human proteins in HEK293T cells. The resulting network (BioPlex) contains 23,744 interactions among 7,668 proteins with 86% previously undocumented. BioPlex accurately depicts known complexes, attaining 80%-100% coverage for most CORUM complexes. The network readily subdivides into communities that correspond to complexes or clusters of functionally related proteins. More generally, network architecture reflects cellular localization, biological process, and molecular function, enabling functional characterization of thousands of proteins. Network structure also reveals associations among thousands of protein domains, suggesting a basis for examining structurally related proteins. Finally, BioPlex, in combination with other approaches, can be used to reveal interactions of biological or clinical significance. For example, mutations in the membrane protein VAPB implicated in familial amyotrophic lateral sclerosis perturb a defined community of interactors. Copyright © 2015 Elsevier Inc. All rights reserved.
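The notion of complex coverage used above can be illustrated with a minimal sketch: for each known complex, count the fraction of its subunits that appear anywhere in the measured interaction network. This is a deliberately simple proxy (the paper's CORUM coverage metric may be computed differently), and the protein names below are toy placeholders.

```python
# Toy interaction network: each edge is one observed bait-prey interaction.
network = {("A", "B"), ("B", "C"), ("C", "A"), ("D", "E")}

def coverage(complex_members, network):
    """Fraction of a complex's subunits detected anywhere in the network."""
    detected = {protein for edge in network for protein in edge}
    return sum(m in detected for m in complex_members) / len(complex_members)

# Hypothetical "known" complexes standing in for CORUM entries.
known_complexes = {"triplet": ["A", "B", "C"], "partial": ["D", "E", "F"]}
scores = {name: coverage(members, network)
          for name, members in known_complexes.items()}
```

With this toy data the fully recovered complex scores 1.0 and the complex with one undetected subunit scores 2/3.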
Xia, Bing; Mamonov, Artem; Leysen, Seppe; Allen, Karen N; Strelkov, Sergei V; Paschalidis, Ioannis Ch; Vajda, Sandor; Kozakov, Dima
2015-07-30
The protein-protein docking server ClusPro is used by thousands of laboratories, and models built by the server have been reported in over 300 publications. Although the structures generated by the docking include near-native ones for many proteins, selecting the best model is difficult due to the uncertainty in scoring. Small angle X-ray scattering (SAXS) is an experimental technique for obtaining low resolution structural information in solution. While not sufficient on its own to uniquely predict complex structures, accounting for SAXS data improves the ranking of models and facilitates the identification of the most accurate structure. Although SAXS profiles are currently available only for a small number of complexes, due to its simplicity the method is becoming increasingly popular. Since combining docking with SAXS experiments will provide a viable strategy for fairly high-throughput determination of protein complex structures, the option of using SAXS restraints is added to the ClusPro server. © 2015 Wiley Periodicals, Inc.
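How SAXS data can re-rank docking models is sketched below with a generic reduced chi-square fit: each model's computed profile is optimally scaled to the experimental curve, and models with the wrong shape score worse. This is a standard SAXS-fitting recipe, not ClusPro's actual scoring function, and the profiles are synthetic.

```python
import numpy as np

def saxs_chi2(i_exp, i_model, sigma):
    """Reduced chi-square after fitting the optimal scale factor c."""
    c = np.sum(i_exp * i_model / sigma**2) / np.sum(i_model**2 / sigma**2)
    return float(np.mean(((i_exp - c * i_model) / sigma) ** 2))

q = np.linspace(0.01, 0.3, 50)           # scattering vector (toy axis)
i_exp = np.exp(-30 * q**2)               # toy "experimental" profile
sigma = np.full(q.size, 0.01)            # uniform error bars
good_model = 2.0 * np.exp(-30 * q**2)    # right shape, wrong scale
bad_model = np.exp(-60 * q**2)           # wrong shape (too compact)
scores = [saxs_chi2(i_exp, m, sigma) for m in (good_model, bad_model)]
```

Because the scale factor is fitted, the correctly shaped model scores near zero even though its absolute intensity is off by a factor of two.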
USDA-ARS?s Scientific Manuscript database
Like many agricultural crops, the cultivated cotton genome is large and polyploid (~2.5Gb), consisting of two very similar repeat-rich subgenomes, whose size and complexity pose significant challenges for accurate genome reconstruction using whole-genome shotgun approaches. A strategy for accurately...
Communicating with wildland interface communities during wildfire
Taylor, Jonathan G.; Gillette, Shana C.
2005-01-01
Communications during fire events are complex. Nevertheless, training fire information officers to plan fire communications before events, and to communicate during fires in a way that accurately and promptly informs residents in fire-affected areas, can increase effectiveness, reduce anxiety, ensure residents have accurate information on which to act, help them make better decisions, and possibly save lives.
DNA barcoding of human-biting black flies (Diptera: Simuliidae) in Thailand.
Pramual, Pairot; Thaijarern, Jiraporn; Wongpakam, Komgrit
2016-12-01
Black flies (Diptera: Simuliidae) are important insect vectors and pests of humans and animals. Accurate identification, therefore, is important for control and management. In this study, we used mitochondrial cytochrome oxidase I (COI) barcoding sequences to test the efficiency of species identification for the human-biting black flies in Thailand. We used human-biting specimens because they enabled us to link information with previous studies involving the immature stages. Three black fly taxa, Simulium nodosum, S. nigrogilvum and S. doipuiense complex, were collected. The S. doipuiense complex was confirmed for the first time as having human-biting habits. The COI sequences revealed considerable genetic diversity in all three species. Comparisons to a COI sequence library of black flies in Thailand and in a public database indicated a high efficiency for specimen identification for S. nodosum and S. nigrogilvum, but this method was not successful for the S. doipuiense complex. Phylogenetic analyses revealed two divergent lineages in the S. doipuiense complex. Human-biting specimens formed a separate clade from other members of this complex. The results are consistent with the Barcoding Index Number System (BINs) analysis that found six BINs in the S. doipuiense complex. Further taxonomic work is needed to clarify the species status of these human-biting specimens. Copyright © 2016 Elsevier B.V. All rights reserved.
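Barcode-based identification of this kind reduces, at its simplest, to nearest-neighbour matching of an unknown COI sequence against a reference library under a distance threshold. The sketch below uses uncorrected p-distance and made-up 10-bp "sequences"; real barcoding uses ~650-bp COI fragments and usually model-corrected distances.

```python
def p_distance(a, b):
    """Proportion of mismatched sites between two aligned sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Hypothetical reference library (toy sequences, not real COI data).
library = {"S. nodosum": "ACGTACGTAC", "S. nigrogilvum": "ACGTTCGTAA"}

def identify(query, library, threshold=0.02):
    """Return the nearest library species, or None if too divergent."""
    best = min(library, key=lambda sp: p_distance(query, library[sp]))
    return best if p_distance(query, library[best]) <= threshold else None
```

A query identical to a reference is assigned to that species; a highly divergent query, like the unresolved S. doipuiense lineages, falls outside the threshold and stays unidentified.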
Quantum Monte Carlo for atoms and molecules
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnett, R.N.
1989-11-01
The diffusion quantum Monte Carlo with fixed nodes (QMC) approach has been employed in studying energy eigenstates for 1-4 electron systems. Previous work employing the diffusion QMC technique yielded energies of high quality for H{sub 2}, LiH, Li{sub 2}, and H{sub 2}O. Here, the range of calculations with this new approach has been extended to include additional first-row atoms and molecules. In addition, improvements in the previously computed fixed-node energies of LiH, Li{sub 2}, and H{sub 2}O have been obtained using more accurate trial functions. All computations were performed within, but are not limited to, the Born-Oppenheimer approximation. In our computations, the effects of variation of Monte Carlo parameters on the QMC solution of the Schroedinger equation were studied extensively. These parameters include the time step, renormalization time and nodal structure. These studies have been very useful in determining which choices of such parameters will yield accurate QMC energies most efficiently. Generally, very accurate energies (90-100% of the correlation energy) have been computed with single-determinant trial functions multiplied by simple correlation functions. Improvements in accuracy should be readily obtained using more complex trial functions.
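The core of a diffusion QMC scheme (without fixed nodes, importance sampling, or real molecular potentials) can be sketched for the 1D harmonic oscillator, whose exact ground-state energy is 0.5 in natural units. The time step and population-feedback constant below are illustrative choices, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def dmc_energy(n_target=2000, n_steps=3000, dt=0.01):
    """Bare-bones diffusion Monte Carlo for the 1D harmonic oscillator,
    V(x) = x^2 / 2 (exact ground-state energy E0 = 0.5). Walkers diffuse,
    then branch with weight exp(-(V - E_ref) * dt); a feedback term keeps
    the population near n_target, and the time-averaged reference energy
    estimates E0."""
    x = rng.normal(size=n_target)
    e_ref, history = 0.5, []
    for _ in range(n_steps):
        x = x + rng.normal(scale=np.sqrt(dt), size=x.size)
        v = 0.5 * x ** 2
        # stochastic branching: each walker leaves int(weight + u) copies
        copies = (np.exp(-(v - e_ref) * dt) + rng.random(x.size)).astype(int)
        x = np.repeat(x, copies)
        if x.size == 0:
            raise RuntimeError("walker population died out")
        e_ref += 0.1 * np.log(n_target / x.size)  # population feedback
        history.append(e_ref)
    return float(np.mean(history[n_steps // 2:]))

e0_estimate = dmc_energy()
```

Averaging the reference energy over the second half of the run gives an estimate close to 0.5, with a small time-step and population-control bias.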
Development and application of accurate analytical models for single active electron potentials
NASA Astrophysics Data System (ADS)
Miller, Michelle; Jaron-Becker, Agnieszka; Becker, Andreas
2015-05-01
The single active electron (SAE) approximation is a theoretical model frequently employed to study scenarios in which inner-shell electrons may productively be treated as frozen spectators to a physical process of interest, and accurate analytical approximations for these potentials are sought as a useful simulation tool. Density functional theory is often used to construct a SAE potential, requiring that a further approximation for the exchange correlation functional be enacted. In this study, we employ the Krieger, Li, and Iafrate (KLI) modification to the optimized-effective-potential (OEP) method to reduce the complexity of the problem to the straightforward solution of a system of linear equations through simple arguments regarding the behavior of the exchange-correlation potential in regions where a single orbital dominates. We employ this method for the solution of atomic and molecular potentials, and use the resultant curve to devise a systematic construction for highly accurate and useful analytical approximations for several systems. Supported by the U.S. Department of Energy (Grant No. DE-FG02-09ER16103), and the U.S. National Science Foundation (Graduate Research Fellowship, Grants No. PHY-1125844 and No. PHY-1068706).
Liu, Zhen-Fei; Egger, David A.; Refaely-Abramson, Sivan; ...
2017-02-21
The alignment of the frontier orbital energies of an adsorbed molecule with the substrate Fermi level at metal-organic interfaces is a fundamental observable of significant practical importance in nanoscience and beyond. Typical density functional theory calculations, especially those using local and semi-local functionals, often underestimate level alignment, leading to inaccurate electronic structure and charge transport properties. Here, we develop a new fully self-consistent predictive scheme to accurately compute level alignment at certain classes of complex heterogeneous molecule-metal interfaces based on optimally tuned range-separated hybrid functionals. Starting from a highly accurate description of the gas-phase electronic structure, our method by construction captures important nonlocal surface polarization effects via tuning of the long-range screened exchange in a range-separated hybrid in a non-empirical and system-specific manner. We implement this functional in a plane-wave code and apply it to several physisorbed and chemisorbed molecule-metal interface systems. Our results are in quantitative agreement with experiments, for both the level alignment and work function changes. This approach constitutes a new practical scheme for accurate and efficient calculations of the electronic structure of molecule-metal interfaces.
Molecular Spectroscopy by Ab Initio Methods
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Partridge, Harry; Arnold, James O. (Technical Monitor)
1994-01-01
Due to recent advances in methods and computers, the accuracy of ab initio calculations has reached a point where these methods can be used to provide accurate spectroscopic constants for small molecules; this will be illustrated with several examples. We will show how ab initio calculations were used to identify the Herman infrared system in N2 and two band systems in CO. The identification of all three of these band systems relied on very accurate calculations of quintet states. The analysis of the infrared spectra of cool stars requires knowledge of the intensity of vibrational transitions in SiO for high nu and J levels. While experiment can supply very accurate dipole moments for nu = 0 to 3, this is insufficient to construct a global dipole moment function. We show how theory, combined with experiment, can be used to generate the line intensities up to nu = 40 and J = 250. The spectroscopy of transition-metal-containing systems is very difficult for both theory and experiment. We will discuss the identification of the ground state of Ti2 and the spectroscopy of AlCu as examples of how theory can contribute to the understanding of these complex systems.
NASA Astrophysics Data System (ADS)
Palma, J. L.; Rodrigues, C. V.; Lopes, A. S.; Carneiro, A. M. C.; Coelho, R. P. C.; Gomes, V. C.
2017-12-01
With the ever increasing accuracy required from numerical weather forecasts, there is pressure to increase the resolution and fidelity employed in computational micro-scale flow models. However, numerical studies of complex terrain flows are fundamentally bound by the digital representation of the terrain and land cover. This work assesses the impact of the surface description on micro-scale simulation results at a highly complex site in Perdigão, Portugal, characterized by a twin parallel ridge topography, densely forested areas and an operating wind turbine. Although Coriolis and stratification effects cannot be ignored, the study is done under a neutrally stratified atmosphere and static inflow conditions. The understanding gained here will later carry over to WRF-coupled simulations, where those conditions do not apply and the flow physics is more accurately modelled. With access to very fine digital mappings (<1 m horizontal resolution) of both topography and land cover (roughness and canopy cover, both obtained through aerial LIDAR scanning of the surface), the impact of each element of the surface description on simulation results can be individualized, in order to estimate the resolution required to satisfactorily resolve them. Starting from the bare topographic description in its coarsest form, these elements include: a) the surface roughness mapping, b) the operating wind turbine, c) the canopy cover, as either body forces or added surface roughness (akin to meso-scale modelling), d) high resolution topography and surface cover mapping. Each of these individually will have an impact near the surface, including the rotor swept area of modern wind turbines. Combined, they will considerably change the flow up to boundary layer heights. Sensitivity to these elements cannot be generalized and should be assessed case by case.
This type of in-depth study, unfeasible using WRF-coupled simulations, should provide considerable insight when spatially allocating mesh resolution for accurate resolution of complex flows.
Decision Making in the Airplane
NASA Technical Reports Server (NTRS)
Orasanu, Judith; Shafto, Michael G. (Technical Monitor)
1995-01-01
The importance of decision-making to safety in complex, dynamic environments like mission control centers, aviation, and offshore installations has been well established. NASA-ARC has a program of research dedicated to fostering safe and effective decision-making in the manned spaceflight environment. Because access to spaceflight is limited, environments with similar characteristics, including aviation and nuclear power plants, serve as analogs from which space-relevant data can be gathered and theories developed. Analyses of aviation accidents cite crew judgement and decision making as causes or contributing factors in over half of all accidents. Yet laboratory research on decision making has not proven especially helpful in improving the quality of decisions in these kinds of environments. One reason is that the traditional, analytic decision models are inappropriate to multi-dimensional, high-risk environments, and do not accurately describe what expert human decision makers do when they make decisions that have consequences. A new model of dynamic, naturalistic decision making is offered that may prove useful for improving decision making in complex, isolated, confined and high-risk environments. Based on analyses of crew performance in full-mission simulators and accident reports, features that define effective decision strategies in abnormal or emergency situations have been identified. These include accurate situation assessment (including time and risk assessment), appreciation of the complexity of the problem, sensitivity to constraints on the decision, timeliness of the response, and use of adequate information. More effective crews also manage their workload to provide themselves with time and resources to make good decisions. In brief, good decisions are appropriate to the demands of the situation. Effective crew decision making and overall performance are mediated by crew communication. 
Communication contributes to performance because it assures that all crew members have essential information, but it also regulates and coordinates crew actions and is the medium of collective thinking in response to a problem. This presentation will examine the relations between leadership, communication, decision making and overall crew performance. Implications of these findings for training will be discussed.
Decision Making in Action: Applying Research to Practice
NASA Technical Reports Server (NTRS)
Orasanu, Judith; Hart, Sandra G. (Technical Monitor)
1994-01-01
The importance of decision-making to safety in complex, dynamic environments like mission control centers, aviation, and offshore installations has been well established. NASA-ARC has a program of research dedicated to fostering safe and effective decision-making in the manned spaceflight environment. Because access to spaceflight is limited, environments with similar characteristics, including aviation and nuclear power plants, serve as analogs from which space-relevant data can be gathered and theories developed. Analyses of aviation accidents cite crew judgement and decision making as causes or contributing factors in over half of all accidents. Yet laboratory research on decision making has not proven especially helpful in improving the quality of decisions in these kinds of environments. One reason is that the traditional, analytic decision models are inappropriate to multi-dimensional, high-risk environments, and do not accurately describe what expert human decision makers do when they make decisions that have consequences. A new model of dynamic, naturalistic decision making is offered that may prove useful for improving decision making in complex, isolated, confined and high-risk environments. Based on analyses of crew performance in full-mission simulators and accident reports, features that define effective decision strategies in abnormal or emergency situations have been identified. These include accurate situation assessment (including time and risk assessment), appreciation of the complexity of the problem, sensitivity to constraints on the decision, timeliness of the response, and use of adequate information. More effective crews also manage their workload to provide themselves with time and resources to make good decisions. In brief, good decisions are appropriate to the demands of the situation. Effective crew decision making and overall performance are mediated by crew communication. 
Communication contributes to performance because it assures that all crew members have essential information, but it also regulates and coordinates crew actions and is the medium of collective thinking in response to a problem. This presentation will examine the relations between leadership, communication, decision making and overall crew performance. Implications of these findings for training will be discussed.
Garcia, Jair E.; Greentree, Andrew D.; Shrestha, Mani; Dorin, Alan; Dyer, Adrian G.
2014-01-01
Background The study of the signal-receiver relationship between flowering plants and pollinators requires a capacity to accurately map both the spectral and spatial components of a signal in relation to the perceptual abilities of potential pollinators. Spectrophotometers can typically recover high resolution spectral data, but the spatial component is difficult to record simultaneously. A technique allowing for an accurate measurement of the spatial component in addition to the spectral factor of the signal is highly desirable. Methodology/Principal findings Consumer-level digital cameras potentially provide access to both colour and spatial information, but they are constrained by their non-linear response. We present a robust methodology for recovering linear values from two different camera models: one sensitive to ultraviolet (UV) radiation and another to visible wavelengths. We test responses by imaging eight different plant species varying in shape, size and in the amount of energy reflected across the UV and visible regions of the spectrum, and compare the recovery of spectral data to spectrophotometer measurements. There is often a good agreement of spectral data, although when the pattern on a flower surface is complex a spectrophotometer may underestimate the variability of the signal as would be viewed by an animal visual system. Conclusion Digital imaging presents a significant new opportunity to reliably map flower colours to understand the complexity of these signals as perceived by potential pollinators. Compared to spectrophotometer measurements, digital images can better represent the spatio-chromatic signal variability that would likely be perceived by the visual system of an animal, and should expand the possibilities for data collection in complex, natural conditions. 
However, and in spite of its advantages, the accuracy of the spectral information recovered from camera responses is subject to variations in the uncertainty levels, with larger uncertainties associated with low radiance levels. PMID:24827828
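Recovering linear values from a non-linear camera response, as the methodology above requires, can be illustrated with the simplest possible model: a single power-law (gamma) encoding. The real calibration is per-camera and per-channel; the gamma value and grey-card radiances below are assumptions for illustration only.

```python
import numpy as np

def linearize(encoded, gamma=2.2):
    """Invert a power-law camera encoding so pixel values ~ scene radiance."""
    return np.clip(encoded, 0.0, 1.0) ** gamma

radiance = np.linspace(0.0, 1.0, 5)     # known grey-card radiances
encoded = radiance ** (1.0 / 2.2)       # what such a camera would record
recovered = linearize(encoded)          # proportional to radiance again
```

In practice the exponent is estimated from images of calibrated reflectance standards rather than assumed, and each colour channel is treated separately.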
He, Jieyue; Li, Chaojun; Ye, Baoliu; Zhong, Wei
2012-06-25
Most computational algorithms mainly focus on detecting highly connected subgraphs in PPI networks as protein complexes but ignore their inherent organization. Furthermore, many of these algorithms are computationally expensive. However, recent analysis indicates that experimentally detected protein complexes generally contain Core/attachment structures. In this paper, a Greedy Search Method based on Core-Attachment structure (GSM-CA) is proposed. The GSM-CA method detects densely connected regions in large protein-protein interaction networks based on the edge weight and two criteria for determining core nodes and attachment nodes. The GSM-CA method improves the prediction accuracy compared to other similar module detection approaches; however, it is computationally expensive. Many module detection approaches are based on traditional hierarchical methods, which are also computationally inefficient because the hierarchical tree structure produced by these approaches cannot provide adequate information to identify whether a network belongs to a module structure or not. In order to speed up the computational process, the Greedy Search Method based on Fast Clustering (GSM-FC) is proposed in this work. The edge weight based GSM-FC method uses a greedy procedure to traverse all edges just once to separate the network into the suitable set of modules. The proposed methods are applied to the protein interaction network of S. cerevisiae. Experimental results indicate that many significant functional modules are detected, most of which match the known complexes. Results also demonstrate that the GSM-FC algorithm is faster and more accurate as compared to other competing algorithms. Based on the new edge weight definition, the proposed algorithm takes advantage of the greedy search procedure to separate the network into the suitable set of modules. Experimental analysis shows that the identified modules are statistically significant. 
The algorithm can reduce the computational time significantly while keeping high prediction accuracy.
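The edge-weight-plus-greedy-merging idea can be sketched in a few lines: weight each edge by the number of neighbours its endpoints share, then traverse the edges once in descending weight order, merging endpoints with union-find. This illustrates the general strategy only; it is not the published GSM-FC algorithm, and the weight definition and stopping criterion are simplified.

```python
def greedy_modules(edges, min_weight=1):
    """One-pass, edge-weight-driven module detection (simplified sketch)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    # weight = number of shared neighbours of the edge's endpoints
    weight = {(u, v): len(adj[u] & adj[v]) for u, v in edges}
    parent = {n: n for n in adj}
    def find(n):                         # union-find with path halving
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n
    for (u, v), w in sorted(weight.items(), key=lambda kv: -kv[1]):
        if w >= min_weight:              # skip sparse "bridge" edges
            parent[find(u)] = find(v)
    modules = {}
    for n in adj:
        modules.setdefault(find(n), set()).add(n)
    return list(modules.values())

# Two triangles joined by a single bridge edge separate into two modules,
# because the bridge's endpoints share no neighbours (weight 0).
edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]
modules = greedy_modules(edges)
```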
Real-time monitoring of high-gravity corn mash fermentation using in situ raman spectroscopy.
Gray, Steven R; Peretti, Steven W; Lamb, H Henry
2013-06-01
In situ Raman spectroscopy was employed for real-time monitoring of simultaneous saccharification and fermentation (SSF) of corn mash by an industrial strain of Saccharomyces cerevisiae. An accurate univariate calibration model for ethanol was developed based on the very strong 883 cm(-1) C-C stretching band. Multivariate partial least squares (PLS) calibration models for total starch, dextrins, maltotriose, maltose, glucose, and ethanol were developed using data from eight batch fermentations and validated using predictions for a separate batch. The starch, ethanol, and dextrins models showed significant prediction improvement when the calibration data were divided into separate high- and low-concentration sets. Collinearity between the ethanol and starch models was avoided by excluding regions containing strong ethanol peaks from the starch model and, conversely, excluding regions containing strong saccharide peaks from the ethanol model. The two-set calibration models for starch (R(2) = 0.998, percent error = 2.5%) and ethanol (R(2) = 0.999, percent error = 2.1%) provide more accurate predictions than any previously published spectroscopic models. Glucose, maltose, and maltotriose are modeled to accuracy comparable to previous work on less complex fermentation processes. Our results demonstrate that Raman spectroscopy is capable of real time in situ monitoring of a complex industrial biomass fermentation. To our knowledge, this is the first PLS-based chemometric modeling of corn mash fermentation under typical industrial conditions, and the first Raman-based monitoring of a fermentation process with glucose, oligosaccharides and polysaccharides present. Copyright © 2013 Wiley Periodicals, Inc.
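A univariate calibration of the kind described, regressing a single band's integrated intensity against known concentrations, can be sketched with synthetic spectra. The Gaussian band standing in for the 883 cm-1 C-C stretch, the noise level, and the concentrations are all invented for illustration; the actual models were built from measured fermentation spectra.

```python
import numpy as np

rng = np.random.default_rng(1)
shift = np.linspace(800, 950, 300)                 # Raman shift axis, cm^-1
train_conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0])   # ethanol, % v/v

def spectrum(c):
    """Synthetic spectrum: band height grows linearly with concentration."""
    band = np.exp(-((shift - 883.0) / 6.0) ** 2)
    return 0.15 * c * band + 0.002 * rng.normal(size=shift.size)

# Calibration line: integrated band intensity vs. known concentration.
areas = np.array([spectrum(c).sum() for c in train_conc])
slope, intercept = np.polyfit(areas, train_conc, 1)

unknown = spectrum(5.0)                 # "unknown" sample at 5% v/v
pred = slope * unknown.sum() + intercept
```

The multivariate PLS models in the study generalise this idea to whole spectral regions and multiple correlated analytes.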
Forest cover type analysis of New England forests using innovative WorldView-2 imagery
NASA Astrophysics Data System (ADS)
Kovacs, Jenna M.
For many years, remote sensing has been used to generate land cover type maps to create a visual representation of what is occurring on the ground. One significant use of remote sensing is the identification of forest cover types. New England forests are notorious for their especially complex forest structure and as a result have been, and continue to be, a challenge when classifying forest cover types. To most accurately depict forest cover types occurring on the ground, it is essential to utilize image data that have a suitable combination of both spectral and spatial resolution. The WorldView-2 (WV2) commercial satellite, launched in 2009, is the first of its kind, having both high spectral and spatial resolutions. WV2 records eight bands of multispectral imagery, four more than the usual high spatial resolution sensors, and has a pixel size of 1.85 meters at the nadir. These additional bands have the potential to improve classification detail and classification accuracy of forest cover type maps. For this reason, WV2 imagery was utilized on its own, and in combination with Landsat 5 TM (LS5) multispectral imagery, to evaluate whether these image data could more accurately classify forest cover types. In keeping with recent developments in image analysis, an Object-Based Image Analysis (OBIA) approach was used to segment images of Pawtuckaway State Park and nearby private lands, an area representative of the typical complex forest structure found in the New England region. A Classification and Regression Tree (CART) analysis was then used to classify image segments at two levels of classification detail. Accuracies for each forest cover type map produced were generated using traditional and area-based error matrices, and additional standard accuracy measures (i.e., KAPPA) were generated. 
The results from this study show that there is value in analyzing imagery with both high spectral and spatial resolutions, and that WV2's new and innovative bands can be useful for the classification of complex forest structures.
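The CART step reduces, at its core, to repeatedly choosing the band threshold that minimises class impurity. A single Gini-based split on one synthetic band is sketched below; the band values, class separations, and the two cover-type labels are invented, and the study's actual analysis used full trees over many segment attributes.

```python
import numpy as np

rng = np.random.default_rng(3)

def best_split(feature, label):
    """Single CART-style split: the threshold on one band that minimises
    the weighted Gini impurity of the two resulting groups."""
    def gini(part):
        p = np.bincount(part, minlength=2) / part.size
        return 1.0 - float(np.sum(p ** 2))
    order = np.argsort(feature)
    f, y = feature[order], label[order]
    best_threshold, best_impurity = None, 1.0
    for i in range(1, f.size):
        if f[i] == f[i - 1]:
            continue
        g = (i * gini(y[:i]) + (f.size - i) * gini(y[i:])) / f.size
        if g < best_impurity:
            best_threshold, best_impurity = (f[i] + f[i - 1]) / 2.0, g
    return best_threshold, best_impurity

# Synthetic reflectance in one band for two cover types (toy values).
conifer = rng.normal(0.30, 0.03, 200)
hardwood = rng.normal(0.55, 0.03, 200)
band = np.concatenate([conifer, hardwood])
label = np.concatenate([np.zeros(200, int), np.ones(200, int)])
threshold, impurity = best_split(band, label)
```

With well-separated classes the chosen threshold falls between the two clusters and the post-split impurity approaches zero.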
Tøndel, Kristin; Indahl, Ulf G; Gjuvsland, Arne B; Vik, Jon Olav; Hunter, Peter; Omholt, Stig W; Martens, Harald
2011-06-01
Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. 
HC-PLSR is a promising approach for metamodelling in systems biology, especially for highly nonlinear or non-monotone parameter to phenotype maps. The algorithm can be flexibly adjusted to suit the complexity of the dynamic model behaviour, inviting automation in the metamodelling of complex systems.
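The local-regression idea behind HC-PLSR can be sketched in a few lines: cluster the inputs with fuzzy C-means, fit one local linear model per cluster, and blend the predictions with the fuzzy memberships. The sketch below substitutes ordinary weighted least squares for PLS and uses a toy non-monotone response; it is an illustration of the approach, not the published algorithm.

```python
import numpy as np

def fuzzy_cmeans(X, k, m=2.0, iters=100, seed=0):
    """Minimal fuzzy C-means: returns memberships U (n x k) and centers."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(k), size=len(X))       # random initial memberships
    for _ in range(iters):
        W = U ** m
        C = (W.T @ X) / W.sum(axis=0)[:, None]       # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - C[None], axis=2) + 1e-12
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)            # rows sum to one
    return U, C

def local_linear_predict(X, y, k=2):
    """Fit one weighted linear model per fuzzy cluster and blend the
    predictions with the memberships (OLS stands in for PLS here)."""
    U, _ = fuzzy_cmeans(X, k)
    Xb = np.column_stack([np.ones(len(X)), X])
    pred = np.zeros(len(X))
    for j in range(k):
        w = U[:, j]
        Aw = Xb * w[:, None]                         # weighted design matrix
        beta = np.linalg.solve(Aw.T @ Xb, Aw.T @ y)
        pred += w * (Xb @ beta)
    return pred

# toy non-monotone response surface: a single global linear fit is poor here
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(400, 1))
y = np.abs(X[:, 0])
Xb = np.column_stack([np.ones(len(X)), X])
beta_global = np.linalg.lstsq(Xb, y, rcond=None)[0]
global_rmse = float(np.sqrt(np.mean((Xb @ beta_global - y) ** 2)))
local_rmse = float(np.sqrt(np.mean((local_linear_predict(X, y, k=2) - y) ** 2)))
```

On this V-shaped surface the blended local models recover the response while the single global line cannot, mirroring the advantage the abstract reports for non-monotone maps.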
A mass spectrometer based explosives trace detector
NASA Astrophysics Data System (ADS)
Vilkov, Andrey; Jorabchi, Kaveh; Hanold, Karl; Syage, Jack A.
2011-05-01
In this paper we describe the application of mass spectrometry (MS) to the detection of trace explosives. We begin by reviewing the problem of explosives trace detection (ETD) and describe MS as an alternative to existing technologies. Effective security screening devices must be accurate (high detection rate and low false-positive rate), fast, and cost effective (upfront and operating costs). Ion mobility spectrometry (IMS) is the most commonly deployed method for ETD devices; its advantages are compact size and relatively low price. For applications requiring a handheld detector, IMS is an excellent choice. For applications that are more stationary (e.g., checkpoint screening), alternatives to IMS are available. MS is recognized for its superior sensitivity and specificity, which translate to lower false-negative and false-positive rates. In almost all applications outside of security where accurate chemical analysis is needed, MS is usually the method of choice and is often referred to as the gold standard for chemical analysis. There are many review articles and proceedings that describe detection technologies for explosives [1-4]. Here we compare MS and IMS and identify the strengths and weaknesses of each method. MS offers high levels of sensitivity and specificity compared to other technologies for chemical detection. Its traditional disadvantages have been high cost and complexity; over the last few years, however, the economics have greatly improved, and MS is now capable of routine and automated operation. IMS is similar in concept to MS except that the ions are dispersed by gas-phase viscosity rather than by molecular weight.
The main advantage of IMS is that it does not use a vacuum system, which greatly reduces the size, cost, and complexity relative to MS. However, the trade-off is that the measurement accuracy is considerably less than MS. This is especially true for complex samples or when screening for a large number of target compounds simultaneously.
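The dispersion principle behind IMS can be illustrated with the standard reduced-mobility relation; the drift-tube parameters below are illustrative assumptions, not values from any particular instrument.

```python
def reduced_mobility(L_cm, t_d_s, V, T_K, P_torr):
    """Reduced ion mobility K0 (cm^2 V^-1 s^-1): K = L^2 / (V * t_d) for a
    uniform drift field E = V / L, normalised to 273.15 K and 760 Torr."""
    K = L_cm ** 2 / (V * t_d_s)
    return K * (P_torr / 760.0) * (273.15 / T_K)

# illustrative (hypothetical) drift-tube parameters, not a real instrument's
K0 = reduced_mobility(L_cm=10.0, t_d_s=0.010, V=5000.0, T_K=300.0, P_torr=760.0)
```

Because ions of different collision cross sections arrive at characteristic drift times, K0 serves as the compound signature that an IMS-based ETD device matches against its target list.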
Effect of ionic strength and presence of serum on lipoplexes structure monitorized by FRET
Madeira, Catarina; Loura, Luís MS; Prieto, Manuel; Fedorov, Aleksander; Aires-Barros, M Raquel
2008-01-01
Background Serum and high ionic strength solutions constitute important barriers to cationic lipid-mediated intravenous gene transfer. Preparation or incubation of lipoplexes in these media results in alteration of their biophysical properties, generally leading to a decrease in transfection efficiency. Accurate quantification of these changes is of paramount importance for the success of lipoplex-mediated gene transfer in vivo. Results In this work, a novel time-resolved fluorescence resonance energy transfer (FRET) methodology was used to monitor lipoplex structural changes in the presence of phosphate-buffered saline solution (PBS) and fetal bovine serum. 1,2-dioleoyl-3-trimethylammonium-propane (DOTAP)/pDNA lipoplexes, prepared in high and low ionic strength solutions, are compared in terms of complexation efficiency. Lipoplexes prepared in PBS show lower complexation efficiencies when compared to lipoplexes prepared in low ionic strength buffer followed by addition of PBS. Moreover, when serum is added to the referred formulation no significant effect on the complexation efficiency was observed. In physiological saline solutions and serum, a multilamellar arrangement of the lipoplexes is maintained, with reduced spacing distances between the FRET probes, relative to those in low ionic strength medium. Conclusion The time-resolved FRET methodology described in this work allowed us to monitor stability and characterize quantitatively the structural changes (variations in interchromophore spacing distances and complexation efficiencies) undergone by DOTAP/DNA complexes in high ionic strength solutions and in presence of serum, as well as to determine the minimum amount of potentially cytotoxic cationic lipid necessary for complete coverage of DNA. This constitutes essential information regarding thoughtful design of future in vivo applications. PMID:18302788
Lee, Major K; Gao, Feng; Strasberg, Steven M
2016-08-01
Liver resections have classically been distinguished as "minor" or "major," based on number of segments removed. This is flawed because the number of segments resected alone does not convey the complexity of a resection. We recently developed a 3-tiered classification for the complexity of liver resections based on utility weighting by experts. This study aims to complete the earlier classification and to illustrate its application. Two surveys were administered to expert liver surgeons. Experts were asked to rate the difficulty of various open liver resections on a scale of 1 to 10. Statistical methods were then used to develop a complexity score for each procedure. Sixty-six of 135 (48.9%) surgeons responded to the earlier survey, and 66 of 122 (54.1%) responded to the current survey. In all, 19 procedures were rated. The lowest mean score of 1.36 (indicating least difficult) was given to peripheral wedge resection. Right hepatectomy with IVC reconstruction was deemed most difficult, with a score of 9.35. Complexity scores were similar for 9 procedures present in both surveys. Caudate resection, hepaticojejunostomy, and vascular reconstruction all increased the complexity of standard resections significantly. These data permit quantitative assessment of the difficulty of a variety of liver resections. The complexity scores generated allow for separation of liver resections into 3 categories of complexity (low complexity, medium complexity, and high complexity) on a quantitative basis. This provides a more accurate representation of the complexity of procedures in comparative studies. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
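The survey-to-score pipeline can be sketched as follows; the ratings and tier cut points below are illustrative assumptions, not the study's actual data or thresholds.

```python
# hypothetical survey ratings (1-10) for three procedures; real mean scores in
# the study range from 1.36 (peripheral wedge) to 9.35 (right hepatectomy + IVC)
ratings = {
    "peripheral wedge resection": [1, 2, 1, 1, 2],
    "right hepatectomy":          [6, 7, 6, 7, 6],
    "right hepatectomy + IVC":    [9, 10, 9, 9, 10],
}

def complexity_tier(score):
    """Map a mean difficulty score onto the 3-tier classification
    (the cut points here are illustrative, not the paper's)."""
    if score < 4.0:
        return "low"
    if score < 7.0:
        return "medium"
    return "high"

scores = {p: sum(r) / len(r) for p, r in ratings.items()}
tiers = {p: complexity_tier(s) for p, s in scores.items()}
```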
High Order Schemes in BATS-R-US: Is it OK to Simplify Them?
NASA Astrophysics Data System (ADS)
Tóth, G.; Chen, Y.; van der Holst, B.; Daldorff, L. K. S.
2014-09-01
We describe a number of high order schemes and their simplified variants that have been implemented into the University of Michigan global magnetohydrodynamics code BATS-R-US. We compare the various schemes with each other and the legacy 2nd order TVD scheme for various test problems and two space physics applications. We find that the simplified schemes are often quite competitive with the more complex and expensive full versions, despite the fact that the simplified versions are only high order accurate for linear systems of equations. We find that all the high order schemes require some fixes to ensure positivity in the space physics applications. On the other hand, they produce superior results as compared with the second order scheme and/or produce the same quality of solution at a much reduced computational cost.
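The legacy 2nd order TVD scheme mentioned above relies on a slope limiter; a minimal sketch of the classic minmod limiter (not BATS-R-US code) illustrates the mechanism that keeps such schemes oscillation-free.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope when the one-sided slopes
    agree in sign, zero at extrema -- this is what enforces the TVD property."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def limited_slopes(u):
    """Limited cell-centred slopes for a 1D array of cell averages; a second
    order TVD reconstruction uses u_i +/- slope_i / 2 at the cell faces."""
    return minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])

u = np.array([0.0, 1.0, 3.0, 4.0, 4.5])
s = limited_slopes(u)
```

The same limiting that guarantees positivity-friendly, monotone profiles is what reduces the scheme to first order at smooth extrema, which is the accuracy trade-off the higher order schemes in the paper try to avoid.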
Automatic system for 3D reconstruction of the chick eye based on digital photographs.
Wong, Alexander; Genest, Reno; Chandrashekar, Naveen; Choh, Vivian; Irving, Elizabeth L
2012-01-01
The geometry of anatomical specimens is very complex and accurate 3D reconstruction is important for morphological studies, finite element analysis (FEA) and rapid prototyping. Although magnetic resonance imaging, computed tomography and laser scanners can be used for reconstructing biological structures, the cost of the equipment is fairly high and specialised technicians are required to operate the equipment, making such approaches limiting in terms of accessibility. In this paper, a novel automatic system for 3D surface reconstruction of the chick eye from digital photographs of a serially sectioned specimen is presented as a potential cost-effective and practical alternative. The system is designed to allow for automatic detection of the external surface of the chick eye. Automatic alignment of the photographs is performed using a combination of coloured markers and an algorithm based on complex phase order likelihood that is robust to noise and illumination variations. Automatic segmentation of the external boundaries of the eye from the aligned photographs is performed using a novel level-set segmentation approach based on a complex phase order energy functional. The extracted boundaries are sampled to construct a 3D point cloud, and a combination of Delaunay triangulation and subdivision surfaces is employed to construct the final triangular mesh. Experimental results using digital photographs of the chick eye show that the proposed system is capable of producing accurate 3D reconstructions of the external surface of the eye. The 3D model geometry is similar to a real chick eye and could be used for morphological studies and FEA.
Al-Hamdani, Yasmine S.; Alfè, Dario; von Lilienfeld, O. Anatole; ...
2014-10-22
Density functional theory (DFT) studies of weakly interacting complexes have recently focused on the importance of van der Waals dispersion forces, whereas the role of exchange has received far less attention. Here, by exploiting the subtle binding between water and a boron and nitrogen doped benzene derivative (1,2-azaborine) we show how exact exchange can alter the binding conformation within a complex. Benchmark values have been calculated for three orientations of the water monomer on 1,2-azaborine from explicitly correlated quantum chemical methods, and we have also used diffusion quantum Monte Carlo. For a host of popular DFT exchange-correlation functionals we show that the lack of exact exchange leads to the wrong lowest energy orientation of water on 1,2-azaborine. As such, we suggest that a high proportion of exact exchange and the associated improvement in the electronic structure could be needed for the accurate prediction of physisorption sites on doped surfaces and in complex organic molecules. Meanwhile, to predict correct absolute interaction energies an accurate description of exchange needs to be augmented by dispersion inclusive functionals, and certain non-local van der Waals functionals (optB88- and optB86b-vdW) perform very well for absolute interaction energies. Through a comparison with water on benzene and borazine (B₃N₃H₆) we show that these results could have implications for the interaction of water with doped graphene surfaces, and suggest a possible way of tuning the interaction energy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barran, Perdita; Baker, Erin
The great complexity of biological systems and their environment poses similarly vast challenges for accurate analytical evaluations of their identity, structure and quantity. Post-genomic science has predicted much regarding the static populations of biological systems, but a further challenge for analysis is to test the accuracy of these predictions, as well as provide a better representation of the transient nature of the molecules of life. Accurate measurements of biological systems have wide applications for biological, forensic, biotechnological and healthcare fields. Therefore, the holy grail is to find a technique which can identify and quantify biological molecules with high throughput, sensitivity and robustness, as well as evaluate molecular structure(s) in order to understand how the specific molecules interact and function. While wrapping all of these characteristics into one platform may sound difficult, ion mobility spectrometry (IMS) is addressing all of these challenges. Over the last decade, the number of analytical studies utilizing IMS for the evaluation of complex biological and environmental samples has greatly increased. In most cases IMS is coupled with mass spectrometry (IM-MS), but even alone IMS provides the unique capability of rapidly assessing a molecule's structure, which can be extremely difficult with other techniques. The robustness of the IMS measurement is borne out by its widespread use in security, environmental and military applications. The multidimensional IM-MS measurements, however, have proven to be ever more powerful as applied to complex mixtures, as they enable the evaluation of both the structure and mass of every molecular component in a sample during a single measurement, without the need for continual reference calibration.
Chemical vapor deposition modeling for high temperature materials
NASA Technical Reports Server (NTRS)
Gokoglu, Suleyman A.
1992-01-01
The formalism for the accurate modeling of chemical vapor deposition (CVD) processes has matured based on the well established principles of transport phenomena and chemical kinetics in the gas phase and on surfaces. The utility and limitations of such models are discussed in practical applications for high temperature structural materials. Attention is drawn to the complexities and uncertainties in chemical kinetics. Traditional approaches based on only equilibrium thermochemistry and/or transport phenomena are defended as useful tools, within their validity, for engineering purposes. The role of modeling is discussed within the context of establishing the link between CVD process parameters and material microstructures/properties. It is argued that CVD modeling is an essential part of designing CVD equipment and controlling/optimizing CVD processes for the production and/or coating of high performance structural materials.
A methodology of SiP testing based on boundary scan
NASA Astrophysics Data System (ADS)
Qin, He; Quan, Haiyang; Han, Yifei; Zhu, Tianrui; Zheng, Tuo
2017-10-01
System in Package (SiP) devices play an important role in portable, aerospace, and military electronics owing to their microminiaturization, light weight, high density, and high reliability. At present, SiP system testing faces problems of system complexity and malfunction location as system scale increases exponentially. For SiP systems, this paper proposes a testing methodology and testing process based on boundary scan technology. Combining the characteristics of SiP systems with the boundary scan theory of PCB circuits and embedded core testing, a specific testing methodology and process is proposed. The hardware requirements of the SiP system under test are specified, and the hardware platform for the testing has been constructed. The testing methodology offers high test efficiency and accurate malfunction location.
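The malfunction-location capability of boundary scan rests on driving known patterns across interconnects and comparing the captured values; the sketch below simulates that step for four nets with an injected stuck-at fault (net count, patterns, and fault model are illustrative, not from the paper).

```python
# minimal sketch of a boundary-scan interconnect test: known vectors are
# driven from one device's boundary cells and captured at another's; any
# mismatch localises the faulty net
patterns = [0b0101, 0b0011, 0b1100]          # test vectors on 4 nets

def capture(pattern, stuck_at=None):
    """Simulate the receiving boundary-scan cells; optionally force one net
    to a stuck-at-0 fault (illustrative fault model)."""
    bits = [(pattern >> i) & 1 for i in range(4)]
    if stuck_at is not None:
        bits[stuck_at] = 0
    return bits

def diagnose(patterns):
    """Return the set of nets whose captured value ever disagrees with the
    driven value -- this is the malfunction-location step."""
    bad = set()
    for p in patterns:
        driven = [(p >> i) & 1 for i in range(4)]
        seen = capture(p, stuck_at=2)        # inject a fault on net 2
        bad |= {i for i in range(4) if driven[i] != seen[i]}
    return bad

faulty = diagnose(patterns)
```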
Thermographic Sensing For On-Line Industrial Control
NASA Astrophysics Data System (ADS)
Holmsten, Dag
1986-10-01
It is today's emergence of thermoelectrically cooled, highly accurate infrared linescanners and imaging systems that has definitely made on-line Infrared Thermography (IRT) possible. Specifically designed for continuous use, these scanners are equipped with dedicated software capable of monitoring and controlling highly complex thermodynamic situations. This paper outlines some possible implications of using IRT on-line by describing uses of this technology in the steel-making (hot rolling) and automotive (machine vision) industries. A warning is also expressed that IRT technology not originally designed for automated applications, e.g. high-resolution imaging systems, should not be directly applied to an on-line measurement situation without having its measurement resolution, accuracy, and especially its repeatability reliably proven. Some suitable testing procedures are briefly outlined at the end of the paper.
Correction of phase-shifting error in wavelength scanning digital holographic microscopy
NASA Astrophysics Data System (ADS)
Zhang, Xiaolei; Wang, Jie; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian
2018-05-01
Digital holographic microscopy is a promising method for measuring complex micro-structures with high slopes. A quasi-common path interferometric apparatus is adopted to overcome environmental disturbances, and an acousto-optic tunable filter is used to obtain multi-wavelength holograms. However, the phase shifting error caused by the acousto-optic tunable filter reduces the measurement accuracy and, in turn, the reconstructed topographies are erroneous. In this paper, an accurate reconstruction approach is proposed. It corrects the phase-shifting errors by minimizing the difference between the ideal interferograms and the recorded ones. The restriction on the step number and uniformity of the phase shifting is relaxed in the interferometry, and the measurement accuracy for complex surfaces can also be improved. The universality and superiority of the proposed method are demonstrated by practical experiments and comparison to other measurement methods.
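The idea of correcting unknown phase-shift errors by minimizing the difference between ideal and recorded interferograms can be sketched with an alternating least-squares scheme in the spirit of the advanced iterative algorithm (AIA); this is an illustration under a simple cosine fringe model, not the authors' implementation.

```python
import numpy as np

def aia_phase_shifts(frames, deltas0, iters=50):
    """Alternating least-squares estimate of unknown phase shifts from a stack
    of interferograms I_k(x) = a(x) + b(x) cos(phi(x) + delta_k).  A sketch of
    the 'minimise the difference between ideal and recorded interferograms'
    idea; not the authors' exact method."""
    frames = np.asarray(frames, dtype=float)       # shape (K, Npix)
    deltas = np.asarray(deltas0, dtype=float).copy()
    for _ in range(iters):
        # per-pixel phase given the current shift estimates
        M = np.column_stack([np.ones_like(deltas), np.cos(deltas), np.sin(deltas)])
        a = np.linalg.lstsq(M, frames, rcond=None)[0]   # rows: a0, b cos(phi), -b sin(phi)
        phi = np.arctan2(-a[2], a[1])
        # per-frame shifts given the current phase estimate
        N = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
        b = np.linalg.lstsq(N, frames.T, rcond=None)[0]
        deltas = np.arctan2(-b[2], b[1])
        deltas -= deltas[0]                             # pin the global phase offset
    return deltas, phi

# synthetic check: the true shifts deviate from the nominal (assumed) steps
x = np.linspace(0.0, 4.0 * np.pi, 200)
phi_true = 0.8 * np.sin(x) + x / 3.0
true_d = np.array([0.0, 1.9, 3.1, 4.3])                 # non-uniform shifts
frames = 1.0 + 0.7 * np.cos(phi_true[None, :] + true_d[:, None])
est_d, _ = aia_phase_shifts(frames, deltas0=[0.0, 2.0, 3.0, 4.2])
```

Because the per-frame shifts are re-estimated from the data, the restriction on step uniformity is relaxed, which is the property the abstract highlights.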
The Accuracy of the ADOS-2 in Identifying Autism among Adults with Complex Psychiatric Conditions
Maddox, Brenna B.; Brodkin, Edward S.; Calkins, Monica E.; Shea, Kathleen; Mullan, Katherine; Hostager, Jack; Mandell, David S.; Miller, Judith S.
2018-01-01
The Autism Diagnostic Observation Schedule, Second Edition (ADOS-2), Module 4 is considered a “gold-standard” instrument for diagnosing autism spectrum disorder (ASD) in adults. Although the ADOS-2 shows good sensitivity and specificity in lab-based settings, it is unknown whether these results hold in community clinics that serve a more psychiatrically impaired population. This study is the first to evaluate the diagnostic accuracy of the ADOS-2 among adults in community mental health centers (n = 75). The ADOS-2 accurately identified all adults with ASD; however, it also had a high rate of false positives among adults with psychosis (30%). Findings serve as a reminder that social communication difficulties measured by the ADOS-2 are not specific to ASD, particularly in clinically complex settings. PMID:28589494
PrePhyloPro: phylogenetic profile-based prediction of whole proteome linkages
Niu, Yulong; Liu, Chengcheng; Moghimyfiroozabad, Shayan; Yang, Yi
2017-01-01
Direct and indirect functional links between proteins as well as their interactions as part of larger protein complexes or common signaling pathways may be predicted by analyzing the correlation of their evolutionary patterns. Based on phylogenetic profiling, here we present a highly scalable and time-efficient computational framework for predicting linkages within the whole human proteome. We have validated this method through analysis of 3,697 human pathways and molecular complexes and a comparison of our results with the prediction outcomes of previously published co-occurrency model-based and normalization methods. Here we also introduce PrePhyloPro, a web-based software that uses our method for accurately predicting proteome-wide linkages. We present data on interactions of human mitochondrial proteins, verifying the performance of this software. PrePhyloPro is freely available at http://prephylopro.org/phyloprofile/. PMID:28875072
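The co-evolution signal underlying phylogenetic profiling can be sketched by correlating presence/absence vectors across genomes; the profiles below are hypothetical, and the Pearson score is a simple stand-in for PrePhyloPro's actual scoring.

```python
import numpy as np

# hypothetical presence/absence profiles of four proteins across eight genomes
profiles = {
    "A": [1, 1, 0, 1, 0, 1, 1, 0],
    "B": [1, 1, 0, 1, 0, 1, 0, 0],   # nearly co-evolves with A
    "C": [0, 0, 1, 0, 1, 0, 0, 1],   # anti-correlated with A
    "D": [1, 0, 1, 0, 1, 0, 1, 0],
}

def profile_correlation(p, q):
    """Pearson correlation of two phylogenetic profiles; a high value suggests
    a functional linkage between the proteins."""
    return float(np.corrcoef(p, q)[0, 1])

r_ab = profile_correlation(profiles["A"], profiles["B"])
r_ac = profile_correlation(profiles["A"], profiles["C"])
```

Scaling this pairwise scoring to the whole proteome, and validating the resulting linkages against known pathways and complexes, is where the framework's efficiency claims come in.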
A Red-Light Running Prevention System Based on Artificial Neural Network and Vehicle Trajectory Data
Li, Pengfei; Li, Yan; Guo, Xiucheng
2014-01-01
The high frequency of red-light running and complex driving behaviors at the yellow onset at intersections cannot be explained solely by the dilemma zone and vehicle kinematics. In this paper, the authors present a red-light running prevention system based on artificial neural networks (ANNs), which approximate the complex driver behaviors during the yellow and all-red clearance intervals. The ANN and vehicle trajectory data are applied to identify potential red-light runners. The ANN training time was acceptable and its prediction accuracy was over 80%. Lastly, a prototype red-light running prevention system with the trained ANN model is described. The new system can be directly retrofitted into existing traffic signal systems. PMID:25435870
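The trajectory-features-to-classifier idea can be sketched with a minimal logistic-regression stand-in for the paper's ANN, trained on synthetic yellow-onset features; the feature set and the ground-truth rule below are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 600

# synthetic features at yellow onset: approach speed and distance to stop bar
speed = rng.uniform(5.0, 25.0, n)            # m/s
dist = rng.uniform(5.0, 120.0, n)            # m
# toy ground truth: fast vehicles close to the bar tend to run the red light
runner = (speed / dist > 0.35).astype(float)

# logistic-regression classifier trained by plain gradient descent
X = np.column_stack([np.ones(n), speed / 25.0, dist / 120.0])   # scaled features
w = np.zeros(3)
for _ in range(20000):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30.0, 30.0)))
    w -= 1.0 * X.T @ (p - runner) / n        # gradient of the logistic loss

pred = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30.0, 30.0))) > 0.5
accuracy = float((pred == runner.astype(bool)).mean())
```

A real deployment would replace this linear model with the paper's ANN and feed it measured trajectories, but the pipeline shape, features at yellow onset in and runner probability out, is the same.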
Stambaugh, Corey; Durand, Mathieu; Kemiktarak, Utku; Lawall, John
2014-08-01
The material properties of silicon nitride (SiN) play an important role in the performance of SiN membranes used in optomechanical applications. An optimum design of a subwavelength high-contrast grating requires accurate knowledge of the membrane thickness and index of refraction, and its performance is ultimately limited by material absorption. Here we describe a cavity-enhanced method to measure the thickness and complex index of refraction of dielectric membranes with small, but nonzero, absorption coefficients. By determining Brewster's angle and an angle at which reflection is minimized by means of destructive interference, both the real part of the index of refraction and the sample thickness can be measured. A comparison of the losses in the empty cavity and the cavity containing the dielectric sample provides a measurement of the absorption.
NASA Astrophysics Data System (ADS)
Delogu, A.; Furini, F.
1991-09-01
Increasing interest in radar cross section (RCS) reduction is placing new demands on theoretical, computational, and graphic techniques for calculating the scattering properties of complex targets. In particular, computer codes capable of predicting the RCS of an entire aircraft at high frequency, and of achieving RCS control with modest structural changes, are becoming of paramount importance in stealth design. A computer code that evaluates the RCS of arbitrarily shaped metallic objects generated by computer-aided design (CAD), and its validation with measurements carried out using ALENIA RCS test facilities, are presented. The code, based on the physical optics method, is characterized by an efficient integration algorithm with error control, in order to keep the computing time within acceptable limits, and by an accurate parametric representation of the target surface in terms of bicubic splines.
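For a simple target, the physical optics method reduces to a closed form; the broadside flat-plate formula below illustrates the high-frequency scaling such a code exploits (the plate size and frequency are illustrative).

```python
import math

def flat_plate_rcs(area_m2, wavelength_m):
    """Physical-optics broadside RCS of a flat conducting plate:
    sigma = 4 * pi * A**2 / lambda**2, the closed-form high-frequency limit."""
    return 4.0 * math.pi * area_m2 ** 2 / wavelength_m ** 2

# illustrative case: a 1 m^2 plate at 10 GHz (wavelength about 3 cm)
sigma = flat_plate_rcs(1.0, 0.03)
sigma_dbsm = 10.0 * math.log10(sigma)
```

The quadratic growth of sigma with area and with frequency is why specular flat surfaces dominate an aircraft's high-frequency RCS, and why modest shaping changes can yield large reductions.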
Crew Launch Vehicle Mobile Launcher Solid Rocket Motor Plume Induced Environment
NASA Technical Reports Server (NTRS)
Vu, Bruce T.; Sulyma, Peter
2008-01-01
The plume-induced environment created by the Ares 1 first stage, five-segment reusable solid rocket motor (RSRMV) will impose high heating rates and impact pressures on Launch Complex 39. The extremes of these environments could weaken or even cause structural components to fail if they are insufficiently designed. Therefore the ability to accurately predict these environments is critical for specifying structural design requirements that ensure overall structural integrity and flight safety. This paper presents the predicted thermal and pressure environments induced by the launch of the Crew Launch Vehicle (CLV) from Launch Complex (LC) 39. Once the environments are predicted, a follow-on thermal analysis is required to determine the surface temperature response and the degradation rate of the materials. An example of structures responding to the plume-induced environment is provided.
NASA Technical Reports Server (NTRS)
Lucero, John M.
2003-01-01
A new optically based measuring capability that characterizes surface topography, geometry, and wear has been employed by NASA Glenn Research Center's Tribology and Surface Science Branch. To characterize complex parts in more detail, we are using a three-dimensional surface structure analyzer, the NewView5000, manufactured by Zygo Corporation (Middlefield, CT). This system provides graphical images and high-resolution numerical analyses to accurately characterize surfaces. Because of the inherent complexity of the various analyzed assemblies, the machine has been pushed to its limits. For example, special hardware fixtures and measuring techniques were developed specifically to characterize Oil-Free thrust bearings. We performed a more detailed wear analysis using scanning white light interferometry to image and measure the bearing structure and topography, enabling a further understanding of the causes of bearing failure.
NASA Astrophysics Data System (ADS)
An, Shengpei; Hu, Tianyue; Liu, Yimou; Peng, Gengxin; Liang, Xianghao
2017-12-01
Static correction is a crucial step of seismic data processing for onshore plays, which frequently have complex near-surface conditions. The effectiveness of the static correction depends on an accurate determination of first-arrival traveltimes. However, it is difficult to accurately auto-pick the first arrivals for data with low signal-to-noise ratios (SNR), especially for data measured in areas with a complex near-surface. The super-virtual interferometry (SVI) technique has the potential to enhance the SNR of first arrivals. In this paper, we develop an extended SVI with (1) the application of reverse correlation to improve the capability of SNR enhancement at near offsets, and (2) the use of a multi-domain method to partially overcome the limitation of the current method when insufficient source-receiver combinations are available. Compared to the standard SVI, the SNR enhancement of the extended SVI can be up to 40%. In addition, we propose a quality control procedure based on the statistical characteristics of multichannel recordings of first arrivals. It can auto-correct the mispicks, which might be spurious events generated by the SVI. This procedure is very robust, highly automatic, and able to accommodate large data volumes in batches. Finally, we develop an automatic first-arrival picking method that combines the extended SVI and the quality control procedure. Both the synthetic and the field data examples demonstrate that the proposed method is able to accurately auto-pick first arrivals in seismic traces with low SNR. The quality of the stacked seismic sections obtained from this method is much better than that obtained from the auto-picking method commonly employed by commercial software.
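Automatic first-arrival picking is often built on an energy-ratio detector; the classic STA/LTA picker below is a simple stand-in for the SVI-enhanced picking described above, run on a synthetic noisy trace (window lengths and threshold are illustrative).

```python
import numpy as np

def sta_lta_pick(trace, dt, sta=0.02, lta=0.2, threshold=4.0):
    """Pick the first arrival as the first time where the short-term / long-term
    average energy ratio exceeds a threshold (classic STA/LTA; a simple
    stand-in for the SVI-enhanced picking in the paper)."""
    ns, nl = int(sta / dt), int(lta / dt)
    c = np.concatenate([[0.0], np.cumsum(trace ** 2)])   # cumulative energy
    for i in range(nl, len(trace) - ns):
        sta_v = (c[i + ns] - c[i]) / ns                  # energy just ahead
        lta_v = (c[i] - c[i - nl]) / nl                  # background energy
        if lta_v > 0.0 and sta_v / lta_v > threshold:
            return i * dt
    return None

# synthetic noisy trace with a 30 Hz arrival at 0.5 s
dt = 0.001
t = np.arange(0.0, 1.0, dt)
rng = np.random.default_rng(2)
trace = 0.05 * rng.standard_normal(len(t))
trace[t >= 0.5] += np.sin(2.0 * np.pi * 30.0 * (t[t >= 0.5] - 0.5))
pick = sta_lta_pick(trace, dt)
```

The SVI preprocessing in the paper raises the SNR so that a detector of this kind triggers on the true first break rather than on noise bursts; the quality-control step then catches the remaining mispicks statistically.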
A hierarchy of models for simulating experimental results from a 3D heterogeneous porous medium
NASA Astrophysics Data System (ADS)
Vogler, Daniel; Ostvar, Sassan; Paustian, Rebecca; Wood, Brian D.
2018-04-01
In this work we examine the dispersion of conservative tracers (bromide and fluorescein) in an experimentally-constructed three-dimensional dual-porosity porous medium. The medium is highly heterogeneous (σ_Y² = 5.7), and consists of spherical, low-hydraulic-conductivity inclusions embedded in a high-hydraulic-conductivity matrix. The bimodal medium was saturated with tracers, and then flushed with tracer-free fluid while the effluent breakthrough curves were measured. The focus for this work is to examine a hierarchy of four models (in the absence of adjustable parameters) with decreasing complexity to assess their ability to accurately represent the measured breakthrough curves. The most information-rich model was (1) a direct numerical simulation of the system in which the geometry, boundary and initial conditions, and medium properties were fully independently characterized experimentally with high fidelity. The reduced-information models included: (2) a simplified numerical model identical to the fully-resolved direct numerical simulation (DNS) model, but using a domain that was one-tenth the size; (3) an upscaled mobile-immobile model that allowed for a time-dependent mass-transfer coefficient; and (4) an upscaled mobile-immobile model that assumed a space-time constant mass-transfer coefficient. The results illustrated that all four models provided accurate representations of the experimental breakthrough curves as measured by global RMS error. The primary component of error induced in the upscaled models appeared to arise from the neglect of convection within the inclusions. We discuss the necessity to assign value (via a utility function or other similar method) to outcomes if one is to further select from among model options.
Interestingly, these results suggested that the conventional convection-dispersion equation, when applied in a way that resolves the heterogeneities, yields models with high fidelity without requiring the imposition of a more complex non-Fickian model.
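Model (4), the mobile-immobile formulation with a constant mass-transfer coefficient, can be sketched as a two-compartment ODE system; the parameter values and the well-mixed simplification below are illustrative assumptions, not the experiment's.

```python
import numpy as np

def flush_breakthrough(alpha, q=1.0, vm=1.0, vim=0.5, t_end=10.0, dt=1e-3):
    """Explicit-Euler integration of a well-mixed mobile-immobile flush with a
    constant first-order mass-transfer coefficient alpha.  The medium starts
    saturated with tracer; the effluent concentration is the mobile-zone one."""
    n = int(t_end / dt)
    cm, cim = 1.0, 1.0
    t = np.arange(n) * dt
    out = np.empty(n)
    for i in range(n):
        out[i] = cm
        dcm = -(q / vm) * cm + alpha * (vim / vm) * (cim - cm)
        dcim = alpha * (cm - cim)          # first-order exchange with immobile zone
        cm += dt * dcm
        cim += dt * dcim
    return t, out

t, c_exchange = flush_breakthrough(alpha=0.5)
_, c_isolated = flush_breakthrough(alpha=0.0)
late = t > 5.0    # mass released from the immobile zone sustains the tail
```

The long late-time tail produced by the slow immobile-zone release is exactly the non-Fickian signature that the upscaled models capture with the mass-transfer coefficient.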
The role of timbre in pitch matching abilities and pitch discrimination abilities with complex tones
NASA Astrophysics Data System (ADS)
Moore, Robert E.; Watts, Christopher R.; Zhang, Fawen
2004-05-01
Control of fundamental frequency (F0) is important for singing in tune and is an important factor in the perception of a talented singing voice. One purpose of the present study was to investigate the relationship between pitch-matching skills, which are one method of testing F0 control, and pitch-discrimination skills. A relationship between pitch-matching and pitch-discrimination abilities was observed: subjects who were accurate pitch matchers were also accurate pitch discriminators (and vice versa). Further, timbre differences appeared to play a role in pitch-discrimination accuracy. A second part of the study investigated the effect of timbre on pitch discrimination. To study this, all but the first five harmonics of complex tones with different timbre were removed for the pitch-discrimination task, thus making the tones more similar in timbre. Under this condition no difference was found between the pitch-discrimination abilities of accurate and inaccurate pitch matchers. The results suggest that accurate F0 control is at least partially dependent on pitch-discrimination abilities, and timbre appears to play an important role in differences in pitch-discrimination ability.
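The harmonic-truncation manipulation is straightforward to reproduce in synthesis: keep only the first five harmonics of each complex tone. The amplitude profiles below are illustrative timbres, not the study's stimuli.

```python
import numpy as np

def complex_tone(f0, amps, sr=16000, dur=0.5):
    """Synthesise a harmonic complex tone; harmonic k has frequency k*f0 and
    the given amplitude."""
    t = np.arange(int(sr * dur)) / sr
    return sum(a * np.sin(2.0 * np.pi * (k + 1) * f0 * t)
               for k, a in enumerate(amps))

# two 220 Hz tones with different timbres (different spectral slopes)
bright = [1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3]
dull = [1.0, 0.5, 0.25, 0.12, 0.06, 0.03, 0.015, 0.008]

# the study's manipulation: keep only the first five harmonics, so the two
# timbres become more similar for the discrimination task
bright5 = complex_tone(220.0, bright[:5])
dull5 = complex_tone(220.0, dull[:5])

spec = np.abs(np.fft.rfft(bright5))    # 2 Hz per bin at these settings
```

Truncating the series removes the upper-partial energy that differentiates the timbres while leaving the fundamental, and hence the pitch, unchanged.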
Fitness and Individuality in Complex Life Cycles.
Herron, Matthew D
2016-12-01
Complex life cycles are common in the eukaryotic world, and they complicate the question of how to define individuality. Using a bottom-up, gene-centric approach, I consider the concept of fitness in the context of complex life cycles. I analyze the fitness effects of an allele (or a trait) on different biological units within a complex life history and how these effects drive evolutionary change within populations. Based on these effects, I attempt to construct a concept of fitness that accurately predicts evolutionary change in the context of complex life cycles.
NASA Technical Reports Server (NTRS)
Villanueva, Geronimo L.; DiSanti, M. A.; Mumma, M. J.; Xu, L.-H.
2012-01-01
Methanol (CH3OH) radiates efficiently at infrared wavelengths, dominating the C-H stretching region in comets, yet inadequate quantum-mechanical models have limited the practical use of its emission spectra. Accordingly, we constructed a new line-by-line model for the ν3 fundamental band of methanol at 2844 cm-1 (3.52 μm) and applied it to interpret cometary fluorescence spectra. The new model permits accurate synthesis of line-by-line spectra for a wide range of rotational temperatures, from 10 K to more than 400 K. We validated the model by comparing simulations of CH3OH fluorescent emission with measured spectra of three comets (C/2001 A2 (LINEAR), C/2004 Q2 (Machholz) and 8P/Tuttle) acquired with high-resolution infrared spectrometers at high-altitude sites. The new model accurately describes the complex emission spectrum of the ν3 band, providing distinct rotational temperatures and production rates at greatly improved confidence levels compared with results derived from earlier fluorescence models. The new model reconciles production rates measured at infrared and radio wavelengths in C/2001 A2 (LINEAR). Methanol can now be quantified with unprecedented precision and accuracy in astrophysical sources through high-dispersion spectroscopy at infrared wavelengths.
Accurate determination of complex materials coefficients of piezoelectric resonators.
Du, Xiao-Hong; Wang, Qing-Ming; Uchino, Kenji
2003-03-01
This paper presents a method of accurately determining the complex piezoelectric and elastic coefficients of piezoelectric ceramic resonators from measurement of the normalized electric admittance, i.e., the electric admittance Y of the piezoelectric resonator normalized by the angular frequency ω. The coefficients are derived from measurements near three special frequency points, corresponding to the maximum and minimum normalized susceptance (B) and the maximum normalized conductance (G). The complex elastic coefficient is determined from the frequencies at these points, while the real and imaginary parts of the piezoelectric coefficient are related, respectively, to the derivative of the susceptance with respect to frequency and to the asymmetry of the conductance near the maximum-conductance point. Measurements on several lead zirconate titanate (PZT) based ceramics are used as examples to demonstrate the calculation and experimental procedures and the comparison with standard methods.
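The three characteristic frequencies the method relies on can be illustrated on a simulated lossy resonator. The sketch below is hypothetical: it uses a Butterworth-Van Dyke equivalent circuit with made-up element values (not from the paper) to locate the maximum/minimum normalized susceptance and the maximum normalized conductance:

```python
import numpy as np

# Hypothetical Butterworth-Van Dyke element values, for illustration only
C0, C1, L1, R1 = 1e-9, 50e-12, 10e-3, 25.0

f = np.linspace(220e3, 230e3, 200001)   # frequency sweep near series resonance
w = 2 * np.pi * f
# electric admittance: static branch + lossy motional branch
Y = 1j * w * C0 + 1.0 / (R1 + 1j * w * L1 + 1.0 / (1j * w * C1))
Yn = Y / w                              # normalized admittance Y/omega
G, B = Yn.real, Yn.imag                 # normalized conductance / susceptance

f_Bmax = f[np.argmax(B)]   # maximum normalized susceptance
f_Gmax = f[np.argmax(G)]   # maximum normalized conductance (near series resonance)
f_Bmin = f[np.argmin(B)]   # minimum normalized susceptance
```

For a high-Q resonator these three frequencies bracket the series resonance in the order f_Bmax < f_Gmax < f_Bmin, which is what makes them convenient anchor points for extracting the complex material coefficients.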
Phase reconstruction using compressive two-step parallel phase-shifting digital holography
NASA Astrophysics Data System (ADS)
Ramachandran, Prakash; Alex, Zachariah C.; Nelleri, Anith
2018-04-01
The linear relationship between the sample complex object wave and its approximated complex Fresnel field obtained from single-shot parallel phase-shifting digital holography (PPSDH) is used in a compressive sensing (CS) framework, and accurate phase reconstruction is demonstrated. The accuracy of phase reconstruction with this method is shown to be better than that of the CS-adapted single-exposure in-line holography (SEOL) method. It is derived that the measurement model of the PPSDH method retains both the real and imaginary parts of the Fresnel field, albeit with approximation noise, whereas the measurement model of SEOL retains only the real part of the complex Fresnel field; the imaginary part is not available at all. Numerical simulations performed for CS-adapted PPSDH and CS-adapted SEOL demonstrate that phase reconstruction is accurate for CS-adapted PPSDH, which can therefore be used for single-shot digital holographic reconstruction.
Alkorta, Ibon; Montero-Campillo, M Merced; Elguero, José; Yáñez, Manuel; Mó, Otilia
2018-06-05
Accurate ab initio calculations reveal that oxyacid beryllium salts yield rather stable complexes with dihydrogen. The binding energies range between -40 and -60 kJ mol⁻¹ for 1:1 complexes, remarkably larger than others previously reported for neutral H2 complexes. The second H2 molecule in 1:2 complexes is again strongly bound (between -18 and -20 kJ mol⁻¹). The incoming H2 molecules in 1:n complexes (n = 3-6) are more weakly bound, confirming the preference of Be for tetracoordinated arrangements.
A multiscale model for charge inversion in electric double layers
NASA Astrophysics Data System (ADS)
Mashayak, S. Y.; Aluru, N. R.
2018-06-01
Charge inversion is a widely observed phenomenon. It is a result of the rich statistical mechanics of the molecular interactions between ions, solvent, and charged surfaces near electric double layers (EDLs). Electrostatic correlations between ions and hydration interactions between ions and water molecules play a dominant role in determining the distribution of ions in EDLs. Due to the highly polar nature of water, the inhomogeneous and anisotropic arrangement of water molecules near a surface gives rise to pronounced variations in the electrostatic and hydration energies of ions. Classical continuum theories fail to accurately describe electrostatic correlations and the molecular effects of water in EDLs. In this work, we present an empirical-potential-based quasi-continuum theory (EQT) to accurately predict the molecular-level properties of aqueous electrolytes. In EQT, we employ rigorous statistical mechanics tools to incorporate interatomic interactions, long-range electrostatics, correlations, and orientation polarization effects at the continuum level. Explicit consideration of the atomic interactions of water molecules is both theoretically and numerically challenging. We develop a systematic coarse-graining approach to coarse-grain the interactions of water molecules and electrolyte ions from a high-resolution atomistic scale to the continuum scale. To demonstrate the ability of EQT to incorporate water orientation polarization, ion hydration, and electrostatic correlation effects, we simulate a confined KCl aqueous electrolyte and show that EQT can accurately predict the distribution of ions in a thin EDL and also predict the complex phenomenon of charge inversion.
Bhat, Punya; Kriel, Jurgen; Shubha Priya, Babu; Basappa; Shivananju, Nanjunda Swamy; Loos, Ben
2018-01-01
Autophagy is a major protein degradation pathway capable of upholding cellular metabolism under nutrient limiting conditions, making it a valuable resource to highly proliferating tumour cells. Although the regulatory machinery of the autophagic pathway has been well characterized, accurate modulation of this pathway remains complex in the context of clinical translatability for improved cancer therapies. In particular, the dynamic relationship between the rate of protein degradation through autophagy, i.e. autophagic flux, and the susceptibility of tumours to undergo apoptosis remains largely unclear. Adding to inefficient clinical translation is the lack of measurement techniques that accurately depict autophagic flux. Paradoxically, both increased autophagic flux as well as autophagy inhibition have been shown to sensitize cancer cells to undergo cell death, indicating the highly context dependent nature of this pathway. In this article, we aim to disentangle the role of autophagy modulation in tumour suppression by assessing existing literature in the context of autophagic flux and cellular metabolism at the interface of mitochondrial function. We highlight the urgency to not only assess autophagic flux more accurately, but also to center autophagy manipulation within the unique and inherent metabolic properties of cancer cells. Lastly, we discuss the challenges faced when targeting autophagy in the clinical setting. In doing so, it is hoped that a better understanding of autophagy in cancer therapy is revealed in order to overcome tumour chemoresistance through more controlled autophagy modulation in the future. Copyright © 2017 Elsevier Inc. All rights reserved.
Sigala, Paul A.; Fafarman, Aaron T.; Schwans, Jason P.; Fried, Stephen D.; Fenn, Timothy D.; Caaveiro, Jose M. M.; Pybus, Brandon; Ringe, Dagmar; Petsko, Gregory A.; Boxer, Steven G.; Herschlag, Daniel
2013-01-01
Hydrogen bond networks are key elements of protein structure and function but have been challenging to study within the complex protein environment. We have carried out in-depth interrogations of the proton transfer equilibrium within a hydrogen bond network formed to bound phenols in the active site of ketosteroid isomerase. We systematically varied the proton affinity of the phenol using differing electron-withdrawing substituents and incorporated site-specific NMR and IR probes to quantitatively map the proton and charge rearrangements within the network that accompany incremental increases in phenol proton affinity. The observed ionization changes were accurately described by a simple equilibrium proton transfer model that strongly suggests the intrinsic proton affinity of one of the Tyr residues in the network, Tyr16, does not remain constant but rather systematically increases due to weakening of the phenol–Tyr16 anion hydrogen bond with increasing phenol proton affinity. Using vibrational Stark spectroscopy, we quantified the electrostatic field changes within the surrounding active site that accompany these rearrangements within the network. We were able to model these changes accurately using continuum electrostatic calculations, suggesting a high degree of conformational restriction within the protein matrix. Our study affords direct insight into the physical and energetic properties of a hydrogen bond network within a protein interior and provides an example of a highly controlled system with minimal conformational rearrangements in which the observed physical changes can be accurately modeled by theoretical calculations. PMID:23798390
Jiang, Jie; Yu, Wenbo; Zhang, Guangjun
2017-01-01
Navigation accuracy is one of the key performance indicators of an inertial navigation system (INS). An accuracy assessment of an INS in its real working environment is urgently needed because of the large differences between real working and laboratory test environments. An attitude accuracy assessment of an INS based on an intensified high dynamic star tracker (IHDST) is particularly suitable for a real, complex dynamic environment. However, the coupled systematic coordinate errors of the INS and the IHDST severely decrease the attitude assessment accuracy. To address this, a high-accuracy decoupled estimation method for these systematic coordinate errors, based on the constrained least squares (CLS) method, is proposed in this paper. The reference frame of the IHDST is first converted to be consistent with that of the INS, because their reference frames are completely different. Thereafter, the decoupled estimation model of the systematic coordinate errors is established, and the CLS-based optimization method is utilized to estimate the errors accurately. After compensating for these errors, the attitude accuracy of the INS can be accurately assessed based on the IHDST. Both simulated experiments and real flight experiments of aircraft were conducted, and the experimental results demonstrate that the proposed method is effective and shows excellent performance for the attitude accuracy assessment of an INS in a real working environment. PMID:28991179
de Souza Figueiredo, Fabiana; Celano, Rita; de Sousa Silva, Danila; das Neves Costa, Fernanda; Hewitson, Peter; Ignatova, Svetlana; Piccinelli, Anna Lisa; Rastrelli, Luca; Guimarães Leitão, Suzana; Guimarães Leitão, Gilda
2017-01-20
Ampelozizyphus amazonicus Ducke (Rhamnaceae), a medicinal plant used to prevent malaria, is a climbing shrub native to the Amazonian region, with jujubogenin glycoside saponins as its main compounds. The crude extract of this plant is too complex for any kind of structural identification, and HPLC separation was not sufficient to resolve this issue. Therefore, the aim of this work was to obtain saponin-enriched fractions from the bark ethanol extract by countercurrent chromatography (CCC) for further isolation and identification/characterisation of the major saponins by HPLC and MS. The butanol extract was fractionated by CCC with a hexane - ethyl acetate - butanol - ethanol - water (1:6:1:1:6; v/v) solvent system, yielding 4 group fractions. The collected fractions were analysed by UHPLC-HRMS (ultra-high-performance liquid chromatography/high-resolution accurate mass spectrometry) and MSn. Group 1 presented mainly oleanane-type saponins, and group 3 showed mainly jujubogenin glycosides, keto-dammarane-type triterpene saponins and saponins with a C31 skeleton. Thus, CCC separated saponins from the butanol-rich extract by skeleton type. A further purification of group 3 by CCC (ethyl acetate - ethanol - water (1:0.2:1; v/v)) and HPLC-RI was performed in order to obtain these unusual aglycones in pure form. Copyright © 2016 Elsevier B.V. All rights reserved.
Unstructured mesh adaptivity for urban flooding modelling
NASA Astrophysics Data System (ADS)
Hu, R.; Fang, F.; Salinas, P.; Pain, C. C.
2018-05-01
Over the past few decades, urban floods have been gaining more attention due to their increase in frequency. To provide reliable flooding predictions in urban areas, various numerical models have been developed to perform high-resolution flood simulations. However, the use of high-resolution meshes across the whole computational domain imposes a high computational burden. In this paper, a 2D control-volume and finite-element flood model using adaptive unstructured mesh technology has been developed. This adaptive unstructured mesh technique enables meshes to be adapted optimally in time and space in response to the evolving flow features, providing sufficient mesh resolution where and when it is required. It has the advantage of capturing the details of local flows and of the wetting and drying front while reducing the computational cost. Complex topographic features are represented accurately during the flooding process; for example, high-resolution meshes are placed around buildings and steep regions when the flood water reaches them. In this work a flooding event that occurred in 2002 in Glasgow, Scotland, United Kingdom has been simulated to demonstrate the capability of the adaptive unstructured mesh flooding model. The simulations have been performed using both fixed and adaptive unstructured meshes, and the results have been compared with previously published 2D and 3D results. The presented method shows that the 2D adaptive mesh model provides accurate results at low computational cost.
NASA Astrophysics Data System (ADS)
Nesbit, P. R.; Hugenholtz, C.; Durkin, P.; Hubbard, S. M.; Kucharczyk, M.; Barchyn, T.
2016-12-01
Remote sensing and digital mapping have started to revolutionize geologic mapping in recent years as a result of their realized potential to provide high resolution 3D models of outcrops to assist with interpretation, visualization, and obtaining accurate measurements of inaccessible areas. However, in stratigraphic mapping applications in complex terrain, it is difficult to acquire information with sufficient detail at a wide spatial coverage with conventional techniques. We demonstrate the potential of a UAV and Structure from Motion (SfM) photogrammetric approach for improving 3D stratigraphic mapping applications within a complex badland topography. Our case study is performed in Dinosaur Provincial Park (Alberta, Canada), mapping late Cretaceous fluvial meander belt deposits of the Dinosaur Park formation amidst a succession of steeply sloping hills and abundant drainages - creating a challenge for stratigraphic mapping. The UAV-SfM dataset (2 cm spatial resolution) is compared directly with a combined satellite and aerial LiDAR dataset (30 cm spatial resolution) to reveal advantages and limitations of each dataset before presenting a unique workflow that utilizes the dense point cloud from the UAV-SfM dataset for analysis. The UAV-SfM dense point cloud minimizes distortion, preserves 3D structure, and records an RGB attribute - adding potential value in future studies. The proposed UAV-SfM workflow allows for high spatial resolution remote sensing of stratigraphy in complex topographic environments. This extended capability can add value to field observations and has the potential to be integrated with subsurface petroleum models.
Beaujoin, Justine; Palomero-Gallagher, Nicola; Boumezbeur, Fawzi; Axer, Markus; Bernard, Jeremy; Poupon, Fabrice; Schmitz, Daniel; Mangin, Jean-François; Poupon, Cyril
2018-06-01
The human hippocampus plays a key role in memory management and is one of the first structures affected by Alzheimer's disease. Ultra-high-field magnetic resonance imaging provides access to its inner structure in vivo. However, gradient limitations on clinical systems hinder access to its inner connectivity and microstructure. A major aim of this paper is to demonstrate the potential of diffusion MRI, using an ultra-high field (11.7 T) and strong gradients (750 mT/m), to reveal the extra- and intra-hippocampal connectivity in addition to the microstructure. To this end, a multiple-shell diffusion-weighted acquisition protocol was developed to reach an ultra-high spatio-angular resolution with a good signal-to-noise ratio. The MRI data set was analyzed using analytical Q-Ball Imaging, Diffusion Tensor Imaging (DTI), and Neurite Orientation Dispersion and Density Imaging models. High Angular Resolution Diffusion Imaging estimates allowed us to obtain an accurate tractography resolving more complex fiber architecture than DTI models, and subsequently provided a map of the cross-regional connectivity. The neurite density was akin to that found in the histological literature, revealing the three hippocampal layers. Moreover, a gradient of connectivity and neurite density was observed between the anterior and the posterior part of the hippocampus. These results demonstrate that ex vivo ultra-high-field/ultra-high-gradient diffusion-weighted MRI allows the mapping of the inner connectivity of the human hippocampus and its microstructure, and the accurate reconstruction of elements of the polysynaptic intra-hippocampal pathway using fiber tractography techniques at very high spatial/angular resolutions.
Registration of surface structures using airborne focused ultrasound.
Sundström, N; Börjesson, P O; Holmer, N G; Olsson, L; Persson, H W
1991-01-01
A low-cost measuring system, based on a personal computer combined with standard equipment for complex measurements and signal processing, has been assembled. Such a system increases the possibilities for small hospitals and clinics to finance advanced measuring equipment. A description of equipment developed for airborne ultrasound together with a personal computer-based system for fast data acquisition and processing is given. Two air-adapted ultrasound transducers with high lateral resolution have been developed. Furthermore, a few results for fast and accurate estimation of signal arrival time are presented. The theoretical estimation models developed are applied to skin surface profile registrations.
Application of shell elements in simulation of cans ironing
NASA Astrophysics Data System (ADS)
Andrianov, A. V.; Erisov, Y. A.; Aryshensky, E. V.; Aryshensky, V. Y.
2017-01-01
In the present study, special shell finite elements are used to simulate the drawing with a high ironing ratio of aluminum beverage cans. These elements are implemented in the commercial software package PAM-STAMP 2G by means of the T.T.S. normal stress option, which allows the normal stress to be described accurately during ironing. Comparison of simulation and experimental data shows that shell elements with the T.T.S. option are capable of providing accurate simulation of deep drawing and ironing. The error in the computed can thickness and height is within engineering computation accuracy.
Management accounting for advanced technological environments.
Kaplan, R S
1989-08-25
Management accounting systems designed decades ago no longer provide timely, relevant information for companies in today's highly competitive environment. New operational control and performance measurement systems are recognizing the importance of direct measurement of quality, manufacturing lead times, flexibility, and customer responsiveness, as well as more accurate measures of the actual costs of consumed resources. Activity-based cost systems can assign the costs of indirect and support resources to the specific products and activities that benefit from these resources. Both operational control and activity-based systems represent new opportunities for improved managerial information in complex, technologically advanced environments.
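The core mechanism of an activity-based cost system is to compute a cost-driver rate per activity and charge each product for the driver units it actually consumes. The following is an illustrative sketch with hypothetical activities and cost figures (not drawn from the article):

```python
def activity_based_costs(activity_costs, driver_totals, product_usage):
    """Assign indirect/support costs to products in proportion to each
    product's consumption of activity drivers (illustrative ABC sketch)."""
    # cost-driver rate for each activity, e.g. dollars per machine setup
    rates = {a: activity_costs[a] / driver_totals[a] for a in activity_costs}
    return {
        product: sum(rates[a] * use for a, use in usage.items())
        for product, usage in product_usage.items()
    }

# Hypothetical figures: $1000 of setup cost spread over 20 setups,
# $500 of inspection cost spread over 50 inspection hours
costs = activity_based_costs(
    {"setup": 1000.0, "inspection": 500.0},
    {"setup": 20.0, "inspection": 50.0},
    {"A": {"setup": 5, "inspection": 10},
     "B": {"setup": 15, "inspection": 40}},
)
```

In this toy example product B, which consumes three-quarters of the setups and most of the inspection hours, absorbs the bulk of the indirect cost, whereas a traditional volume-based allocation could easily have smeared it evenly across both products.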
Efficient Parallel Algorithm For Direct Numerical Simulation of Turbulent Flows
NASA Technical Reports Server (NTRS)
Moitra, Stuti; Gatski, Thomas B.
1997-01-01
A distributed algorithm for a high-order-accurate finite-difference approach to the direct numerical simulation (DNS) of transition and turbulence in compressible flows is described. This work has two major objectives. The first objective is to demonstrate that parallel and distributed-memory machines can be successfully and efficiently used to solve computationally intensive and input/output intensive algorithms of the DNS class. The second objective is to show that the computational complexity involved in solving the tridiagonal systems inherent in the DNS algorithm can be reduced by algorithm innovations that obviate the need to use a parallelized tridiagonal solver.
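The tridiagonal systems that arise in high-order compact finite-difference schemes are classically solved with the serial Thomas algorithm, and it is this inherently sequential step that the algorithm innovations above aim to avoid. For reference, a minimal serial sketch (not the paper's distributed algorithm):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system Ax = d, where a, b, c are the sub-,
    main, and super-diagonals (a[0] and c[-1] are unused)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: the 3x3 system [[2,-1,0],[-1,2,-1],[0,-1,2]] x = [0,0,4]
x = thomas([0.0, -1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0, 0.0], [0.0, 0.0, 4.0])
```

Each unknown depends on the one eliminated before it, which is exactly why the recurrence resists straightforward parallelization across distributed-memory processors.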
[An integrated segmentation method for 3D ultrasound carotid artery].
Yang, Xin; Wu, Huihui; Liu, Yang; Xu, Hongwei; Liang, Huageng; Cai, Wenjuan; Fang, Mengjie; Wang, Yujie
2013-07-01
An integrated segmentation method for 3D ultrasound images of the carotid artery was proposed. The 3D ultrasound image was sliced into transverse, coronal and sagittal 2D images at the carotid bifurcation point. The three images were then processed respectively, and the carotid artery contours and thickness were finally obtained. This paper tries to overcome the disadvantages of current computer-aided diagnosis methods, such as high computational complexity and easily introduced subjective errors. The proposed method can obtain the overall information of the carotid artery rapidly, accurately and completely, and could be transferred to clinical use for atherosclerosis diagnosis and prevention.
Using Nuclear Medicine Imaging Wisely in Diagnosing Infectious Diseases
Censullo, Andrea
2017-01-01
Abstract In recent years, there has been an increasing emphasis on efficient and accurate diagnostic testing, exemplified by the American Board of Internal Medicine’s “Choosing Wisely” campaign. Nuclear imaging studies can provide early and accurate diagnoses of many infectious disease syndromes, particularly in complex cases where the differential remains broad. This review paper offers clinicians a rational, evidence-based guide to approaching nuclear medicine tests, using an example case of methicillin-sensitive Staphylococcus aureus (MSSA) bacteremia in a patient with multiple potential sources. Fluorodeoxyglucose-positron emission tomography (FDG-PET) with computed tomography (CT) and sulfur colloid imaging with tagged white blood cell (WBC) scanning offer the most promise in facilitating rapid and accurate diagnoses of endovascular graft infections, vertebral osteomyelitis (V-OM), diabetic foot infections, and prosthetic joint infections (PJIs). However, radiologists at different institutions may have varying degrees of expertise with these modalities. Regardless, infectious disease consultants would benefit from knowing what nuclear medicine tests to order when considering patients with complex infectious disease syndromes. PMID:28480283
A pairwise maximum entropy model accurately describes resting-state human brain networks
Watanabe, Takamitsu; Hirose, Satoshi; Wada, Hiroyuki; Imai, Yoshio; Machida, Toru; Shirouzu, Ichiro; Konishi, Seiki; Miyashita, Yasushi; Masuda, Naoki
2013-01-01
The resting-state human brain networks underlie fundamental cognitive functions and consist of complex interactions among brain regions. However, the level of complexity of the resting-state networks has not been quantified, which has prevented comprehensive descriptions of the brain activity as an integrative system. Here, we address this issue by demonstrating that a pairwise maximum entropy model, which takes into account region-specific activity rates and pairwise interactions, can be robustly and accurately fitted to resting-state human brain activities obtained by functional magnetic resonance imaging. Furthermore, to validate the approximation of the resting-state networks by the pairwise maximum entropy model, we show that the functional interactions estimated by the pairwise maximum entropy model reflect anatomical connexions more accurately than the conventional functional connectivity method. These findings indicate that a relatively simple statistical model not only captures the structure of the resting-state networks but also provides a possible method to derive physiological information about various large-scale brain networks. PMID:23340410
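The core of fitting such a model is moment matching: the region-specific biases and pairwise couplings are adjusted until the model's mean activities and pairwise correlations equal the empirical ones. A minimal illustrative sketch for small networks, using exact state enumeration (not the authors' fitting code), assuming activity binarized to ±1:

```python
import itertools
import numpy as np

def fit_pairwise_mem(samples, n_iter=2000, lr=0.1):
    """Fit biases h and symmetric couplings J of the pairwise maximum
    entropy model P(s) ~ exp(h.s + s'Js/2) to +/-1 samples by moment
    matching with exact enumeration (feasible only for small n)."""
    n = samples.shape[1]
    states = np.array(list(itertools.product([-1.0, 1.0], repeat=n)))
    emp_m = samples.mean(axis=0)                   # empirical means
    emp_c = samples.T @ samples / len(samples)     # empirical correlations
    h, J = np.zeros(n), np.zeros((n, n))
    for _ in range(n_iter):
        E = states @ h + 0.5 * np.einsum("ki,ij,kj->k", states, J, states)
        p = np.exp(E - E.max())
        p /= p.sum()                               # model distribution
        mod_m = p @ states                         # model means
        mod_c = states.T @ (states * p[:, None])   # model correlations
        h += lr * (emp_m - mod_m)                  # gradient ascent on
        dJ = lr * (emp_c - mod_c)                  # the log-likelihood
        np.fill_diagonal(dJ, 0.0)                  # s_i^2 = 1 is fixed
        J += dJ
    return h, J

# Hypothetical demo data: three independent biased +/-1 "regions"
rng = np.random.default_rng(0)
samples = np.where(rng.random((400, 3)) < [0.7, 0.5, 0.3], 1.0, -1.0)
h, J = fit_pairwise_mem(samples)
```

Because the log-likelihood of this exponential-family model is concave, simple gradient ascent converges to the unique parameter set whose moments match the data; for realistic numbers of brain regions the enumeration must be replaced by sampling or mean-field approximations.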
Adhikari, Puspa L; Wong, Roberto L; Overton, Edward B
2017-10-01
Accurate characterization of petroleum hydrocarbons in complex and weathered oil residues is analytically challenging. This is primarily due to the chemical compositional complexity of both the oil residues and environmental matrices, and the lack of instrumental selectivity caused by co-elution of interferences with the target analytes. To overcome these selectivity issues, we used enhanced-resolution gas chromatography coupled with triple quadrupole mass spectrometry in Multiple Reaction Monitoring (MRM) mode (GC/MS/MS-MRM) to eliminate interferences within the ion chromatograms of target analytes found in environmental samples. This new GC/MS/MS-MRM method was developed and used for forensic fingerprinting of deep-water and marsh sediment samples containing oily residues from the Deepwater Horizon oil spill. The results showed that the GC/MS/MS-MRM method increases selectivity, eliminates interferences, and provides more accurate quantitation and characterization of trace levels of alkyl-PAHs and biomarker compounds from weathered oil residues in complex sample matrices. The higher selectivity of the new method, even at low detection limits, provides greater insight into isomer and homolog compositional patterns and the extent of oil weathering under various environmental conditions. The method also provides flat chromatographic baselines for accurate and unambiguous calculation of petroleum forensic biomarker compound ratios. Thus, this GC/MS/MS-MRM method can be a reliable analytical strategy for more accurate and selective trace-level analyses in petroleum forensic studies, and for tracking the continuous weathering of oil residues. Copyright © 2017 Elsevier Ltd. All rights reserved.
Copper toxicity and organic matter: Resiliency of watersheds in the Duluth Complex, Minnesota, USA
Piatak, Nadine; Seal, Robert; Jones, Perry M.; Woodruff, Laurel G.
2015-01-01
We estimated copper (Cu) toxicity in surface water with high dissolved organic matter (DOM) for unmined mineralized watersheds of the Duluth Complex using the Biotic Ligand Model (BLM), which evaluates the effect of DOM, cation competition for biologic binding sites, and metal speciation. A sediment-based BLM was used to estimate stream-sediment toxicity; this approach factors in the cumulative effects of multiple metals, the incorporation of metals into less bioavailable sulfides, and the complexation of metals with organic carbon. For surface water, the formation of Cu-DOM complexes significantly reduces the amount of Cu available to aquatic organisms. The protective effect of cations such as calcium (Ca) and magnesium (Mg), which compete with Cu to complex with the biotic ligand, is likely not as important as DOM in water with high DOM and low hardness. Standard hardness-based water quality criteria (WQC) are probably inadequate for describing Cu toxicity in such waters, and a BLM approach may yield more accurate results. Nevertheless, assumptions about the relative proportions of humic acid (HA) and fulvic acid (FA) in DOM significantly influence BLM results; the higher the HA fraction, the higher the calculated resiliency of the water to Cu toxicity. Another important factor is seasonal variation in water chemistry, with greater resiliency to Cu toxicity during low flow compared to high flow. Based on generally low total organic carbon and sulfur content, and equivalent metal ratios from total and weak partial extractions, much of the total metal concentration in clastic streambed sediments may be in bioavailable forms, sorbed on clays or hydroxide phases. However, organic-rich fine-grained sediment in the numerous wetlands may sequester significant amounts of metals, limiting their bioavailability.
A high proportion of organic matter in waters and some sediments will play a key role in the resiliency of these watersheds to potential additional metal loads associated with future mining operations.
Synchronization for Optical PPM with Inter-Symbol Guard Times
NASA Astrophysics Data System (ADS)
Rogalin, R.; Srinivasan, M.
2017-05-01
Deep space optical communications promises orders of magnitude growth in communication capacity, supporting high data rate applications such as video streaming and high-bandwidth science instruments. Pulse position modulation is the modulation format of choice for deep space applications, and by inserting inter-symbol guard times between the symbols, the signal carries the timing information needed by the demodulator. Accurately extracting this timing information is crucial to demodulating and decoding this signal. In this article, we propose a number of timing and frequency estimation schemes for this modulation format, and in particular highlight a low complexity maximum likelihood timing estimator that significantly outperforms the prior art in this domain. This method does not require an explicit synchronization sequence, freeing up channel resources for data transmission.
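Because the guard slots carry no optical pulses, folding the detected slot counts onto one symbol period and finding the offset that leaves the least energy in the presumed guard positions yields a simple timing metric. The following is a simplified, noiseless sketch of that idea (not the authors' maximum likelihood estimator; the modulation parameters and demo signal are hypothetical):

```python
import numpy as np

def estimate_symbol_timing(counts, M=16, guard=4):
    """Estimate the symbol-boundary offset (in slots) for M-ary PPM with
    `guard` empty guard slots per symbol, from a sequence of slot counts.
    Guard slots carry no pulses, so the best offset is the one that puts
    the least detected energy into the presumed guard positions."""
    period = M + guard
    n_sym = len(counts) // period
    folded = np.asarray(counts[:n_sym * period], float).reshape(n_sym, period)
    profile = folded.sum(axis=0)      # counts folded onto one symbol period
    best_tau, best_metric = 0, -np.inf
    for tau in range(period):
        # with offset tau, guard slots occupy positions [tau+M, tau+period)
        guard_idx = (np.arange(M, period) + tau) % period
        metric = profile.sum() - profile[guard_idx].sum()  # energy in data slots
        if metric > best_metric:
            best_tau, best_metric = tau, metric
    return best_tau

# Hypothetical demo: 40 symbols of 16-PPM with 4 guard slots, true offset 7
counts = np.zeros(1000)
for k in range(40):
    counts[7 + 20 * k + (k % 16)] = 5.0   # one pulse per symbol, in a data slot
tau_hat = estimate_symbol_timing(counts, M=16, guard=4)
```

No explicit synchronization sequence is needed: the data pulses themselves, combined with the known guard structure, carry the timing information, which mirrors the property the article highlights.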
Numerical Investigations of High Pressure Acoustic Waves in Resonators
NASA Technical Reports Server (NTRS)
Athavale, Mahesh; Pindera, Maciej; Daniels, Christopher C.; Steinetz, Bruce M.
2004-01-01
This presentation describes numerical investigations of nonlinear acoustic phenomena in resonators that can generate high-pressure waves using acoustic forcing of the flow. Time-accurate simulations of the flow in a closed cone resonator were performed at different oscillation frequencies and amplitudes, and the numerical results for the resonance frequency and fluid pressure increase match the GRC experimental data well. Work on cone resonator assembly simulations has started and will involve calculations of the flow through the resonator assembly with and without acoustic excitation. A new technique for direct calculation of the resonance frequency of complex-shaped resonators is also being investigated. Script-driven command procedures will also be developed for optimization of the resonator shape for maximum pressure increase.
Designing Adaptive Low Dissipative High Order Schemes
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sjoegreen, B.; Parks, John W. (Technical Monitor)
2002-01-01
Proper control of the numerical dissipation/filter to accurately resolve all relevant multiscales of complex flow problems, while still maintaining nonlinear stability and efficiency for long-time numerical integrations, poses a great challenge to the design of numerical methods. The required type and amount of numerical dissipation/filter are not only dependent on the physical problem, but also vary from one flow region to another. This is particularly true for unsteady high-speed shock/shear/boundary-layer/turbulence/acoustics interactions and/or combustion problems, since the dynamics of the nonlinear effects in these flows are not well understood. Even with extensive grid refinement, it is of paramount importance to have proper control of the type and amount of numerical dissipation/filter in the regions where it is needed.
Modeling of a Sequential Two-Stage Combustor
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Liu, N.-S.; Gallagher, J. R.; Ryder, R. C.; Brankovic, A.; Hendricks, J. A.
2005-01-01
A sequential two-stage, natural gas fueled power generation combustion system is modeled to examine the fundamental aerodynamic and combustion characteristics of the system. The modeling methodology includes CAD-based geometry definition, and combustion computational fluid dynamics analysis. Graphical analysis is used to examine the complex vortical patterns in each component, identifying sources of pressure loss. The simulations demonstrate the importance of including the rotating high-pressure turbine blades in the computation, as this results in direct computation of combustion within the first turbine stage, and accurate simulation of the flow in the second combustion stage. The direct computation of hot-streaks through the rotating high-pressure turbine stage leads to improved understanding of the aerodynamic relationships between the primary and secondary combustors and the turbomachinery.
Gadolinium-based nanoparticles for highly efficient T1-weighted magnetic resonance imaging
NASA Astrophysics Data System (ADS)
Lim, Eun-Kyung; Kang, Byunghoon; Choi, Yuna; Jang, Eunji; Han, Seungmin; Lee, Kwangyeol; Suh, Jin-Suck; Haam, Seungjoo; Huh, Yong-Min
2014-06-01
We developed Pyrene-Gadolinium (Py-Gd) nanoparticles as pH-sensitive magnetic resonance imaging (MRI) contrast agents capable of showing a high MR signal in cancer-specific environments, such as acidic conditions. Py-Gd nanoparticles were prepared by coating Py-Gd, a complex of gadolinium with pyrenyl molecules, with pyrenyl polyethylene glycol (PEG) using a nano-emulsion method. These particles show better longitudinal relaxation time (T1) MR signals under acidic conditions than under neutral conditions. Furthermore, the particles exhibit biocompatibility and MR contrast effects in both in vitro and in vivo studies. From these results, we confirm that Py-Gd nanoparticles have the potential to be applied for accurate cancer diagnosis and therapy.
Optimal Output of Distributed Generation Based On Complex Power Increment
NASA Astrophysics Data System (ADS)
Wu, D.; Bao, H.
2017-12-01
In order to meet the growing demand for electricity and improve the cleanliness of power generation, new energy sources such as wind and photovoltaic generation have been widely adopted. These sources connect to the distribution network as distributed generation and are consumed by local loads. However, as the scale of distributed generation connected to the network grows, optimizing its power output becomes increasingly important and requires further study. Classical optimization methods often use the extended sensitivity method to obtain the relationship between different generators, but ignoring the coupling parameters between nodes makes the results inaccurate; heuristic algorithms also have defects such as slow calculation speed and uncertain outcomes. This article proposes a method called complex power increment; its essence is analysis of the power grid under steady power flow. From the results we obtain a complex scaling function equation between the power supplies, whose coefficients are based on the impedance parameters of the network, so the relation of the variables to the coefficients is described more precisely. Thus, the method can accurately describe the power increment relationship, and can obtain the power optimization scheme more accurately and quickly than the extended sensitivity method and heuristic methods.
NASA Astrophysics Data System (ADS)
Parise, M.
2018-01-01
A highly accurate analytical solution is derived for the electromagnetic problem of a short vertical wire antenna located on a stratified ground. The derivation consists of three steps. First, the integration path of the integrals describing the fields of the dipole is deformed and wrapped around the pole singularities and the two vertical branch cuts of the integrands located in the upper half of the complex plane. This allows the radiated field to be decomposed into its three contributions, namely the above-surface ground wave, the lateral wave, and the trapped surface waves. Next, the square root terms responsible for the branch cuts are extracted from the integrands of the branch-cut integrals. Finally, the extracted square roots are replaced with their rational representations according to Newton's square root algorithm, and the residue theorem is applied to give explicit expressions, in series form, for the fields. The rigorous integration procedure and the convergence of the square root algorithm ensure that the obtained formulas converge to the exact solution. Numerical simulations are performed to show the validity and robustness of the developed formulation, as well as its advantages in terms of time cost over standard numerical integration procedures.
Bayesian analysis of physiologically based toxicokinetic and toxicodynamic models.
Hack, C Eric
2006-04-17
Physiologically based toxicokinetic (PBTK) and toxicodynamic (TD) models of bromate in animals and humans would improve our ability to accurately estimate toxic doses in humans based on available animal studies. These mathematical models are often highly parameterized and must be calibrated for the model predictions of internal dose to adequately fit the experimentally measured doses. Highly parameterized models are difficult to calibrate, and it is difficult to obtain accurate estimates of uncertainty or variability in model parameters with commonly used frequentist calibration methods, such as maximum likelihood estimation (MLE) or least-squares error approaches. A Bayesian approach based on Markov chain Monte Carlo (MCMC) sampling can be used to successfully calibrate these complex models. Prior knowledge about the biological system and associated model parameters is easily incorporated in this approach in the form of prior parameter distributions, which are refined or updated using experimental data to generate posterior distributions of parameter estimates. The goal of this paper is to give the non-mathematician a brief description of the Bayesian approach and Markov chain Monte Carlo analysis, how this technique is used in risk assessment, and the issues associated with this approach.
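The MCMC calibration idea can be sketched on a deliberately tiny example. The one-compartment model, prior, and all numbers below are hypothetical stand-ins for a real PBTK model, chosen only to show how a posterior distribution is sampled with random-walk Metropolis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "experimental" data from a hypothetical one-compartment model:
# C(t) = C0 * exp(-k * t), with Gaussian measurement noise.
C0, k_true, sigma = 10.0, 0.3, 0.2
t = np.linspace(0.5, 10, 12)
obs = C0 * np.exp(-k_true * t) + sigma * rng.standard_normal(t.size)

def log_post(k):
    """Log posterior = Gaussian log-likelihood + lognormal log-prior on k."""
    if k <= 0:
        return -np.inf
    pred = C0 * np.exp(-k * t)
    log_lik = -0.5 * np.sum((obs - pred) ** 2) / sigma**2
    log_prior = -0.5 * (np.log(k) - np.log(0.5)) ** 2
    return log_lik + log_prior

# Random-walk Metropolis: propose a step, accept with prob min(1, ratio).
k, lp = 0.5, log_post(0.5)
samples = []
for i in range(20000):
    k_new = k + 0.05 * rng.standard_normal()
    lp_new = log_post(k_new)
    if np.log(rng.random()) < lp_new - lp:
        k, lp = k_new, lp_new
    if i >= 5000:            # discard burn-in
        samples.append(k)

post_mean = np.mean(samples)
print(round(post_mean, 2))
```

The prior distribution encodes the "prior knowledge about the biological system" mentioned above, and the retained samples are draws from the posterior that the data has updated.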
Ghandehari, Masoud; Emig, Thorsten; Aghamohamadnia, Milad
2018-02-02
Despite decades of research seeking to derive the urban energy budget, the dynamics of thermal exchange in the densely constructed environment are not yet well understood. Using New York City as a study site, we present a novel hybrid experimental-computational approach for better understanding radiative heat transfer in complex urban environments. The aim of this work is to contribute to the calculation of the urban energy budget, particularly the stored energy; we focus our attention on surface thermal radiation. Improved understanding of urban thermodynamics incorporating the interaction of various bodies, particularly in high-rise cities, will have implications for energy conservation at the building scale, and for human health and comfort at the urban scale. The platform presented is based on longwave hyperspectral imaging of nearly 100 blocks of Manhattan, in addition to a geospatial radiosity model that describes the collective radiative heat exchange between multiple buildings. Despite assumptions about the surface emissivity and thermal conductivity of building walls, the close agreement between temperatures derived from measurements and from computations is promising. The results imply that the presented geospatial thermodynamic model of urban structures can enable accurate, high resolution analysis of instantaneous urban surface temperatures.
Nonlinear detection for a high rate extended binary phase shift keying system.
Chen, Xian-Qing; Wu, Le-Nan
2013-03-28
The algorithm and results of a nonlinear detector using a machine learning technique called the support vector machine (SVM) on an efficient modulation system with high data rate and low energy consumption are presented in this paper. Simulation results showed that the performance achieved by the SVM detector is comparable to that of a conventional threshold decision (TD) detector. Both detectors detect the received signals together with the special impacting filter (SIF), which improves energy utilization efficiency. However, unlike the TD detector, the SVM detector concentrates not only on reducing the BER, but also on providing accurate posterior probability estimates (PPEs), which can be used as soft inputs to the LDPC decoder. The complexity of the detector is addressed by using four features and simplifying the decision function. In addition, bandwidth-efficient transmission is analyzed with both the SVM and TD detectors. The SVM detector is more robust to sampling rate than the TD detector. We find that the SVM is suitable for extended binary phase shift keying (EBPSK) signal detection and can provide accurate posterior probabilities for LDPC decoding.
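The contrast between hard decisions and SVM posterior probabilities can be sketched with scikit-learn's SVC as a stand-in. The four-dimensional features and the data below are synthetic placeholders, not the paper's SIF outputs:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Toy stand-in for post-filter detection statistics: bit 0 and bit 1
# produce 4-dimensional feature vectors with different means.
n = 400
X0 = rng.standard_normal((n, 4))
X1 = rng.standard_normal((n, 4)) + 1.0
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# probability=True enables Platt-scaled posterior probability estimates.
svm = SVC(kernel="rbf", probability=True).fit(X, y)

X_test = np.vstack([rng.standard_normal((100, 4)),
                    rng.standard_normal((100, 4)) + 1.0])
y_test = np.array([0] * 100 + [1] * 100)

# Hard decisions (comparable to a threshold detector)...
acc = (svm.predict(X_test) == y_test).mean()

# ...plus posterior probabilities, convertible to log-likelihood ratios
# usable as soft inputs to an LDPC decoder.
p1 = svm.predict_proba(X_test)[:, 1]
llr = np.log(p1 / (1 - p1))
print(acc, llr.shape)
```

The soft LLR outputs are the point of the SVM detector here: a threshold detector discards the reliability information that the decoder can exploit.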
Shape Sensing Techniques for Continuum Robots in Minimally Invasive Surgery: A Survey.
Shi, Chaoyang; Luo, Xiongbiao; Qi, Peng; Li, Tianliang; Song, Shuang; Najdovski, Zoran; Fukuda, Toshio; Ren, Hongliang
2017-08-01
Continuum robots provide inherent structural compliance with high dexterity to access surgical target sites along tortuous anatomical paths under constrained environments, and enable complex and delicate operations to be performed through small incisions in minimally invasive surgery. These advantages enable broad applications with minimal trauma and make challenging clinical procedures possible with miniaturized instrumentation and high curvilinear access capabilities. However, their inherently deformable designs make it difficult to realize 3-D intraoperative real-time shape sensing to accurately model their shape. Solutions to this limitation would also advance the closely associated techniques of closed-loop control, path planning, human-robot interaction, and surgical manipulation safety in minimally invasive surgery. Although extensive model-based research relying on kinematics and mechanics has been performed, accurate shape sensing of continuum robots remains challenging, particularly in cases of unknown and dynamic payloads. This survey investigates recent advances in alternative emerging techniques for 3-D shape sensing in this field, focusing on the following categories: fiber-optic-sensor-based, electromagnetic-tracking-based, and intraoperative imaging modality-based shape-reconstruction methods. The limitations of existing technologies and the prospects of new technologies are also discussed.
A new, high-precision measurement of the X-ray Cu Kα spectrum
NASA Astrophysics Data System (ADS)
Mendenhall, Marcus H.; Cline, James P.; Henins, Albert; Hudson, Lawrence T.; Szabo, Csilla I.; Windover, Donald
2016-03-01
One of the primary measurement issues addressed with NIST Standard Reference Materials (SRMs) for powder diffraction is that of line position. SRMs for this purpose are certified with respect to lattice parameter, traceable to the SI through precise measurement of the emission spectrum of the X-ray source. Therefore, accurate characterization of the emission spectrum is critical to a minimization of the error bounds on the certified parameters. The presently accepted sources for the SI-traceable characterization of the Cu Kα emission spectrum are those of Härtwig, Hölzer et al., published in the 1990s. The structure of the X-ray emission lines of the Cu Kα complex has been remeasured on a newly commissioned double-crystal instrument, with six-bounce Si (440) optics, in a manner directly traceable to the SI definition of the meter. In this measurement, the entire region from 8020 eV to 8100 eV has been covered with a highly precise angular scale and well-defined system efficiency, providing accurate wavelengths and relative intensities. This measurement is in modest disagreement with reference values for the wavelength of the Kα1 line, and strong disagreement for the wavelength of the Kα2 line.
NASA Technical Reports Server (NTRS)
Elrod, David; Christensen, Eric; Brown, Andrew
2011-01-01
At NASA/MSFC, Structural Dynamics personnel continue to perform advanced analysis for the turbomachinery in the J2X Rocket Engine, which is under consideration for the new Space Launch System. One of the most challenging analyses in the program is predicting turbine blade structural capability. Resonance was predicted by modal analysis, so comprehensive forced response analyses using high fidelity cyclic symmetric finite element models were initiated as required. Analysis methodologies up to this point had assumed the flow field could be fully described by a sector, so the loading on every blade would be identical as it travelled through it. However, in the J2X the CFD flow field varied over the 360 deg of a revolution because of the flow speeds and tortuous axial path. MSFC therefore developed a procedure using Nastran DMAPs and MATLAB scripts to apply this circumferentially varying loading onto the cyclically symmetric structural models and produce accurate dynamic stresses for every blade on the disk. This procedure is coupled with static, spin, and thermal loading to produce high cycle fatigue safety factors, resulting in much more accurate analytical assessments of the blades.
High-Accuracy, Compact Scanning Method and Circuit for Resistive Sensor Arrays.
Kim, Jong-Seok; Kwon, Dae-Yong; Choi, Byong-Deok
2016-01-26
The zero-potential scanning circuit is widely used as the read-out circuit for resistive sensor arrays because it removes a well-known problem: crosstalk current. Zero-potential scanning circuits can be divided into two groups based on the type of row driver. One type uses digital buffers as the row driver. It is easy to implement because of its simple structure, but we found that it can cause a large read-out error, originating from the on-resistance of the digital buffers. The other type uses a row driver composed of operational amplifiers. It reads the sensor resistance very accurately, but it uses a large number of operational amplifiers to drive the rows of the sensor array, which severely increases power consumption, cost, and system complexity. To resolve the inaccuracy or high complexity found in these previous circuits, we propose a new row driver that uses only one operational amplifier to drive all rows of a sensor array with high accuracy. Measurement results with the proposed circuit driving a 4 × 4 resistor array show a maximum error of only 0.1%, remarkably reduced from the 30.7% of the previous counterpart.
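Why buffer on-resistance biases the reading can be shown with a deliberately simplified series model; the resistor values are illustrative and this is not the paper's full circuit analysis, where the error also depends on the array configuration:

```python
# Relative read-out error when a driver's on-resistance R_on appears in
# series with the sensed element R_s: the circuit measures R_s + R_on.
def readout_error(r_sensor, r_on):
    measured = r_sensor + r_on        # simplified series model
    return (measured - r_sensor) / r_sensor

# A 100-ohm buffer on-resistance against a 1 kilohm sensor element:
print(f"{readout_error(1000.0, 100.0):.1%}")   # prints 10.0%
```

Replacing the buffers with an op-amp driver removes R_on from the measurement path, which is why the op-amp-based drivers read accurately at the price of component count.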
Numerical Methods of Computational Electromagnetics for Complex Inhomogeneous Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Wei
Understanding electromagnetic phenomena is key in many scientific investigations and engineering designs, such as solar cell design, the study of biological ion channels in disease, and the creation of clean fusion energy. The objectives of the project are to develop high order numerical methods to simulate evanescent electromagnetic waves occurring in plasmon solar cells and biological ion channels, where local field enhancement within random media in the former and long range electrostatic interactions in the latter pose major challenges for accurate and efficient numerical computation. We have accomplished these objectives by developing high order numerical methods for solving Maxwell equations, such as high order finite element bases for discontinuous Galerkin methods, a well-conditioned Nedelec edge element method, a divergence free finite element basis for MHD, and fast integral equation methods for layered media. These methods can be used to model the complex local field enhancement in plasmon solar cells. On the other hand, to treat long range electrostatic interaction in ion channels, we have developed an image charge based method for a hybrid model combining atomistic electrostatics and continuum Poisson-Boltzmann electrostatics. Such a hybrid model will speed up molecular dynamics simulations of transport in biological ion channels.
Complex network structure influences processing in long-term and short-term memory.
Vitevitch, Michael S; Chan, Kit Ying; Roodenrys, Steven
2012-07-01
Complex networks describe how entities in systems interact; the structure of such networks is argued to influence processing. One measure of network structure, clustering coefficient, C, measures the extent to which neighbors of a node are also neighbors of each other. Previous psycholinguistic experiments found that the C of phonological word-forms influenced retrieval from the mental lexicon (that portion of long-term memory dedicated to language) during the on-line recognition and production of spoken words. In the present study we examined how network structure influences other retrieval processes in long- and short-term memory. In a false-memory task-examining long-term memory-participants falsely recognized more words with low- than high-C. In a recognition memory task-examining veridical memories in long-term memory-participants correctly recognized more words with low- than high-C. However, participants in a serial recall task-examining redintegration in short-term memory-recalled lists comprised of high-C words more accurately than lists comprised of low-C words. These results demonstrate that network structure influences cognitive processes associated with several forms of memory including lexical, long-term, and short-term.
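The clustering coefficient C manipulated in these experiments is simple to compute directly: it is the fraction of pairs of a node's neighbors that are themselves connected. A pure-Python sketch on a toy phonological neighborhood (the word list is illustrative, not the study's stimuli):

```python
# Clustering coefficient of node v: fraction of neighbor pairs that are
# themselves linked, i.e. 2 * links / (k * (k - 1)) for degree k.
def clustering(adj, v):
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i, a in enumerate(nbrs)
                  for b in nbrs[i + 1:] if b in adj[a])
    return 2 * links / (k * (k - 1))

# Toy phonological network: edges join word-forms one sound apart.
adj = {
    "cat": ["hat", "bat", "cap"],
    "hat": ["cat", "bat"],
    "bat": ["cat", "hat"],
    "cap": ["cat"],
}
print(clustering(adj, "cat"))   # 1 of 3 neighbor pairs linked -> 0.333...
```

A word like "cat" with interconnected neighbors is a high-C word-form in the study's terms; a word whose neighbors do not know each other has low C.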
Robust active contour via additive local and global intensity information based on local entropy
NASA Astrophysics Data System (ADS)
Yuan, Shuai; Monkam, Patrice; Zhang, Feng; Luan, Fangjun; Koomson, Ben Alfred
2018-01-01
Active contour-based image segmentation can be a very challenging task due to many factors, such as high intensity inhomogeneity, presence of noise, complex shapes, objects with weak boundaries, and dependence on the position of the initial contour. We propose a level set-based active contour method to segment complex-shaped objects from images corrupted by noise and high intensity inhomogeneity. The energy function of the proposed method combines global intensity information and local intensity information with regularization factors. First, the global intensity term is based on a formulation that considers two intensity values for each region instead of one, which outperforms the well-known Chan-Vese model in delineating image information. Second, the local intensity term is formulated from local entropy, computed over the distribution of image brightness using the generalized Gaussian distribution as the kernel function; it can therefore accurately handle high intensity inhomogeneity and noise. Moreover, our model does not depend on the position of the initial curve. Finally, extensive experiments on various images illustrate the performance of the proposed method.
Jeong, Hyundoo; Yoon, Byung-Jun
2017-03-14
Network querying algorithms provide computational means to identify conserved network modules in large-scale biological networks that are similar to known functional modules, such as pathways or molecular complexes. Two main challenges for network querying algorithms are the high computational complexity of detecting potential isomorphism between the query and target graphs and ensuring the biological significance of the query results. In this paper, we propose SEQUOIA, a novel network querying algorithm that effectively addresses these issues by utilizing a context-sensitive random walk (CSRW) model for network comparison and minimizing the network conductance of potential matches in the target network. The CSRW model, inspired by the pair hidden Markov model (pair-HMM) widely used for sequence comparison and alignment, can accurately assess the node-to-node correspondence between different graphs by accounting for node insertions and deletions. The proposed algorithm identifies high-scoring network regions based on the CSRW scores, which are subsequently extended by maximally reducing the network conductance of the identified subnetworks. Performance assessment based on real PPI networks and known molecular complexes shows that SEQUOIA outperforms existing methods and clearly enhances the biological significance of the query results. The source code and datasets can be downloaded from http://www.ece.tamu.edu/~bjyoon/SEQUOIA.
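The network conductance criterion that SEQUOIA minimizes has a standard definition: the number of edges leaving a node set S divided by the smaller of the edge volumes inside and outside S. A small sketch on a toy graph (not a real PPI network):

```python
# Conductance of a candidate module S: cut(S) / min(vol(S), vol(not S)).
# Lower conductance = fewer edges leak out, so S is a better-separated module.
def conductance(edges, S):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    cut = sum(1 for u, v in edges if (u in S) != (v in S))
    vol_S = sum(deg[u] for u in S)
    vol_rest = sum(d for u, d in deg.items() if u not in S)
    return cut / min(vol_S, vol_rest)

# Toy graph: a dense triangle {A,B,C} weakly tied to the pair {D,E}.
edges = [("A", "B"), ("B", "C"), ("A", "C"), ("C", "D"), ("D", "E")]
print(conductance(edges, {"A", "B", "C"}))   # 1 cut edge / min(7, 3)
```

Extending a match only while this quantity decreases keeps the reported subnetwork cohesive, which is how the algorithm ties its query results to biologically meaningful modules.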
Determination of effective loss factors in reduced SEA models
NASA Astrophysics Data System (ADS)
Chimeno Manguán, M.; Fernández de las Heras, M. J.; Roibás Millán, E.; Simón Hidalgo, F.
2017-01-01
The definition of Statistical Energy Analysis (SEA) models for large complex structures is highly conditioned by the classification of the structure's elements into a set of coupled subsystems and the subsequent determination of the loss factors representing both the internal damping and the coupling between subsystems. The accurate definition of the complete system can lead to excessively large models as size and complexity increase, which can also raise practical issues for the experimental determination of the loss factors. This work presents a formulation of reduced SEA models for incomplete systems defined by a set of effective loss factors. The reduced SEA model provides a feasible number of subsystems for the application of the Power Injection Method (PIM). In highly complex structures, access to components such as internal equipment or panels can be restricted; in these cases PIM cannot be used to carry out an experimental SEA analysis. New methods are presented for this case in combination with the reduced SEA models, allowing the determination of model loss factors that could not be obtained through PIM. The methods are validated on a numerical analysis case and are also applied to an actual spacecraft structure with accessibility restrictions: a solar wing in folded configuration.
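The Power Injection Method referenced above can be sketched numerically: injecting unit power into each subsystem in turn and measuring the resulting energy matrix E lets the loss factor matrix L be recovered from the SEA balance P = ωLE. This is a minimal two-subsystem illustration with made-up loss factors, not the paper's reduced-model formulation:

```python
import numpy as np

omega = 2 * np.pi * 1000.0           # analysis band centre frequency (rad/s)

# Synthetic "true" loss factor matrix for a 2-subsystem SEA model:
# diagonal = total loss factors, off-diagonal = -coupling loss factors.
L_true = np.array([[0.02, -0.005],
                   [-0.004, 0.03]])

# PIM experiments: column j of E holds the subsystem energies measured
# when unit power is injected into subsystem j alone.
P = np.eye(2)                                 # unit injected power
E = np.linalg.solve(omega * L_true, P)        # simulated measurements

# Recover the loss factor matrix from the measured energies.
L_est = P @ np.linalg.inv(E) / omega
print(np.allclose(L_est, L_true))
```

In the reduced models described here, the same inversion is applied to the accessible subsystems only, yielding effective loss factors that absorb the influence of the inaccessible components.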
Determination for Enterobacter cloacae based on a europium ternary complex labeled DNA probe
NASA Astrophysics Data System (ADS)
He, Hui; Niu, Cheng-Gang; Zeng, Guang-Ming; Ruan, Min; Qin, Pin-Zhu; Liu, Jing
2011-11-01
The fast detection and accurate diagnosis of prevalent pathogenic bacteria is very important for the treatment of disease, and fluorescence techniques are now important diagnostic tools. A two-probe tandem DNA hybridization assay was designed for the detection of Enterobacter cloacae based on time-resolved fluorescence. The authors had previously synthesized a novel europium ternary complex, Eu(TTA)3(5-NH2-phen), with intense luminescence, high fluorescence quantum yield and long lifetime. We developed a method based on this europium complex for the specific detection of DNA extracted from E. cloacae. In the hybridization assay format, the reporter probe was labeled with Eu(TTA)3(5-NH2-phen) on the 5'-terminus, and the capture probe was covalently immobilized on the surface of glutaraldehyde-treated glass slides. The extracted sample DNA was used directly, without any purification or amplification. Detection was conducted by monitoring the fluorescence intensity from the glass surface after DNA hybridization. The detection limit of the DNA was 5 × 10⁻¹⁰ mol L⁻¹. The results of the present work proved that this new approach is easy to operate, with high sensitivity and specificity, and that it could serve as a powerful tool for the detection of pathogenic microorganisms in the environment.
ALC: automated reduction of rule-based models
Koschorreck, Markus; Gilles, Ernst Dieter
2008-01-01
Background Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction since the association of a few proteins can give rise to an enormous amount of feasible protein complexes. The layer-based approach is an approximative, but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files.
Lu, Qiongshi; Li, Boyang; Ou, Derek; Erlendsdottir, Margret; Powles, Ryan L; Jiang, Tony; Hu, Yiming; Chang, David; Jin, Chentian; Dai, Wei; He, Qidu; Liu, Zefeng; Mukherjee, Shubhabrata; Crane, Paul K; Zhao, Hongyu
2017-12-07
Despite the success of large-scale genome-wide association studies (GWASs) on complex traits, our understanding of their genetic architecture is far from complete. Jointly modeling multiple traits' genetic profiles has provided insights into the shared genetic basis of many complex traits. However, large-scale inference sets a high bar for both statistical power and biological interpretability. Here we introduce a principled framework to estimate annotation-stratified genetic covariance between traits using GWAS summary statistics. Through theoretical and numerical analyses, we demonstrate that our method provides accurate covariance estimates, thereby enabling researchers to dissect both the shared and distinct genetic architecture across traits to better understand their etiologies. Among 50 complex traits with publicly accessible GWAS summary statistics (N total ≈ 4.5 million), we identified more than 170 pairs with statistically significant genetic covariance. In particular, we found strong genetic covariance between late-onset Alzheimer disease (LOAD) and amyotrophic lateral sclerosis (ALS), two major neurodegenerative diseases, in single-nucleotide polymorphisms (SNPs) with high minor allele frequencies and in SNPs located in the predicted functional genome. Joint analysis of LOAD, ALS, and other traits highlights LOAD's correlation with cognitive traits and hints at an autoimmune component for ALS. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.
2018-04-01
The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
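The fifth-order WENO building block named above follows a standard recipe: three third-order candidate stencils blended by smoothness-weighted coefficients, so that the scheme stays sharp at steep fronts. A minimal sketch of the classic WENO-JS left-biased reconstruction, independent of WRF-Fire's actual implementation:

```python
import numpy as np

# Fifth-order WENO-JS reconstruction of the interface value f_{i+1/2}
# from five cell averages f_{i-2}..f_{i+2}.
def weno5(fm2, fm1, f0, fp1, fp2, eps=1e-6):
    # Three third-order candidate reconstructions.
    p0 = (2*fm2 - 7*fm1 + 11*f0) / 6
    p1 = (-fm1 + 5*f0 + 2*fp1) / 6
    p2 = (2*f0 + 5*fp1 - fp2) / 6
    # Smoothness indicators: large where a stencil crosses a steep front.
    b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 1/4*(fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 1/4*(fm1 - fp1)**2
    b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 1/4*(3*f0 - 4*fp1 + fp2)**2
    # Nonlinear weights: the ideal weights (0.1, 0.6, 0.3) biased away
    # from non-smooth stencils.
    a = np.array([0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2])
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2

# Smooth linear data: recovers the exact interface value (≈ 2.5 here).
print(weno5(0.0, 1.0, 2.0, 3.0, 4.0))
```

Near a discontinuity the weights collapse onto the smoothest stencil, which is what lets the high-order scheme preserve the sharp fire-front gradients the abstract discusses.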
NASA Astrophysics Data System (ADS)
Vermeire, B. C.; Witherden, F. D.; Vincent, P. E.
2017-04-01
First- and second-order accurate numerical methods, implemented for CPUs, underpin the majority of industrial CFD solvers. Whilst this technology has proven very successful at solving steady-state problems via a Reynolds Averaged Navier-Stokes approach, its utility for undertaking scale-resolving simulations of unsteady flows is less clear. High-order methods for unstructured grids and GPU accelerators have been proposed as an enabling technology for unsteady scale-resolving simulations of flow over complex geometries. In this study we systematically compare accuracy and cost of the high-order Flux Reconstruction solver PyFR running on GPUs and the industry-standard solver STAR-CCM+ running on CPUs when applied to a range of unsteady flow problems. Specifically, we perform comparisons of accuracy and cost for isentropic vortex advection (EV), decay of the Taylor-Green vortex (TGV), turbulent flow over a circular cylinder, and turbulent flow over an SD7003 aerofoil. We consider two configurations of STAR-CCM+: a second-order configuration, and a third-order configuration, where the latter was recommended by CD-adapco for more effective computation of unsteady flow problems. Results from both PyFR and STAR-CCM+ demonstrate that third-order schemes can be more accurate than second-order schemes for a given cost: going from second- to third-order, for example, the PyFR simulations of the EV and TGV achieve 75× and 3× error reductions respectively for the same or reduced cost, and the STAR-CCM+ simulations of the cylinder recovered wake statistics significantly more accurately for only twice the cost. Moreover, advancing to higher-order schemes on GPUs with PyFR was found to offer even further accuracy vs. cost benefits relative to industry-standard tools.
Computational simulation and aerodynamic sensitivity analysis of film-cooled turbines
NASA Astrophysics Data System (ADS)
Massa, Luca
A computational tool is developed for the time accurate sensitivity analysis of the stage performance of hot gas, unsteady turbine components. An existing turbomachinery internal flow solver is adapted to the high temperature environment typical of the hot section of jet engines. A real gas model and film cooling capabilities are successfully incorporated in the software. The modifications to the existing algorithm are described; both the theoretical model and the numerical implementation are validated. The accuracy of the code in evaluating turbine stage performance is tested using a turbine geometry typical of the last stage of aeronautical jet engines. The results of the performance analysis show that the predictions differ from the experimental data by less than 3%. A reliable grid generator, applicable to the domain discretization of the internal flow field of axial-flow turbines, is developed. A sensitivity analysis capability is added to the flow solver, by rendering it able to accurately evaluate the derivatives of the time varying output functions. The complex Taylor's series expansion (CTSE) technique is reviewed. Two formulations of the CTSE are used to demonstrate the accuracy and time dependency of the differentiation process. The results are compared with finite differences (FD) approximations. The CTSE is more accurate than the FD, but less efficient. A "black box" differentiation of the source code, resulting from the automated application of the CTSE, generates high fidelity sensitivity algorithms, but with low computational efficiency and high memory requirements. New formulations of the CTSE are proposed and applied. Selective differentiation of the method for solving the non-linear implicit residual equation leads to sensitivity algorithms with the same accuracy but improved run time. The time dependent sensitivity derivatives are computed in run times comparable to the ones required by the FD approach.
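The CTSE exploits the fact that, for a real-analytic function, the imaginary part of f(x + ih) delivers the derivative without the subtractive cancellation that limits finite differences, so the step h can be taken near machine zero. A minimal generic sketch of the idea (not the solver's actual differentiation code):

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-20):
    """Complex Taylor's series expansion (complex-step) derivative:
    f'(x) ~ Im(f(x + i*h)) / h. The truncation error is O(h^2) and
    there is no subtraction of nearly equal numbers, so h can be tiny."""
    return np.imag(f(x + 1j*h)) / h

def forward_difference(f, x, h=1e-8):
    """Classical forward finite difference, accuracy limited by the
    trade-off between truncation error and subtractive cancellation."""
    return (f(x + h) - f(x)) / h
```

For f(x) = eˣ sin(x), the complex step matches the analytic derivative eˣ(sin x + cos x) to near machine precision, while the forward difference stalls around 1e-8, which is the "more accurate than FD" behavior noted in the abstract.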
NASA Astrophysics Data System (ADS)
Kakkos, I.; Gkiatis, K.; Bromis, K.; Asvestas, P. A.; Karanasiou, I. S.; Ventouras, E. M.; Matsopoulos, G. K.
2017-11-01
The detection of an error is the cognitive evaluation of an action outcome that is considered undesired or mismatches an expected response. Brain activity during monitoring of correct and incorrect responses elicits Event Related Potentials (ERPs) revealing complex cerebral responses to deviant sensory stimuli. Development of accurate error detection systems is of great importance both concerning practical applications and in investigating the complex neural mechanisms of decision making. In this study, data are used from an audio identification experiment that was implemented with two levels of complexity in order to investigate neurophysiological error processing mechanisms in actors and observers. To examine and analyse the variations of the processing of erroneous sensory information for each level of complexity we employ Support Vector Machines (SVM) classifiers with various learning methods and kernels using characteristic ERP time-windowed features. For dimensionality reduction and to remove redundant features we implement a feature selection framework based on Sequential Forward Selection (SFS). The proposed method provided high accuracy in identifying correct and incorrect responses both for actors and for observers with mean accuracy of 93% and 91% respectively. Additionally, computational time was reduced and the effects of the nesting problem usually occurring in SFS of large feature sets were alleviated.
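Sequential Forward Selection greedily grows a feature subset, at each step adding the single feature that most improves a classifier's held-out accuracy. A minimal sketch of the wrapper idea on synthetic data is shown below; it uses a simple nearest-centroid classifier as the scoring model, and the ERP time-windowed features, SVM kernels, and experimental parameters of the study are not reproduced:

```python
import numpy as np

def centroid_accuracy(Xtr, ytr, Xte, yte, cols):
    """Held-out accuracy of a nearest-centroid classifier restricted
    to the feature columns in `cols` (binary labels 0/1)."""
    cols = list(cols)
    c0 = Xtr[ytr == 0][:, cols].mean(axis=0)
    c1 = Xtr[ytr == 1][:, cols].mean(axis=0)
    d0 = np.linalg.norm(Xte[:, cols] - c0, axis=1)
    d1 = np.linalg.norm(Xte[:, cols] - c1, axis=1)
    pred = (d1 < d0).astype(int)
    return (pred == yte).mean()

def sequential_forward_selection(Xtr, ytr, Xte, yte, k):
    """Greedy wrapper: repeatedly add the feature that yields the
    largest gain in held-out accuracy until k features are selected."""
    selected, remaining = [], list(range(Xtr.shape[1]))
    while len(selected) < k:
        scores = [(centroid_accuracy(Xtr, ytr, Xte, yte, selected + [j]), j)
                  for j in remaining]
        _, best_j = max(scores)
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```

On data where only two of ten features carry class signal, the wrapper recovers exactly those two, which is the dimensionality-reduction effect the study relies on to remove redundant ERP features.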
Effect of shoulder model complexity in upper-body kinematics analysis of the golf swing.
Bourgain, M; Hybois, S; Thoreux, P; Rouillon, O; Rouch, P; Sauret, C
2018-06-25
The golf swing is a complex full body movement during which the spine and shoulders are highly involved. In order to determine shoulder kinematics during this movement, multibody kinematics optimization (MKO) can be recommended to limit the effect of the soft tissue artifact and to avoid joint dislocations or bone penetration in reconstructed kinematics. Classically, in golf biomechanics research, the shoulder is represented by a 3 degrees-of-freedom model representing the glenohumeral joint. More complex and physiological models are already provided in the scientific literature. Particularly, the model used in this study was a full body model and also described motions of clavicles and scapulae. This study aimed at quantifying the effect of utilizing a more complex and physiological shoulder model when studying the golf swing. Results obtained on 20 golfers showed that a more complex and physiologically-accurate model can more efficiently track experimental markers, which resulted in differences in joint kinematics. Hence, the model with 3 degrees-of-freedom between the humerus and the thorax may be inadequate when combined with MKO and a more physiological model would be beneficial. Finally, results would also be improved through a subject-specific approach for the determination of the segment lengths. Copyright © 2018 Elsevier Ltd. All rights reserved.
Paton, Robert S; Goodman, Jonathan M
2009-04-01
We have evaluated the performance of a set of widely used force fields by calculating the geometries and stabilization energies for a large collection of intermolecular complexes. These complexes are representative of a range of chemical and biological systems for which hydrogen bonding, electrostatic, and van der Waals interactions play important roles. Benchmark energies are taken from the high-level ab initio values in the JSCH-2005 and S22 data sets. All of the force fields underestimate stabilization resulting from hydrogen bonding, but the energetics of electrostatic and van der Waals interactions are described more accurately. OPLSAA gave a mean unsigned error of 2 kcal mol⁻¹ for all 165 complexes studied, and outperforms DFT calculations employing very large basis sets for the S22 complexes. The magnitudes of hydrogen bonding interactions are severely underestimated by all of the force fields tested, which contributes significantly to the overall mean error; if complexes which are predominantly bound by hydrogen bonding interactions are discounted, the mean unsigned error of OPLSAA is reduced to 1 kcal mol⁻¹. For added clarity, web-based interactive displays of the results have been developed which allow comparisons of force field and ab initio geometries to be performed and the structures viewed and rotated in three dimensions.
USDA-ARS?s Scientific Manuscript database
Sensors that can accurately measure canopy structures are prerequisites for development of advanced variable-rate sprayers. A 270° radial range laser sensor was evaluated for its accuracy to measure dimensions of target surfaces with complex shapes and sizes. An algorithm for data acquisition and 3-...
ERIC Educational Resources Information Center
Simpkins, John D.
Processing complex multivariate information effectively when relational properties of information sub-groups are ambiguous is difficult for man and man-machine systems. However, the information processing task is made easier through code study, cybernetic planning, and accurate display mechanisms. An exploratory laboratory study designed for the…
Adsorption of Cu(II) to Bacillus subtilis: A pH-dependent EXAFS and thermodynamic modelling study
NASA Astrophysics Data System (ADS)
Moon, Ellen M.; Peacock, Caroline L.
2011-11-01
Bacteria are very efficient sorbents of trace metals, and their abundance in a wide variety of natural aqueous systems means biosorption plays an important role in the biogeochemical cycling of many elements. We measured the adsorption of Cu(II) to Bacillus subtilis as a function of pH and surface loading. Adsorption edge and XAS experiments were performed at high bacteria-to-metal ratio, analogous to Cu uptake in natural geologic and aqueous environments. We report significant Cu adsorption to B. subtilis across the entire pH range studied (pH ~2-7), with adsorption increasing with pH to a maximum at pH ~6. We determine directly for the first time that Cu adsorbs to B. subtilis as a (CuO5Hn)^(n-8) monodentate, inner-sphere surface complex involving carboxyl surface functional groups. This Cu-carboxyl complex is able to account for the observed Cu adsorption across the entire pH range studied. Having determined the molecular adsorption mechanism of Cu to B. subtilis, we have developed a new thermodynamic surface complexation model for Cu adsorption that is informed by and consistent with EXAFS results. We model the surface electrostatics using the 1-pK basic Stern approximation. We fit our adsorption data to the formation of a monodentate, inner-sphere ≡RCOOCu⁺ surface complex. In agreement with previous studies, this work indicates that in order to accurately predict the fate and mobility of Cu in complex biogeochemical systems, we must incorporate the formation of Cu-bacteria surface complexes in reactive transport models. To this end, this work recommends log K(≡RCOOCu⁺) = 7.13 for geologic and aqueous systems with generally high B. subtilis-to-metal ratio.
Mezei, Pál D; Csonka, Gábor I; Ruzsinszky, Adrienn; Sun, Jianwei
2015-01-13
A correct description of the anion-π interaction is essential for the design of selective anion receptors and channels and important for advances in the field of supramolecular chemistry. However, it is challenging to do accurate, precise, and efficient calculations of this interaction, which are lacking in the literature. In this article, by testing sets of 20 binary anion-π complexes of fluoride, chloride, bromide, nitrate, or carbonate ions with hexafluorobenzene, 1,3,5-trifluorobenzene, 2,4,6-trifluoro-1,3,5-triazine, or 1,3,5-triazine and 30 ternary π-anion-π' sandwich complexes composed from the same monomers, we suggest domain-based local-pair natural orbital coupled cluster energies extrapolated to the complete basis-set limit as reference values. We give a detailed explanation of the origin of anion-π interactions, using the permanent quadrupole moments, static dipole polarizabilities, and electrostatic potential maps. We use symmetry-adapted perturbation theory (SAPT) to calculate the components of the anion-π interaction energies. We examine the performance of the direct random phase approximation (dRPA), the second-order screened exchange (SOSEX), local-pair natural-orbital (LPNO) coupled electron pair approximation (CEPA), and several dispersion-corrected density functionals (including generalized gradient approximation (GGA), meta-GGA, and double hybrid density functional). The LPNO-CEPA/1 results show the best agreement with the reference results. The dRPA method is only slightly less accurate and precise than the LPNO-CEPA/1, but it is considerably more efficient (6-17 times faster) for the binary complexes studied in this paper. For 30 ternary π-anion-π' sandwich complexes, we give dRPA interaction energies as reference values. The double hybrid functionals are much more efficient but less accurate and precise than dRPA. 
The dispersion-corrected double hybrid PWPB95-D3(BJ) and B2PLYP-D3(BJ) functionals perform better than the GGA and meta-GGA functionals for the present test set.
Deng, Ning; Li, Zhenye; Pan, Chao; Duan, Huilong
2015-01-01
The study of complex proteomes places greater demands on quantification methods based on mass spectrometry technology. In this paper, we present a mass spectrometry label-free quantification tool for complex proteomes, called freeQuant, which integrates quantification with functional analysis effectively. freeQuant consists of two well-integrated modules: label-free quantification and functional analysis with biomedical knowledge. freeQuant supports label-free quantitative analysis which makes full use of tandem mass spectrometry (MS/MS) spectral count, protein sequence length, shared peptides, and ion intensity. It adopts spectral count for quantitative analysis and builds a new method for shared peptides to accurately evaluate abundance of isoforms. For proteins with low abundance, MS/MS total ion count coupled with spectral count is included to ensure accurate protein quantification. Furthermore, freeQuant supports large-scale functional annotations for complex proteomes. Mitochondrial proteomes from the mouse heart, the mouse liver, and the human heart were used to evaluate the usability and performance of freeQuant. The evaluation showed that the quantitative algorithms implemented in freeQuant can improve accuracy of quantification with better dynamic range.
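freeQuant's own algorithm combines spectral counts with sequence length, shared peptides, and ion intensity; that algorithm is not reproduced here, but the core idea of length-normalized spectral counting can be sketched with the widely used NSAF (normalized spectral abundance factor) measure:

```python
def nsaf(spectral_counts, lengths):
    """Normalized spectral abundance factor: divide each protein's
    spectral count by its sequence length (long proteins generate more
    peptides, hence more spectra), then normalize so the values sum to
    one across the sample, making runs comparable."""
    saf = {p: spectral_counts[p] / lengths[p] for p in spectral_counts}
    total = sum(saf.values())
    return {p: v / total for p, v in saf.items()}
```

For example, with counts {'A': 10, 'B': 10, 'C': 5} and lengths {'A': 100, 'B': 200, 'C': 50}, proteins A and C come out equally abundant (0.4 each) despite C's lower raw count, because C is a much shorter protein.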
NASA Technical Reports Server (NTRS)
Yliniemi, Logan; Agogino, Adrian K.; Tumer, Kagan
2014-01-01
Accurate simulation of the effects of integrating new technologies into a complex system is critical to the modernization of our antiquated air traffic system, where there exist many layers of interacting procedures, controls, and automation all designed to cooperate with human operators. Additions of even simple new technologies may result in unexpected emergent behavior due to complex human/machine interactions. One approach is to create high-fidelity human models coming from the field of human factors that can simulate a rich set of behaviors. However, such models are difficult to produce, especially to show unexpected emergent behavior coming from many human operators interacting simultaneously within a complex system. Instead of engineering complex human models, we directly model the emergent behavior by evolving goal-directed agents, representing human users. Using evolution we can predict how the agent representing the human user reacts given his/her goals. In this paradigm, each autonomous agent in a system pursues individual goals, and the behavior of the system emerges from the interactions, foreseen or unforeseen, between the agents/actors. We show that this method reflects the integration of new technologies in a historical case, and apply the same methodology for a possible future technology.
Fast and accurate detection of spread source in large complex networks.
Paluch, Robert; Lu, Xiaoyan; Suchecki, Krzysztof; Szymański, Bolesław K; Hołyst, Janusz A
2018-02-06
Spread over complex networks is a ubiquitous process with increasingly wide applications. Locating spread sources is often important, e.g. finding patient zero in an epidemic, or the source of a rumor spreading in a social network. Pinto, Thiran and Vetterli introduced an algorithm (PTVA) to solve the important case of this problem in which a limited set of nodes act as observers and report times at which the spread reached them. PTVA uses all observers to find a solution. Here we propose a new approach in which observers with low quality information (i.e. with large spread encounter times) are ignored and potential sources are selected based on the likelihood gradient from high quality observers. The original complexity of PTVA is O(N^α), where α ∈ (3,4) depends on the network topology and number of observers (N denotes the number of nodes in the network). Our Gradient Maximum Likelihood Algorithm (GMLA) reduces this complexity to O(N^2 log N). Extensive numerical tests performed on synthetic networks and a real Gnutella network, under the limitation that the identities of spreaders are unknown to observers, demonstrate that for scale-free networks GMLA yields higher quality localization results than PTVA does.
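The principle of ranking candidate sources against observer arrival times can be sketched as follows: at unit propagation speed, the arrival time at observer o from source s is the graph distance d(s, o) plus an unknown start time, so the candidate minimizing the variance of the residuals t_o − d(s, o) is the best fit. The toy version below keeps only the k earliest (highest-quality) observers on an unweighted graph; it illustrates the observer-pruning idea, not the actual GMLA implementation:

```python
from collections import deque

def bfs_distances(adj, start):
    """Hop distances from `start` in an unweighted graph (adjacency dict)."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def locate_source(adj, arrival_times, k):
    """Keep only the k earliest observers (low arrival time = high quality),
    then pick the candidate whose distances best explain the observed times
    up to a constant start time, i.e. minimal residual variance."""
    observers = sorted(arrival_times, key=arrival_times.get)[:k]
    best, best_score = None, float('inf')
    for s in adj:
        d = bfs_distances(adj, s)
        resid = [arrival_times[o] - d[o] for o in observers]
        mean = sum(resid) / len(resid)
        score = sum((r - mean) ** 2 for r in resid)
        if score < best_score:
            best, best_score = s, score
    return best
```

For the true source the residuals are all equal to the start time, so the score is exactly zero, which is why discarding late, noisy observers can preserve localization quality while cutting cost.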
Correlations between Community Structure and Link Formation in Complex Networks
Liu, Zhen; He, Jia-Lin; Kapoor, Komal; Srivastava, Jaideep
2013-01-01
Background Links in complex networks commonly represent specific ties between pairs of nodes, such as protein-protein interactions in biological networks or friendships in social networks. However, understanding the mechanism of link formation in complex networks is a long-standing challenge for network analysis and data mining. Methodology/Principal Findings Links in complex networks have a tendency to cluster locally and form so-called communities. This widespread phenomenon reflects some underlying mechanism of link formation. To study the correlations between community structure and link formation, we present a general computational framework including a theory for network partitioning and link probability estimation. Our approach enables us to accurately identify missing links in partially observed networks in an efficient way. The links having high connection likelihoods in the communities reveal that links are formed preferentially to create cliques and accordingly promote the clustering level of the communities. The experimental results verify that such a mechanism can be well captured by our approach. Conclusions/Significance Our findings provide a new insight into understanding how links are created in the communities. The computational framework opens a wide range of possibilities to develop new approaches and applications, such as community detection and missing link prediction. PMID:24039818
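A simple concrete instance of the clique-closure tendency described above is that missing links are most likely between non-adjacent nodes that already share many neighbors. The common-neighbors predictor below is a standard baseline for missing-link prediction, not the paper's partition-based likelihood estimator:

```python
def common_neighbor_scores(adj):
    """Score every non-adjacent node pair by the number of shared
    neighbors; a high score marks a likely missing link, since adding
    it closes triangles and raises local clustering."""
    nodes = sorted(adj)
    scores = {}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if v in adj[u]:
                continue  # existing links are not candidates
            scores[(u, v)] = len(set(adj[u]) & set(adj[v]))
    return scores
```

On a small graph where nodes 'a' and 'd' sit in the same tightly knit group, the pair ('a', 'd') gets the top score, mirroring the finding that links form preferentially inside communities.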
NASA Astrophysics Data System (ADS)
Le Bouteiller, P.; Benjemaa, M.; Métivier, L.; Virieux, J.
2018-03-01
Accurate numerical computation of wave traveltimes in heterogeneous media is of major interest for a large range of applications in seismics, such as phase identification, data windowing, traveltime tomography and seismic imaging. A high level of precision is needed for traveltimes and their derivatives in applications which require quantities such as amplitude or take-off angle. Even more challenging is the anisotropic case, where the general Eikonal equation is a quartic in the derivatives of traveltimes. Despite their efficiency on Cartesian meshes, finite-difference solvers are inappropriate when dealing with unstructured meshes and irregular topographies. Moreover, reaching high orders of accuracy generally requires wide stencils and high additional computational load. To go beyond these limitations, we propose a discontinuous-finite-element-based strategy which has the following advantages: (1) the Hamiltonian formalism is general enough for handling the full anisotropic Eikonal equations; (2) the scheme is suitable for any desired high-order formulation or mixing of orders (p-adaptivity); (3) the solver is explicit whatever Hamiltonian is used (no need to find the roots of the quartic); (4) the use of unstructured meshes provides the flexibility for handling complex boundary geometries such as topographies (h-adaptivity) and radiation boundary conditions for mimicking an infinite medium. The point-source factorization principles are extended to this discontinuous Galerkin formulation. Extensive tests in smooth analytical media demonstrate the high accuracy of the method. Simulations in strongly heterogeneous media illustrate the solver robustness to realistic Earth-sciences-oriented applications.
Ding, Jun; Xiao, Hua-Ming; Liu, Simin; Wang, Chang; Liu, Xin; Feng, Yu-Qi
2018-10-05
Although several methods have realized the analysis of low molecular weight (LMW) compounds using matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) by overcoming the problem of interference with MS signals in the low mass region derived from conventional organic matrices, this emerging field still requires strategies to address the issue of analyzing complex samples containing LMW components in addition to the LMW compounds of interest, and solve the problem of lack of universality. The present study proposes an integrated strategy that combines chemical labeling with the supramolecular chemistry of cucurbit[n]uril (CB[n]) for the MALDI MS analysis of LMW compounds in complex samples. In this strategy, the target LMW compounds are first labeled by introducing a series of bifunctional reagents that selectively react with the target analytes and also form stable inclusion complexes with CB[n]. Then, the labeled products act as guest molecules that readily and selectively form stable inclusion complexes with CB[n]. This strategy relocates the MS signals of the LMW compounds of interest from the low mass region suffering high interference to the high mass region where interference with low mass components is absent. Experimental results demonstrate that a wide range of LMW compounds, including carboxylic acids, aldehydes, amines, thiols, and cis-diols, can be successfully detected using the proposed strategy, and the limits of detection were in the range of 0.01-1.76 nmol/mL. In addition, the high selectivity of the labeling reagents for the target analytes in conjunction with the high selectivity of the binding between the labeled products and CB[n] ensures an absence of signal interference with the non-targeted LMW components of complex samples. 
Finally, the feasibility of the proposed strategy for complex sample analysis is demonstrated by the accurate and rapid quantitative analysis of aldehydes in saliva and herbal medicines. As such, this work not only provides an alternative method for the detection of various LMW compounds using MALDI MS, but also can be applied to the selective and high-throughput analysis of LMW analytes in complex samples. Copyright © 2018 Elsevier B.V. All rights reserved.
Addressing Phase Errors in Fat-Water Imaging Using a Mixed Magnitude/Complex Fitting Method
Hernando, D.; Hines, C. D. G.; Yu, H.; Reeder, S.B.
2012-01-01
Accurate, noninvasive measurements of liver fat content are needed for the early diagnosis and quantitative staging of nonalcoholic fatty liver disease. Chemical shift-based fat quantification methods acquire images at multiple echo times using a multiecho spoiled gradient echo sequence, and provide fat fraction measurements through postprocessing. However, phase errors, such as those caused by eddy currents, can adversely affect fat quantification. These phase errors are typically most significant at the first echo of the echo train, and introduce bias in complex-based fat quantification techniques. These errors can be overcome using a magnitude-based technique (where the phase of all echoes is discarded), but at the cost of significantly degraded signal-to-noise ratio, particularly for certain choices of echo time combinations. In this work, we develop a reconstruction method that overcomes these phase errors without the signal-to-noise ratio penalty incurred by magnitude fitting. This method discards the phase of the first echo (which is often corrupted) while maintaining the phase of the remaining echoes (where phase is unaltered). We test the proposed method on 104 patient liver datasets (from 52 patients, each scanned twice), where the fat fraction measurements are compared to coregistered spectroscopy measurements. We demonstrate that mixed fitting is able to provide accurate fat fraction measurements with high signal-to-noise ratio and low bias over a wide choice of echo combinations. PMID:21713978
Scan-based volume animation driven by locally adaptive articulated registrations.
Rhee, Taehyun; Lewis, J P; Neumann, Ulrich; Nayak, Krishna S
2011-03-01
This paper describes a complete system to create anatomically accurate example-based volume deformation and animation of articulated body regions, starting from multiple in vivo volume scans of a specific individual. In order to solve the correspondence problem across volume scans, a template volume is registered to each sample. The wide range of pose variations is first approximated by volume blend deformation (VBD), providing proper initialization of the articulated subject in different poses. A novel registration method is presented to efficiently reduce the computation cost while avoiding strong local minima inherent in complex articulated body volume registration. The algorithm highly constrains the degrees of freedom and search space involved in the nonlinear optimization, using hierarchical volume structures and locally constrained deformation based on the biharmonic clamped spline. Our registration step establishes a correspondence across scans, allowing a data-driven deformation approach in the volume domain. The results provide an occlusion-free person-specific 3D human body model, asymptotically accurate inner tissue deformations, and realistic volume animation of articulated movements driven by standard joint control estimated from the actual skeleton. Our approach also addresses the practical issues arising in using scans from living subjects. The robustness of our algorithms is tested by their applications on the hand, probably the most complex articulated region in the body, and the knee, a frequent subject area for medical imaging due to injuries. © 2011 IEEE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoang, Tuan L.; Physical and Life Sciences Directorate, Lawrence Livermore National Laboratory, CA 94550; Marian, Jaime, E-mail: jmarian@ucla.edu
2015-11-01
An improved version of a recently developed stochastic cluster dynamics (SCD) method (Marian and Bulatov, 2012) [6] is introduced as an alternative to rate theory (RT) methods for solving coupled ordinary differential equation (ODE) systems for irradiation damage simulations. SCD circumvents by design the curse of dimensionality of the variable space that renders traditional ODE-based RT approaches inefficient when handling complex defect populations comprising multiple (more than two) defect species. Several improvements introduced here enable efficient and accurate simulations of irradiated materials up to realistic (high) damage doses characteristic of next-generation nuclear systems. The first improvement is a procedure for efficiently updating the defect reaction-network and event selection in the context of a dynamically expanding reaction-network. Next is a novel implementation of the τ-leaping method that speeds up SCD simulations by advancing the state of the reaction network in large time increments when appropriate. Lastly, a volume rescaling procedure is introduced to control the computational complexity of the expanding reaction-network through occasional reductions of the defect population while maintaining accurate statistics. The enhanced SCD method is then applied to model defect cluster accumulation in iron thin films subjected to triple ion-beam (Fe³⁺, He⁺ and H⁺) irradiations, for which standard RT or spatially-resolved kinetic Monte Carlo simulations are prohibitively expensive.
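The τ-leaping idea mentioned above is to advance the whole reaction network by a larger time increment at once, drawing the number of firings of each channel from a Poisson distribution rather than simulating events one at a time. A minimal sketch on a single first-order loss channel (illustrative only; the SCD reaction network, defect species, and rate constants are far richer):

```python
import numpy as np

def tau_leap_decay(n0, rate, t_end, tau, rng):
    """tau-leaping for the single channel A -> 0 with propensity rate*n:
    at each leap the number of firings is Poisson(rate * n * tau),
    capped so the population cannot go negative."""
    n, t = n0, 0.0
    while t < t_end and n > 0:
        firings = rng.poisson(rate * n * tau)
        n = max(n - firings, 0)
        t += tau
    return n
```

With n0 = 10000 and rate = 1, the population after t = 1 tracks the deterministic n0·e⁻¹ ≈ 3679 up to stochastic fluctuations, but requires only 100 leaps instead of thousands of individual-event draws, which is the speed-up the SCD improvement exploits.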
Operating Room Delays: Meaningful Use in Electronic Health Record.
Van Winkle, Rachelle A; Champagne, Mary T; Gilman-Mays, Meri; Aucoin, Julia
2016-06-01
Perioperative areas are the most costly to operate and account for more than 40% of expenses. The high costs prompted one organization to analyze surgical delays through a retrospective review of their new electronic health record. Electronic health records have made it easier to access and aggregate clinical data; 2123 operating room cases were analyzed. Implementing a new electronic health record system is complex; inaccurate data and poor implementation can introduce new problems. Validating the electronic health record development processes determines the ease of use and the user interface, specifically related to user compliance with the intent of the electronic health record development. The revalidation process after implementation determines if the intent of the design was fulfilled and data can be meaningfully used. In this organization, the data fields completed through automation provided quantifiable, meaningful data. However, data fields completed by staff that required subjective decision making resulted in incomplete data nearly 24% of the time. The ease of use was further complicated by 490 permutations (combinations of delay types and reasons) that were built into the electronic health record. Operating room delay themes emerged notwithstanding the significant complexity of the electronic health record build; however, improved accuracy could improve meaningful data collection and a more accurate root cause analysis of operating room delays. Accurate and meaningful use of data affords a more reliable approach in quality, safety, and cost-effective initiatives.