Sample records for path sampling method

  1. Study on high-resolution representation of terraces in Shanxi Loess Plateau area

    NASA Astrophysics Data System (ADS)

    Zhao, Weidong; Tang, Guo'an; Ma, Lei

    2008-10-01

    A new elevation-point sampling method, the TIN-based Sampling Method (TSM), and a new visualization method, the Elevation Addition Method (EAM), are put forward for representing the typical terraces of the Shanxi Loess Plateau area. The DEM Feature Points and Lines Classification (DEPLC), put forward by the authors in 2007, is refined to depict the main path in the study area, and the EAM is used to visualize the terraces and the path. A total of 406 key elevation points and 15 feature-constrained lines sampled by this method are used to construct CD-TINs that depict the terraces and path correctly and effectively. Our case study shows that the TSM is reasonable and feasible. Complicated micro-terrains such as terraces and paths can be represented with high resolution and high efficiency using the refined DEPLC, TSM, and CD-TINs, and both the terraces and the main path are visualized well by the EAM even when the terrace height is no more than 1 m.

  2. Elucidating the ensemble of functionally-relevant transitions in protein systems with a robotics-inspired method.

    PubMed

    Molloy, Kevin; Shehu, Amarda

    2013-01-01

    Many proteins tune their biological function by transitioning between different functional states, effectively acting as dynamic molecular machines. Detailed structural characterization of transition trajectories is central to understanding the relationship between protein dynamics and function. Computational approaches that build on the Molecular Dynamics framework are in principle able to model transition trajectories in great detail but also at considerable computational cost. Methods that delay consideration of dynamics and focus instead on elucidating energetically-credible conformational paths connecting two functionally-relevant structures provide a complementary approach. Effective sampling-based path planning methods originating in robotics have recently been proposed to produce conformational paths. These methods largely model short peptides or address large proteins by simplifying conformational space. We propose a robotics-inspired method that connects two given structures of a protein by sampling conformational paths. The method focuses on small- to medium-size proteins, efficiently modeling structural deformations through the use of the molecular fragment replacement technique. In particular, the method grows a tree in conformational space rooted at the start structure, steering the tree to a goal region defined around the goal structure. We investigate various bias schemes over a progress coordinate to balance coverage of conformational space against progress towards the goal. A geometric projection layer promotes path diversity. A reactive temperature scheme allows sampling of rare paths that cross energy barriers. Experiments are conducted on small- to medium-size proteins of length up to 214 amino acids and with multiple known functionally-relevant states, some of which are more than 13 Å apart from each other.
Analysis reveals that the method effectively obtains conformational paths connecting structural states that are significantly different. A detailed analysis on the depth and breadth of the tree suggests that a soft global bias over the progress coordinate enhances sampling and results in higher path diversity. The explicit geometric projection layer that biases the exploration away from over-sampled regions further increases coverage, often improving proximity to the goal by forcing the exploration to find new paths. The reactive temperature scheme is shown effective in increasing path diversity, particularly in difficult structural transitions with known high-energy barriers.

  3. Efficient sampling of parsimonious inversion histories with application to genome rearrangement in Yersinia.

    PubMed

    Miklós, István; Darling, Aaron E

    2009-06-22

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal-length series of inversions that transforms one genome arrangement into another. However, the minimum-length series of inversions (the optimal sorting path) is often not unique, as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial-time algorithm is known to count the number of optimal sorting paths or to sample from the uniform distribution over optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We compare MC4Inversion to the sampler implemented in BADGER and to a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids the bias inherent in the IS technique.
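The non-uniqueness of optimal sorting paths is easy to see on toy permutations. The sketch below is a hedged illustration only, not the paper's MC4Inversion sampler: it brute-forces unsigned reversals by breadth-first search over a tiny permutation space, counts every optimal sorting path, and samples one uniformly by weighting each step by the number of optimal continuations (the property a genome-scale uniform sampler must achieve without enumeration).

```python
import random
from collections import deque

def reversals(p):
    """All permutations reachable from p by reversing one contiguous segment."""
    return [p[:i] + tuple(reversed(p[i:j + 1])) + p[j + 1:]
            for i in range(len(p)) for j in range(i + 1, len(p))]

def sorting_paths(start):
    """Return (reversal distance, number of optimal sorting paths, one optimal
    path sampled uniformly). Brute force: feasible only for tiny permutations."""
    ident = tuple(sorted(start))
    dist = {ident: 0}
    frontier = deque([ident])
    while frontier:                      # BFS gives exact reversal distances
        p = frontier.popleft()
        for q in reversals(p):
            if q not in dist:
                dist[q] = dist[p] + 1
                frontier.append(q)
    count = {ident: 1}
    def n_paths(p):                      # optimal paths from p down to identity
        if p not in count:
            count[p] = sum(n_paths(q) for q in reversals(p)
                           if dist[q] == dist[p] - 1)
        return count[p]
    total = n_paths(start)
    rng = random.Random(0)
    path, p = [start], start
    while p != ident:                    # weight steps by downstream path counts
        nxt = [q for q in reversals(p) if dist[q] == dist[p] - 1]
        p = rng.choices(nxt, weights=[n_paths(q) for q in nxt])[0]
        path.append(p)
    return dist[start], total, path

d, total, path = sorting_paths((3, 1, 2, 4))  # distance 2, three optimal paths
```

Because each step is weighted by the number of optimal continuations, every one of the `total` optimal paths is drawn with equal probability.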

  4. Generalized Ensemble Sampling of Enzyme Reaction Free Energy Pathways

    PubMed Central

    Wu, Dongsheng; Fajer, Mikolai I.; Cao, Liaoran; Cheng, Xiaolin; Yang, Wei

    2016-01-01

    Free energy path sampling plays an essential role in the computational understanding of chemical reactions, particularly those occurring in enzymatic environments. Among the variety of molecular dynamics simulation approaches, the generalized ensemble sampling strategy is uniquely attractive because it can not only enhance the sampling of rare chemical events but also naturally ensure consistent exploration of environmental degrees of freedom. In this review, we provide a tutorial-like tour of an emerging topic: generalized ensemble sampling of enzyme reaction free energy paths. The discussion is largely focused on our own studies, particularly those based on the metadynamics free energy sampling method and the on-the-path random walk path sampling method. We hope that this brief review will provide interested practitioners with meaningful guidance for future algorithm formulation and application studies. PMID:27498634

  5. Elucidating the ensemble of functionally-relevant transitions in protein systems with a robotics-inspired method

    PubMed Central

    2013-01-01

    Background Many proteins tune their biological function by transitioning between different functional states, effectively acting as dynamic molecular machines. Detailed structural characterization of transition trajectories is central to understanding the relationship between protein dynamics and function. Computational approaches that build on the Molecular Dynamics framework are in principle able to model transition trajectories in great detail but also at considerable computational cost. Methods that delay consideration of dynamics and focus instead on elucidating energetically-credible conformational paths connecting two functionally-relevant structures provide a complementary approach. Effective sampling-based path planning methods originating in robotics have recently been proposed to produce conformational paths. These methods largely model short peptides or address large proteins by simplifying conformational space. Methods We propose a robotics-inspired method that connects two given structures of a protein by sampling conformational paths. The method focuses on small- to medium-size proteins, efficiently modeling structural deformations through the use of the molecular fragment replacement technique. In particular, the method grows a tree in conformational space rooted at the start structure, steering the tree to a goal region defined around the goal structure. We investigate various bias schemes over a progress coordinate to balance coverage of conformational space against progress towards the goal. A geometric projection layer promotes path diversity. A reactive temperature scheme allows sampling of rare paths that cross energy barriers. Results and conclusions Experiments are conducted on small- to medium-size proteins of length up to 214 amino acids and with multiple known functionally-relevant states, some of which are more than 13 Å apart from each other.
Analysis reveals that the method effectively obtains conformational paths connecting structural states that are significantly different. A detailed analysis on the depth and breadth of the tree suggests that a soft global bias over the progress coordinate enhances sampling and results in higher path diversity. The explicit geometric projection layer that biases the exploration away from over-sampled regions further increases coverage, often improving proximity to the goal by forcing the exploration to find new paths. The reactive temperature scheme is shown effective in increasing path diversity, particularly in difficult structural transitions with known high-energy barriers. PMID:24565158

  6. Efficient Sampling of Parsimonious Inversion Histories with Application to Genome Rearrangement in Yersinia

    PubMed Central

    Darling, Aaron E.

    2009-01-01

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal-length series of inversions that transforms one genome arrangement into another. However, the minimum-length series of inversions (the optimal sorting path) is often not unique, as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial-time algorithm is known to count the number of optimal sorting paths or to sample from the uniform distribution over optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called “MC4Inversion.” We compare MC4Inversion to the sampler implemented in BADGER and to a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids the bias inherent in the IS technique. PMID:20333186

  7. Quasi-Monte Carlo Methods Applied to Tau-Leaping in Stochastic Biological Systems.

    PubMed

    Beentjes, Casper H L; Baker, Ruth E

    2018-05-25

    Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders of magnitude have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from typical slow O(N^(-1/2)) convergence rates as a function of the number of sample paths N. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely tau-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
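The input-stream substitution described above can be demonstrated on a quadrature test problem, echoing the paper's own closing device. The sketch below is a hedged toy (not the tau-leaping experiments): it estimates the integral of x^2 over [0, 1] (exactly 1/3) once with a pseudorandom stream and once with a base-2 van der Corput low-discrepancy stream; only the input stream changes.

```python
import random

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence
    (the radical inverse of 1..n in the given base)."""
    seq = []
    for i in range(1, n + 1):
        q, denom, x = i, 1.0, 0.0
        while q:
            q, r = divmod(q, base)
            denom *= base
            x += r / denom
        seq.append(x)
    return seq

def estimate(points, f):
    """Equal-weight quadrature estimate of the integral of f over [0, 1]."""
    return sum(f(x) for x in points) / len(points)

f = lambda x: x * x                          # exact integral over [0, 1] is 1/3
N = 4096
random.seed(0)
mc_pts = [random.random() for _ in range(N)] # pseudorandom input stream
qmc_pts = van_der_corput(N)                  # low-discrepancy input stream

mc_err = abs(estimate(mc_pts, f) - 1 / 3)    # decays like N**-0.5
qmc_err = abs(estimate(qmc_pts, f) - 1 / 3)  # near N**-1 for smooth integrands
```

Randomising the low-discrepancy stream (e.g. by a random shift modulo 1) recovers an unbiased estimator while keeping the faster error decay, which is the "(randomised)" variant the abstract refers to.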

  8. Systems and methods for analyzing liquids under vacuum

    DOEpatents

    Yu, Xiao-Ying; Yang, Li; Cowin, James P.; Iedema, Martin J.; Zhu, Zihua

    2013-10-15

    Systems and methods for supporting a liquid against a vacuum pressure in a chamber can enable analysis of the liquid surface using vacuum-based chemical analysis instruments. No electrical or fluid connections are required to pass through the chamber walls. The systems can include a reservoir, a pump, and a liquid flow path. The reservoir contains a liquid-phase sample. The pump drives flow of the sample from the reservoir, through the liquid flow path, and back to the reservoir. The flow of the sample is not substantially driven by a differential between pressures inside and outside of the liquid flow path. An aperture in the liquid flow path exposes a stable portion of the liquid-phase sample to the vacuum pressure within the chamber. The radius, or size, of the aperture is less than or equal to a critical value required to support a meniscus of the liquid-phase sample by surface tension.

  9. Path Similarity Analysis: A Method for Quantifying Macromolecular Pathways

    PubMed Central

    Seyler, Sean L.; Kumar, Avishek; Thorpe, M. F.; Beckstein, Oliver

    2015-01-01

    Diverse classes of proteins function through large-scale conformational changes and various sophisticated computational algorithms have been proposed to enhance sampling of these macromolecular transition paths. Because such paths are curves in a high-dimensional space, it has been difficult to quantitatively compare multiple paths, a necessary prerequisite to, for instance, assess the quality of different algorithms. We introduce a method named Path Similarity Analysis (PSA) that enables us to quantify the similarity between two arbitrary paths and extract the atomic-scale determinants responsible for their differences. PSA utilizes the full information available in 3N-dimensional configuration space trajectories by employing the Hausdorff or Fréchet metrics (adopted from computational geometry) to quantify the degree of similarity between piecewise-linear curves. It thus completely avoids relying on projections into low dimensional spaces, as used in traditional approaches. To elucidate the principles of PSA, we quantified the effect of path roughness induced by thermal fluctuations using a toy model system. Using, as an example, the closed-to-open transitions of the enzyme adenylate kinase (AdK) in its substrate-free form, we compared a range of protein transition path-generating algorithms. Molecular dynamics-based dynamic importance sampling (DIMS) MD and targeted MD (TMD) and the purely geometric FRODA (Framework Rigidity Optimized Dynamics Algorithm) were tested along with seven other methods publicly available on servers, including several based on the popular elastic network model (ENM). PSA with clustering revealed that paths produced by a given method are more similar to each other than to those from another method and, for instance, that the ENM-based methods produced relatively similar paths. 
PSA applied to ensembles of DIMS MD and FRODA trajectories of the conformational transition of diphtheria toxin, a particularly challenging example, showed that the geometry-based FRODA occasionally sampled the pathway space of force field-based DIMS MD. For the AdK transition, the new concept of a Hausdorff-pair map enabled us to extract the molecular structural determinants responsible for differences in pathways, namely a set of conserved salt bridges whose charge-charge interactions are fully modelled in DIMS MD but not in FRODA. PSA has the potential to enhance our understanding of transition path sampling methods, to validate them, and to provide a new approach to analyzing conformational transitions. PMID:26488417
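The discrete (point-set) Hausdorff distance underlying PSA is compact enough to state in code. The following is a hedged low-dimensional toy; PSA itself applies the metric (and the Fréchet variant) to full 3N-dimensional configuration-space trajectories.

```python
import math

def hausdorff(P, Q):
    """Symmetric discrete Hausdorff distance between two point sequences:
    the largest distance from any point of one set to the nearest point of
    the other. P and Q are lists of equal-dimension coordinate tuples."""
    def directed(A, B):
        return max(min(math.dist(a, b) for b in B) for a in A)
    return max(directed(P, Q), directed(Q, P))

# Two parallel three-point toy paths separated by 1.0 in the y direction.
path_a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
path_b = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
# hausdorff(path_a, path_b) -> 1.0
```

Unlike a projection onto a low-dimensional reaction coordinate, this comparison uses every coordinate of every frame, which is exactly the property the abstract emphasizes.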

  10. Harmonic Fourier beads method for studying rare events on rugged energy surfaces.

    PubMed

    Khavrutskii, Ilja V; Arora, Karunesh; Brooks, Charles L

    2006-11-07

    We present a robust, distributable method for computing minimum free energy paths of large molecular systems with rugged energy landscapes. The method, which we call harmonic Fourier beads (HFB), exploits the Fourier representation of a path in an appropriate coordinate space and proceeds iteratively by evolving a discrete set of harmonically restrained path points (beads) to generate positions for the next path. The HFB method does not require explicit knowledge of the free energy to locate the path. To compute the free energy profile along the final path we employ an umbrella sampling method in two generalized dimensions. The proposed HFB method is anticipated to aid the study of rare events in biomolecular systems. Its utility is demonstrated with an application to conformational isomerization of the alanine dipeptide in gas phase.
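The Fourier path representation that HFB exploits can be illustrated in one dimension. This is a hypothetical toy (a single coordinate with arbitrary coefficients), showing only why a sine series is a convenient parametrization: each basis function vanishes at s = 0 and s = 1, so the endpoint structures stay fixed while the coefficients deform the interior beads.

```python
import math

def fourier_path(x0, x1, coeffs, n_beads):
    """Discretize a path between endpoints x0 and x1 into n_beads points:
        x(s) = x0 + s * (x1 - x0) + sum_k c_k * sin(k * pi * s),  s in [0, 1].
    The sine terms vanish at both ends, so the endpoints are preserved."""
    beads = []
    for b in range(n_beads):
        s = b / (n_beads - 1)
        x = x0 + s * (x1 - x0)
        x += sum(c * math.sin((k + 1) * math.pi * s)
                 for k, c in enumerate(coeffs))
        beads.append(x)
    return beads

# Two sine coefficients bend the straight-line guess between the endpoints.
beads = fourier_path(0.0, 1.0, [0.3, -0.1], n_beads=11)
```

In HFB each such bead is harmonically restrained and evolved; refitting the Fourier coefficients to the relaxed bead positions yields the next path iterate.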

  11. Path Finding on High-Dimensional Free Energy Landscapes

    NASA Astrophysics Data System (ADS)

    Díaz Leines, Grisell; Ensing, Bernd

    2012-07-01

    We present a method for determining the average transition path and the free energy along this path in the space of selected collective variables. The formalism is based upon a history-dependent bias along a flexible path variable within the metadynamics framework but with a trivial scaling of the cost with the number of collective variables. Controlling the sampling of the orthogonal modes recovers the average path and the minimum free energy path as the limiting cases. The method is applied to resolve the path and the free energy of a conformational transition in alanine dipeptide.

  12. A Method on Dynamic Path Planning for Robotic Manipulator Autonomous Obstacle Avoidance Based on an Improved RRT Algorithm.

    PubMed

    Wei, Kun; Ren, Bingyin

    2018-02-13

    In a future intelligent factory, a robotic manipulator must work efficiently and safely in a Human-Robot collaborative and dynamic unstructured environment. Autonomous path planning is the most important issue that must be resolved first in the process of improving robotic manipulator intelligence. Among path-planning methods, the Rapidly Exploring Random Tree (RRT) algorithm, based on random sampling, has been widely applied to dynamic path planning for high-dimensional robotic manipulators, especially in complex environments, because of its probabilistic completeness, strong expansion capability, and faster exploration than other planning methods. However, the existing RRT algorithm is limited in path planning for a robotic manipulator in a dynamic unstructured environment. Therefore, an autonomous obstacle avoidance dynamic path-planning method for a robotic manipulator based on an improved RRT algorithm, called Smoothly RRT (S-RRT), is proposed. This method extends nodes toward a target direction, which dramatically increases the sampling speed and efficiency of RRT. A path optimization strategy based on a maximum curvature constraint is presented to generate a smooth, curvature-continuous executable path for a robotic manipulator. Finally, the correctness, effectiveness, and practicability of the proposed method are demonstrated and validated via a MATLAB static simulation, a Robot Operating System (ROS) dynamic simulation environment, and a real autonomous obstacle avoidance experiment in a dynamic unstructured environment. The proposed method not only has great practical engineering significance for robotic manipulator obstacle avoidance in an intelligent factory, but also provides a theoretical reference for path planning of other types of robots.
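For readers unfamiliar with the baseline, a minimal 2-D RRT sketch follows. It shows only the standard sample/nearest/steer loop that S-RRT builds on; the workspace bounds, obstacle, and step size are hypothetical, and none of the paper's target-directed extension or curvature-constrained smoothing is included.

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, max_iters=5000, seed=1):
    """Minimal RRT in a 10 x 10 workspace: repeatedly sample a random point,
    find the nearest tree node, steer one step toward the sample, and stop
    when a node lands within goal_tol of the goal. For brevity only the new
    point is collision-checked; a real planner also checks the connecting edge."""
    rng = random.Random(seed)
    nodes, parent = [start], {0: None}
    for _ in range(max_iters):
        sample = (rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near, d = nodes[i], math.dist(nodes[i], sample)
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) <= goal_tol:
            path, k = [], len(nodes) - 1     # walk back to the root
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None                              # no path found within the budget

# Hypothetical obstacle: a vertical wall at 4 < x < 6 with a gap above y = 8.
free = lambda p: not (4.0 < p[0] < 6.0 and p[1] < 8.0)
path = rrt((1.0, 1.0), (9.0, 1.0), free)
```

S-RRT's contribution sits on top of this loop: biasing the extension toward a target direction and post-processing the jagged tree path into a curvature-bounded one.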

  13. Enzymatic Kinetic Isotope Effects from Path-Integral Free Energy Perturbation Theory.

    PubMed

    Gao, J

    2016-01-01

    Path-integral free energy perturbation (PI-FEP) theory is presented to directly determine the ratio of quantum mechanical partition functions of different isotopologs in a single simulation. Furthermore, a double-averaging strategy is used to carry out the practical simulation, separating the quantum mechanical path integral exactly into two calculations: one corresponding to a classical molecular dynamics simulation of the centroid coordinates, and another involving free-particle path-integral sampling over the classical centroid positions. An integrated centroid path-integral free energy perturbation and umbrella sampling (PI-FEP/UM, or simply PI-FEP) method along with bisection sampling is summarized, which provides an accurate and fast-converging method for computing kinetic isotope effects for chemical reactions in solution and in enzymes. The PI-FEP method is illustrated by a number of applications that highlight its computational precision and accuracy, the rule of the geometric mean in kinetic isotope effects, enhanced nuclear quantum effects in enzyme catalysis, and the effect of protein dynamics on the temperature dependence of kinetic isotope effects.

  14. CFO compensation method using optical feedback path for coherent optical OFDM system

    NASA Astrophysics Data System (ADS)

    Moon, Sang-Rok; Hwang, In-Ki; Kang, Hun-Sik; Chang, Sun Hyok; Lee, Seung-Woo; Lee, Joon Ki

    2017-07-01

    We investigate the feasibility of a carrier frequency offset (CFO) compensation method using an optical feedback path for a coherent optical orthogonal frequency division multiplexing (CO-OFDM) system. Recently proposed CFO compensation algorithms provide a wide CFO estimation range in the electrical domain. However, their practical compensation range is limited by the sampling rate of the analog-to-digital converter (ADC). This limitation has not drawn attention in the wireless OFDM context, since there the ADC sampling rate is high enough compared with the data bandwidth and CFO. For CO-OFDM, the limitation is becoming visible because of increased data bandwidth, laser instability (i.e., large CFO), and ADC sampling rates kept low to contain cost. To solve this problem and extend the practical CFO compensation range, we propose a CFO compensation method with an optical feedback path. By adding simple wavelength control for the local oscillator, the practical CFO compensation range can be extended to the full sampling frequency range. The feasibility of the proposed method is investigated experimentally.

  15. An adaptive multi-level simulation algorithm for stochastic biological systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lester, C., E-mail: lesterc@maths.ox.ac.uk; Giles, M. B.; Baker, R. E.

    2015-01-14

    Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms (SSA) to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Though potentially more efficient computationally, such algorithms generate system statistics that suffer from significant bias unless tau is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method [Anderson and Higham, “Multi-level Monte Carlo for continuous time Markov chains, with applications in biochemical kinetics,” SIAM Multiscale Model. Simul. 10(1), 146–179 (2012)] tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths, the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of τ. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel adaptive time-stepping approach where τ is chosen according to the stochastic behaviour of each sample path, we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
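The telescoping structure of the multi-level estimator is compact enough to sketch. The toy below is a hypothetical stand-in for the tau-leap setting: each level returns a "fine" and a "coarse" estimate of the same quantity built from shared random numbers, so the correction terms have small variance and need few samples. A real implementation would replace the sampler with coupled tau-leap sample paths of a reaction network.

```python
import random

def mlmc(sampler, levels, n_samples, seed=0):
    """Generic multi-level Monte Carlo estimator using the telescoping sum
        E[P_L] = E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}].
    sampler(level, rng) returns (fine, coarse): the quantity of interest at
    `level` and at `level - 1` computed from the SAME randomness (coarse is
    0.0 at level 0 by convention)."""
    total = 0.0
    for level, n in zip(range(levels), n_samples):
        rng = random.Random(seed + level)
        corr = sum(f - c for f, c in (sampler(level, rng) for _ in range(n)))
        total += corr / n
    return total

# Toy level definition: P_l estimates E[U^2] = 1/3 from 2**l draws per "path";
# the coarse value reuses the first half of the fine draws (the coupling).
def sampler(level, rng):
    us = [rng.random() for _ in range(2 ** level)]
    fine = sum(u * u for u in us) / len(us)
    half = 2 ** (level - 1)
    coarse = 0.0 if level == 0 else sum(u * u for u in us[:half]) / half
    return fine, coarse

# Geometrically decreasing sample counts: corrections are cheap to pin down.
est = mlmc(sampler, levels=5, n_samples=[4000, 2000, 1000, 500, 250])
```

Because fine and coarse share their draws, Var(fine - coarse) is far smaller than Var(fine), which is why the correction levels get away with so few samples.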

  16. Simultaneous determination of sample thickness, tilt, and electron mean free path using tomographic tilt images based on Beer-Lambert law

    PubMed Central

    Yan, Rui; Edwards, Thomas J.; Pankratz, Logan M.; Kuhn, Richard J.; Lanman, Jason K.; Liu, Jun; Jiang, Wen

    2015-01-01

    Cryo-electron tomography (cryo-ET) is an emerging technique that can elucidate the architecture of macromolecular complexes and cellular ultrastructure in a near-native state. Some important sample parameters, such as thickness and tilt, are needed for 3-D reconstruction. However, these parameters can currently only be determined using trial 3-D reconstructions. An accurate electron mean free path plays a significant role in modeling the image formation process, which is essential for simulation of electron microscopy images and model-based iterative 3-D reconstruction methods; however, its value is voltage- and sample-dependent and has only been experimentally measured for a limited number of sample conditions. Here, we report a computational method, tomoThickness, based on the Beer-Lambert law, to simultaneously determine the sample thickness, tilt and electron inelastic mean free path by solving an overdetermined nonlinear least-squares optimization problem utilizing the strong constraints of tilt relationships. The method has been extensively tested with both stained and cryo datasets. The fitted electron mean free paths are consistent with reported experimental measurements. The accurate thickness estimation eliminates the need for a generous assignment of the Z-dimension size of the tomogram. Interestingly, we have also found that nearly all samples are a few degrees tilted relative to the electron beam. Compensation of the intrinsic sample tilt can result in a horizontal structure and a reduced Z-dimension of tomograms. Our fast, pre-reconstruction method can thus provide important sample parameters that can help improve performance of tomographic reconstruction of a wide range of samples. PMID:26433027
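The Beer-Lambert relation being fitted can be written down directly. The sketch below shows only the forward model and its single-image inversion under an assumed, fixed mean free path with hypothetical numbers; tomoThickness instead solves an overdetermined least-squares problem for thickness, tilt, and mean free path jointly across the whole tilt series.

```python
import math

def thickness_from_tilt(i0, intensities, tilts_deg, mfp):
    """Per-image thickness estimates from the Beer-Lambert law. For a slab of
    thickness t tilted by theta, the beam path length grows to t / cos(theta):
        I = I0 * exp(-t / (mfp * cos(theta)))
        =>  t = mfp * cos(theta) * ln(I0 / I)"""
    return [mfp * math.cos(math.radians(th)) * math.log(i0 / i)
            for i, th in zip(intensities, tilts_deg)]

# Hypothetical values: ~350 nm inelastic mean free path, 150 nm true thickness.
mfp, t_true = 350.0, 150.0
tilts = [0.0, 30.0, 60.0]
sim = [math.exp(-t_true / (mfp * math.cos(math.radians(th)))) for th in tilts]
est = thickness_from_tilt(1.0, sim, tilts, mfp)  # recovers 150 nm at each tilt
```

With many tilt images, the consistency of these per-image estimates across the series is exactly the constraint that lets the paper fit the mean free path and the intrinsic sample tilt as unknowns too.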

  17. Simultaneous determination of sample thickness, tilt, and electron mean free path using tomographic tilt images based on Beer-Lambert law.

    PubMed

    Yan, Rui; Edwards, Thomas J; Pankratz, Logan M; Kuhn, Richard J; Lanman, Jason K; Liu, Jun; Jiang, Wen

    2015-11-01

    Cryo-electron tomography (cryo-ET) is an emerging technique that can elucidate the architecture of macromolecular complexes and cellular ultrastructure in a near-native state. Some important sample parameters, such as thickness and tilt, are needed for 3-D reconstruction. However, these parameters can currently only be determined using trial 3-D reconstructions. An accurate electron mean free path plays a significant role in modeling the image formation process, which is essential for simulation of electron microscopy images and model-based iterative 3-D reconstruction methods; however, its value is voltage- and sample-dependent and has only been experimentally measured for a limited number of sample conditions. Here, we report a computational method, tomoThickness, based on the Beer-Lambert law, to simultaneously determine the sample thickness, tilt and electron inelastic mean free path by solving an overdetermined nonlinear least-squares optimization problem utilizing the strong constraints of tilt relationships. The method has been extensively tested with both stained and cryo datasets. The fitted electron mean free paths are consistent with reported experimental measurements. The accurate thickness estimation eliminates the need for a generous assignment of the Z-dimension size of the tomogram. Interestingly, we have also found that nearly all samples are a few degrees tilted relative to the electron beam. Compensation of the intrinsic sample tilt can result in a horizontal structure and a reduced Z-dimension of tomograms. Our fast, pre-reconstruction method can thus provide important sample parameters that can help improve performance of tomographic reconstruction of a wide range of samples.

  18. The differential path phase comparison method for determining pressure derivatives of elastic constants of solids

    NASA Astrophysics Data System (ADS)

    Peselnick, L.

    1982-08-01

    An ultrasonic method is presented that combines features of the differential path and the phase comparison methods. The proposed differential path phase comparison method, referred to as the `hybrid' method for brevity, eliminates errors resulting from phase changes in the bond between the sample and buffer rod. Define r(P) [and R(P)] as the square of the normalized frequency for cancellation of sample waves for shear [and for compressional] waves. Define N as the number of wavelengths in twice the sample length. The pressure derivatives r'(P) and R'(P) for samples of Alcoa 2024-T4 aluminum were obtained by using the phase comparison and the hybrid methods. The values of the pressure derivatives obtained by using the phase comparison method show variations by as much as 40% for small values of N (N < 50). The pressure derivatives as determined from the hybrid method are reproducible to within ±2% independent of N. The values of the pressure derivatives determined by the phase comparison method for large N are the same as those determined by the hybrid method. Advantages of the hybrid method are (1) no pressure-dependent phase shift at the buffer-sample interface, (2) elimination of deviatoric stress in the sample portion of the sample assembly with application of hydrostatic pressure, and (3) operation at lower ultrasonic frequencies (for comparable sample lengths), which eliminates detrimental high-frequency ultrasonic problems. A reduction of the uncertainties of the pressure derivatives of single crystals and of low-porosity polycrystals permits extrapolation of such experimental data to deeper mantle depths.

  19. Systems for column-based separations, methods of forming packed columns, and methods of purifying sample components

    DOEpatents

    Egorov, Oleg B.; O'Hara, Matthew J.; Grate, Jay W.; Chandler, Darrell P.; Brockman, Fred J.; Bruckner-Lea, Cynthia J.

    2000-01-01

    The invention encompasses systems for column-based separations, methods of packing and unpacking columns and methods of separating components of samples. In one aspect, the invention includes a method of packing and unpacking a column chamber, comprising: a) packing a matrix material within a column chamber to form a packed column; and b) after the packing, unpacking the matrix material from the column chamber without moving the column chamber. In another aspect, the invention includes a system for column-based separations, comprising: a) a fluid passageway, the fluid passageway comprising a column chamber and a flow path in fluid communication with the column chamber, the flow path being obstructed by a retaining material permeable to a carrier fluid and impermeable to a column matrix material suspended in the carrier fluid, the flow path extending through the column chamber and through the retaining material, the flow path being configured to form a packed column within the column chamber when a suspension of the fluid and the column matrix material is flowed along the flow path; and b) the fluid passageway extending through a valve intermediate the column chamber and the retaining material.

  20. Systems For Column-Based Separations, Methods Of Forming Packed Columns, And Methods Of Purifying Sample Components

    DOEpatents

    Egorov, Oleg B.; O'Hara, Matthew J.; Grate, Jay W.; Chandler, Darrell P.; Brockman, Fred J.; Bruckner-Lea, Cynthia J.

    2006-02-21

    The invention encompasses systems for column-based separations, methods of packing and unpacking columns and methods of separating components of samples. In one aspect, the invention includes a method of packing and unpacking a column chamber, comprising: a) packing a matrix material within a column chamber to form a packed column; and b) after the packing, unpacking the matrix material from the column chamber without moving the column chamber. In another aspect, the invention includes a system for column-based separations, comprising: a) a fluid passageway, the fluid passageway comprising a column chamber and a flow path in fluid communication with the column chamber, the flow path being obstructed by a retaining material permeable to a carrier fluid and impermeable to a column matrix material suspended in the carrier fluid, the flow path extending through the column chamber and through the retaining material, the flow path being configured to form a packed column within the column chamber when a suspension of the fluid and the column matrix material is flowed along the flow path; and b) the fluid passageway extending through a valve intermediate the column chamber and the retaining material.

  1. Systems For Column-Based Separations, Methods Of Forming Packed Columns, And Methods Of Purifying Sample Components.

    DOEpatents

    Egorov, Oleg B.; O'Hara, Matthew J.; Grate, Jay W.; Chandler, Darrell P.; Brockman, Fred J.; Bruckner-Lea, Cynthia J.

    2004-08-24

    The invention encompasses systems for column-based separations, methods of packing and unpacking columns and methods of separating components of samples. In one aspect, the invention includes a method of packing and unpacking a column chamber, comprising: a) packing a matrix material within a column chamber to form a packed column; and b) after the packing, unpacking the matrix material from the column chamber without moving the column chamber. In another aspect, the invention includes a system for column-based separations, comprising: a) a fluid passageway, the fluid passageway comprising a column chamber and a flow path in fluid communication with the column chamber, the flow path being obstructed by a retaining material permeable to a carrier fluid and impermeable to a column matrix material suspended in the carrier fluid, the flow path extending through the column chamber and through the retaining material, the flow path being configured to form a packed column within the column chamber when a suspension of the fluid and the column matrix material is flowed along the flow path; and b) the fluid passageway extending through a valve intermediate the column chamber and the retaining material.

  2. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions.

    PubMed

    Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E

    2018-03-14

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.
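For orientation, the symmetrized quantum time correlation function that such schemes start from is conventionally written with a complex time argument; a sketch in generic notation (not necessarily the paper's exact conventions):

```latex
G_{AB}(t) = \frac{1}{Z}\,\mathrm{Tr}\!\left[\hat{A}\,e^{\,i\hat{H}t_c^{*}/\hbar}\,\hat{B}\,e^{-i\hat{H}t_c/\hbar}\right],
\qquad t_c = t - \frac{i\beta\hbar}{2}.
```

The change of variables mentioned above then replaces the forward/backward path coordinates $x_k, x'_k$ by sum and difference variables $r_k = (x_k + x'_k)/2$ and $s_k = x_k - x'_k$, after which the potential is expanded in powers of the $s_k$ and the $s_k$ integrals are done analytically.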

  3. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions

    NASA Astrophysics Data System (ADS)

    Cendagorta, Joseph R.; Bačić, Zlatko; Tuckerman, Mark E.

    2018-03-01

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  4. Investigations of α-helix↔β-sheet transition pathways in a miniprotein using the finite-temperature string method

    PubMed Central

    Ovchinnikov, Victor; Karplus, Martin

    2014-01-01

    A parallel implementation of the finite-temperature string method is described, which takes into account the invariance of coordinates with respect to rigid-body motions. The method is applied to the complex α-helix↔β-sheet transition in a β-hairpin miniprotein in implicit solvent, which exhibits much of the complexity of conformational changes in proteins. Two transition paths are considered, one derived from a linear interpolant between the endpoint structures and the other derived from a targeted dynamics simulation. Two methods for computing the conformational free energy (FE) along the string are compared, a restrained method, and a tessellation method introduced by E. Vanden-Eijnden and M. Venturoli [J. Chem. Phys. 130, 194103 (2009)]. It is found that obtaining meaningful free energy profiles using the present atom-based coordinates requires restricting sampling to a vicinity of the converged path, where the hyperplanar approximation to the isocommittor surface is sufficiently accurate. This sampling restriction can be easily achieved using restraints or constraints. The endpoint FE differences computed from the FE profiles are validated by comparison with previous calculations using a path-independent confinement method. The FE profiles are decomposed into the enthalpic and entropic contributions, and it is shown that the entropy difference contribution can be as large as 10 kcal/mol for intermediate regions along the path, compared to 15–20 kcal/mol for the enthalpy contribution. This result demonstrates that enthalpic barriers for transitions are offset by entropic contributions arising from the existence of different paths across a barrier. The possibility of using systematically coarse-grained representations of amino acids, in the spirit of multiple interaction site residue models, is proposed as a means to avoid ad hoc sampling restrictions to narrow transition tubes. PMID:24811667
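The core loop of a string method (relax the images, then reparametrize them to equal arc length) can be conveyed with a zero-temperature toy on a 2D double well; this is a cartoon of the general idea, not the parallel, atom-based finite-temperature implementation described above:

```python
import numpy as np

def grad(p):
    # Gradient of V(x, y) = (x^2 - 1)^2 + y^2: two wells at (+-1, 0), saddle at the origin.
    x, y = p
    return np.array([4.0 * x * (x * x - 1.0), 2.0 * y])

def reparametrize(images):
    # Redistribute the images at equal arc length along the current string.
    seg = np.linalg.norm(np.diff(images, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, s[-1], len(images))
    return np.column_stack([np.interp(targets, s, images[:, k])
                            for k in range(images.shape[1])])

def string_method(start, end, n_images=20, dt=0.01, n_iter=2000):
    images = np.linspace(start, end, n_images)  # initial linear interpolant
    for _ in range(n_iter):
        # Gradient step on the interior images (endpoints stay fixed) ...
        images[1:-1] = images[1:-1] - dt * np.array([grad(p) for p in images[1:-1]])
        # ... followed by reparametrization, the two halves of each string iteration.
        images = reparametrize(images)
    return images

path = string_method(np.array([-1.0, 0.5]), np.array([1.0, 0.5]))
```

On this potential the converged string passes near the saddle at the origin, with the fixed endpoints playing the role of the two endpoint structures.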

  5. Investigations of α-helix↔β-sheet transition pathways in a miniprotein using the finite-temperature string method

    NASA Astrophysics Data System (ADS)

    Ovchinnikov, Victor; Karplus, Martin

    2014-05-01

    A parallel implementation of the finite-temperature string method is described, which takes into account the invariance of coordinates with respect to rigid-body motions. The method is applied to the complex α-helix↔β-sheet transition in a β-hairpin miniprotein in implicit solvent, which exhibits much of the complexity of conformational changes in proteins. Two transition paths are considered, one derived from a linear interpolant between the endpoint structures and the other derived from a targeted dynamics simulation. Two methods for computing the conformational free energy (FE) along the string are compared, a restrained method, and a tessellation method introduced by E. Vanden-Eijnden and M. Venturoli [J. Chem. Phys. 130, 194103 (2009)]. It is found that obtaining meaningful free energy profiles using the present atom-based coordinates requires restricting sampling to a vicinity of the converged path, where the hyperplanar approximation to the isocommittor surface is sufficiently accurate. This sampling restriction can be easily achieved using restraints or constraints. The endpoint FE differences computed from the FE profiles are validated by comparison with previous calculations using a path-independent confinement method. The FE profiles are decomposed into the enthalpic and entropic contributions, and it is shown that the entropy difference contribution can be as large as 10 kcal/mol for intermediate regions along the path, compared to 15-20 kcal/mol for the enthalpy contribution. This result demonstrates that enthalpic barriers for transitions are offset by entropic contributions arising from the existence of different paths across a barrier. The possibility of using systematically coarse-grained representations of amino acids, in the spirit of multiple interaction site residue models, is proposed as a means to avoid ad hoc sampling restrictions to narrow transition tubes.

  6. A path integral methodology for obtaining thermodynamic properties of nonadiabatic systems using Gaussian mixture distributions

    NASA Astrophysics Data System (ADS)

    Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel

    2018-05-01

    We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.
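The role of a Gaussian mixture as a sampling distribution can be illustrated with ordinary importance sampling, where a mixture adapted to the integrand keeps the estimator's variance low; this toy estimates a 1D normalization integral, and every number in it is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gmm(weights, means, sigmas, n):
    # Draw n samples from a 1D Gaussian mixture: pick a component, then sample it.
    comp = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(np.asarray(means)[comp], np.asarray(sigmas)[comp])

def gmm_pdf(x, weights, means, sigmas):
    # Mixture density evaluated at each point of x.
    x = np.atleast_1d(x)[:, None]
    comps = np.exp(-0.5 * ((x - means) / sigmas) ** 2) / (np.sqrt(2.0 * np.pi) * sigmas)
    return np.sum(np.asarray(weights) * comps, axis=1)

# Importance-sampling estimate of Z = integral of exp(-x^4) dx
# (exact value: Gamma(1/4)/2 ~ 1.8128), with a two-component mixture
# spread over where the integrand has its mass.
w, mu, sig = [0.5, 0.5], np.array([-0.7, 0.7]), np.array([0.6, 0.6])
xs = sample_gmm(w, mu, sig, 200_000)
Z = np.mean(np.exp(-xs ** 4) / gmm_pdf(xs, w, mu, sig))
```

Choosing the mixture poorly (components far from the integrand's mass) inflates the estimator's variance, mirroring the abstract's observation that the choice of sampling distribution dominates the stochastic error.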

  7. SU-E-T-58: A Novel Monte Carlo Photon Transport Simulation Scheme and Its Application in Cone Beam CT Projection Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Y; Southern Medical University, Guangzhou; Tian, Z

    Purpose: Monte Carlo (MC) simulation is an important tool for solving radiotherapy and medical imaging problems, but low computational efficiency hinders its wide application. Conventionally, MC is performed in a particle-by-particle fashion. The lack of control over particle trajectories is a main cause of low efficiency in some applications. In cone beam CT (CBCT) projection simulation, for example, a significant amount of computation is wasted on transporting photons that never reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sampled an entire photon path each time. The Metropolis-Hastings algorithm was employed to accept/reject a sampled path based on a calculated acceptance probability, in order to maintain correct relative probabilities among different paths, which are governed by photon transport physics. We developed a package gMMC on GPU with this new scheme implemented. The performance of gMMC was tested on a sample problem of CBCT projection simulation for a homogeneous object. The results were compared to those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered-photon signals from gMMC agreed with those from gMCDRR to within a relative difference of 3%. It took 3.1 hr for gMCDRR to simulate 7.8e11 photons and 246.5 sec for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ∼2% statistical uncertainty. Hence, a speed-up factor of ∼45.3 was achieved by the new path-by-path simulation scheme, in which all computation is spent on photons that contribute to the detector signal.
Conclusion: We proposed a novel path-by-path simulation scheme that enables a significant efficiency enhancement for MC particle transport simulations.
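The accept/reject step of such a scheme can be sketched in miniature with a Metropolis-Hastings chain over whole paths; the "weight" below is a toy stand-in for the transport-physics path probability, and all of its parameters are illustrative assumptions:

```python
import math
import random

random.seed(0)

def path_weight(path):
    # Toy stand-in for the path probability: favor paths whose final
    # interaction point lands near the "detector" at x = 1.
    return math.exp(-10.0 * (path[-1] - 1.0) ** 2)

def propose(path):
    # Symmetric proposal: perturb one interaction point of the current path.
    i = random.randrange(len(path))
    new = list(path)
    new[i] += random.gauss(0.0, 0.3)
    return new

def sample_paths(n_steps=20_000, path_len=4):
    path = [0.0] * path_len
    endpoints, accepted = [], 0
    for _ in range(n_steps):
        cand = propose(path)
        # Metropolis-Hastings acceptance preserves correct relative path probabilities.
        if random.random() < min(1.0, path_weight(cand) / path_weight(path)):
            path, accepted = cand, accepted + 1
        endpoints.append(path[-1])
    return endpoints, accepted / n_steps

endpoints, acceptance = sample_paths()
```

After burn-in the recorded endpoints concentrate near the detector, so no effort is spent on paths that miss it, which is the intuition behind the speed-up reported above.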

  8. Adaptive enhanced sampling with a path-variable for the simulation of protein folding and aggregation

    NASA Astrophysics Data System (ADS)

    Peter, Emanuel K.

    2017-12-01

    In this article, we present a novel adaptive enhanced sampling molecular dynamics (MD) method for the accelerated simulation of protein folding and aggregation. We introduce a path-variable L based on the un-biased momenta p and displacements dq for the definition of the bias s applied to the system and derive 3 algorithms: general adaptive bias MD, adaptive path-sampling, and a hybrid method which combines the first 2 methodologies. Through the analysis of the correlations between the bias and the un-biased gradient in the system, we find that the hybrid methodology leads to an improved force correlation and acceleration in the sampling of the phase space. We apply our method on SPC/E water, where we find a conservation of the average water structure. We then use our method to sample dialanine and the folding of TrpCage, where we find a good agreement with simulation data reported in the literature. Finally, we apply our methodologies on the initial stages of aggregation of a hexamer of Alzheimer's amyloid β fragment 25-35 (Aβ 25-35) and find that transitions within the hexameric aggregate are dominated by entropic barriers, while we speculate that especially the conformation entropy plays a major role in the formation of the fibril as a rate limiting factor.

  9. Adaptive enhanced sampling with a path-variable for the simulation of protein folding and aggregation.

    PubMed

    Peter, Emanuel K

    2017-12-07

    In this article, we present a novel adaptive enhanced sampling molecular dynamics (MD) method for the accelerated simulation of protein folding and aggregation. We introduce a path-variable L based on the un-biased momenta p and displacements dq for the definition of the bias s applied to the system and derive 3 algorithms: general adaptive bias MD, adaptive path-sampling, and a hybrid method which combines the first 2 methodologies. Through the analysis of the correlations between the bias and the un-biased gradient in the system, we find that the hybrid methodology leads to an improved force correlation and acceleration in the sampling of the phase space. We apply our method on SPC/E water, where we find a conservation of the average water structure. We then use our method to sample dialanine and the folding of TrpCage, where we find a good agreement with simulation data reported in the literature. Finally, we apply our methodologies on the initial stages of aggregation of a hexamer of Alzheimer's amyloid β fragment 25-35 (Aβ 25-35) and find that transitions within the hexameric aggregate are dominated by entropic barriers, while we speculate that especially the conformation entropy plays a major role in the formation of the fibril as a rate limiting factor.

  10. Narrow field electromagnetic sensor system and method

    DOEpatents

    McEwan, Thomas E.

    1996-01-01

    A narrow field electromagnetic sensor system and method of sensing a characteristic of an object provide the capability to realize a characteristic of an object such as density, thickness, or presence, for any desired coordinate position on the object. One application is imaging. The sensor can also be used as an obstruction detector or an electronic trip wire with a narrow field without the disadvantages of impaired performance when exposed to dirt, snow, rain, or sunlight. The sensor employs a transmitter for transmitting a sequence of electromagnetic signals in response to a transmit timing signal, a receiver for sampling only the initial direct RF path of the electromagnetic signal while excluding all other electromagnetic signals in response to a receive timing signal, and a signal processor for processing the sampled direct RF path electromagnetic signal and providing an indication of the characteristic of an object. Usually, the electromagnetic signal is a short RF burst and the obstruction must provide a substantially complete eclipse of the direct RF path. By employing time-of-flight techniques, a timing circuit controls the receiver to sample only the initial direct RF path of the electromagnetic signal while not sampling indirect path electromagnetic signals. The sensor system also incorporates circuitry for ultra-wideband spread spectrum operation that reduces interference to and from other RF services while allowing co-location of multiple electronic sensors without the need for frequency assignments.
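The time-of-flight gating described above reduces to a simple timing comparison: the receiver samples only around the expected direct-path delay, so a longer indirect (reflected) path arrives outside the gate. A minimal sketch with assumed distances and gate width:

```python
C = 299_792_458.0  # propagation speed (vacuum value, approximately that of air), m/s

def direct_path_delay(distance_m):
    """Time of flight along the direct RF path."""
    return distance_m / C

def in_gate(arrival_s, gate_center_s, gate_width_s=2e-10):
    """True if an arrival falls inside the receive sampling gate."""
    return abs(arrival_s - gate_center_s) <= gate_width_s / 2.0

gate_center = direct_path_delay(3.0)                       # direct path: ~10 ns for 3 m
direct_ok = in_gate(direct_path_delay(3.0), gate_center)   # direct path is sampled
indirect_ok = in_gate(direct_path_delay(4.0), gate_center) # 4 m reflection: ~3.3 ns late, excluded
```

With these assumed numbers the reflected arrival misses the gate by more than an order of magnitude of the gate width, which is how indirect-path signals are rejected without any spatial shielding.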

  11. Narrow field electromagnetic sensor system and method

    DOEpatents

    McEwan, T.E.

    1996-11-19

    A narrow field electromagnetic sensor system and method of sensing a characteristic of an object provide the capability to realize a characteristic of an object such as density, thickness, or presence, for any desired coordinate position on the object. One application is imaging. The sensor can also be used as an obstruction detector or an electronic trip wire with a narrow field without the disadvantages of impaired performance when exposed to dirt, snow, rain, or sunlight. The sensor employs a transmitter for transmitting a sequence of electromagnetic signals in response to a transmit timing signal, a receiver for sampling only the initial direct RF path of the electromagnetic signal while excluding all other electromagnetic signals in response to a receive timing signal, and a signal processor for processing the sampled direct RF path electromagnetic signal and providing an indication of the characteristic of an object. Usually, the electromagnetic signal is a short RF burst and the obstruction must provide a substantially complete eclipse of the direct RF path. By employing time-of-flight techniques, a timing circuit controls the receiver to sample only the initial direct RF path of the electromagnetic signal while not sampling indirect path electromagnetic signals. The sensor system also incorporates circuitry for ultra-wideband spread spectrum operation that reduces interference to and from other RF services while allowing co-location of multiple electronic sensors without the need for frequency assignments. 12 figs.

  12. Nanoscale imaging of clinical specimens using pathology-optimized expansion microscopy

    PubMed Central

    Zhao, Yongxin; Bucur, Octavian; Irshad, Humayun; Chen, Fei; Weins, Astrid; Stancu, Andreea L.; Oh, Eun-Young; DiStasio, Marcello; Torous, Vanda; Glass, Benjamin; Stillman, Isaac E.; Schnitt, Stuart J.; Beck, Andrew H.; Boyden, Edward S.

    2017-01-01

    Expansion microscopy (ExM), a method for improving the resolution of light microscopy by physically expanding the specimen, has not been applied to clinical tissue samples. Here we report a clinically optimized form of ExM that supports nanoscale imaging of human tissue specimens that have been fixed with formalin, embedded in paraffin, stained with hematoxylin and eosin (H&E), and/or fresh frozen. The method, which we call expansion pathology (ExPath), converts clinical samples into an ExM-compatible state, then applies an ExM protocol with protein anchoring and mechanical homogenization steps optimized for clinical samples. ExPath enables ~70 nm resolution imaging of diverse biomolecules in intact tissues using conventional diffraction-limited microscopes, and standard antibody and fluorescent DNA in situ hybridization reagents. We use ExPath for optical diagnosis of kidney minimal-change disease, which previously required electron microscopy (EM), and demonstrate high-fidelity computational discrimination between early breast neoplastic lesions that to date have challenged human judgment. ExPath may enable the routine use of nanoscale imaging in pathology and clinical research. PMID:28714966

  13. Nanoscale imaging of clinical specimens using pathology-optimized expansion microscopy.

    PubMed

    Zhao, Yongxin; Bucur, Octavian; Irshad, Humayun; Chen, Fei; Weins, Astrid; Stancu, Andreea L; Oh, Eun-Young; DiStasio, Marcello; Torous, Vanda; Glass, Benjamin; Stillman, Isaac E; Schnitt, Stuart J; Beck, Andrew H; Boyden, Edward S

    2017-08-01

    Expansion microscopy (ExM), a method for improving the resolution of light microscopy by physically expanding a specimen, has not been applied to clinical tissue samples. Here we report a clinically optimized form of ExM that supports nanoscale imaging of human tissue specimens that have been fixed with formalin, embedded in paraffin, stained with hematoxylin and eosin, and/or fresh frozen. The method, which we call expansion pathology (ExPath), converts clinical samples into an ExM-compatible state, then applies an ExM protocol with protein anchoring and mechanical homogenization steps optimized for clinical samples. ExPath enables ∼70-nm-resolution imaging of diverse biomolecules in intact tissues using conventional diffraction-limited microscopes and standard antibody and fluorescent DNA in situ hybridization reagents. We use ExPath for optical diagnosis of kidney minimal-change disease, a process that previously required electron microscopy, and we demonstrate high-fidelity computational discrimination between early breast neoplastic lesions for which pathologists often disagree in classification. ExPath may enable the routine use of nanoscale imaging in pathology and clinical research.

  14. Geologic, hydrologic, and geochemical identification of flow paths in the Edwards Aquifer, northeastern Bexar and southern Comal Counties, Texas

    USGS Publications Warehouse

    Otero, Cassi L.

    2007-01-01

    The U.S. Geological Survey, in cooperation with the San Antonio Water System, conducted a 4-year study during 2002-06 to identify major flow paths in the Edwards aquifer in northeastern Bexar and southern Comal Counties (study area). In the study area, faulting directs ground water into three hypothesized flow paths that move water, generally, from the southwest to the northeast. These flow paths are identified as the southern Comal flow path, the central Comal flow path, and the northern Comal flow path. Statistical correlations between water levels for six observation wells and between the water levels and discharges from Comal Springs and Hueco Springs yielded evidence for the hypothesized flow paths. Strong linear correlations were evident between the datasets from wells and springs within the same flow path and the datasets from wells in areas where flow between flow paths was suspected. Geochemical data (major ions, stable isotopes, sulfur hexafluoride, and tritium and helium) were used in graphical analyses to obtain evidence of the flow path from which wells or springs derive water. Major-ion geochemistry in samples from selected wells and springs showed relatively little variation. Samples from the southern Comal flow path were characterized by relatively high sulfate and chloride concentrations, possibly indicating that the water in the flow path was mixing with small amounts of saline water from the freshwater/saline-water transition zone. Samples from the central Comal flow path yielded the most varied major-ion geochemistry of the three hypothesized flow paths. Central Comal flow path samples were characterized, in general, by high calcium concentrations and low magnesium concentrations. Samples from the northern Comal flow path were characterized by relatively low sulfate and chloride concentrations and high magnesium concentrations. 
The high magnesium concentrations characteristic of northern Comal flow path samples from the recharge zone in Comal County might indicate that water from the Trinity aquifer is entering the Edwards aquifer in the subsurface. A graph of the relation between the stable isotopes deuterium and delta-18 oxygen showed that, except for samples collected following an unusually intense rain storm, there was not much variation in stable isotope values among the flow paths. In the study area deuterium ranged from -36.00 to -20.89 per mil and delta-18 oxygen ranged from -6.03 to -3.70 per mil. Excluding samples collected following the intense rain storm, the deuterium range in the study area was -33.00 to -20.89 per mil and the delta-18 oxygen range was -4.60 to -3.70 per mil. Two ground-water age-dating techniques, sulfur hexafluoride concentrations and tritium/helium-3 isotope ratios, were used to compute apparent ages (time since recharge occurred) of water samples collected in the study area. In general, the apparent ages computed by the two methods do not seem to indicate direction of flow. Apparent ages computed for water samples in northeastern Bexar and southern Comal Counties do not vary greatly except for some very young water in the recharge zone in central Comal County.

  15. Free energy landscape from path-sampling: application to the structural transition in LJ38

    NASA Astrophysics Data System (ADS)

    Adjanor, G.; Athènes, M.; Calvo, F.

    2006-09-01

    We introduce a path-sampling scheme that allows equilibrium state-ensemble averages to be computed by means of a biased distribution of non-equilibrium paths. This non-equilibrium method is applied to the case of the 38-atom Lennard-Jones atomic cluster, which has a double-funnel energy landscape. We calculate the free energy profile along the Q4 bond orientational order parameter. At high or moderate temperature the results obtained using the non-equilibrium approach are consistent with those obtained using conventional equilibrium methods, including parallel tempering and Wang-Landau Monte Carlo simulations. At lower temperatures, the non-equilibrium approach becomes more efficient in exploring the relevant inherent structures. In particular, the free energy agrees with the predictions of the harmonic superposition approximation.

  16. Nonequilibrium umbrella sampling in spaces of many order parameters

    NASA Astrophysics Data System (ADS)

    Dickson, Alex; Warmflash, Aryeh; Dinner, Aaron R.

    2009-02-01

    We recently introduced an umbrella sampling method for obtaining nonequilibrium steady-state probability distributions projected onto an arbitrary number of coordinates that characterize a system (order parameters) [A. Warmflash, P. Bhimalapuram, and A. R. Dinner, J. Chem. Phys. 127, 154112 (2007)]. Here, we show how our algorithm can be combined with the image update procedure from the finite-temperature string method for reversible processes [E. Vanden-Eijnden and M. Venturoli, "Revisiting the finite temperature string method for calculation of reaction tubes and free energies," J. Chem. Phys. (in press)] to enable restricted sampling of a nonequilibrium steady state in the vicinity of a path in a many-dimensional space of order parameters. For the study of transitions between stable states, the adapted algorithm results in improved scaling with the number of order parameters and the ability to progressively refine the regions of enforced sampling. We demonstrate the algorithm by applying it to a two-dimensional model of driven Brownian motion and a coarse-grained (Ising) model for nucleation under shear. It is found that the choice of order parameters can significantly affect the convergence of the simulation; local magnetization variables other than those used previously for sampling transition paths in Ising systems are needed to ensure that the reactive flux is primarily contained within a tube in the space of order parameters. The relation of this method to other algorithms that sample the statistics of path ensembles is discussed.

  17. A one-way shooting algorithm for transition path sampling of asymmetric barriers

    NASA Astrophysics Data System (ADS)

    Brotzakis, Z. Faidon; Bolhuis, Peter G.

    2016-10-01

    We present a novel transition path sampling shooting algorithm for the efficient sampling of complex (biomolecular) activated processes with asymmetric free energy barriers. The method employs a fictitious potential that biases the shooting point toward the transition state. The method is similar in spirit to the aimless shooting technique by Peters and Trout [J. Chem. Phys. 125, 054108 (2006)], but is targeted for use with the one-way shooting approach, which has been shown to be more effective than two-way shooting algorithms in systems dominated by diffusive dynamics. We illustrate the method on a 2D Langevin toy model, the association of two peptides and the initial step in dissociation of a β-lactoglobulin dimer. In all cases we show a significant increase in efficiency.
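The essence of a one-way shooting move (regenerate only the forward segment from a chosen shooting point, then accept or reject on whether the trial path still connects the two states) can be shown on a 1D toy. Note this sketches plain one-way shooting with a uniformly drawn shooting point, not the fictitious-potential bias the article introduces; the dynamics, state definitions, and parameters are illustrative assumptions:

```python
import random

random.seed(1)

# Toy setting: overdamped 1D Langevin dynamics on a double well with minima
# near x = +-1; stable states A (x < -0.8) and B (x > 0.8).
def force(x):
    return -4.0 * x * (x * x - 1.0)

def step(x, dt=0.01, beta=3.0):
    # Euler-Maruyama step with thermal noise of variance 2*dt/beta.
    return x + force(x) * dt + random.gauss(0.0, (2.0 * dt / beta) ** 0.5)

def in_A(x):
    return x < -0.8

def in_B(x):
    return x > 0.8

def shoot_forward(path, max_len=5000):
    """One-way (forward) shooting: keep the old path up to a random shooting
    point, regenerate the rest with fresh noise, and accept only if the trial
    is still reactive (starts in A, ends in B)."""
    i = random.randrange(1, len(path) - 1)
    trial = path[:i + 1]
    while not (in_A(trial[-1]) or in_B(trial[-1])) and len(trial) < max_len:
        trial.append(step(trial[-1]))
    if in_B(trial[-1]):
        return trial, True   # accepted: new reactive path
    return path, False       # rejected: keep the old path

# Seed the sampler with a hand-built reactive path from A to B.
path = [-1.0 + 2.0 * k / 199 for k in range(200)]
accepted = 0
for _ in range(50):
    path, ok = shoot_forward(path)
    accepted += ok
```

The article's contribution is to bias the choice of shooting point toward the transition state via a fictitious potential, which raises acceptance on asymmetric barriers; here the shooting index is uniform for simplicity.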

  18. Foundations and latest advances in replica exchange transition interface sampling.

    PubMed

    Cabriolu, Raffaela; Skjelbred Refsnes, Kristin M; Bolhuis, Peter G; van Erp, Titus S

    2017-10-21

    Nearly 20 years ago, transition path sampling (TPS) emerged as an alternative method to free energy based approaches for the study of rare events such as nucleation, protein folding, chemical reactions, and phase transitions. TPS effectively performs Monte Carlo simulations with relatively short molecular dynamics trajectories, with the advantage of not having to alter the actual potential energy surface nor the underlying physical dynamics. Although the TPS approach also introduced a methodology to compute reaction rates, this approach was for a long time considered theoretically attractive, providing the exact same results as extensively long molecular dynamics simulations, but still expensive for most relevant applications. With the increase of computer power and improvements in the algorithmic methodology, quantitative path sampling is finding applications in more and more areas of research. In particular, the transition interface sampling (TIS) and the replica exchange TIS (RETIS) algorithms have, in turn, improved the efficiency of quantitative path sampling significantly, while maintaining the exact nature of the approach. Also, open-source software packages are making these methods, for which implementation is not straightforward, now available for a wider group of users. In addition, a blooming development takes place regarding both applications and algorithmic refinements. Therefore, it is timely to explore the wide panorama of the new developments in this field. This is the aim of this article, which focuses on the most efficient exact path sampling approach, RETIS, as well as its recent applications, extensions, and variations.

  19. Foundations and latest advances in replica exchange transition interface sampling

    NASA Astrophysics Data System (ADS)

    Cabriolu, Raffaela; Skjelbred Refsnes, Kristin M.; Bolhuis, Peter G.; van Erp, Titus S.

    2017-10-01

    Nearly 20 years ago, transition path sampling (TPS) emerged as an alternative method to free energy based approaches for the study of rare events such as nucleation, protein folding, chemical reactions, and phase transitions. TPS effectively performs Monte Carlo simulations with relatively short molecular dynamics trajectories, with the advantage of not having to alter the actual potential energy surface nor the underlying physical dynamics. Although the TPS approach also introduced a methodology to compute reaction rates, this approach was for a long time considered theoretically attractive, providing the exact same results as extensively long molecular dynamics simulations, but still expensive for most relevant applications. With the increase of computer power and improvements in the algorithmic methodology, quantitative path sampling is finding applications in more and more areas of research. In particular, the transition interface sampling (TIS) and the replica exchange TIS (RETIS) algorithms have, in turn, improved the efficiency of quantitative path sampling significantly, while maintaining the exact nature of the approach. Also, open-source software packages are making these methods, for which implementation is not straightforward, now available for a wider group of users. In addition, a blooming development takes place regarding both applications and algorithmic refinements. Therefore, it is timely to explore the wide panorama of the new developments in this field. This is the aim of this article, which focuses on the most efficient exact path sampling approach, RETIS, as well as its recent applications, extensions, and variations.

  20. Path optimization method for the sign problem

    NASA Astrophysics Data System (ADS)

    Ohnishi, Akira; Mori, Yuto; Kashiwa, Kouji

    2018-03-01

    We propose a path optimization method (POM) to evade the sign problem in Monte Carlo calculations for complex actions. Among the many approaches to the sign problem, the Lefschetz-thimble path-integral method and the complex Langevin method are promising and extensively discussed. In these methods, real field variables are complexified and the integration manifold is either determined by the flow equations or sampled stochastically. When the action has singular points, or multiple critical points near the original integration surface, however, we risk encountering the residual and global sign problems or the singular drift term problem. One way to avoid the singular points is to optimize the integration path so that it does not hit the singular points of the Boltzmann weight. By specifying the one-dimensional integration path as z = t + if(t) (f ∈ R) and optimizing f(t) to enhance the average phase factor, we demonstrate that the sign problem can be avoided in a one-variable toy model for which the complex Langevin method is found to fail. In these proceedings, we propose the POM and discuss how the sign problem can be avoided in a toy model. We also discuss the possibility of utilizing a neural network to optimize the path.
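The contour-shift idea can be illustrated on a one-variable Gaussian toy integral (a hedged sketch, not the authors' model): for the weight W(z) = exp(-z²/2 + iλz), shifting the path to z = t + ic gives an average phase factor of exp(-(c-λ)²/2) analytically, so scanning a constant shift c and maximizing the average phase factor recovers c = λ, where the phase oscillation vanishes entirely.

```python
import cmath

def average_phase_factor(c, lam, tmax=8.0, n=2001):
    """|int W dz| / int |W| |dz| along the shifted contour z = t + i*c,
    for the toy Boltzmann weight W(z) = exp(-z^2/2 + i*lam*z)."""
    h = 2.0 * tmax / (n - 1)
    num = 0.0 + 0.0j   # complex integral of W
    den = 0.0          # integral of |W|
    for k in range(n):
        z = complex(-tmax + k * h, c)
        w = cmath.exp(-0.5 * z * z + 1j * lam * z)
        num += w * h
        den += abs(w) * h
    return abs(num) / den

lam = 1.5
# Scan a constant shift f(t) = c over [0, 3] in steps of 0.02; analytically
# APF(c) = exp(-(c - lam)^2 / 2), maximized at c = lam with APF = 1.
best_c = max((c / 50.0 for c in range(151)),
             key=lambda c: average_phase_factor(c, lam))
print(best_c, average_phase_factor(best_c, lam))
```

On the unshifted path (c = 0) the average phase factor is only exp(-λ²/2) ≈ 0.32 for λ = 1.5, so the optimization gains roughly a factor of three in effective statistics for this toy case.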

  1. Can an inadequate cervical cytology sample in ThinPrep be converted to a satisfactory sample by processing it with a SurePath preparation?

    PubMed Central

    Sørbye, Sveinung Wergeland; Pedersen, Mette Kristin; Ekeberg, Bente; Williams, Merete E. Johansen; Sauer, Torill; Chen, Ying

    2017-01-01

    Background: The Norwegian Cervical Cancer Screening Program recommends screening every 3 years for women between 25 and 69 years of age. There is a large difference in the percentage of unsatisfactory samples between laboratories that use different brands of liquid-based cytology. We wished to examine if inadequate ThinPrep samples could be made satisfactory by processing them with the SurePath protocol. Materials and Methods: A total of 187 inadequate ThinPrep specimens from the Department of Clinical Pathology at University Hospital of North Norway were sent to Akershus University Hospital for conversion to SurePath medium. Ninety-one (48.7%) were processed through the automated “gynecologic” application for cervix cytology samples, and 96 (51.3%) were processed with the “nongynecological” automatic program. Results: Out of 187 samples that had been unsatisfactory by ThinPrep, 93 (49.7%) were satisfactory after being converted to SurePath. The rate of satisfactory cytology was 36.6% and 62.5% for samples run through the “gynecology” program and “nongynecology” program, respectively. Of the 93 samples that became satisfactory after conversion from ThinPrep to SurePath, 80 (86.0%) were screened as normal while 13 samples (14.0%) were given an abnormal diagnosis, which included 5 atypical squamous cells of undetermined significance, 5 low-grade squamous intraepithelial lesions, 2 atypical glandular cells not otherwise specified, and 1 atypical squamous cells cannot exclude high-grade squamous intraepithelial lesion. A total of 2.1% (4/187) of the women received a diagnosis of cervical intraepithelial neoplasia 2 or higher at later follow-up. Conclusions: Converting cytology samples from ThinPrep to SurePath processing can reduce the number of unsatisfactory samples. The samples should be run through the “nongynecology” program to ensure an adequate number of cells. PMID:28900466

  2. Importance sampling studies of helium using the Feynman-Kac path integral method

    NASA Astrophysics Data System (ADS)

    Datta, S.; Rejcek, J. M.

    2018-05-01

    In the Feynman-Kac path integral approach the eigenvalues of a quantum system can be computed using the Wiener measure, which uses Brownian particle motion. In our previous work on such systems we have observed that the Wiener process numerically converges slowly for dimensions greater than two because almost all trajectories will escape to infinity. One can speed up this process by using a generalized Feynman-Kac (GFK) method, in which the new measure associated with the trial function is stationary, so that the convergence rate becomes much faster. We thus achieve an example of "importance sampling" and, in the present work, we apply it to the Feynman-Kac (FK) path integrals for the ground and first few excited-state energies for He to speed up the convergence rate. We calculate the path integrals using space averaging rather than the time averaging as done in the past. The best previous calculations from variational computations report precisions of 10^-16 Hartrees, whereas in most cases our path integral results obtained for the ground and first excited states of He are lower than these results by about 10^-6 Hartrees or more.
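As a self-contained illustration of the Feynman-Kac idea (a 1D harmonic oscillator, not the helium calculation above), the ground-state energy can be read off from the decay rate of u(t) = E[exp(-∫V(B_s)ds)] over Brownian paths; taking the ratio of u at two times cancels the unknown overlap prefactor. All parameters here are illustrative assumptions.

```python
import math, random

def fk_survival(n_paths=20000, t1=2.0, t2=4.0, dt=0.02, seed=1):
    """Monte Carlo estimate of u(t) = E[exp(-int_0^t V(B_s) ds)] for
    V(x) = x^2/2, with B a standard Brownian motion started at 0."""
    rng = random.Random(seed)
    n1, n2 = int(t1 / dt), int(t2 / dt)
    sq = math.sqrt(dt)
    u1 = u2 = 0.0
    for _ in range(n_paths):
        x = s = 0.0
        for k in range(n2):
            x += sq * rng.gauss(0.0, 1.0)   # Brownian increment
            s += 0.5 * x * x * dt           # accumulate int V(B_s) ds
            if k + 1 == n1:
                u1 += math.exp(-s)          # record weight at time t1
        u2 += math.exp(-s)                  # record weight at time t2
    return u1 / n_paths, u2 / n_paths

u1, u2 = fk_survival()
# u(t) ~ c * exp(-E0 t) for large t; the ratio of two times cancels c:
e0 = -math.log(u2 / u1) / (4.0 - 2.0)
print(e0)  # close to 0.5, the ground-state energy of H = -(1/2) d^2/dx^2 + x^2/2
```

This plain-Wiener estimator converges slowly in higher dimensions for exactly the reason the abstract gives: most trajectories wander into regions of large V and contribute negligible weight, which is what the GFK importance sampling is designed to cure.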

  3. EPA Critical Path Science Plan Projects 19, 20 and 21: Human and Bovine Source Detection

    EPA Science Inventory

    The U.S. EPA Critical Path Science Plan Projects are: Project 19: develop novel bovine and human host-specific PCR assays and complete performance evaluation with other published methods. Project 20: Evaluate human-specific assays with water samples impacted with different lev...

  4. Girsanov reweighting for path ensembles and Markov state models

    NASA Astrophysics Data System (ADS)

    Donati, L.; Hartmann, C.; Keller, B. G.

    2017-06-01

    The sensitivity of molecular dynamics on changes in the potential energy function plays an important role in understanding the dynamics and function of complex molecules. We present a method to obtain path ensemble averages of a perturbed dynamics from a set of paths generated by a reference dynamics. It is based on the concept of path probability measure and the Girsanov theorem, a result from stochastic analysis to estimate a change of measure of a path ensemble. Since Markov state models (MSMs) of the molecular dynamics can be formulated as a combined phase-space and path ensemble average, the method can be extended to reweight MSMs by combining it with a reweighting of the Boltzmann distribution. We demonstrate how to efficiently implement the Girsanov reweighting in a molecular dynamics simulation program by calculating parts of the reweighting factor "on the fly" during the simulation, and we benchmark the method on test systems ranging from a two-dimensional diffusion process and an artificial many-body system to alanine dipeptide and valine dipeptide in implicit and explicit water. The method can be used to study the sensitivity of molecular dynamics on external perturbations as well as to reweight trajectories generated by enhanced sampling schemes to the original dynamics.
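A minimal sketch of Girsanov reweighting, assuming a 1D overdamped Langevin reference with potential V0(x) = x²/2 and a tilted target V1(x) = V0(x) − a·x (an illustrative choice, not the paper's benchmark systems): trajectories sampled under the reference drift are reweighted by exp(∫(Δb/σ)dW − ½∫(Δb/σ)²dt) with constant drift difference Δb = a, and the reweighted mean of x_T can be checked against the analytic Ornstein-Uhlenbeck value a(1 − e^(−T)).

```python
import math, random

def reweighted_mean_xT(n_traj=20000, T=1.0, dt=0.01, a=0.5, seed=7):
    """Sample reference dynamics dx = -x dt + sqrt(2) dW, then use the
    Girsanov path weight to estimate E[x_T] under the perturbed drift -x + a."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0)
    n = int(T / dt)
    num = den = 0.0
    for _ in range(n_traj):
        x, lw = 0.0, 0.0                   # lw = log of the path reweighting factor
        for _ in range(n):
            dW = math.sqrt(dt) * rng.gauss(0.0, 1.0)
            # accumulate log M = int (db/sigma) dW - (1/2) int (db/sigma)^2 dt
            lw += (a / sigma) * dW - 0.5 * (a / sigma) ** 2 * dt
            x += -x * dt + sigma * dW      # Euler-Maruyama step of the reference SDE
        w = math.exp(lw)
        num += w * x
        den += w
    return num / den

est = reweighted_mean_xT()
exact = 0.5 * (1.0 - math.exp(-1.0))       # OU mean under the tilted drift: a(1 - e^-T)
print(est, exact)
```

Note the weight is accumulated "on the fly" from the same noise increments that propagate the trajectory, mirroring the implementation strategy described in the abstract.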

  5. Using multiple travel paths to estimate daily travel distance in arboreal, group-living primates.

    PubMed

    Steel, Ruth Irene

    2015-01-01

    Primate field studies often estimate daily travel distance (DTD) in order to estimate energy expenditure and/or test foraging hypotheses. In group-living species, the center of mass (CM) method is traditionally used to measure DTD; a point is marked at the group's perceived center of mass at a set time interval or upon each move, and the distance between consecutive points is measured and summed. However, for groups using multiple travel paths, the CM method potentially creates a central path that is shorter than the individual paths and/or traverses unused areas. These problems may compromise tests of foraging hypotheses, since distance and energy expenditure could be underestimated. To better understand the magnitude of these potential biases, I designed and tested the multiple travel paths (MTP) method, in which DTD was calculated by recording all travel paths taken by the group's members, weighting each path's distance based on its proportional use by the group, and summing the weighted distances. To compare the MTP and CM methods, DTD was calculated using both methods in three groups of Udzungwa red colobus monkeys (Procolobus gordonorum; group size 30-43) for a random sample of 30 days between May 2009 and March 2010. Compared to the CM method, the MTP method provided significantly longer estimates of DTD that were more representative of the actual distance traveled and the areas used by a group. The MTP method is more time-intensive and requires multiple observers compared to the CM method. However, it provides greater accuracy for testing ecological and foraging models.
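The weighting step of the MTP method reduces to a weighted sum of path lengths; a minimal arithmetic sketch with hypothetical numbers (not data from the study):

```python
def mtp_distance(paths):
    """Daily travel distance as the sum of each path's length weighted by
    the fraction of the group that used it (weights should sum to 1)."""
    total_weight = sum(w for _, w in paths)
    assert abs(total_weight - 1.0) < 1e-9, "proportional-use weights must sum to 1"
    return sum(d * w for d, w in paths)

# Hypothetical day: three simultaneous travel paths (metres, fraction of group)
paths = [(1200.0, 0.5), (900.0, 0.3), (1500.0, 0.2)]
print(mtp_distance(paths))  # approx. 1170.0 m
```

By contrast, a center-of-mass estimate for the same day could be shorter than every individual path, which is exactly the underestimation bias the MTP method is designed to avoid.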

  6. Cytological Evaluation and REBA HPV-ID HPV Testing of Newly Developed Liquid-Based Cytology, EASYPREP: Comparison with SurePath.

    PubMed

    Lee, Youn Soo; Gong, Gyungyub; Sohn, Jin Hee; Ryu, Ki Sung; Lee, Jung Hun; Khang, Shin Kwang; Cho, Kyung-Ja; Kim, Yong-Man; Kang, Chang Suk

    2013-06-01

    The objective of this study was to evaluate a newly-developed EASYPREP liquid-based cytology method in cervicovaginal specimens and compare it with SurePath. Cervicovaginal specimens were prospectively collected from 1,000 patients with EASYPREP and SurePath. The specimens were first collected by brushing for SurePath and second for EASYPREP. The specimens of both methods were diagnosed according to the Bethesda System. Additionally, we performed to REBA HPV-ID genotyping and sequencing analysis for human papillomavirus (HPV) on 249 specimens. EASYPREP and SurePath showed even distribution of cells and were equal in cellularity and staining quality. The diagnostic agreement between the two methods was 96.5%. Based on the standard of SurePath, the sensitivity, specificity, positive predictive value, and negative predictive value of EASYPREP were 90.7%, 99.2%, 94.8%, and 98.5%, respectively. The positivity of REBA HPV-ID was 49.4% and 95.1% in normal and abnormal cytological samples, respectively. The result of REBA HPV-ID had high concordance with sequencing analysis. EASYPREP provided comparable results to SurePath in the diagnosis and staining quality of cytology examinations and in HPV testing with REBA HPV-ID. EASYPREP could be another LBC method choice for the cervicovaginal specimens. Additionally, REBA HPV-ID may be a useful method for HPV genotyping.

  7. Method and apparatus for probing relative volume fractions

    DOEpatents

    Jandrasits, Walter G.; Kikta, Thomas J.

    1998-01-01

    A relative volume fraction probe particularly for use in a multiphase fluid system includes two parallel conductive paths defining therebetween a sample zone within the system. A generating unit generates time varying electrical signals which are inserted into one of the two parallel conductive paths. A time domain reflectometer receives the time varying electrical signals returned by the second of the two parallel conductive paths and, responsive thereto, outputs a curve of impedance versus distance. An analysis unit then calculates the area under the curve, subtracts the calculated area from an area produced when the sample zone consists entirely of material of a first fluid phase, and divides this calculated difference by the difference between an area produced when the sample zone consists entirely of material of the first fluid phase and an area produced when the sample zone consists entirely of material of a second fluid phase. The result is the volume fraction.
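The analysis unit's computation reduces to a linear interpolation between two single-phase calibration areas; a minimal sketch (variable names are illustrative, and the result is read as the fraction of the second phase under the sign convention assumed here):

```python
def volume_fraction(area, area_phase1, area_phase2):
    """Volume fraction per the patent's analysis step:
    (A_phase1 - A_measured) / (A_phase1 - A_phase2)."""
    return (area_phase1 - area) / (area_phase1 - area_phase2)

# Hypothetical impedance-vs-distance curve areas from the time domain
# reflectometer: 10.0 when the sample zone is all phase 1, 2.0 when all phase 2.
print(volume_fraction(6.0, 10.0, 2.0))  # 0.5
```

A measured area equal to either calibration area yields 0 or 1, so the formula interpolates linearly between the two pure-phase endpoints.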

  8. Method and apparatus for probing relative volume fractions

    DOEpatents

    Jandrasits, W.G.; Kikta, T.J.

    1998-03-17

    A relative volume fraction probe particularly for use in a multiphase fluid system includes two parallel conductive paths defining therebetween a sample zone within the system. A generating unit generates time varying electrical signals which are inserted into one of the two parallel conductive paths. A time domain reflectometer receives the time varying electrical signals returned by the second of the two parallel conductive paths and, responsive thereto, outputs a curve of impedance versus distance. An analysis unit then calculates the area under the curve, subtracts the calculated area from an area produced when the sample zone consists entirely of material of a first fluid phase, and divides this calculated difference by the difference between an area produced when the sample zone consists entirely of material of the first fluid phase and an area produced when the sample zone consists entirely of material of a second fluid phase. The result is the volume fraction. 9 figs.

  9. Extended depth measurement for a Stokes sample imaging polarimeter

    NASA Astrophysics Data System (ADS)

    Dixon, Alexander W.; Taberner, Andrew J.; Nash, Martyn P.; Nielsen, Poul M. F.

    2018-02-01

    A non-destructive imaging technique is required for quantifying the anisotropic and heterogeneous structural arrangement of collagen in soft tissue membranes, such as bovine pericardium, which are used in the construction of bioprosthetic heart valves. Previously, our group developed a Stokes imaging polarimeter that measures the linear birefringence of samples in a transmission arrangement. With this device, linear retardance and optic axis orientation; can be estimated over a sample using simple vector algebra on Stokes vectors in the Poincaré sphere. However, this method is limited to a single path retardation of a half-wave, limiting the thickness of samples that can be imaged. The polarimeter has been extended to allow illumination of narrow bandwidth light of controllable wavelength through achromatic lenses and polarization optics. We can now take advantage of the wavelength dependence of relative retardation to remove ambiguities that arise when samples have a single path retardation of a half-wave to full-wave. This effectively doubles the imaging depth of this method. The method has been validated using films of cellulose of varied thickness, and applied to samples of bovine pericardium.

  10. An improved sampling method of complex network

    NASA Astrophysics Data System (ADS)

    Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing

    2014-12-01

    Sampling subnets is an important topic in complex network research. Sampling methods influence the structure and characteristics of the subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method preserves the similarity between the sampled subnet and the original network in degree distribution, connectivity rate, and average shortest path. The method is applicable to situations where prior knowledge about the degree distribution of the original network is insufficient.
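The abstract does not spell out the RMSC procedure, but the general multiple-seed snowball idea can be sketched as follows (all parameters and names are illustrative assumptions, not the authors' algorithm): start from several random seed nodes, then for a few rounds add up to k random neighbours of every node currently in the sample.

```python
import random

def snowball_sample(adj, n_seeds=2, rounds=2, k=3, seed=42):
    """Multiple-seed snowball sampling sketch on an adjacency dict of sets."""
    rng = random.Random(seed)
    sampled = set(rng.sample(sorted(adj), n_seeds))   # random seed nodes
    for _ in range(rounds):
        frontier = set()
        for node in sorted(sampled):
            nbrs = sorted(adj[node] - sampled)        # unvisited neighbours
            frontier.update(rng.sample(nbrs, min(k, len(nbrs))))
        sampled |= frontier                           # snowball grows
    return sampled

# Small toy graph as an adjacency dict of sets
adj = {i: set() for i in range(12)}
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7),
         (7, 8), (8, 9), (9, 10), (10, 11), (3, 7), (1, 9)]
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

sub = snowball_sample(adj)
print(sorted(sub))
```

Multiple random seeds give the global exploration, while the per-node neighbour expansion captures local structure, which is the combination the abstract credits to RMSC.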

  11. System and Method for Measuring the Transfer Function of a Guided Wave Device

    NASA Technical Reports Server (NTRS)

    Froggatt, Mark E. (Inventor); Erdogan, Turan (Inventor)

    2002-01-01

    A method and system are provided for measuring the N×N scalar transfer function elements for an N-port guided wave device. Optical energy of a selected wavelength is generated at a source and directed along N reference optical paths having N reference path lengths. Each reference optical path terminates in one of N detectors such that N reference signals are produced at the N detectors. The reference signals are indicative of amplitude, phase and frequency of the optical energy carried along the N reference optical paths. The optical energy from the source is also directed to the N ports of the guided wave device and then on to each of the N detectors such that N measurement optical paths are defined between the source and each of the N detectors. A portion of the optical energy is modified in terms of at least one of the amplitude and phase to produce N modified signals at each of the N detectors. At each of the N detectors, each of the N modified signals is combined with a corresponding one of the N reference signals to produce corresponding N combined signals at each of the N detectors. A total of N^2 measurement signals are generated by the N detectors. Each of the N^2 measurement signals is sampled at a wave number increment Δk so that N^2 sampled signals are produced. The N×N transfer function elements are generated using the N^2 sampled signals. Reference and measurement path length constraints are defined such that the N combined signals at each of the N detectors are spatially separated from one another in the time domain.

  12. Physically motivated global alignment method for electron tomography

    DOE PAGES

    Sanders, Toby; Prange, Micah; Akatay, Cem; ...

    2015-04-08

    Electron tomography is widely used for nanoscale determination of 3-D structures in many areas of science. Determining the 3-D structure of a sample from electron tomography involves three major steps: acquisition of a sequence of 2-D projection images of the sample with the electron microscope, alignment of the images to a common coordinate system, and 3-D reconstruction and segmentation of the sample from the aligned image data. The resolution of the 3-D reconstruction is directly influenced by the accuracy of the alignment, and therefore, it is crucial to have a robust and dependable alignment method. In this paper, we develop a new alignment method which avoids the use of markers and instead traces the computed paths of many identifiable ‘local’ center-of-mass points as the sample is rotated. Compared with traditional correlation schemes, the alignment method presented here is resistant to the cumulative error observed with correlation techniques, has very rigorous mathematical justification, and is very robust since many points and paths are used, all of which inevitably improves the quality of the reconstruction and confidence in the scientific results.

  13. Methodology for Augmenting Existing Paths with Additional Parallel Transects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, John E.

    2013-09-30

    Visual Sample Plan (VSP) is sample planning software that is used, among other purposes, to plan transect sampling paths to detect areas that were potentially used for munition training. This module was developed for application on a large site where existing roads and trails were to be used as primary sampling paths. Gap areas between these primary paths needed to be found and covered with parallel transect paths. These gap areas represent areas on the site that are more than a specified distance from a primary path. The added parallel paths needed to optionally be connected together into a single path—the shortest path possible. The paths also needed to optionally be attached to existing primary paths, again with the shortest possible path. Finally, the process must be repeatable and predictable so that the same inputs (primary paths, specified distance, and path options) will result in the same set of new paths every time. This methodology was developed to meet those specifications.
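The gap test described above can be sketched as a distance query against the primary-path polyline: a candidate location lies in a gap if it is farther than the specified distance from every path segment (a minimal geometric sketch, not VSP's implementation; all coordinates are hypothetical):

```python
import math

def point_segment_dist(p, a, b):
    """Euclidean distance from point p to segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    # Parameter of the closest point on the segment, clamped to [0, 1]
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def gap_points(candidates, primary_path, max_dist):
    """Candidate points farther than max_dist from every primary-path segment."""
    segs = list(zip(primary_path, primary_path[1:]))
    return [p for p in candidates
            if min(point_segment_dist(p, a, b) for a, b in segs) > max_dist]

path = [(0.0, 0.0), (10.0, 0.0)]      # an existing road used as a primary path
grid = [(x, y) for x in (0.0, 5.0, 10.0) for y in (0.5, 3.0)]
print(gap_points(grid, path, 1.0))    # the three points at y = 3.0
```

Running the same inputs always yields the same gap set, which matches the repeatability requirement stated in the abstract; the subsequent transect-joining step would then connect the gap coverage with shortest connecting paths.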

  14. Accelerating ab initio path integral molecular dynamics with multilevel sampling of potential surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Geng, Hua Y. (E-mail: huay.geng@gmail.com; Department of Chemistry and Chemical Biology, Cornell University, Baker Laboratory, Ithaca, NY 14853)

    A multilevel approach to sample the potential energy surface in a path integral formalism is proposed. The purpose is to reduce the required number of ab initio evaluations of energy and forces in ab initio path integral molecular dynamics (AI-PIMD) simulation, without compromising the overall accuracy. To validate the method, the internal energy and free energy of an Einstein crystal are calculated and compared with the analytical solutions. As a preliminary application, we assess the performance of the method in a realistic model—the FCC phase of dense atomic hydrogen—in which the calculated result shows that the acceleration rate is about 3- to 4-fold for a two-level implementation, and can be increased up to 10 times if extrapolation is used. With only 16 beads used for the ab initio potential sampling, this method gives a well-converged internal energy. The residual error in pressure is just about 3 GPa, whereas it is about 20 GPa for a plain AI-PIMD calculation with the same number of beads. The vibrational free energy of the FCC phase of dense hydrogen at 300 K is also calculated with an AI-PIMD thermodynamic integration method, which gives a result of about 0.51 eV/proton at a density of r_s = 0.912.

  15. Neutron capture studies with a short flight path

    NASA Astrophysics Data System (ADS)

    Walter, Stephan; Heil, Michael; Käppeler, Franz; Plag, Ralf; Reifarth, René

    The time of flight (TOF) method is an important tool for the experimental determination of neutron capture cross sections, which are needed for s-process nucleosynthesis in general, and for analyses of branchings in the s-process reaction path in particular. So far, sample masses of at least several milligrams are required to compensate for limitations in the currently available neutron fluxes. This constraint leads to unacceptable backgrounds for most of the relevant unstable branch point nuclei, due to the decay activity of the sample. A possible solution has been proposed by the NCAP project at the University of Frankfurt. A first step in this direction is reported here, which aims at enhancing the sensitivity of the Karlsruhe TOF array by reducing the neutron flight path to only a few centimeters. Though sample masses in the microgram regime can be used with this approach, the increase in neutron flux has to be paid for with a higher background from the prompt flash related to neutron production. Test measurements with Au samples are reported.

  16. Method and apparatus for probing relative volume fractions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jandrasits, W.G.; Kikta, T.J.

    1996-12-31

    A relative volume fraction probe particularly for use in a multiphase fluid system includes two parallel conductive paths defining therebetween a sample zone within the system. A generating unit generates time varying electrical signals which are inserted into one of the two parallel conductive paths. A time domain reflectometer receives the time varying electrical signals returned by the second of the two parallel conductive paths and, responsive thereto, outputs a curve of impedance versus distance. An analysis unit then calculates the area under the curve, subtracts the calculated area from an area produced when the sample zone consists entirely of material of a first fluid phase, and divides this calculated difference by the difference between an area produced when the sample zone consists entirely of material of the first fluid phase and an area produced when the sample zone consists entirely of material of a second fluid phase. The result is the volume fraction.

  17. Computational path planner for product assembly in complex environments

    NASA Astrophysics Data System (ADS)

    Shang, Wei; Liu, Jianhua; Ning, Ruxin; Liu, Mi

    2013-03-01

    Assembly path planning is a crucial problem in assembly-related design and manufacturing processes. Sampling-based motion planning algorithms are used for computational assembly path planning. However, the performance of such algorithms may degrade significantly in environments with complex product structures, narrow passages, or other challenging scenarios. A computational path planner for automatic assembly path planning in complex 3D environments is presented. The global planning process is divided into three phases based on the environment, and specific algorithms are proposed and utilized in each phase to solve the challenging issues. A novel ray-test-based stochastic collision detection method is proposed to evaluate the intersection between two polyhedral objects. This method avoids the fake collisions of conventional methods and relaxes the geometric constraint when a part has to be removed while in surface contact with other parts. A refined history-based rapidly-exploring random tree (RRT) algorithm, which biases the growth of the tree based on its planning history, is proposed and employed in the planning phase where the path is simple but the space is highly constrained. A novel adaptive RRT algorithm is developed for path planning problems with challenging scenarios and uncertain environments. With extension values assigned to each tree node and extension schemes applied, the tree can adapt its growth to explore complex environments more efficiently. Experiments on the key algorithms are carried out, and comparisons are made between the conventional path planning algorithms and the presented ones. The comparison results show that, based on the proposed algorithms, the path planner can compute assembly paths in challenging complex environments more efficiently and with a higher success rate. This research provides a reference for the study of computational assembly path planning in complex environments.
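A minimal, generic 2D RRT sketch illustrating the baseline algorithm the paper refines (not its history-based or adaptive variants; the workspace, obstacle, and parameters are assumptions):

```python
import math, random

def rrt(start, goal, is_free, step=0.5, goal_bias=0.1, max_iter=5000, seed=3):
    """Minimal 2D RRT: grow a tree from start, steering toward random samples
    (with occasional goal bias) until a node lands within one step of goal."""
    rng = random.Random(seed)
    nodes, parent = [start], {start: None}
    for _ in range(max_iter):
        target = goal if rng.random() < goal_bias else (rng.uniform(0, 10),
                                                        rng.uniform(0, 10))
        near = min(nodes, key=lambda n: math.dist(n, target))  # nearest tree node
        d = math.dist(near, target)
        if d == 0:
            continue
        new = (near[0] + step * (target[0] - near[0]) / d,     # extend by one step
               near[1] + step * (target[1] - near[1]) / d)
        if not is_free(new):                                   # collision check
            continue
        nodes.append(new)
        parent[new] = near
        if math.dist(new, goal) <= step:                       # reached goal region
            path, n = [], new
            while n is not None:
                path.append(n)
                n = parent[n]
            return path[::-1]
    return None

# Free space: a 10x10 square minus a circular obstacle at (5, 5)
free = lambda p: math.dist(p, (5.0, 5.0)) > 1.5
path = rrt((1.0, 1.0), (9.0, 9.0), free)
print(len(path), path[0], path[-1])
```

The paper's refinements attach extra state to this skeleton: a planning history to bias `near` selection, and per-node extension values to adapt the `step` behavior in cluttered regions.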

  18. Analyzing Water's Optical Absorption

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A cooperative agreement between World Precision Instruments (WPI), Inc., and Stennis Space Center has led to the UltraPath(TM) device, which provides a more efficient method for analyzing the optical absorption of water samples at sea. UltraPath is a unique, high-performance absorbance spectrophotometer with user-selectable light path lengths. It is an ideal tool for any study requiring precise and highly sensitive spectroscopic determination of analytes, either in the laboratory or the field. As a low-cost, rugged, and portable system capable of high-sensitivity measurements in widely divergent waters, UltraPath will help scientists examine the role that coastal ocean environments play in the global carbon cycle. UltraPath(TM) is a trademark of World Precision Instruments, Inc. LWCC(TM) is a trademark of World Precision Instruments, Inc.

  19. The “Path” Not Taken: Exploring Structural Differences in Mapped- Versus Shortest-Network-Path School Travel Routes

    PubMed Central

    Larsen, Kristian; Faulkner, Guy E. J.; Stone, Michelle R.

    2013-01-01

    Objectives. School route measurement often involves estimating the shortest network path. We challenged the relatively uncritical adoption of this method in school travel research and tested the route discordance hypothesis that several types of difference exist between shortest network paths and reported school routes. Methods. We constructed the mapped and the shortest-path-through-network routes for a sample of 759 children aged 9 to 13 years in grades 5 and 6 (boys = 45%, girls = 54%, unreported gender = 1%), in Toronto, Ontario, Canada. We used Wilcoxon signed-rank tests to compare reported with shortest-path route measures including distance, route directness, intersection crossings, and route overlap. Measurement difference was explored by mode and location. Results. We found statistical evidence of route discordance for walkers and children who were driven and detected it more often for inner suburban cases. Evidence of route discordance varied by mode and school location. Conclusions. We found statistically significant differences for route structure and built environment variables measured along reported and geographic information systems–based shortest-path school routes. Uncertainty produced by the shortest-path approach challenges its conceptual and empirical validity in school travel research. PMID:23865648
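The core comparison can be sketched on a toy street graph (hypothetical nodes and distances, not the Toronto network): compute the shortest network path with Dijkstra's algorithm, measure a reported route on the same graph, and take their difference as a simple instance of route discordance.

```python
import heapq

def dijkstra(graph, src, dst):
    """Shortest-path distance on a weighted street graph (adjacency dict)."""
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def route_length(graph, route):
    """Length of a reported route given as a node sequence."""
    return sum(graph[a][b] for a, b in zip(route, route[1:]))

# Toy street network: home H, school S, intersections A-C (edge lengths in metres)
g = {"H": {"A": 200.0, "B": 300.0},
     "A": {"H": 200.0, "S": 400.0, "C": 150.0},
     "B": {"H": 300.0, "C": 100.0},
     "C": {"A": 150.0, "B": 100.0, "S": 350.0},
     "S": {"A": 400.0, "C": 350.0}}

shortest = dijkstra(g, "H", "S")                   # 600.0 via H-A-S
reported = route_length(g, ["H", "B", "C", "S"])   # 750.0, the child's actual route
print(shortest, reported, reported - shortest)
```

In this toy case the shortest-path assumption underestimates the travelled distance by 150 m and traverses entirely different streets, which is the kind of structural discordance the study quantified.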

  20. Optical Path Switching Based Differential Absorption Radiometry for Substance Detection

    NASA Technical Reports Server (NTRS)

    Sachse, Glen W. (Inventor)

    2000-01-01

    A system and method are provided for detecting one or more substances. An optical path switch divides sample path radiation into a time series of alternating first polarized components and second polarized components. The first polarized components are transmitted along a first optical path and the second polarized components along a second optical path. A first gasless optical filter train filters the first polarized components to isolate at least a first wavelength band thereby generating first filtered radiation. A second gasless optical filter train filters the second polarized components to isolate at least a second wavelength band thereby generating second filtered radiation. The first wavelength band and second wavelength band are unique. Further, spectral absorption of a substance of interest is different at the first wavelength band as compared to the second wavelength band. A beam combiner combines the first and second filtered radiation to form a combined beam of radiation. A detector is disposed to monitor magnitude of at least a portion of the combined beam alternately at the first wavelength band and the second wavelength band as an indication of the concentration of the substance in the sample path.

  1. Liquid waveguide spectrophotometric measurement of nanomolar ammonium in seawater based on the indophenol reaction with o-phenylphenol (OPP).

    PubMed

    Hashihama, Fuminori; Kanda, Jota; Tauchi, Ami; Kodama, Taketoshi; Saito, Hiroaki; Furuya, Ken

    2015-10-01

    We describe a highly sensitive colorimetric method for the determination of nanomolar concentrations of ammonium in seawater based on the indophenol reaction with o-phenylphenol [(1,1'-biphenyl)-2-ol, abbreviated as OPP]. OPP is available as non-toxic, stable flaky crystals with no caustic odor and has some advantages over phenol in practical use. The method was established by using a gas-segmented continuous flow analyzer equipped with two types of long path liquid waveguide capillary cell, LWCCs (100 cm and 200 cm) and an UltraPath (200 cm), which have inner diameters of 0.55 mm and 2 mm, respectively. The reagent concentrations, flow rates of the pumping tubes, and reaction path and temperature were determined on the basis of a manual indophenol blue method with OPP (Kanda, Water Res. 29 (1995) 2746-2750). The sample mixed with reagents that form indophenol blue dye was measured at 670 nm. Aged subtropical surface water was used as a blank, a matrix of standards, and the carrier. The detection limits of the analytical systems with a 100 cm LWCC, a 200 cm LWCC, and a 200 cm UltraPath were 6, 4, and 4 nM, respectively. These systems had high precision (<4% at 100 nM) and a linear dynamic range up to 200 nM. Non-linear baseline drift did not occur when using the UltraPath system. This is due to the elimination of cell clogging because of the larger inner diameter of the UltraPath compared to the LWCCs. The UltraPath system is thus more suitable for long-term measurements compared with the LWCC systems. The results of the proposed sensitive colorimetry and a conventional colorimetry for the determination of seawater samples showed no significant difference. The proposed analytical systems were applied to underway surface monitoring and vertical observation in the oligotrophic South Pacific. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Interleaved Spiral-In/Out with Application to fMRI

    PubMed Central

    Law, Christine S.; Glover, Gary H.

    2009-01-01

    The conventional spiral-in/out trajectory samples k-space sufficiently in both the spiral-in and the spiral-out paths to enable creation of separate images. We propose an interleaved spiral-in/out trajectory comprising a spiral-in path that gathers half of the k-space data, and a complementary spiral-out path that gathers the other half. The readout duration is thereby reduced by approximately half, offering two distinct advantages: reduction of signal dropout due to susceptibility-induced field gradients (at the expense of signal-to-noise ratio), and the ability to achieve higher spatial resolution when the readout duration is identical to the conventional method. Two reconstruction methods are described; both involve temporal filtering to remove aliasing artifacts. Empirically, interleaved spiral-in/out images are free from false activation resulting from signal pileup around the air/tissue interface, which is common in the conventional spiral-out method. Comparisons with conventional methods using a hyperoxia stimulus reveal greater frontal-orbital activation volumes but a slight reduction of overall activation in other brain regions. PMID:19449373

  3. Improved transition path sampling methods for simulation of rare events

    NASA Astrophysics Data System (ADS)

    Chopra, Manan; Malshe, Rohit; Reddy, Allam S.; de Pablo, J. J.

    2008-04-01

    The free energy surfaces of a wide variety of systems encountered in physics, chemistry, and biology are characterized by the existence of deep minima separated by numerous barriers. One of the central aims of recent research in computational chemistry and physics has been to determine how transitions occur between deep local minima on rugged free energy landscapes, and transition path sampling (TPS) Monte-Carlo methods have emerged as an effective means for numerical investigation of such transitions. Many of the shortcomings of TPS-like approaches generally stem from their high computational demands. Two new algorithms are presented in this work that improve the efficiency of TPS simulations. The first algorithm uses biased shooting moves to render the sampling of reactive trajectories more efficient. The second algorithm is shown to substantially improve the accuracy of the transition state ensemble by introducing a subset of local transition path simulations in the transition state. The system considered in this work consists of a two-dimensional rough energy surface that is representative of numerous systems encountered in applications. When taken together, these algorithms provide gains in efficiency of over two orders of magnitude when compared to traditional TPS simulations.
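A minimal sketch of the basic TPS shooting move that the paper improves on, reduced for brevity to a one-dimensional double well with overdamped Langevin dynamics (the paper's system is a two-dimensional rough surface, and its biased-shooting and local-path refinements are not implemented here); all parameters are arbitrary:

```python
import math, random

random.seed(0)
BETA, DT, L = 1.0, 0.01, 500   # inverse temperature, time step, path length

def force(x):                   # -dU/dx for the double well U(x) = (x^2 - 1)^2
    return -4.0 * x * (x * x - 1.0)

def step(x):                    # one overdamped Langevin (Brownian) step
    return x + force(x) * DT + math.sqrt(2.0 * DT / BETA) * random.gauss(0, 1)

def segment(x0, n):             # free forward run of n steps from x0
    xs = [x0]
    for _ in range(n):
        xs.append(step(xs[-1]))
    return xs

def in_A(x):                    # reactant basin
    return x < -0.8

def in_B(x):                    # product basin
    return x > 0.8

def shoot(path):
    """Two-way shooting move: regrow the path from a random slice."""
    i = random.randrange(1, L)
    back = segment(path[i], i)[::-1]        # regrown 'past' (reversible dynamics)
    fwd = segment(path[i], L - i)
    return back[:-1] + fwd

# Seed path: an (unphysical) linear ramp connecting A to B; shooting moves
# progressively replace it with genuine dynamical segments.
path = [-1.0 + 2.0 * k / L for k in range(L + 1)]

accepted = 0
for _ in range(200):
    trial = shoot(path)
    if in_A(trial[0]) and in_B(trial[-1]):  # accept only reactive trials
        path, accepted = trial, accepted + 1
print(accepted, in_A(path[0]) and in_B(path[-1]))
```

The inefficiency the paper targets is visible here: unbiased shooting from points deep in either basin is almost never accepted, which is what biased shooting moves are designed to fix.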

  4. Inclusion of trial functions in the Langevin equation path integral ground state method: Application to parahydrogen clusters and their isotopologues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, Matthew; Constable, Steve; Ing, Christopher

    2014-06-21

    We developed and studied the implementation of trial wavefunctions in the newly proposed Langevin equation Path Integral Ground State (LePIGS) method [S. Constable, M. Schmidt, C. Ing, T. Zeng, and P.-N. Roy, J. Phys. Chem. A 117, 7461 (2013)]. The LePIGS method is based on the Path Integral Ground State (PIGS) formalism combined with Path Integral Molecular Dynamics, using Langevin-equation sampling of the canonical distribution. This LePIGS method originally incorporated a trivial trial wavefunction, ψ_T, equal to unity. The present paper assesses the effectiveness of three different trial wavefunctions on three isotopes of hydrogen for cluster sizes N = 4, 8, and 13. The trial wavefunctions of interest are the unity trial wavefunction used in the original LePIGS work, a Jastrow trial wavefunction that includes correlations due to hard-core repulsions, and a normal mode trial wavefunction that includes information on the equilibrium geometry. Based on this analysis, we opt for the Jastrow wavefunction to calculate energetic and structural properties for parahydrogen, orthodeuterium, and paratritium clusters of size N = 4 − 19, 33. Energetic and structural properties are obtained and compared to earlier work based on Monte Carlo PIGS simulations to study the accuracy of the proposed approach. The new results for paratritium clusters will serve as a benchmark for future studies. This paper provides a detailed, yet general method for optimizing the necessary parameters required for the study of the ground state of a large variety of systems.

  5. Method and apparatus for vapor detection

    NASA Technical Reports Server (NTRS)

    Lerner, Melvin (Inventor); Hood, Lyal V. (Inventor); Rommel, Marjorie A. (Inventor); Pettitt, Bruce C. (Inventor); Erikson, Charles M. (Inventor)

    1980-01-01

    The method disclosed herein may be practiced by passing the vapors to be sampled along a path with halogen vapor, preferably chlorine vapor, heating the mixed vapors to halogenate those of the sampled vapors subject to halogenation, removing unreacted halogen vapor, and then sensing the vapors for organic halogenated compounds. The apparatus disclosed herein comprises means for flowing the vapors, both sample and halogen vapors, into a common path, means for heating the mixed vapors to effect the halogenation reaction, means for removing unreacted halogen vapor, and a sensing device for sensing halogenated compounds. By such a method and means, the vapors of low molecular weight hydrocarbons, ketones and alcohols, when present, such as methane, ethane, acetone, ethanol, and the like are converted, at least in part, to halogenated compounds, then the excess halogen removed or trapped, and the resultant vapors of the halogenated compounds sensed or detected. The system is highly sensitive. For example, acetone in a concentration of 30 parts per billion (volume) is readily detected.

  6. Path Analysis of Work Family Conflict, Job Salary and Promotion Satisfaction, Work Engagement to Subjective Well-Being of the Primary and Middle School Principals

    ERIC Educational Resources Information Center

    Hu, Chun-mei; Cui, Shu-jing; Wang, Lei

    2016-01-01

    Objective: To investigate the path analysis of work family conflict, job salary and promotion satisfaction, work engagement to subjective well-being of the primary and middle school principals, and provide advice for enhancing their well-being. Methods: Using convenient sampling, totally 300 primary and middle school principals completed the WFC,…

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akagi, Sheryl; Burling, Ian R.; Mendoza, Albert

    We report trace-gas emission factors from three pine-understory prescribed fires in South Carolina, U.S. measured during the fall of 2011. The fires were an attempt to simulate high-intensity burns and the fuels included mature pine stands not frequently subjected to prescribed fire that were lit following a sustained period of drought. In this work we focus on the emission factor measurements made using a fixed open-path gas analyzer Fourier transform infrared (FTIR) system. We compare these emission factors with those measured using a roving, point sampling, land-based FTIR and an airborne FTIR that were deployed on the same fires. We also compare to emission factors measured by a similar open-path FTIR system deployed on savanna fires in Africa. The data suggest that the method in which the smoke is sampled can strongly influence the relative abundance of the emissions that are observed. The airborne FTIR probed the bulk of the emissions, which were lofted in the convection column, and the downwind chemistry, while the roving ground-based point sampling FTIR measured the contribution of individual residual smoldering combustion fuel elements scattered throughout the burn site. The open-path FTIR provided a fixed path-integrated sample of emissions produced directly upwind mixed with emissions that were redirected by wind gusts, or right after ignition and before the adjacent plume achieved significant vertical development. It typically probed two distinct combustion regimes, “flaming-like” (immediately after adjacent ignition) and “smoldering-like”, denoted “early” and “late”, respectively. The calculated emission factors from open-path measurements were closer to the airborne than to the point measurements, but this could vary depending on the calculation method or from fire to fire given the changing MCE and dynamics over the duration of a typical burn. The emission factors for species whose emissions are not highly fuel dependent (e.g. CH4 and CH3OH) from all three systems can be plotted versus modified combustion efficiency and fit to a single consistent trend, suggesting that differences between the systems for these species may be mainly due to the unique mix of flaming and smoldering that each system sampled. For other more fuel dependent species, the different fuels sampled also likely contributed to platform differences in emission factors. The path-integrated sample of the ground-level smoke layer adjacent to the fire provided by the open-path measurements is important for estimating fire-line exposure to smoke for wildland fire personnel. We provide a table of estimated fire-line exposures for numerous known air toxics based on synthesizing results from several studies. Our data suggest that peak exposures are more likely to challenge permissible exposure limits for wildland fire personnel than shift-average exposures.

  8. Conceptual Soundness, Metric Development, Benchmarking, and Targeting for PATH Subprogram Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosey, G.; Doris, E.; Coggeshall, C.

    The objective of this study is to evaluate the conceptual soundness of the U.S. Department of Housing and Urban Development (HUD) Partnership for Advancing Technology in Housing (PATH) program's revised goals and establish and apply a framework to identify and recommend metrics that are the most useful for measuring PATH's progress. This report provides an evaluative review of PATH's revised goals, outlines a structured method for identifying and selecting metrics, proposes metrics and benchmarks for a sampling of individual PATH programs, and discusses other metrics that potentially could be developed that may add value to the evaluation process. The framework and individual program metrics can be used for ongoing management improvement efforts and to inform broader program-level metrics for government reporting requirements.

  9. Preserving correlations between trajectories for efficient path sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gingrich, Todd R.; Geissler, Phillip L.; Chemical Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720

    2015-06-21

    Importance sampling of trajectories has proved a uniquely successful strategy for exploring rare dynamical behaviors of complex systems in an unbiased way. Carrying out this sampling, however, requires an ability to propose changes to dynamical pathways that are substantial, yet sufficiently modest to obtain reasonable acceptance rates. Satisfying this requirement becomes very challenging in the case of long trajectories, due to the characteristic divergences of chaotic dynamics. Here, we examine schemes for addressing this problem, which engineer correlation between a trial trajectory and its reference path, for instance using artificial forces. Our analysis is facilitated by a modern perspective on Markov chain Monte Carlo sampling, inspired by non-equilibrium statistical mechanics, which clarifies the types of sampling strategies that can scale to long trajectories. Viewed in this light, the most promising such strategy guides a trial trajectory by manipulating the sequence of random numbers that advance its stochastic time evolution, as done in a handful of existing methods. In cases where this “noise guidance” synchronizes trajectories effectively, as in the Glauber dynamics of a two-dimensional Ising model, we show that efficient path sampling can be achieved for even very long trajectories.
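The "noise guidance" idea can be illustrated on a simple linear overdamped system: re-using a slightly perturbed copy of the reference path's random numbers yields a trial trajectory that stays strongly correlated with the reference, while an entirely fresh noise sequence does not. The dynamics and parameters below are invented for illustration:

```python
import math, random

random.seed(1)
N, DT = 2000, 0.01

def force(x):                         # harmonic well, -dU/dx with U = 5 x^2 / 2
    return -5.0 * x

def integrate(x0, noises):
    """Overdamped Langevin trajectory driven by an explicit noise sequence."""
    xs = [x0]
    for z in noises:
        xs.append(xs[-1] + force(xs[-1]) * DT + math.sqrt(2 * DT) * z)
    return xs

def correlation(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

ref_noise = [random.gauss(0, 1) for _ in range(N)]
ref = integrate(0.0, ref_noise)

# Noise guidance: perturb the random numbers slightly, keep the rest.
eps = 0.2
guided_noise = [math.sqrt(1 - eps ** 2) * z + eps * random.gauss(0, 1)
                for z in ref_noise]
guided = integrate(0.0, guided_noise)

# Naive proposal: an entirely fresh noise sequence.
fresh = integrate(0.0, [random.gauss(0, 1) for _ in range(N)])

print(correlation(ref, guided), correlation(ref, fresh))
```

Because the trial is built from nearly the same randomness, it tracks the reference path closely; this synchronization is what keeps acceptance rates reasonable for long trajectories.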

  10. Heating and thermal control of brazing technique to break contamination path for potential Mars sample return

    NASA Astrophysics Data System (ADS)

    Bao, Xiaoqi; Badescu, Mircea; Sherrit, Stewart; Bar-Cohen, Yoseph; Campos, Sergio

    2017-04-01

    The potential return of Mars sample material is of great interest to the planetary science community, as it would enable extensive analysis of samples with highly sensitive laboratory instruments. It is important to make sure such a mission concept would not bring any living microbes, which may possibly exist on Mars, back to Earth's environment. In order to ensure the isolation of Mars microbes from Earth's atmosphere, a brazing sealing and sterilizing technique was proposed to break the Mars-to-Earth contamination path. Effectively heating the brazing zone in high vacuum space and controlling the sample temperature for sample integrity are key challenges to the implementation of this technique. The break-the-chain procedures for container configurations, which are being considered, were simulated by multi-physics finite element models. Different heating methods including induction and resistive/radiation were evaluated. The temperature profiles of Martian samples in a proposed container structure were predicted. The results show that the sealing and sterilizing process can be controlled such that the sample temperature is maintained below the level that may cause damage, and that the brazing technique is a feasible approach to breaking the contamination path.

  11. Stochastic stability properties of jump linear systems

    NASA Technical Reports Server (NTRS)

    Feng, Xiangbo; Loparo, Kenneth A.; Ji, Yuandong; Chizeck, Howard J.

    1992-01-01

    Jump linear systems are defined as a family of linear systems with randomly jumping parameters (usually governed by a Markov jump process) and are used to model systems subject to failures or changes in structure. The authors study stochastic stability properties in jump linear systems and the relationship among various moment and sample path stability properties. It is shown that all second moment stability properties are equivalent and are sufficient for almost sure sample path stability, and a testable necessary and sufficient condition for second moment stability is derived. The Lyapunov exponent method for the study of almost sure sample stability is discussed, and a theorem which characterizes the Lyapunov exponents of jump linear systems is presented.
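A hedged sketch of a scalar jump linear system with two modes, together with the standard mean-square (second moment) stability test for this discrete-time case: the system is mean-square stable iff the spectral radius of a matrix built from the transition probabilities and squared mode gains is below 1. The specific modes and transition matrix are invented:

```python
import random
random.seed(2)

# Two scalar modes: one stable (a = 0.5), one unstable (a = 1.2).
A = [0.5, 1.2]
# Markov transition matrix: the chain mostly stays in the stable mode.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def simulate(steps, x0=1.0):
    """Sample path: apply the current mode's gain, then let the chain jump."""
    mode, x, path = 0, x0, [x0]
    for _ in range(steps):
        x = A[mode] * x
        mode = 0 if random.random() < P[mode][0] else 1
        path.append(x)
    return path

# Second-moment test for scalar modes: mean-square stable iff the spectral
# radius of M[i][j] = P[j][i] * A[j]**2 is below 1 (eigenvalues are real here
# because the off-diagonal entries are positive).
M = [[P[j][i] * A[j] ** 2 for j in range(2)] for i in range(2)]
tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
rho = (tr + (tr * tr - 4 * det) ** 0.5) / 2   # largest eigenvalue
print("spectral radius:", rho)

path = simulate(200)
print("final |x|:", abs(path[-1]))
```

Consistent with the result summarized above, the second moment test (rho < 1) implies almost sure sample path stability: the simulated trajectory decays even though one mode is unstable on its own.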

  12. Path planning in uncertain flow fields using ensemble method

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Le Maître, Olivier P.; Hoteit, Ibrahim; Knio, Omar M.

    2016-10-01

    An ensemble-based approach is developed to conduct optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where an ensemble of deterministic predictions is used to model and quantify uncertainty. In an operational setting, much about dynamics, topography, and forcing of the ocean environment is uncertain. To address this uncertainty, the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy and to develop insight into extensions dealing with general circulation ocean models. In particular, the ensemble method enables us to perform a statistical analysis of travel times and consequently develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.
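The travel-time statistics step can be illustrated with a deliberately crude stand-in: instead of solving the Pontryagin BVP of the paper, each ensemble member below just draws an uncertain along-route current and converts it to a crossing time, after which candidate routes are ranked by their ensemble statistics. All numbers are invented:

```python
import random, statistics
random.seed(4)

BOAT_SPEED = 10.0  # km/h through the water

# Two candidate routes: length (km) and an uncertain along-route current
# (km/h), modelled as Gaussian across the ensemble.
routes = {
    "direct": {"length": 100.0, "current_mu": -2.0, "current_sd": 2.0},
    "detour": {"length": 120.0, "current_mu": +3.0, "current_sd": 0.5},
}

def travel_time(length, current):
    # Guard against a (rare) sampled current that nearly stalls the boat.
    return length / max(BOAT_SPEED + current, 0.1)

ensemble = 5000
stats = {}
for name, r in routes.items():
    times = [travel_time(r["length"],
                         random.gauss(r["current_mu"], r["current_sd"]))
             for _ in range(ensemble)]
    stats[name] = (statistics.mean(times), statistics.stdev(times))

for name, (mu, sd) in stats.items():
    print(f"{name}: mean {mu:.1f} h, sd {sd:.1f} h")
best = min(stats, key=lambda n: stats[n][0])
print("preferred route:", best)
```

The point of the ensemble view survives the simplification: the longer detour with a favorable, well-constrained current can beat the shorter route both in expected travel time and in predictability.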

  13. Quantum structural fluctuation in para-hydrogen clusters revealed by the variational path integral method

    NASA Astrophysics Data System (ADS)

    Miura, Shinichi

    2018-03-01

    In this paper, the ground state of para-hydrogen clusters for size regime N ≤ 40 has been studied by our variational path integral molecular dynamics method. Long molecular dynamics calculations have been performed to accurately evaluate ground state properties. The chemical potential of the hydrogen molecule is found to have a zigzag size dependence, indicating the magic number stability for the clusters of the size N = 13, 26, 29, 34, and 39. One-body density of the hydrogen molecule is demonstrated to have a structured profile, not a melted one. The observed magic number stability is examined using the inherent structure analysis. We also have developed a novel method combining our variational path integral hybrid Monte Carlo method with the replica exchange technique. We introduce replicas of the original system bridging from the structured to the melted cluster, which is realized by scaling the potential energy of the system. Using the enhanced sampling method, the clusters are demonstrated to have the structured density profile in the ground state.

  14. Quantum structural fluctuation in para-hydrogen clusters revealed by the variational path integral method.

    PubMed

    Miura, Shinichi

    2018-03-14

    In this paper, the ground state of para-hydrogen clusters for size regime N ≤ 40 has been studied by our variational path integral molecular dynamics method. Long molecular dynamics calculations have been performed to accurately evaluate ground state properties. The chemical potential of the hydrogen molecule is found to have a zigzag size dependence, indicating the magic number stability for the clusters of the size N = 13, 26, 29, 34, and 39. One-body density of the hydrogen molecule is demonstrated to have a structured profile, not a melted one. The observed magic number stability is examined using the inherent structure analysis. We also have developed a novel method combining our variational path integral hybrid Monte Carlo method with the replica exchange technique. We introduce replicas of the original system bridging from the structured to the melted cluster, which is realized by scaling the potential energy of the system. Using the enhanced sampling method, the clusters are demonstrated to have the structured density profile in the ground state.

  15. Adaptive Importance Sampling for Control and Inference

    NASA Astrophysics Data System (ADS)

    Kappen, H. J.; Ruiz, H. C.

    2016-03-01

    Path integral (PI) control problems are a restricted class of non-linear control problems that can be solved formally as a Feynman-Kac PI and can be estimated using Monte Carlo sampling. In this contribution we review PI control theory in the finite horizon case. We subsequently focus on the problem of how to compute and represent control solutions. We review the most commonly used methods in robotics and control. Within the PI theory, the question of how to compute becomes the question of importance sampling. Efficient importance samplers are state feedback controllers, and the use of these requires an efficient representation. Learning and representing effective state-feedback controllers for non-linear stochastic control problems is a very challenging, and largely unsolved, problem. We show how to learn and represent such controllers using ideas from the cross entropy method. We derive a gradient descent method that allows feedback controllers to be learned using an arbitrary parametrisation. We refer to this method as the path integral cross entropy method or PICE. We illustrate this method for some simple examples. The PI control methods can be used to estimate the posterior distribution in latent state models. In neuroscience these problems arise when estimating connectivity from neural recording data using EM. We demonstrate the PI control method as an accurate alternative to particle filtering.
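As a sketch of the cross-entropy idea the method builds on (generic CE optimization of a scalar control, not the PICE parametrisation itself; the cost function is invented):

```python
import random, statistics
random.seed(5)

def cost(u):
    """Toy control cost with its minimum near u = 1.95."""
    return (u - 2.0) ** 2 + 0.1 * abs(u)

# Cross-entropy method: sample controls from a Gaussian, refit the Gaussian
# to the elite (lowest-cost) fraction of the samples, and iterate.
mu, sd = 0.0, 5.0
for _ in range(30):
    samples = [random.gauss(mu, sd) for _ in range(200)]
    elites = sorted(samples, key=cost)[:20]     # best 10%
    mu = statistics.mean(elites)
    sd = statistics.stdev(elites) + 1e-3        # floor keeps sampling alive
print(round(mu, 2))
```

PICE generalizes this refitting step to parametrized state-feedback controllers via a gradient-descent update; the one-dimensional version above only shows the sample-refit-iterate loop.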

  16. Using the global positioning system to map disturbance patterns of forest harvesting machinery

    Treesearch

    T.P. McDonald; E.A. Carter; S.E. Taylor

    2002-01-01

    Abstract: A method was presented to transform sampled machine positional data obtained from a global positioning system (GPS) receiver into a two-dimensional raster map of number of passes as a function of location. The effects of three sources of error in the transformation process were investigated: path sampling rate (receiver sampling frequency);...

  17. Path Complexity in Virtual Water Maze Navigation: Differential Associations with Age, Sex, and Regional Brain Volume.

    PubMed

    Daugherty, Ana M; Yuan, Peng; Dahle, Cheryl L; Bender, Andrew R; Yang, Yiqin; Raz, Naftali

    2015-09-01

    Studies of human navigation in virtual maze environments have consistently linked advanced age with greater distance traveled between the start and the goal and longer duration of the search. Observations of search path geometry suggest that routes taken by older adults may be unnecessarily complex and that excessive path complexity may be an indicator of cognitive difficulties experienced by older navigators. In a sample of healthy adults, we quantify search path complexity in a virtual Morris water maze with a novel method based on fractal dimensionality. In a two-level hierarchical linear model, we estimated improvement in navigation performance across trials by a decline in route length, shortening of search time, and reduction in fractal dimensionality of the path. While replicating commonly reported age and sex differences in time and distance indices, a reduction in fractal dimension of the path accounted for improvement across trials, independent of age or sex. The volumes of brain regions associated with the establishment of cognitive maps (parahippocampal gyrus and hippocampus) were related to path dimensionality, but not to the total distance and time. Thus, fractal dimensionality of a navigational path may present a useful complementary method of quantifying performance in navigation. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
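Fractal dimensionality of a path can be estimated by box counting: count occupied grid cells at several scales and fit the log-log slope. The sketch below (not the authors' estimator) recovers dimension ≈ 1 for a straight swim and ≈ 2 for positions filling the pool, the latter crudely modelled here as uniform scatter:

```python
import math, random

def box_count_dimension(points, sizes):
    """Estimate the box-counting (fractal) dimension of a set of 2-D points:
    count occupied grid boxes at several box sizes s and fit log N(s)
    against log(1/s) by least squares."""
    logs, logN = [], []
    for s in sizes:
        boxes = {(math.floor(x / s), math.floor(y / s)) for x, y in points}
        logs.append(math.log(1.0 / s))
        logN.append(math.log(len(boxes)))
    n = len(sizes)
    mx, my = sum(logs) / n, sum(logN) / n
    return (sum((a - mx) * (b - my) for a, b in zip(logs, logN))
            / sum((a - mx) ** 2 for a in logs))

# A direct swim to the platform is nearly one-dimensional ...
line = [(t / 1000.0, t / 1000.0) for t in range(1001)]
# ... while positions visited on a meandering, pool-filling search
# approach dimension 2.
random.seed(3)
wander = [(random.random(), random.random()) for _ in range(20000)]

sizes = [0.2, 0.1, 0.05, 0.025]
d_line = box_count_dimension(line, sizes)
d_wander = box_count_dimension(wander, sizes)
print(d_line, d_wander)
```

Higher values of this dimension flag unnecessarily complex search paths, independently of total distance or search time.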

  18. Sampling the multiple folding mechanisms of Trp-cage in explicit solvent

    PubMed Central

    Juraszek, J.; Bolhuis, P. G.

    2006-01-01

    We investigate the kinetic pathways of folding and unfolding of the designed miniprotein Trp-cage in explicit solvent. Straightforward molecular dynamics and replica exchange methods both have severe convergence problems, whereas transition path sampling allows us to sample unbiased dynamical pathways between folded and unfolded states and leads to deeper understanding of the mechanisms of (un)folding. In contrast to previous predictions employing an implicit solvent, we find that Trp-cage folds primarily (80% of the paths) via a pathway forming the tertiary contacts and the salt bridge, before helix formation. The remaining 20% of the paths occur in the opposite order, by first forming the helix. The transition states of the rate-limiting steps are solvated native-like structures. Water expulsion is found to be the last step upon folding for each route. Committor analysis suggests that the dynamics of the solvent is not part of the reaction coordinate. Nevertheless, during the transition, specific water molecules are strongly bound and can play a structural role in the folding. PMID:17035504

  19. Multifrequency Ultra-High Resolution Miniature Scanning Microscope Using Microchannel And Solid-State Sensor Technologies And Method For Scanning Samples

    NASA Technical Reports Server (NTRS)

    Wang, Yu (Inventor)

    2006-01-01

    A miniature, ultra-high resolution, and color scanning microscope using microchannel and solid-state technology that does not require focus adjustment. One embodiment includes a source of collimated radiant energy for illuminating a sample, a plurality of narrow angle filters comprising a microchannel structure to permit the passage of only unscattered radiant energy through the microchannels with some portion of the radiant energy entering the microchannels from the sample, a solid-state sensor array attached to the microchannel structure, the microchannels being aligned with an element of the solid-state sensor array, that portion of the radiant energy entering the microchannels parallel to the microchannel walls travels to the sensor element generating an electrical signal from which an image is reconstructed by an external device, and a moving element for movement of the microchannel structure relative to the sample. Discloses a method for scanning samples whereby the sensor array elements trace parallel paths that are arbitrarily close to the parallel paths traced by other elements of the array.

  20. Rate Constant and Reaction Coordinate of Trp-Cage Folding in Explicit Water

    PubMed Central

    Juraszek, Jarek; Bolhuis, Peter G.

    2008-01-01

    We report rate constant calculations and a reaction coordinate analysis of the rate-limiting folding and unfolding process of the Trp-cage mini-protein in explicit solvent using transition interface sampling. Previous transition path sampling simulations revealed that in this (un)folding process the protein maintains its compact configuration, while a (de)increase of secondary structure is observed. The calculated folding rate agrees reasonably with experiment, while the unfolding rate is 10 times higher. We discuss possible origins for this mismatch. We recomputed the rates with the forward flux sampling method, and found a discrepancy of four orders of magnitude, probably caused by the method's higher sensitivity to the choice of order parameter with respect to transition interface sampling. Finally, we used the previously computed transition path-sampling ensemble to screen combinations of many order parameters for the best model of the reaction coordinate by employing likelihood maximization. We found that a combination of the root mean-square deviation of the helix and of the entire protein was, of the set of order parameters tried, the one that best describes the reaction coordinate. PMID:18676648

  1. Re-evaluation of P-T paths across the Himalayan Main Central Thrust

    NASA Astrophysics Data System (ADS)

    Catlos, E. J.; Harrison, M.; Kelly, E. D.; Ashley, K.; Lovera, O. M.; Etzel, T.; Lizzadro-McPherson, D. J.

    2016-12-01

    The Main Central Thrust (MCT) is the dominant crustal thickening structure in the Himalayas, juxtaposing high-grade Greater Himalayan Crystalline rocks over the lower-grade Lesser Himalaya Formations. The fault is underlain by a 2 to 12-km-thick sequence of deformed rocks characterized by an apparent inverted metamorphic gradient, termed the MCT shear zone. Garnet-bearing rocks sampled from across the MCT along the Marysandi River in central Nepal contain monazite grains that decrease in age from Early Miocene (ca. 20 Ma) in the hanging wall to Late Miocene-Pliocene (ca. 7 Ma and 3 Ma) towards structurally lower levels in the shear zone. We obtained high-resolution garnet-zoning pressure-temperature (P-T) paths from 11 of the same rocks used for monazite geochronology using a recently-developed semi-automated Gibbs-free-energy-minimization technique. Quartz-in-garnet Raman barometry refined the locations of the paths. Diffusional re-equilibration of garnet zoning in hanging wall samples prevented accurate path determinations from most Greater Himalayan Crystalline samples, but one that shows a bell-shaped Mn zoning profile shows a slight decrease in P (from 8.2 to 7.6 kbar) with increase in T (from 590 to 640°C). Three MCT shear zone samples were modeled: one yields a simple path increasing in both P and T (6 to 7 kbar, 540 to 580°C); the others yield N-shaped paths that occupy similar P-T space (4 to 5.5 kbar, 500 to 560°C). Five lower Lesser Himalaya garnet-bearing rocks were modeled. One yields a path increasing in both P-T (6 to 7 kbar, 525 to 550°C) but others show either sharp compression/decompression or N-shape paths (within 4.5-6 kbar and 530-580°C). The lowermost sample decreases in P (5.5 to 5 kbar) over increasing T (540 to 580°C). No progressive change is seen from one type of path to another within the Lesser Himalayan Formations to the MCT zone. 
The results using the modeling approach yield lower P-T conditions compared to the Gibbs method and lower core/rim P-T conditions compared to traditional thermometers and barometers. Inclusion barometry suggests that baric estimates from the modeling may be underestimated by 2-4 kbar. Despite uncertainty, path shapes are consistent with a model in which the MCT shear zone experienced a progressive accretion of footwall slivers.

  2. Feasibility of Whole-Body Functional Mouse Imaging Using Helical Pinhole SPECT

    PubMed Central

    Metzler, Scott D.; Vemulapalli, Sreekanth; Jaszczak, Ronald J.; Akabani, Gamal; Chin, Bennett B.

    2010-01-01

    Purpose Detailed in vivo whole-body biodistributions of radiolabeled tracers may characterize the longitudinal progression of disease, and changes with therapeutic interventions. Small-animal imaging in mice is particularly attractive due to the wide array of well characterized genetically and surgically created models of disease. Single Photon Emission Computed Tomography (SPECT) imaging using pinhole collimation provides high resolution and sensitivity, but conventional methods using circular acquisitions result in severe image truncation and incomplete sampling of data which prevent the accurate determination of whole-body radiotracer biodistributions. This study describes the feasibility of helical acquisition paths to mitigate these effects. Procedures Helical paths of pinhole apertures were implemented using an external robotic stage aligned with the axis of rotation (AOR) of the scanner. Phantom and mouse scans were performed using helical paths and either circular or bi-circular orbits at the same radius of rotation (ROR). The bi-circular orbits consisted of two 360-degree scans separated by an axial shift to increase the axial field of view (FOV) and to improve the complete-sampling properties. Results Reconstructions of phantoms and mice acquired with helical paths show good image quality and are visually free of both truncation and axial-blurring artifacts. Circular orbits yielded reconstructions with both artifacts and a limited effective FOV. The bi-circular scans enlarged the axial FOV, but still suffered from truncation and sampling artifacts. Conclusions Helical paths can provide complete sampling data and large effective FOV, yielding 3D full-body in vivo biodistributions while still maintaining a small distance from the aperture to the object for good sensitivity and resolution. PMID:19521736
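
    The helical acquisition described above amounts to rotating the pinhole view while translating the animal axially with the robotic stage. A minimal sketch of generating such a helical aperture path; all parameter names and values here are illustrative, not the scanner's actual geometry:

```python
import math

def helical_path(n_views, revolutions, pitch_mm, radius_mm):
    """Generate (angle, axial offset, x, y) aperture positions along a helix.

    pitch_mm is the axial advance per full revolution; radius_mm is the
    radius of rotation (ROR). Parameter values are made up for illustration.
    """
    positions = []
    for i in range(n_views):
        theta = 2.0 * math.pi * revolutions * i / n_views
        z = pitch_mm * theta / (2.0 * math.pi)   # axial stage translation
        x = radius_mm * math.cos(theta)
        y = radius_mm * math.sin(theta)
        positions.append((theta, z, x, y))
    return positions

path = helical_path(n_views=120, revolutions=2, pitch_mm=30.0, radius_mm=45.0)
print(len(path), path[-1][1])  # last axial offset approaches pitch * revolutions
```

    Unlike a circular orbit (pitch 0), the axial translation spreads the projections over an extended field of view, which is what restores complete sampling.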

  3. Electrophoretic sample insertion. [device for uniformly distributing samples in flow path]

    NASA Technical Reports Server (NTRS)

    Mccreight, L. R. (Inventor)

    1974-01-01

    Two conductive screens located in the flow path of an electrophoresis sample separation apparatus are charged electrically. The sample is introduced between the screens, and the charge is sufficient to disperse and hold the samples across the screens. When the charge is terminated, the samples are uniformly distributed in the flow path. Additionally, a first separation by charged properties has been accomplished.

  4. Self-adaptive enhanced sampling in the energy and trajectory spaces: accelerated thermodynamics and kinetic calculations.

    PubMed

    Gao, Yi Qin

    2008-04-07

    Here, we introduce a simple self-adaptive computational method to enhance the sampling in energy, configuration, and trajectory spaces. The method makes use of two strategies. It first uses a non-Boltzmann distribution method to enhance the sampling in the phase space, in particular, in the configuration space. The application of this method leads to a broad energy distribution in a large energy range and a quickly converged sampling of molecular configurations. In the second stage of simulations, the configuration space of the system is divided into a number of small regions according to preselected collective coordinates. An enhanced sampling of reactive transition paths is then performed in a self-adaptive fashion to accelerate kinetics calculations.
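
    The first stage above is a non-Boltzmann sampling strategy: simulate on a biased potential that flattens energy barriers, then reweight the samples to recover Boltzmann averages. A generic sketch of that idea on a one-dimensional double well; the bias form and all parameters are invented for illustration and are not the paper's specific distribution:

```python
import math, random

def metropolis_biased(steps, beta, U, bias, x0=0.0, dx=0.5, seed=1):
    """Metropolis sampling of exp(-beta*(U(x)+bias(x))); returns samples and
    reweighting factors exp(+beta*bias(x)) that recover Boltzmann averages."""
    rng = random.Random(seed)
    x = x0
    e = U(x) + bias(x)
    samples, weights = [], []
    for _ in range(steps):
        xn = x + rng.uniform(-dx, dx)
        en = U(xn) + bias(xn)
        # standard Metropolis acceptance on the biased energy
        if en <= e or rng.random() < math.exp(-beta * (en - e)):
            x, e = xn, en
        samples.append(x)
        weights.append(math.exp(beta * bias(x)))
    return samples, weights

U = lambda x: (x * x - 1.0) ** 2            # double well with barrier at x = 0
bias = lambda x: -0.8 * (x * x - 1.0) ** 2  # partially flattens the barrier
s, w = metropolis_biased(20000, beta=5.0, U=U, bias=bias)
# reweighted estimate of <x^2> under the original Boltzmann distribution
mean_x2 = sum(wi * xi * xi for wi, xi in zip(w, s)) / sum(w)
print(round(mean_x2, 2))
```

    The biased walk crosses between the wells far more often than an unbiased one would at this temperature, while the weights restore the unbiased ensemble averages.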

  5. Facilitating Self-Transcendence: An Intervention to Enhance Well-Being in Late Life.

    PubMed

    McCarthy, Valerie Lander; Hall, Lynne A; Crawford, Timothy N; Connelly, Jennifer

    2018-06-01

    This randomized controlled pilot study evaluated the effects of the Psychoeducational Approach to Transcendence and Health (PATH) Program, an 8-week intervention hypothesized to increase self-transcendence and improve well-being in community-dwelling women aged 60 years and older (N = 20). The PATH combined mindfulness exercises, group processes, creative activities, and at-home practice using community-engaged research methods. Findings provided some support for the effectiveness of PATH. Although there was no significant Group × Time interaction, self-transcendence, psychological well-being, and life satisfaction differed significantly pre- and postintervention in the wait-listed control group, which received a revised version of the program. Further study is needed with a larger sample to determine the effectiveness of PATH. Potentially, PATH may be a convenient and affordable activity to support personal development and improve well-being among older adults at senior centers, retirement communities, nursing homes, church groups, and other places where older adults gather.

  6. Concerns regarding 24-h sampling for formaldehyde, acetaldehyde, and acrolein using 2,4-dinitrophenylhydrazine (DNPH)-coated solid sorbents

    NASA Astrophysics Data System (ADS)

    Herrington, Jason S.; Hays, Michael D.

    2012-08-01

    There is high demand for accurate and reliable airborne carbonyl measurement methods due to the human and environmental health impacts of carbonyls and their effects on atmospheric chemistry. Standardized 2,4-dinitrophenylhydrazine (DNPH)-based sampling methods are frequently applied for measuring gaseous carbonyls in the atmospheric environment. However, these methods have multiple shortcomings that detract from an accurate understanding of carbonyl-related exposure, health effects, and atmospheric chemistry. The purpose of this brief technical communication is to highlight these method challenges and their influence on national ambient monitoring networks, and to provide a logical path forward for accurate carbonyl measurement. This manuscript focuses on three carbonyl compounds of high toxicological interest: formaldehyde, acetaldehyde, and acrolein. Further method testing and development, the revision of standardized methods, and the plausibility of introducing novel technology for these carbonyls are considered elements of the path forward. The consolidation of this information is important because carbonyl data produced using DNPH-based methods are being reported without acknowledgment of the method shortcomings or how best to address them.

  7. SIMULATION FROM ENDPOINT-CONDITIONED, CONTINUOUS-TIME MARKOV CHAINS ON A FINITE STATE SPACE, WITH APPLICATIONS TO MOLECULAR EVOLUTION.

    PubMed

    Hobolth, Asger; Stone, Eric A

    2009-09-01

    Analyses of serially-sampled data often begin with the assumption that the observations represent discrete samples from a latent continuous-time stochastic process. The continuous-time Markov chain (CTMC) is one such generative model whose popularity extends to a variety of disciplines ranging from computational finance to human genetics and genomics. A common theme among these diverse applications is the need to simulate sample paths of a CTMC conditional on realized data that is discretely observed. Here we present a general solution to this sampling problem when the CTMC is defined on a discrete and finite state space. Specifically, we consider the generation of sample paths, including intermediate states and times of transition, from a CTMC whose beginning and ending states are known across a time interval of length T. We first unify the literature through a discussion of the three predominant approaches: (1) modified rejection sampling, (2) direct sampling, and (3) uniformization. We then give analytical results for the complexity and efficiency of each method in terms of the instantaneous transition rate matrix Q of the CTMC, its beginning and ending states, and the length of sampling time T. In doing so, we show that no method dominates the others across all model specifications, and we give explicit proof of which method prevails for any given Q, T, and endpoints. Finally, we introduce and compare three applications of CTMCs to demonstrate the pitfalls of choosing an inefficient sampler.
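
    Of the three approaches unified above, the simplest to state is rejection sampling in its naive form: simulate the CTMC forward from the known starting state and accept the path only if it lands in the required ending state at time T. A minimal sketch; the two-state rate matrix is a toy example, not from the paper:

```python
import random

def sample_endpoint_conditioned(Q, a, b, T, rng, max_tries=100000):
    """Naive rejection sampler for an endpoint-conditioned CTMC path.

    Q is the instantaneous rate matrix (rows sum to 0). Returns a list of
    (time, state) pairs for a path starting in state a at time 0 and
    occupying state b at time T.
    """
    n = len(Q)
    for _ in range(max_tries):
        t, s, path = 0.0, a, [(0.0, a)]
        while True:
            rate = -Q[s][s]
            if rate <= 0:          # absorbing state: no further jumps
                break
            t += rng.expovariate(rate)   # exponential holding time
            if t >= T:
                break
            u = rng.random() * rate      # next state ~ off-diagonal rates
            acc = 0.0
            for j in range(n):
                if j == s:
                    continue
                acc += Q[s][j]
                if u <= acc:
                    s = j
                    break
            path.append((t, s))
        if s == b:                 # accept only paths ending in b at time T
            return path
    raise RuntimeError("no accepted path")

# two-state toy chain, e.g. a binary molecular character
Q = [[-1.0, 1.0], [2.0, -2.0]]
p = sample_endpoint_conditioned(Q, a=0, b=1, T=1.0, rng=random.Random(0))
print(p[0], p[-1][1])  # starts in state 0, ends in state 1
```

    As the abstract notes, this sampler degrades badly when the endpoint is unlikely under forward simulation, which is exactly where direct sampling or uniformization prevails.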

  8. Investigation of progressive failure robustness and alternate load paths for damage tolerant structures

    NASA Astrophysics Data System (ADS)

    Marhadi, Kun Saptohartyadi

    Structural optimization for damage tolerance under various unforeseen damage scenarios is computationally challenging. It couples non-linear progressive failure analysis with sampling-based stochastic analysis of random damage. The goal of this research was to understand the relationship between the alternate load paths available in a structure and its damage tolerance, and to use this information to develop computationally efficient methods for designing damage tolerant structures. Progressive failure of a redundant truss structure subjected to small random variability was investigated to identify features that correlate with robustness and predictability of the structure's progressive failure. The identified features were used to develop numerical surrogate measures that permit computationally efficient deterministic optimization to achieve robustness and predictability of progressive failure. Analysis of damage tolerance on designs with robust progressive failure indicated that robustness and predictability of progressive failure do not guarantee damage tolerance. Damage tolerance requires a structure to redistribute its load to alternate load paths. In order to investigate the load distribution characteristics that lead to damage tolerance in structures, designs with varying degrees of damage tolerance were generated using brute-force stochastic optimization. A method based on principal component analysis was used to describe load distributions (alternate load paths) in the structures. Results indicate that a structure that can develop alternate paths is not necessarily damage tolerant; the alternate load paths must have a required minimum load capability. Robustness analysis of damage-tolerant optimum designs indicates that such designs are tailored to the specified damage: a design optimized under one damage specification can be sensitive to other damage scenarios not considered.
    The effectiveness of existing load path definitions and characterizations was investigated for continuum structures. A load path definition using a relative compliance change measure (the U* field) was demonstrated to be the most useful measure of load path. This measure provides quantitative information on load path trajectories and qualitative information on the effectiveness of the load path. The use of the U* description of load paths in optimizing structures for effective load paths was also investigated.

  9. Toward cost-efficient sampling methods

    NASA Astrophysics Data System (ADS)

    Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie

    2015-09-01

    The sampling method has received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small number of high-degree vertices can carry most of the structural information of a complex network. The two proposed sampling methods are efficient at sampling high-degree nodes, so they remain useful even at low sampling rates, which makes them cost-efficient. The first new sampling method builds on the widely used stratified random sampling (SRS) method, and the second improves the well-known snowball sampling (SBS) method. To demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods in three commonly used synthetic networks (scale-free, random, and small-world) and in two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing ones at recovering true network structure characteristics, as reflected by the clustering coefficient, Bonacich centrality, and average path length, especially when the sampling rate is low.
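
    The idea of preferentially capturing high-degree nodes can be illustrated with a degree-biased variant of snowball sampling. The sketch below, which expands only the few highest-degree unvisited neighbors at each step, is a hypothetical illustration of the principle, not the paper's exact algorithm:

```python
from collections import deque

def degree_biased_snowball(adj, seed, target_size, fanout=3):
    """Snowball sampling that follows only the `fanout` highest-degree
    unvisited neighbors of each expanded node, biasing the sample toward
    hubs. `adj` maps each node to a list of its neighbors."""
    visited = {seed}
    queue = deque([seed])
    while queue and len(visited) < target_size:
        v = queue.popleft()
        # rank unvisited neighbors by degree, keep the top `fanout`
        nbrs = sorted((u for u in adj[v] if u not in visited),
                      key=lambda u: len(adj[u]), reverse=True)[:fanout]
        for u in nbrs:
            if len(visited) >= target_size:
                break
            visited.add(u)
            queue.append(u)
    return visited

# small hub-and-spoke graph: node 0 is the hub
adj = {0: [1, 2, 3, 4, 5], 1: [0, 2], 2: [0, 1], 3: [0], 4: [0], 5: [0]}
sample = degree_biased_snowball(adj, seed=3, target_size=3)
print(sorted(sample))  # the hub (node 0) is captured immediately
```

    Even starting from a peripheral node, the degree bias pulls the hub into the sample at the first wave, which is why such samples preserve clustering and path-length statistics at low sampling rates.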

  10. Bayesian Analysis of Evolutionary Divergence with Genomic Data under Diverse Demographic Models.

    PubMed

    Chung, Yujin; Hey, Jody

    2017-06-01

    We present a new Bayesian method for estimating demographic and phylogenetic history using population genomic data. Several key innovations are introduced that allow the study of diverse models within an Isolation-with-Migration framework. The new method implements a 2-step analysis, with an initial Markov chain Monte Carlo (MCMC) phase that samples simple coalescent trees, followed by the calculation of the joint posterior density for the parameters of a demographic model. In step 1, the MCMC sampling phase, the method uses a reduced state space, consisting of coalescent trees without migration paths, and a simple importance sampling distribution without the demography of interest. Once obtained, a single sample of trees can be used in step 2 to calculate the joint posterior density for model parameters under multiple diverse demographic models, without having to repeat MCMC runs. Because migration paths are not included in the state space of the MCMC phase, but rather are handled by analytic integration in step 2 of the analysis, the method is scalable to a large number of loci with excellent MCMC mixing properties. With an implementation of the new method in the computer program MIST, we demonstrate the method's accuracy, scalability, and other advantages using simulated data and DNA sequences of two common chimpanzee subspecies: Pan troglodytes (P. t.) troglodytes and P. t. verus. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. The "path" not taken: exploring structural differences in mapped- versus shortest-network-path school travel routes.

    PubMed

    Buliung, Ron N; Larsen, Kristian; Faulkner, Guy E J; Stone, Michelle R

    2013-09-01

    School route measurement often involves estimating the shortest network path. We challenged the relatively uncritical adoption of this method in school travel research and tested the route discordance hypothesis that several types of difference exist between shortest network paths and reported school routes. We constructed the mapped routes and the shortest network paths for a sample of 759 children aged 9 to 13 years in grades 5 and 6 (boys = 45%, girls = 54%, unreported gender = 1%) in Toronto, Ontario, Canada. We used Wilcoxon signed-rank tests to compare reported routes with shortest-path route measures, including distance, route directness, intersection crossings, and route overlap. Measurement difference was explored by mode and location. We found statistical evidence of route discordance for walkers and children who were driven, and detected it more often for inner suburban cases. Evidence of route discordance varied by mode and school location. We found statistically significant differences in route structure and built environment variables measured along reported and geographic information systems-based shortest-path school routes. The uncertainty produced by the shortest-path approach challenges its conceptual and empirical validity in school travel research.
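
    The shortest-network-path baseline the study critiques is typically computed with Dijkstra's algorithm over the street network. A minimal sketch, with a made-up toy street graph and a made-up reported route for comparison:

```python
import heapq, math

def dijkstra(adj, src, dst):
    """Shortest network path by edge length; adj maps node -> {nbr: length}."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, v = heapq.heappop(pq)
        if v == dst:
            break
        if d > dist.get(v, math.inf):
            continue  # stale queue entry
        for u, w in adj[v].items():
            nd = d + w
            if nd < dist.get(u, math.inf):
                dist[u] = nd
                prev[u] = v
                heapq.heappush(pq, (nd, u))
    # walk predecessors back from the destination
    path, v = [dst], dst
    while v != src:
        v = prev[v]
        path.append(v)
    return path[::-1], dist[dst]

# toy street network between home H and school S (edge lengths in meters)
adj = {"H": {"A": 100, "B": 120}, "A": {"H": 100, "S": 150},
       "B": {"H": 120, "C": 80}, "C": {"B": 80, "S": 90},
       "S": {"A": 150, "C": 90}}
shortest, length = dijkstra(adj, "H", "S")
reported_length = 120 + 80 + 90       # hypothetical reported route H-B-C-S
directness = length / reported_length # <1 means the child's route was longer
print(shortest, length)
```

    The gap between `reported_length` and `length` is exactly the kind of discordance the study measures; assuming children take the shortest path erases it.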

  12. A compact CCD-monitored atomic force microscope with optical vision and improved performances.

    PubMed

    Mingyue, Liu; Haijun, Zhang; Dongxian, Zhang

    2013-09-01

    A novel CCD-monitored atomic force microscope (AFM) with optical vision and improved performances has been developed. Compact optical paths are specifically devised for both tip-sample microscopic monitoring and cantilever's deflection detecting with minimized volume and optimal light-amplifying ratio. The ingeniously designed AFM probe with such optical paths enables quick and safe tip-sample approaching, convenient and effective tip-sample positioning, and high quality image scanning. An image stitching method is also developed to build a wider-range AFM image under monitoring. Experiments show that this AFM system can offer real-time optical vision for tip-sample monitoring with wide visual field and/or high lateral optical resolution by simply switching the objective; meanwhile, it has the elegant performances of nanometer resolution, high stability, and high scan speed. Furthermore, it is capable of conducting wider-range image measurement while keeping nanometer resolution. Copyright © 2013 Wiley Periodicals, Inc.

  13. Few-mode fiber detection for tissue characterization in optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Eugui, Pablo; Lichtenegger, Antonia; Augustin, Marco; Harper, Danielle J.; Fialová, Stanislava; Wartak, Andreas; Hitzenberger, Christoph K.; Baumann, Bernhard

    2017-07-01

    A few-mode fiber-based detection scheme for OCT systems is presented. The capability of few-mode fibers to deliver light through different fiber paths enables their application to angular-scattering tissue characterization. Since the optical path lengths traveled in the fiber differ between the fiber modes, the OCT image information is reconstructed at different depth positions, separating the directly backscattered light from light scattered at other angles. Using the proposed method, the relation between the angle of reflection from the sample and the respective modal intensity distribution was investigated. The system was demonstrated by imaging ex-vivo brain tissue samples from patients with Alzheimer's disease.

  14. Evaluation of a rapid and inexpensive dipstick assay for the diagnosis of Plasmodium falciparum malaria.

    PubMed Central

    Mills, C. D.; Burgess, D. C.; Taylor, H. J.; Kain, K. C.

    1999-01-01

    Rapid, accurate and affordable methods are needed for the diagnosis of malaria. Reported here is an evaluation of a new immunochromatographic strip, the PATH Falciparum Malaria IC Strip, which is impregnated with an immobilized IgM monoclonal antibody that binds to the HRP-II antigen of Plasmodium falciparum. In contrast to other commercially available kits marketed for the rapid diagnosis of falciparum malaria, this kit should be affordable in the malaria-endemic world. Using microscopy and polymerase chain reaction (PCR)-based methods as reference standards, we compared two versions of the PATH test for the detection of P. falciparum infection in 200 febrile travellers. As determined by PCR and microscopy, 148 travellers had malaria, 50 of whom (33.8%) were infected with P. falciparum. Compared with PCR, the two versions of the PATH test had initial sensitivities of 90% and 88% and specificities of 97% and 96%, respectively, for the detection of falciparum malaria. When discrepant samples were retested blindly with a modified procedure (increased sample volume and longer washing step) the sensitivity and specificity of both kits improved to 96% and 99%, respectively. The two remaining false negatives occurred in samples with < 100 parasites per microliter of blood. The accuracy, simplicity and predicted low cost may make this test a useful diagnostic tool in malaria-endemic areas. PMID:10444878
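
    Sensitivity and specificity against a reference standard such as PCR follow directly from the 2x2 table of test results. A sketch with illustrative counts, not the study's raw data:

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 table of test results
    versus a reference standard (here, PCR)."""
    sensitivity = tp / (tp + fn)  # true positives / all reference-positives
    specificity = tn / (tn + fp)  # true negatives / all reference-negatives
    return sensitivity, specificity

# illustrative: 45 of 50 P. falciparum infections detected,
# 4 false positives among 150 reference-negative travellers
sens, spec = diagnostic_performance(tp=45, fp=4, fn=5, tn=146)
print(round(sens, 2), round(spec, 2))  # → 0.9 0.97
```

    Note how the study's retesting with a larger sample volume shifts cases from `fn` to `tp` and from `fp` to `tn`, raising both figures.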

  15. Alpha-spectrometry and fractal analysis of surface micro-images for characterisation of porous materials used in manufacture of targets for laser plasma experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aushev, A A; Barinov, S P; Vasin, M G

    2015-06-30

    We present the results of employing the alpha-spectrometry method to determine the characteristics of porous materials used in targets for laser plasma experiments. It is shown that the energy spectrum of alpha-particles, after their passage through porous samples, allows one to determine the distribution of their path length in the foam skeleton. We describe the procedure of deriving such a distribution, excluding both the distribution broadening due to the statistical nature of the alpha-particle interaction with an atomic structure (straggling) and hardware effects. The fractal analysis of micro-images is applied to the same porous surface samples that have been studied by alpha-spectrometry. The fractal dimension and size distribution of the number of the foam skeleton grains are obtained. Using the data obtained, a distribution of the total foam skeleton thickness along a chosen direction is constructed. It roughly coincides with the path length distribution of alpha-particles within a range of larger path lengths. It is concluded that the combined use of the alpha-spectrometry method and fractal analysis of images will make it possible to determine the size distribution of foam skeleton grains (or pores). The results can be used as initial data in theoretical studies on propagation of laser and X-ray radiation in specific porous samples.

  16. Harmonic-phase path-integral approximation of thermal quantum correlation functions

    NASA Astrophysics Data System (ADS)

    Robertson, Christopher; Habershon, Scott

    2018-03-01

    We present an approximation to the thermal symmetric form of the quantum time-correlation function in the standard position path-integral representation. By transforming to a sum-and-difference position representation and then Taylor-expanding the potential energy surface of the system to second order, the resulting expression provides a harmonic weighting function that approximately recovers the contribution of the phase to the time-correlation function. This method is readily implemented in a Monte Carlo sampling scheme and provides exact results for harmonic potentials (for both linear and non-linear operators) and near-quantitative results for anharmonic systems for low temperatures and times that are likely to be relevant to condensed phase experiments. This article focuses on one-dimensional examples to provide insights into convergence and sampling properties, and we also discuss how this approximation method may be extended to many-dimensional systems.

  17. Method and system for laser-based formation of micro-shapes in surfaces of optical elements

    DOEpatents

    Bass, Isaac Louis; Guss, Gabriel Mark

    2013-03-05

    A method of forming a surface feature extending into a sample includes providing a laser operable to emit an output beam and modulating the output beam to form a pulse train having a plurality of pulses. The method also includes a) directing the pulse train along an optical path intersecting an exposed portion of the sample at a position i and b) focusing a first portion of the plurality of pulses to impinge on the sample at the position i. Each of the plurality of pulses is characterized by a spot size at the sample. The method further includes c) ablating at least a portion of the sample at the position i to form a portion of the surface feature and d) incrementing counter i. The method includes e) repeating steps a) through d) to form the surface feature. The sample is free of a rim surrounding the surface feature.
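
    The claimed loop, steps a) through e), can be sketched in code. The position list, pulse count, and `ablate` callback below are stand-ins for the physical beam steering and ablation, introduced only for illustration:

```python
def form_surface_feature(positions, pulses_per_position, ablate):
    """Sketch of the patent's loop a)-e): for each position i, direct the
    pulse train, focus a portion of the pulses, ablate, then increment i.
    `ablate` is a stand-in callback for the physical ablation step."""
    feature = []
    i = 0
    while i < len(positions):                 # e) repeat steps a) through d)
        pos = positions[i]                    # a) direct pulse train to position i
        for _ in range(pulses_per_position):  # b) focus a portion of the pulses
            ablate(pos)                       # c) ablate part of the sample
        feature.append(pos)
        i += 1                                # d) increment counter i
    return feature

removed = []
feature = form_surface_feature([(0, 0), (0, 1), (1, 1)], 5, removed.append)
print(len(feature), len(removed))  # 3 positions visited, 15 ablation events
```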

  18. Effect of equilibration on primitive path analyses of entangled polymers.

    PubMed

    Hoy, Robert S; Robbins, Mark O

    2005-12-01

    We use recently developed primitive path analysis (PPA) methods to study the effect of equilibration on entanglement density in model polymeric systems. Values of Ne for two commonly used equilibration methods differ by a factor of 2-4 even though the methods produce similar large-scale chain statistics. We find that local chain stretching in poorly equilibrated samples increases entanglement density. The evolution of Ne with time shows that many entanglements are lost through fast processes such as chain retraction as the local stretching relaxes. Quenching a melt state into a glass has little effect on Ne. Equilibration-dependent differences in short-scale structure affect the craze extension ratio much less than expected from the differences in PPA values of Ne.

  19. Model-Averaged ℓ1 Regularization using Markov Chain Monte Carlo Model Composition

    PubMed Central

    Fraley, Chris; Percival, Daniel

    2014-01-01

    Bayesian Model Averaging (BMA) is an effective technique for addressing model uncertainty in variable selection problems. However, current BMA approaches have computational difficulty dealing with data in which there are many more measurements (variables) than samples. This paper presents a method for combining ℓ1 regularization and Markov chain Monte Carlo model composition techniques for BMA. By treating the ℓ1 regularization path as a model space, we propose a method to resolve the model uncertainty issues arising in model averaging from solution path point selection. We show that this method is computationally and empirically effective for regression and classification in high-dimensional datasets. We apply our technique in simulations, as well as to some applications that arise in genomics. PMID:25642001

  20. Computing the Free Energy Barriers for Less by Sampling with a Coarse Reference Potential while Retaining Accuracy of the Target Fine Model.

    PubMed

    Plotnikov, Nikolay V

    2014-08-12

    Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed but at a lower level of accuracy from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and the free-energy perturbation methods which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with multistep linear response approximation method. This method is analytically shown to provide results of the thermodynamic integration and the free-energy interpolation methods, while being extremely simple in implementation. Incorporating the metadynamics sampling to the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing full potential of mean force.
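
    The free-energy perturbation step used to position the fine-physics segments relative to the coarse reference is, at its core, the Zwanzig estimator. A one-dimensional sketch on a harmonic toy model, assuming unit kT; this is not the paper's QM/MM protocol, only the generic estimator it builds on:

```python
import math, random

def fep_delta_f(samples_ref, dU, beta):
    """Zwanzig free-energy perturbation: dF = -kT * ln < exp(-beta*dU) >_ref,
    averaged over configurations sampled on the (coarse) reference potential."""
    avg = sum(math.exp(-beta * dU(x)) for x in samples_ref) / len(samples_ref)
    return -math.log(avg) / beta

# toy example: reference U0 = x^2/2, target U1 = U0 + 0.1*x, beta = 1
rng = random.Random(0)
beta = 1.0
samples = [rng.gauss(0.0, 1.0) for _ in range(200000)]  # Boltzmann for U0
dF = fep_delta_f(samples, lambda x: 0.1 * x, beta)
# analytic answer for this toy: dF = -(0.1**2)/2 = -0.005
print(round(dF, 3))
```

    In the protocol above the same average is taken over coarse-physics samples with dU being the fine-minus-coarse energy difference, which is what anchors the targeted fine-physics segments to the coarse free-energy surface.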

  2. Note: A pure-sampling quantum Monte Carlo algorithm with independent Metropolis.

    PubMed

    Vrbik, Jan; Ospadov, Egor; Rothstein, Stuart M

    2016-07-14

    Recently, Ospadov and Rothstein published a pure-sampling quantum Monte Carlo algorithm (PSQMC) that features an auxiliary Path Z that connects the midpoints of the current and proposed Paths X and Y, respectively. When sufficiently long, Path Z provides statistical independence of Paths X and Y. Under those conditions, the Metropolis decision used in PSQMC is done without any approximation, i.e., not requiring microscopic reversibility and without having to introduce any G(x → x'; τ) factors into its decision function. This is a unique feature that contrasts with all competing reptation algorithms in the literature. An example illustrates that dependence of Paths X and Y has adverse consequences for pure sampling.

  4. Improved electromagnetic tracking for catheter path reconstruction with application in high-dose-rate brachytherapy.

    PubMed

    Lugez, Elodie; Sadjadi, Hossein; Joshi, Chandra P; Akl, Selim G; Fichtinger, Gabor

    2017-04-01

    Electromagnetic (EM) catheter tracking has recently been introduced in order to enable prompt and uncomplicated reconstruction of catheter paths in various clinical interventions. However, EM tracking is prone to measurement errors which can compromise the outcome of the procedure. Minimizing catheter tracking errors is therefore paramount to improve the path reconstruction accuracy. An extended Kalman filter (EKF) was employed to combine the nonlinear kinematic model of an EM sensor inside the catheter, with both its position and orientation measurements. The formulation of the kinematic model was based on the nonholonomic motion constraints of the EM sensor inside the catheter. Experimental verification was carried out in a clinical HDR suite. Ten catheters were inserted with mean curvatures varying from 0 to [Formula: see text] in a phantom. A miniaturized Ascension (Burlington, Vermont, USA) trakSTAR EM sensor (model 55) was threaded within each catheter at various speeds ranging from 7.4 to [Formula: see text]. The nonholonomic EKF was applied on the tracking data in order to statistically improve the EM tracking accuracy. A sample reconstruction error was defined at each point as the Euclidean distance between the estimated EM measurement and its corresponding ground truth. A path reconstruction accuracy was defined as the root mean square of the sample reconstruction errors, while the path reconstruction precision was defined as the standard deviation of these sample reconstruction errors. The impacts of sensor velocity and path curvature on the nonholonomic EKF method were determined. Finally, the nonholonomic EKF catheter path reconstructions were compared with the reconstructions provided by the manufacturer's filters under default settings, namely the AC wide notch and the DC adaptive filter. 
With a path reconstruction accuracy of 1.9 mm, the nonholonomic EKF surpassed the performance of the manufacturer's filters (2.4 mm) by 21% and the raw EM measurements (3.5 mm) by 46%. Similarly, with a path reconstruction precision of 0.8 mm, the nonholonomic EKF surpassed the performance of the manufacturer's filters (1.0 mm) by 20% and the raw EM measurements (1.7 mm) by 53%. Path reconstruction accuracies did not follow an apparent trend when varying the path curvature and sensor velocity; instead, reconstruction accuracies were predominantly impacted by the position of the EM field transmitter ([Formula: see text]). The advanced nonholonomic EKF is effective in reducing EM measurement errors when reconstructing catheter paths, is robust to path curvature and sensor speed, and runs in real time. Our approach is promising for a plurality of clinical procedures requiring catheter reconstructions, such as cardiovascular interventions, pulmonary applications (Bender et al. in medical image computing and computer-assisted intervention-MICCAI 99. Springer, Berlin, pp 981-989, 1999), and brachytherapy.
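The record's accuracy and precision definitions (root mean square and standard deviation of the per-point Euclidean reconstruction errors) can be sketched in a few lines; the toy path data below are invented purely for illustration:

```python
import math

def reconstruction_stats(estimated, ground_truth):
    """RMS (path reconstruction accuracy) and standard deviation (path
    reconstruction precision) of per-point Euclidean errors, as defined
    in the record."""
    errors = [math.dist(e, g) for e, g in zip(estimated, ground_truth)]
    n = len(errors)
    accuracy = math.sqrt(sum(err * err for err in errors) / n)   # RMS error
    mean = sum(errors) / n
    precision = math.sqrt(sum((err - mean) ** 2 for err in errors) / n)
    return accuracy, precision

# Toy example: an estimated catheter path offset laterally from ground truth.
truth = [(0.0, 0.0, float(i)) for i in range(5)]
est = [(0.0, 0.3, float(i)) for i in range(5)]
acc, prec = reconstruction_stats(est, truth)
```

With a constant 0.3 mm offset the RMS error is 0.3 and the spread is zero, which is why both statistics are reported: accuracy captures the typical error size, precision its variability.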

  5. Path integral approach to the Wigner representation of canonical density operators for discrete systems coupled to harmonic baths.

    PubMed

    Montoya-Castillo, Andrés; Reichman, David R

    2017-01-14

    We derive a semi-analytical form for the Wigner transform for the canonical density operator of a discrete system coupled to a harmonic bath based on the path integral expansion of the Boltzmann factor. The introduction of this simple and controllable approach allows for the exact rendering of the canonical distribution and permits systematic convergence of static properties with respect to the number of path integral steps. In addition, the expressions derived here provide an exact and facile interface with quasi- and semi-classical dynamical methods, which enables the direct calculation of equilibrium time correlation functions within a wide array of approaches. We demonstrate that the present method represents a practical path for the calculation of thermodynamic data for the spin-boson and related systems. We illustrate the power of the present approach by detailing the improvement of the quality of Ehrenfest theory for the correlation function C zz (t)=Re⟨σ z (0)σ z (t)⟩ for the spin-boson model with systematic convergence to the exact sampling function. Importantly, the numerically exact nature of the scheme presented here and its compatibility with semiclassical methods allows for the systematic testing of commonly used approximations for the Wigner-transformed canonical density.
We derive a semi-analytical form for the Wigner transform for the canonical density operator of a discrete system coupled to a harmonic bath based on the path integral expansion of the Boltzmann factor. The introduction of this simple and controllable approach allows for the exact rendering of the canonical distribution and permits systematic convergence of static properties with respect to the number of path integral steps. In addition, the expressions derived here provide an exact and facile interface with quasi- and semi-classical dynamical methods, which enables the direct calculation of equilibrium time correlation functions within a wide array of approaches. We demonstrate that the present method represents a practical path for the calculation of thermodynamic data for the spin-boson and related systems. We illustrate the power of the present approach by detailing the improvement of the quality of Ehrenfest theory for the correlation function C_zz(t) = Re⟨σ_z(0)σ_z(t)⟩ for the spin-boson model with systematic convergence to the exact sampling function. Importantly, the numerically exact nature of the scheme presented here and its compatibility with semiclassical methods allows for the systematic testing of commonly used approximations for the Wigner-transformed canonical density.

  6. Lensless digital holography with diffuse illumination through a pseudo-random phase mask.

    PubMed

    Bernet, Stefan; Harm, Walter; Jesacher, Alexander; Ritsch-Marte, Monika

    2011-12-05

Microscopic imaging with a setup consisting of a pseudo-random phase mask and an open CMOS camera, without an imaging objective, is demonstrated. The pseudo-random phase mask acts as a diffuser for an incoming laser beam, scattering a speckle pattern to a CMOS chip, which is recorded once as a reference. A sample that is afterwards inserted somewhere in the optical beam path changes the speckle pattern. A single (non-iterative) image processing step, comparing the modified speckle pattern with the previously recorded one, generates a sharp image of the sample. After a first calibration, the method works in real time and allows quantitative imaging of complex (amplitude and phase) samples in an extended three-dimensional volume. Since no lenses are used, the method is free from lens aberrations. Compared to standard inline holography, the diffuse sample illumination improves the axial sectioning capability by increasing the effective numerical aperture in the illumination path, and it suppresses the undesired so-called twin images. For demonstration, a high-resolution spatial light modulator (SLM) is programmed to act as the pseudo-random phase mask. We show experimental results, imaging microscopic biological samples, e.g. insects, within an extended volume at a distance of 15 cm with a transverse and longitudinal resolution of about 60 μm and 400 μm, respectively.

  7. Path length entropy analysis of diastolic heart sounds.

    PubMed

    Griffel, Benjamin; Zia, Mohammad K; Fridman, Vladamir; Saponieri, Cesare; Semmlow, John L

    2013-09-01

    Early detection of coronary artery disease (CAD) using the acoustic approach, a noninvasive and cost-effective method, would greatly improve the outcome of CAD patients. To detect CAD, we analyze diastolic sounds for possible CAD murmurs. We observed diastolic sounds to exhibit 1/f structure and developed a new method, path length entropy (PLE) and a scaled version (SPLE), to characterize this structure to improve CAD detection. We compare SPLE results to Hurst exponent, Sample entropy and Multiscale entropy for distinguishing between normal and CAD patients. SPLE achieved a sensitivity-specificity of 80%-81%, the best of the tested methods. However, PLE and SPLE are not sufficient to prove nonlinearity, and evaluation using surrogate data suggests that our cardiovascular sound recordings do not contain significant nonlinear properties. Copyright © 2013 Elsevier Ltd. All rights reserved.
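Path length entropy itself is not specified in the abstract, but one of the baselines it is compared against, Sample entropy, is a standard algorithm and can be sketched as follows (the parameter choices m = 2 and r = 0.2 are conventional defaults, not values taken from the paper):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) = -ln(A/B), where B counts pairs of
    length-m templates and A pairs of length-(m+1) templates that match
    within tolerance r (Chebyshev distance, self-matches excluded)."""
    n = len(x)
    def count_matches(length):
        templates = [x[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count
    b = count_matches(m)
    a = count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

# A perfectly regular signal has many repeated templates -> entropy near 0;
# an irregular signal would score higher.
regular = [0.0, 1.0] * 50
s = sample_entropy(regular, m=2, r=0.2)
```

Low values indicate a predictable signal; entropy-style measures like this underlie the comparison the paper reports between normal and CAD recordings.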

  8. Path Length Entropy Analysis of Diastolic Heart Sounds

    PubMed Central

    Griffel, B.; Zia, M. K.; Fridman, V.; Saponieri, C.; Semmlow, J. L.

    2013-01-01

    Early detection of coronary artery disease (CAD) using the acoustic approach, a noninvasive and cost-effective method, would greatly improve the outcome of CAD patients. To detect CAD, we analyze diastolic sounds for possible CAD murmurs. We observed diastolic sounds to exhibit 1/f structure and developed a new method, path length entropy (PLE) and a scaled version (SPLE), to characterize this structure to improve CAD detection. We compare SPLE results to Hurst exponent, Sample entropy and Multi-scale entropy for distinguishing between normal and CAD patients. SPLE achieved a sensitivity-specificity of 80%–81%, the best of the tested methods. However, PLE and SPLE are not sufficient to prove nonlinearity, and evaluation using surrogate data suggests that our cardiovascular sound recordings do not contain significant nonlinear properties. PMID:23930808

  9. Logistic Regression and Path Analysis Method to Analyze Factors influencing Students’ Achievement

    NASA Astrophysics Data System (ADS)

    Noeryanti, N.; Suryowati, K.; Setyawan, Y.; Aulia, R. R.

    2018-04-01

Students' academic achievement is shaped by two groups of factors, internal and external. The internal factors consist of intelligence (X1), health (X2), interest (X3), and motivation (X4). The external factors consist of family environment (X5), school environment (X6), and society environment (X7). The subjects of this research are eighth-grade students of the 2016/2017 school year at SMPN 1 Jiwan Madiun, selected by simple random sampling. Primary data were obtained by distributing questionnaires. Binary logistic regression analysis was used to identify which internal and external factors affect students' achievement and how they trend, and path analysis was used to determine which factors influence achievement directly, indirectly, or in total. Based on the binary logistic regression results, the variables that affect students' achievement are interest and motivation. Based on the path analysis results, the factors with a direct impact on students' achievement are students' interest (59%) and students' motivation (27%), while the factors with indirect influences on students' achievement are family environment (97%) and school environment (37%).
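As a rough illustration of the first analysis step, binary logistic regression can be fit by plain gradient descent. The two synthetic predictors below loosely stand in for interest and motivation scores and are entirely hypothetical, not the study's data:

```python
import math, random

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Binary logistic regression fit by batch gradient descent.
    Returns the weight vector with the intercept in position 0."""
    n_feat = len(X[0])
    w = [0.0] * (n_feat + 1)
    for _ in range(epochs):
        grad = [0.0] * (n_feat + 1)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))          # sigmoid probability
            err = p - yi
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 if z > 0 else 0

# Hypothetical data: two score-like predictors vs. a pass/fail outcome.
random.seed(0)
X = [[random.gauss(1, 0.5), random.gauss(1, 0.5)] for _ in range(50)] + \
    [[random.gauss(-1, 0.5), random.gauss(-1, 0.5)] for _ in range(50)]
y = [1] * 50 + [0] * 50
w = train_logistic(X, y)
acc = sum(predict(w, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

The fitted coefficients play the role the study assigns to the logistic model: identifying which predictors move the odds of achievement.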

  10. Path integral Monte Carlo and the electron gas

    NASA Astrophysics Data System (ADS)

    Brown, Ethan W.

Path integral Monte Carlo is a proven method for accurately simulating quantum mechanical systems at finite-temperature. By stochastically sampling Feynman's path integral representation of the quantum many-body density matrix, path integral Monte Carlo includes non-perturbative effects like thermal fluctuations and particle correlations in a natural way. Over the past 30 years, path integral Monte Carlo has been successfully employed to study the low density electron gas, high-pressure hydrogen, and superfluid helium. For systems where the role of Fermi statistics is important, however, traditional path integral Monte Carlo simulations have an exponentially decreasing efficiency with decreased temperature and increased system size. In this thesis, we work towards improving this efficiency, both through approximate and exact methods, as specifically applied to the homogeneous electron gas. We begin with a brief overview of the current state of atomic simulations at finite-temperature before we delve into a pedagogical review of the path integral Monte Carlo method. We then spend some time discussing the one major issue preventing exact simulation of Fermi systems, the sign problem. Afterwards, we introduce a way to circumvent the sign problem in PIMC simulations through a fixed-node constraint. We then apply this method to the homogeneous electron gas at a large swath of densities and temperatures in order to map out the warm-dense matter regime. The electron gas can be a representative model for a host of real systems, from simple metals to stellar interiors. However, its most common use is as input into density functional theory. To this end, we aim to build an accurate representation of the electron gas from the ground state to the classical limit and examine its use in finite-temperature density functional formulations. The latter half of this thesis focuses on possible routes beyond the fixed-node approximation. 
As a first step, we utilize the variational principle inherent in the path integral Monte Carlo method to optimize the nodal surface. By using an ansatz resembling a free particle density matrix, we make a unique connection between a nodal effective mass and the traditional effective mass of many-body quantum theory. We then propose and test several alternate nodal ansatzes and apply them to single atomic systems. Finally, we propose a method to tackle the sign problem head on, by leveraging the relatively simple structure of permutation space. Using this method, we find we can perform exact simulations of the electron gas and 3He that were previously impossible.

  11. Motion planning for autonomous vehicle based on radial basis function neural network in unstructured environment.

    PubMed

    Chen, Jiajia; Zhao, Pan; Liang, Huawei; Mei, Tao

    2014-09-18

The autonomous vehicle is an automated system equipped with features like environment perception, decision-making, motion planning, and control and execution technology. Navigating in an unstructured and complex environment is a huge challenge for autonomous vehicles, due to the irregular shape of the road, the requirement of real-time planning, and the nonholonomic constraints of the vehicle. This paper presents a motion planning method, based on the Radial Basis Function (RBF) neural network, to guide the autonomous vehicle in unstructured environments. The proposed algorithm extracts the drivable region from the perception grid map based on the global path, which is available in the road network. The sample points are randomly selected in the drivable region, and a gradient descent method is used to train the RBF network. The parameters of the motion-planning algorithm are verified through simulation and experiment. It is observed that the proposed approach produces a flexible, smooth, and safe path that can fit any road shape. The method is implemented on an autonomous vehicle and verified against many outdoor scenes; furthermore, a comparison of the proposed method with the existing well-known Rapidly-exploring Random Tree (RRT) method is presented. The experimental results show that the proposed method is highly effective in planning the vehicle path and offers better motion quality.
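A minimal sketch of the RBF-network fitting step, training output weights by gradient descent on sampled points. The 1-D "lateral offset" profile, the centers, and the widths below are invented for illustration and are not the paper's configuration:

```python
import math

def rbf_features(x, centers, width):
    """Gaussian radial basis activations for a scalar input."""
    return [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers]

def train_rbf(xs, ys, centers, width, lr=0.5, epochs=3000):
    """Fit the RBF-network output weights by stochastic gradient descent
    on the squared error at the sampled path points."""
    w = [0.0] * len(centers)
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            phi = rbf_features(x, centers, width)
            err = sum(wi * pi for wi, pi in zip(w, phi)) - y
            w = [wi - lr * err * pi for wi, pi in zip(w, phi)]
    return w

# Hypothetical drivable region: fit a smooth lateral-offset profile y(x)
# from sample points, the way the planner fits a path through its samples.
xs = [i / 10 for i in range(11)]                # stations along the road
ys = [math.sin(math.pi * x) for x in xs]        # target path shape
centers = [i / 5 for i in range(6)]             # RBF centers over [0, 1]
w = train_rbf(xs, ys, centers, width=0.3)
fit = [sum(wi * pi for wi, pi in zip(w, rbf_features(x, centers, 0.3)))
       for x in xs]
max_err = max(abs(f - y) for f, y in zip(fit, ys))
```

Because the output is a smooth superposition of Gaussians, the fitted path is continuous and differentiable, which is the property the paper exploits for smooth, drivable trajectories.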

  12. Motion Planning for Autonomous Vehicle Based on Radial Basis Function Neural Network in Unstructured Environment

    PubMed Central

    Chen, Jiajia; Zhao, Pan; Liang, Huawei; Mei, Tao

    2014-01-01

The autonomous vehicle is an automated system equipped with features like environment perception, decision-making, motion planning, and control and execution technology. Navigating in an unstructured and complex environment is a huge challenge for autonomous vehicles, due to the irregular shape of the road, the requirement of real-time planning, and the nonholonomic constraints of the vehicle. This paper presents a motion planning method, based on the Radial Basis Function (RBF) neural network, to guide the autonomous vehicle in unstructured environments. The proposed algorithm extracts the drivable region from the perception grid map based on the global path, which is available in the road network. The sample points are randomly selected in the drivable region, and a gradient descent method is used to train the RBF network. The parameters of the motion-planning algorithm are verified through simulation and experiment. It is observed that the proposed approach produces a flexible, smooth, and safe path that can fit any road shape. The method is implemented on an autonomous vehicle and verified against many outdoor scenes; furthermore, a comparison of the proposed method with the existing well-known Rapidly-exploring Random Tree (RRT) method is presented. The experimental results show that the proposed method is highly effective in planning the vehicle path and offers better motion quality. PMID:25237902

  13. Improved graphite furnace atomizer

    DOEpatents

    Siemer, D.D.

    1983-05-18

A graphite furnace atomizer for use in graphite furnace atomic absorption spectroscopy is described wherein the heating elements are affixed near the optical path and away from the point of sample deposition, so that when the sample is volatilized the spectroscopic temperature at the optical path is at least that of the volatilization temperature, whereby analyte-concomitant complex formation is advantageously reduced. The atomizer may be elongated along its axis to increase the distance between the optical path and the sample deposition point. Also, the atomizer may be elongated along the axis of the optical path, whereby its analytical sensitivity is greatly increased.

  14. Photo ion spectrometer

    DOEpatents

    Gruen, Dieter M.; Young, Charles E.; Pellin, Michael J.

    1989-01-01

    A method and apparatus for extracting for quantitative analysis ions of selected atomic components of a sample. A lens system is configured to provide a slowly diminishing field region for a volume containing the selected atomic components, enabling accurate energy analysis of ions generated in the slowly diminishing field region. The lens system also enables focusing on a sample of a charged particle beam, such as an ion beam, along a path length perpendicular to the sample and extraction of the charged particles along a path length also perpendicular to the sample. Improvement of signal to noise ratio is achieved by laser excitation of ions to selected autoionization states before carrying out quantitative analysis. Accurate energy analysis of energetic charged particles is assured by using a preselected resistive thick film configuration disposed on an insulator substrate for generating predetermined electric field boundary conditions to achieve for analysis the required electric field potential. The spectrometer also is applicable in the fields of SIMS, ISS and electron spectroscopy.

  15. Photo ion spectrometer

    DOEpatents

    Gruen, D.M.; Young, C.E.; Pellin, M.J.

    1989-08-08

    A method and apparatus are described for extracting for quantitative analysis ions of selected atomic components of a sample. A lens system is configured to provide a slowly diminishing field region for a volume containing the selected atomic components, enabling accurate energy analysis of ions generated in the slowly diminishing field region. The lens system also enables focusing on a sample of a charged particle beam, such as an ion beam, along a path length perpendicular to the sample and extraction of the charged particles along a path length also perpendicular to the sample. Improvement of signal to noise ratio is achieved by laser excitation of ions to selected auto-ionization states before carrying out quantitative analysis. Accurate energy analysis of energetic charged particles is assured by using a preselected resistive thick film configuration disposed on an insulator substrate for generating predetermined electric field boundary conditions to achieve for analysis the required electric field potential. The spectrometer also is applicable in the fields of SIMS, ISS and electron spectroscopy. 8 figs.

  16. The influence of parenting style on academic achievement and career path

    PubMed Central

    ZAHED ZAHEDANI, ZAHRA; REZAEE, RITA; YAZDANI, ZAHRA; BAGHERI, SINA; NABEIEI, PARISA

    2016-01-01

Introduction Several factors affect the academic performance of college students, and parenting style is one significant factor. The current study was done with the purpose of investigating the relationship between parenting styles, academic achievement, and career path of students at Shiraz University of Medical Sciences.     Methods This is a correlation study carried out at Shiraz University of Medical Sciences. Among 1600 students, 310 students were selected randomly as the sample. Baumrind’s Parenting Style and Moqimi’s Career Path questionnaires were used, and the obtained scores were correlated with the students' transcripts. Pearson's correlation coefficient was used to study the relations between variables. Results There was a significant relationship between authoritarian parenting style and educational success (p=0.03). The findings also showed a significant relationship between firm parenting style and career path of the students, authoritarian parenting style and career path of the students, and educational success and career path of the students (p=0.001). Conclusion Parents have an important role in identifying children’s talent and guiding them. Mutual understanding and a close relationship between parents and children are recommended. It is therefore recommended that sound methods of parent-child interaction be given greater value, and that parents familiarize their children with the roles of occupations in society and the need for legitimate employment; this should be emphasized through mass media and family training classes. PMID:27382580

  17. Some path-following techniques for solution of nonlinear equations and comparison with parametric differentiation

    NASA Technical Reports Server (NTRS)

    Barger, R. L.; Walters, R. W.

    1986-01-01

Some path-following techniques are described and compared with other methods. Multipurpose techniques that can be applied at more than one stage of the path-following computation yield a system that is relatively simple to understand, program, and use. Comparison of path-following methods with the method of parametric differentiation reveals definite advantages for the path-following methods. The fact that parametric differentiation has found a broader range of applications indicates that path-following methods have been underutilized.

  18. Can an inadequate cervical cytology sample in ThinPrep be converted to a satisfactory sample by processing it with a SurePath preparation?

    PubMed

    Sørbye, Sveinung Wergeland; Pedersen, Mette Kristin; Ekeberg, Bente; Williams, Merete E Johansen; Sauer, Torill; Chen, Ying

    2017-01-01

The Norwegian Cervical Cancer Screening Program recommends screening every 3 years for women between 25 and 69 years of age. There is a large difference in the percentage of unsatisfactory samples between laboratories that use different brands of liquid-based cytology. We wished to examine whether inadequate ThinPrep samples could be made satisfactory by processing them with the SurePath protocol. A total of 187 inadequate ThinPrep specimens from the Department of Clinical Pathology at University Hospital of North Norway were sent to Akershus University Hospital for conversion to SurePath medium. Ninety-one (48.7%) were processed through the automated "gynecologic" application for cervix cytology samples, and 96 (51.3%) were processed with the "nongynecological" automatic program. Out of 187 samples that had been unsatisfactory with ThinPrep, 93 (49.7%) were satisfactory after being converted to SurePath. The rate of satisfactory cytology was 36.6% and 62.5% for samples run through the "gynecology" program and "nongynecology" program, respectively. Of the 93 samples that became satisfactory after conversion from ThinPrep to SurePath, 80 (86.0%) were screened as normal while 13 samples (14.0%) were given an abnormal diagnosis, which included 5 atypical squamous cells of undetermined significance, 5 low-grade squamous intraepithelial lesion, 2 atypical glandular cells not otherwise specified, and 1 atypical squamous cells cannot exclude high-grade squamous intraepithelial lesion. A total of 2.1% (4/187) of the women received a diagnosis of cervical intraepithelial neoplasia 2 or higher at a later follow-up. Converting cytology samples from ThinPrep to SurePath processing can reduce the number of unsatisfactory samples. The samples should be run through the "nongynecology" program to ensure an adequate number of cells.

  19. Surface Wave Tomography with Spatially Varying Smoothing Based on Continuous Model Regionalization

    NASA Astrophysics Data System (ADS)

    Liu, Chuanming; Yao, Huajian

    2017-03-01

Surface wave tomography based on continuous regionalization of model parameters is widely used to invert for 2-D phase or group velocity maps. An inevitable problem is that the distribution of ray paths is far from homogeneous due to the spatially uneven distribution of stations and seismic events, which often affects the spatial resolution of the tomographic model. We present an improved tomographic method with a spatially varying smoothing scheme that is based on the continuous regionalization approach. The smoothness of the inverted model is constrained by the Gaussian a priori model covariance function with spatially varying correlation lengths based on ray path density. In addition, a two-step inversion procedure is used to suppress the effects of data outliers on tomographic models. Both synthetic and real data are used to evaluate this newly developed tomographic algorithm. In the synthetic tests, when the contrived model has different scales of anomalies but with uneven ray path distribution, we compare the performance of our spatially varying smoothing method with the traditional inversion method, and show that the new method is capable of improving the recovery in regions of dense ray sampling. For real data applications, the resulting phase velocity maps of Rayleigh waves in SE Tibet produced using the spatially varying smoothing method show similar features to the results with the traditional method. However, the new results contain more detailed structures and appear to better resolve the amplitude of anomalies. From both synthetic and real data tests we demonstrate that our new approach is useful to achieve spatially varying resolution when used in regions with heterogeneous ray path distribution.
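One way the spatially varying smoothing could be realized: map the ray density at each node to a per-node correlation length, then build a Gaussian prior covariance from pairwise-averaged lengths. The linear density-to-length mapping and the averaged kernel below are assumptions for illustration, not the authors' exact formulation:

```python
import math

def correlation_lengths(ray_density, l_min=0.5, l_max=2.0):
    """Map ray-path density at each node to a correlation length:
    dense sampling -> short length (finer structure allowed),
    sparse sampling -> long length (stronger smoothing)."""
    d_max = max(ray_density)
    return [l_max - (l_max - l_min) * d / d_max for d in ray_density]

def prior_covariance(nodes, lengths, sigma=1.0):
    """Gaussian a priori model covariance with spatially varying
    correlation length, averaged pairwise to keep the matrix symmetric."""
    n = len(nodes)
    cov = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            d2 = sum((a - b) ** 2 for a, b in zip(nodes[i], nodes[j]))
            l_ij = 0.5 * (lengths[i] + lengths[j])
            cov[i][j] = sigma ** 2 * math.exp(-d2 / (2 * l_ij ** 2))
    return cov

nodes = [(0, 0), (1, 0), (0, 1), (1, 1)]
density = [10, 2, 2, 1]          # hypothetical ray hit counts per node
L = correlation_lengths(density)
C = prior_covariance(nodes, L)
```

The densely sampled node receives the shortest correlation length, so the prior permits sharper velocity variation there, which is the behavior the synthetic tests in the record demonstrate.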

  20. Measurement of tracer gas distributions using an open-path FTIR system coupled with computed tomography

    NASA Astrophysics Data System (ADS)

    Drescher, Anushka C.; Yost, Michael G.; Park, Doo Y.; Levine, Steven P.; Gadgil, Ashok J.; Fischer, Marc L.; Nazaroff, William W.

    1995-05-01

Optical remote sensing and iterative computed tomography (CT) can be combined to measure the spatial distribution of gaseous pollutant concentrations in a plane. We have conducted chamber experiments to test this combination of techniques using an Open Path Fourier Transform Infrared Spectrometer (OP-FTIR) and a standard algebraic reconstruction technique (ART). ART was found to converge to solutions that showed excellent agreement with the ray integral concentrations measured by the FTIR but were inconsistent with simultaneously gathered point sample concentration measurements. A new CT method was developed based on (a) the superposition of bivariate Gaussians to model the concentration distribution and (b) a simulated annealing minimization routine to find the parameters of the Gaussians that resulted in the best fit to the ray integral concentration data. This new method, named smooth basis function minimization (SBFM), generated reconstructions that agreed well, both qualitatively and quantitatively, with the concentration profiles generated from point sampling. We present one set of illustrative experimental data to compare the performance of ART and SBFM.
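A compact caricature of the SBFM idea: model the concentration field with a bivariate Gaussian (a single isotropic one here, for brevity), compute ray integrals numerically, and fit the parameters by simulated annealing against beam data. The geometry, cooling schedule, and all numbers below are illustrative assumptions, not the paper's setup:

```python
import math, random

def gaussian2d(x, y, p):
    """Bivariate (isotropic, for brevity) Gaussian basis function.
    p = (amplitude, cx, cy, sigma)."""
    a, cx, cy, s = p
    return a * math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * s * s))

def ray_integral(p, start, end, n=50):
    """Numerical line integral of the model along one optical path."""
    (x0, y0), (x1, y1) = start, end
    length = math.hypot(x1 - x0, y1 - y0)
    total = sum(gaussian2d(x0 + (x1 - x0) * (k + 0.5) / n,
                           y0 + (y1 - y0) * (k + 0.5) / n, p) for k in range(n))
    return total * length / n

def sbfm_fit(rays, data, n_steps=2000, seed=3):
    """Simulated-annealing search for Gaussian parameters that best
    reproduce the measured ray-integral concentrations."""
    random.seed(seed)
    p = [1.0, 0.5, 0.5, 0.3]                     # initial guess
    def cost(q):
        return sum((ray_integral(q, s, e) - d) ** 2
                   for (s, e), d in zip(rays, data))
    c = cost(p)
    for step in range(n_steps):
        temp = 1e-3 * (1 - step / n_steps) + 1e-6
        q = [qi + random.gauss(0, 0.05) for qi in p]
        q[3] = max(q[3], 0.05)                   # keep the width positive
        cq = cost(q)
        if cq < c or random.random() < math.exp((c - cq) / temp):
            p, c = q, cq
    return p

# Synthetic test: recover a known plume from crossed beam-path measurements.
true = (2.0, 0.3, 0.7, 0.2)
rays = [((0, i / 5), (1, i / 5)) for i in range(6)] + \
       [((i / 5, 0), (i / 5, 1)) for i in range(6)]
data = [ray_integral(true, s, e) for s, e in rays]
fit = sbfm_fit(rays, data)
```

Fitting smooth basis functions to ray integrals, rather than solving for pixel values as ART does, is what lets SBFM stay consistent with sparse beam geometries.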

  1. Enzymatic Kinetic Isotope Effects from First-Principles Path Sampling Calculations.

    PubMed

    Varga, Matthew J; Schwartz, Steven D

    2016-04-12

    In this study, we develop and test a method to determine the rate of particle transfer and kinetic isotope effects in enzymatic reactions, specifically yeast alcohol dehydrogenase (YADH), from first-principles. Transition path sampling (TPS) and normal mode centroid dynamics (CMD) are used to simulate these enzymatic reactions without knowledge of their reaction coordinates and with the inclusion of quantum effects, such as zero-point energy and tunneling, on the transferring particle. Though previous studies have used TPS to calculate reaction rate constants in various model and real systems, it has not been applied to a system as large as YADH. The calculated primary H/D kinetic isotope effect agrees with previously reported experimental results, within experimental error. The kinetic isotope effects calculated with this method correspond to the kinetic isotope effect of the transfer event itself. The results reported here show that the kinetic isotope effects calculated from first-principles, purely for barrier passage, can be used to predict experimental kinetic isotope effects in enzymatic systems.
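Transition path sampling's central move, shooting, can be illustrated on a 1-D double well with overdamped Langevin dynamics. This two-way shooting toy (fresh noise from a random slice, accepting only paths that still connect basin A to basin B) is a caricature of TPS, not the TPS/centroid-dynamics machinery applied to YADH:

```python
import math, random

def force(x):                      # V(x) = (x^2 - 1)^2, a double well
    return -4 * x * (x * x - 1)

def segment(x0, n_steps, dt=0.01, kT=0.4):
    """Overdamped Langevin trajectory started at x0 (unit friction)."""
    path, x = [x0], x0
    for _ in range(n_steps):
        x += force(x) * dt + math.sqrt(2 * kT * dt) * random.gauss(0, 1)
        path.append(x)
    return path

def in_A(x): return x < -0.7       # reactant basin
def in_B(x): return x > 0.7        # product basin

def shooting_move(path, n_steps):
    """Two-way shooting: regrow the path from a random interior slice
    with fresh noise; accept only if it still connects A to B."""
    k = random.randrange(1, len(path) - 1)
    backward = segment(path[k], k)[::-1]        # treated as time-reversed
    forward = segment(path[k], n_steps - k)
    trial = backward + forward[1:]
    return (trial, True) if in_A(trial[0]) and in_B(trial[-1]) else (path, False)

random.seed(4)
n_steps = 400
# Artificial initial path from A to B; shooting relaxes it toward dynamics.
path = [-1.0 + 2.0 * i / n_steps for i in range(n_steps + 1)]
accepted = 0
for _ in range(200):
    path, ok = shooting_move(path, n_steps)
    accepted += ok
```

The key property, shared with the enzymatic study, is that reactive trajectories are harvested without ever specifying a reaction coordinate: the acceptance rule only checks the basin membership of the endpoints.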

  2. Accurate Exchange-Correlation Energies for the Warm Dense Electron Gas.

    PubMed

    Malone, Fionn D; Blunt, N S; Brown, Ethan W; Lee, D K K; Spencer, J S; Foulkes, W M C; Shepherd, James J

    2016-09-09

    The density matrix quantum Monte Carlo (DMQMC) method is used to sample exact-on-average N-body density matrices for uniform electron gas systems of up to 10^{124} matrix elements via a stochastic solution of the Bloch equation. The results of these calculations resolve a current debate over the accuracy of the data used to parametrize finite-temperature density functionals. Exchange-correlation energies calculated using the real-space restricted path-integral formalism and the k-space configuration path-integral formalism disagree by up to ∼10% at certain reduced temperatures T/T_{F}≤0.5 and densities r_{s}≤1. Our calculations confirm the accuracy of the configuration path-integral Monte Carlo results available at high density and bridge the gap to lower densities, providing trustworthy data in the regime typical of planetary interiors and solids subject to laser irradiation. We demonstrate that the DMQMC method can calculate free energies directly and present exact free energies for T/T_{F}≥1 and r_{s}≤2.

  3. Core self-evaluations and work engagement: Testing a perception, action, and development path.

    PubMed

    Tims, Maria; Akkermans, Jos

    2017-01-01

    Core self-evaluations (CSE) have predictive value for important work outcomes such as job satisfaction and job performance. However, little is known about the mechanisms that may explain these relationships. The purpose of the present study is to contribute to CSE theory by proposing and subsequently providing a first test of theoretically relevant mediating paths through which CSE may be related to work engagement. Based on approach/avoidance motivation and Job Demands-Resources theory, we examined a perception (via job characteristics), action (via job crafting), and development path (via career competencies). Two independent samples were obtained from employees working in Germany and The Netherlands (N = 303 and N = 404, respectively). When taking all mediators into account, results showed that the perception path represented by autonomy and social support played a minor role in the relationship between CSE and work engagement. Specifically, autonomy did not function as a mediator in both samples while social support played a marginally significant role in the CSE-work engagement relationship in sample 1 and received full support in sample 2. The action path exemplified by job crafting mediated the relationship between CSE and work engagement in both samples. Finally, the development path operationalized with career competencies mediated the relationship between CSE and work engagement in sample 1. The study presents evidence for an action and development path over and above the often tested perception path to explain how CSE is related to work engagement. This is one of the first studies to propose and show that CSE not only influences perceptions but also triggers employee actions and developmental strategies that relate to work engagement.

  4. Core self-evaluations and work engagement: Testing a perception, action, and development path

    PubMed Central

    Akkermans, Jos

    2017-01-01

    Core self-evaluations (CSE) have predictive value for important work outcomes such as job satisfaction and job performance. However, little is known about the mechanisms that may explain these relationships. The purpose of the present study is to contribute to CSE theory by proposing and subsequently providing a first test of theoretically relevant mediating paths through which CSE may be related to work engagement. Based on approach/avoidance motivation and Job Demands-Resources theory, we examined a perception (via job characteristics), action (via job crafting), and development path (via career competencies). Two independent samples were obtained from employees working in Germany and The Netherlands (N = 303 and N = 404, respectively). When taking all mediators into account, results showed that the perception path represented by autonomy and social support played a minor role in the relationship between CSE and work engagement. Specifically, autonomy did not function as a mediator in both samples while social support played a marginally significant role in the CSE–work engagement relationship in sample 1 and received full support in sample 2. The action path exemplified by job crafting mediated the relationship between CSE and work engagement in both samples. Finally, the development path operationalized with career competencies mediated the relationship between CSE and work engagement in sample 1. The study presents evidence for an action and development path over and above the often tested perception path to explain how CSE is related to work engagement. This is one of the first studies to propose and show that CSE not only influences perceptions but also triggers employee actions and developmental strategies that relate to work engagement. PMID:28787464

  5. Multiscale simulations of patchy particle systems combining Molecular Dynamics, Path Sampling and Green's Function Reaction Dynamics

    NASA Astrophysics Data System (ADS)

    Bolhuis, Peter

Important reaction-diffusion processes, such as biochemical networks in living cells, or self-assembling soft matter, span many orders in length and time scales. In these systems, the reactants' spatial dynamics at mesoscopic length and time scales of microns and seconds is coupled to the reactions between the molecules at microscopic length and time scales of nanometers and milliseconds. This wide range of length and time scales makes these systems notoriously difficult to simulate. While mean-field rate equations cannot describe such processes, the mesoscopic Green's Function Reaction Dynamics (GFRD) method enables efficient simulation at the particle level provided the microscopic dynamics can be integrated out. Yet, many processes exhibit non-trivial microscopic dynamics that can qualitatively change the macroscopic behavior, calling for an atomistic, microscopic description. The recently developed multiscale Molecular Dynamics Green's Function Reaction Dynamics (MD-GFRD) approach combines GFRD for simulating the system at the mesoscopic scale where particles are far apart, with microscopic Molecular (or Brownian) Dynamics for simulating the system at the microscopic scale where reactants are in close proximity. The association and dissociation of particles are treated with rare event path sampling techniques. I will illustrate the efficiency of this method for patchy particle systems. Replacing the microscopic regime with a Markov State Model avoids the microscopic regime completely. The MSM is then pre-computed using advanced path-sampling techniques such as multistate transition interface sampling. I illustrate this approach on patchy particle systems that show multiple modes of binding. MD-GFRD is generic, and can be used to efficiently simulate reaction-diffusion systems at the particle level, including the orientational dynamics, opening up the possibility for large-scale simulations of e.g. protein signaling networks.

  6. Nuclear astrophysics at FRANZ

    NASA Astrophysics Data System (ADS)

    Reifarth, R.; Dababneh, S.; Fiebiger, S.; Glorius, J.; Göbel, K.; Heil, M.; Hillmann, P.; Heftrich, T.; Langer, C.; Meusel, O.; Plag, R.; Schmidt, S.; Slavkovská, Z.; Veltum, D.; Weigand, M.; Wiesner, C.; Wolf, C.; Zadeh, A.

    2018-01-01

    The neutron capture cross section of radioactive isotopes for neutron energies in the keV region will be measured by a time-of-flight (TOF) experiment. NAUTILUS will provide a unique facility realizing the TOF technique with an ultra-short flight path at the FRANZ setup at Goethe-University Frankfurt am Main, Germany. A highly optimized spherical photon calorimeter will be built and installed at an ultra-short flight path. This new method allows the measurement of neutron capture cross sections on extremely small samples, as needed in the case of 85Kr, which will be produced as an isotopically pure radioactive sample. The successful measurement will provide insights into the dynamics of the late stages of stars, an important independent check of the evolution of the Universe, and a proof of principle for the method.

  7. Diffractometer data collecting method and apparatus

    DOEpatents

    Steinmeyer, P.A.

    1991-04-16

    Diffractometer data is collected without the use of a movable receiver. A scanning device, positioned in the diffractometer between a sample and detector, varies the amount of the beam diffracted from the sample that is received by the detector in such a manner that the beam is detected in an integrated form. In one embodiment, a variable diameter beam stop is used which comprises a drop of mercury captured between a pair of spaced sheets and disposed in the path of the diffracted beam. By varying the spacing between the sheets, the diameter of the mercury drop is varied. In another embodiment, an adjustable iris diaphragm is positioned in the path of the diffracted beam and the iris opening is adjusted to control the amount of the beam reaching the detector. 5 figures.

  8. Diffractometer data collecting method and apparatus

    DOEpatents

    Steinmeyer, Peter A.

    1991-04-16

    Diffractometer data is collected without the use of a movable receiver. A scanning device, positioned in the diffractometer between a sample and detector, varies the amount of the beam diffracted from the sample that is received by the detector in such a manner that the beam is detected in an integrated form. In one embodiment, a variable diameter beam stop is used which comprises a drop of mercury captured between a pair of spaced sheets and disposed in the path of the diffracted beam. By varying the spacing between the sheets, the diameter of the mercury drop is varied. In another embodiment, an adjustable iris diaphragm is positioned in the path of the diffracted beam and the iris opening is adjusted to control the amount of the beam reaching the detector.

  9. Revisiting the finite temperature string method for the calculation of reaction tubes and free energies

    NASA Astrophysics Data System (ADS)

    Vanden-Eijnden, Eric; Venturoli, Maddalena

    2009-05-01

    An improved and simplified version of the finite temperature string (FTS) method [W. E, W. Ren, and E. Vanden-Eijnden, J. Phys. Chem. B 109, 6688 (2005)] is proposed. Like the original approach, the new method is a scheme to calculate the principal curves associated with the Boltzmann-Gibbs probability distribution of the system, i.e., the curves which are such that their intersection with the hyperplanes perpendicular to themselves coincides with the expected position of the system in these planes (where perpendicular is understood with respect to the appropriate metric). Unlike more standard paths such as the minimum energy path or the minimum free energy path, the location of the principal curve depends on global features of the energy or the free energy landscapes and thereby may remain appropriate in situations where the landscape is rough on the thermal energy scale and/or entropic effects related to the width of the reaction channels matter. Instead of using constrained sampling in hyperplanes as in the original FTS, the new method calculates the principal curve via sampling in the Voronoi tessellation whose generating points are the discretization points along this curve. As shown here, this modification results in greater algorithmic simplicity. As a by-product, it also gives the free energy associated with the Voronoi tessellation. The new method can be applied both in the original Cartesian space of the system or in a set of collective variables. We illustrate FTS on test-case examples and apply it to the study of conformational transitions of the nitrogen regulatory protein C receiver domain using an elastic network model and to the isomerization of solvated alanine dipeptide.
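    A zero-temperature simplification of the string idea (steepest descent on the images plus equal-arc-length reparametrization) can be sketched on a hypothetical double-well potential; this is an illustrative toy, not the paper's finite-temperature Voronoi scheme:

```python
import numpy as np

# Toy potential V(x, y) = (x^2 - 1)^2 + 5*y^2; its minimum energy path
# between the wells at (-1, 0) and (1, 0) runs along y = 0.
def grad(p):
    x, y = p[:, 0], p[:, 1]
    return np.stack([4 * x * (x**2 - 1), 10 * y], axis=1)

n = 21
# Initial guess: a string of images bowed away from the MEP.
path = np.stack([np.linspace(-1, 1, n),
                 0.5 * np.sin(np.pi * np.linspace(0, 1, n))], axis=1)

for _ in range(2000):
    # Steepest-descent update of the interior images.
    path[1:-1] -= 0.005 * grad(path)[1:-1]
    # Reparametrize to equal arc length so images stay spread along the path.
    s = np.concatenate([[0.0],
        np.cumsum(np.linalg.norm(np.diff(path, axis=0), axis=1))])
    u = np.linspace(0, s[-1], n)
    path = np.stack([np.interp(u, s, path[:, 0]),
                     np.interp(u, s, path[:, 1])], axis=1)

print(np.abs(path[:, 1]).max())  # string collapses onto the MEP at y = 0
```

The reparametrization step is the essential ingredient: without it the images slide into the two wells and the transition region is left unresolved.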

  10. Yield surface evolution for columnar ice

    NASA Astrophysics Data System (ADS)

    Zhou, Zhiwei; Ma, Wei; Zhang, Shujuan; Mu, Yanhu; Zhao, Shunpin; Li, Guoyu

    A series of triaxial compression tests, capable of measuring the volumetric strain of the sample, was conducted on columnar ice. A new testing approach of probing the experimental yield surface from a single sample was performed in order to investigate the yield and hardening behaviors of the columnar ice under complex stress states. Based on the characteristics of the volumetric strain, a new method of defining the multiaxial yield strengths of the columnar ice is proposed. The experimental yield surface remains elliptical in shape in the stress space of effective stress versus mean stress. The effects of temperature, loading rate and loading path on the initial yield surface and deformation properties of the columnar ice were also studied. Subsequent yield surfaces of the columnar ice have been explored by using uniaxial and hydrostatic paths. The evolution of the subsequent yield surface exhibits significant path-dependent characteristics. The multiaxial hardening law of the columnar ice was established experimentally. A phenomenological yield criterion was presented for multiaxial yield and hardening behaviors of the columnar ice. The comparisons between the theoretical and measured results indicate that this current model is capable of giving a reasonable prediction for the multiaxial yield and post-yield properties of the columnar ice subjected to different temperature, loading rate and path conditions.

  11. Methods and Devices for Modifying Active Paths in a K-Delta-1-Sigma Modulator

    NASA Technical Reports Server (NTRS)

    Ardalan, Sasan (Inventor)

    2017-01-01

    The invention relates to improved K-Delta-1-Sigma modulators (KD1Ss) that achieve multi-GHz sampling rates in 90 nm and 45 nm CMOS processes, and that provide the capability to balance performance with power in many applications. The improved KD1Ss activate all paths when high performance is needed (e.g. high bandwidth), and reduce the effective bandwidth by shutting down multiple paths when low performance is required. The improved KD1Ss can adjust the baseband filtering for lower bandwidth, and can provide large savings in power consumption while maintaining the communication link, which is a great advantage in space communications. The improved KD1Ss provide a receiver that adjusts to accommodate a higher rate when a packet is received at a low bandwidth: at the initial lower rate, power is saved by turning off paths in the KD1S analog-to-digital converter, and when a higher rate is required, multiple paths are enabled in the KD1S to accommodate the higher bandwidths.

  12. Quadratic String Method for Locating Instantons in Tunneling Splitting Calculations.

    PubMed

    Cvitaš, Marko T

    2018-03-13

    The ring-polymer instanton (RPI) method is an efficient technique for calculating approximate tunneling splittings in high-dimensional molecular systems. In the RPI method, tunneling splitting is evaluated from the properties of the minimum action path (MAP) connecting the symmetric wells, whereby the extensive sampling of the full potential energy surface required by exact quantum-dynamics methods is avoided. Nevertheless, the search for the MAP is usually the most time-consuming step in the standard numerical procedures. Recently, nudged elastic band (NEB) and string methods, originally developed for locating minimum energy paths (MEPs), were adapted for the purpose of MAP finding with great efficiency gains [ J. Chem. Theory Comput. 2016 , 12 , 787 ]. In this work, we develop a new quadratic string method for locating instantons. The Euclidean action is minimized by propagating the initial guess (a path connecting two wells) over the quadratic potential energy surface approximated by means of updated Hessians. This allows the algorithm to take many minimization steps between the potential/gradient calls with further reductions in the computational effort, exploiting the smoothness of the potential energy surface. The approach is general, as it uses Cartesian coordinates, and widely applicable, with the computational effort of finding the instanton usually lower than that of determining the MEP. It can be combined with expensive potential energy surfaces or on-the-fly electronic-structure methods to explore a wide variety of molecular systems.

  13. On the efficacy of spatial sampling using manual scanning paths to determine the spatial average sound pressure level in rooms.

    PubMed

    Hopkins, Carl

    2011-05-01

    In architectural acoustics, noise control and environmental noise, there are often steady-state signals for which it is necessary to measure the spatial-average sound pressure level inside rooms. This requires using fixed microphone positions, mechanical scanning devices, or manual scanning. In comparison with mechanical scanning devices, the human body allows manual scanning to trace out complex geometrical paths in three-dimensional space. To determine the efficacy of manual scanning paths in terms of an equivalent number of uncorrelated samples, an analytical approach is solved numerically. The benchmark used to assess these paths is a minimum of five uncorrelated fixed microphone positions at frequencies above 200 Hz. For paths involving an operator walking across the room, potential problems exist with walking noise and non-uniform scanning speeds. Hence, paths are considered based on a fixed standing position or rotation of the body about a fixed point. In empty rooms, it is shown that a circle, helix, or cylindrical-type path satisfies the benchmark requirement, with the latter two paths being highly efficient at generating large numbers of uncorrelated samples. In furnished rooms where there is limited space for the operator to move, an efficient path comprises three semicircles with 45°-60° separations.
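    The notion of "uncorrelated samples" here rests on the standard diffuse-field result that the pressure correlation between two points a distance d apart falls off as sinc(kd) = sin(kd)/(kd), reaching its first zero at kd = π. A minimal sketch of the implied minimum spacing (assuming a sound speed of 343 m/s):

```python
import math

# In a diffuse field the correlation between two points a distance d apart
# is approximately sin(k*d)/(k*d); the first zero is at k*d = pi, i.e.
# d = c / (2*f). Beyond this spacing, samples are effectively uncorrelated.
def min_uncorrelated_spacing(f_hz, c=343.0):
    return c / (2.0 * f_hz)

def diffuse_field_correlation(d, f_hz, c=343.0):
    kd = 2.0 * math.pi * f_hz * d / c
    return math.sin(kd) / kd if kd != 0.0 else 1.0

print(min_uncorrelated_spacing(200.0))  # ~0.86 m at the 200 Hz benchmark frequency
```

This spacing shrinks with frequency, which is why a fixed-length scanning path yields more equivalent uncorrelated samples at higher frequencies.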

  14. Comparison Of Reaction Barriers In Energy And Free Energy For Enzyme Catalysis

    NASA Astrophysics Data System (ADS)

    Andrés Cisneros, G.; Yang, Weitao

    Reaction paths on potential energy surfaces obtained from QM/MM calculations of enzymatic or solution reactions depend on the starting structure employed for the path calculations. The free energies associated with these paths should be more reliable for studying reaction mechanisms, because statistical averages are used. To investigate this, the role of enzyme environment fluctuations on reaction paths has been studied with an ab initio QM/MM method for the first step of the reaction catalyzed by 4-oxalocrotonate tautomerase (4OT). Four minimum energy paths (MEPs) are compared, which have been determined with two different methods. The first path (path A) has been determined with a procedure that combines the nudged elastic band (NEB) method and a second order parallel path optimizer recently developed in our group. The second path (path B) has also been determined by the combined procedure, however, the enzyme environment has been relaxed by molecular dynamics (MD) simulations. The third path (path C) has been determined with the coordinate driving (CD) method, using the enzyme environment from path B. We compare these three paths to a previously determined path (path D) determined with the CD method. In all four cases the QM/MM-FE method (Y. Zhang et al., JCP, 112, 3483) was employed to obtain the free energy barriers for all four paths. In the case of the combined procedure, the reaction path is approximated by a small number of images which are optimized to the MEP in parallel, which results in a reduced computational cost. However, this does not allow the FEP calculation on the MEP. In order to perform FEP calculations on these paths, we introduce a modification to the NEB method that enables the addition of as many extra images to the path as needed for the FEP calculations. The calculated potential energy barriers show differences in the activation barrier between the calculated paths of as much as 5.17 kcal/mol. 
However, the largest free energy barrier difference is 1.58 kcal/mol. These results show the importance of including environment fluctuations in the calculation of enzymatic activation barriers.

  15. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method

    PubMed Central

    Cao, Youfang; Liang, Jie

    2013-01-01

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. 
This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape. PMID:23862966
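    For orientation, the birth-death process used as the first benchmark can be simulated with a plain, unbiased Gillespie SSA; the sketch below uses hypothetical rate values and is not the ABSIS algorithm itself (which biases reaction selection to reach rare states):

```python
import random

# Unbiased Gillespie SSA for a birth-death process:
# birth at constant rate k_b, death at rate k_d * n.
def ssa_birth_death(n0, k_b, k_d, t_end, rng):
    t, n = 0.0, n0
    traj = [(0.0, n0)]
    while t < t_end:
        a_birth = k_b
        a_death = k_d * n
        a_total = a_birth + a_death
        t += rng.expovariate(a_total)          # time to next reaction
        if t >= t_end:
            break
        # choose which reaction fires, proportionally to its propensity
        n += 1 if rng.random() * a_total < a_birth else -1
        traj.append((t, n))
    return traj

rng = random.Random(0)
traj = ssa_birth_death(n0=10, k_b=1.0, k_d=0.1, t_end=50.0, rng=rng)
```

With these rates the stationary mean is k_b/k_d = 10, so excursions to, say, n = 40 are rare events of exactly the kind direct SSA samples poorly, motivating the importance-sampling bias.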

  16. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method

    NASA Astrophysics Data System (ADS)

    Cao, Youfang; Liang, Jie

    2013-07-01

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. 
This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape.

  17. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method.

    PubMed

    Cao, Youfang; Liang, Jie

    2013-07-14

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscape. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm called adaptively biased sequential importance sampling (ABSIS) method for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of the reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the finite buffer discrete chemical master equation (dCME) method recently developed to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems. 
This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscape.

  18. Graph transformation method for calculating waiting times in Markov chains.

    PubMed

    Trygubenko, Semen A; Wales, David J

    2006-06-21

    We describe an exact approach for calculating transition probabilities and waiting times in finite-state discrete-time Markov processes. All the states and the rules for transitions between them must be known in advance. We can then calculate averages over a given ensemble of paths for both additive and multiplicative properties in a nonstochastic and noniterative fashion. In particular, we can calculate the mean first-passage time between arbitrary groups of stationary points for discrete path sampling databases, and hence extract phenomenological rate constants. We present a number of examples to demonstrate the efficiency and robustness of this approach.
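    As a point of reference, the mean first-passage time that the graph-transformation approach extracts can be computed for a small chain by a direct linear solve; a minimal sketch with hypothetical transition probabilities:

```python
import numpy as np

# Transition matrix of a small discrete-time Markov chain (rows sum to 1).
# States 0 and 1 are transient; state 2 is the absorbing target.
P = np.array([
    [0.5, 0.4, 0.1],
    [0.3, 0.5, 0.2],
    [0.0, 0.0, 1.0],
])

# Mean first-passage times satisfy t_i = 1 + sum_j P_ij * t_j for transient i,
# with t_target = 0.  Solve (I - Q) t = 1, where Q restricts P to the
# transient states.
Q = P[:2, :2]
t = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print(t)  # mean number of steps to absorption from states 0 and 1
```

The point of the graph-transformation method is to evaluate such quantities robustly for very large state spaces, where direct solves like this become numerically fragile; the toy above only fixes the definition.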

  19. Digital micromirror device-based common-path quantitative phase imaging.

    PubMed

    Zheng, Cheng; Zhou, Renjie; Kuang, Cuifang; Zhao, Guangyuan; Yaqoob, Zahid; So, Peter T C

    2017-04-01

    We propose a novel common-path quantitative phase imaging (QPI) method based on a digital micromirror device (DMD). The DMD is placed in a plane conjugate to the objective back-aperture plane for the purpose of generating two plane waves that illuminate the sample. A pinhole is used in the detection arm to filter one of the beams after the sample to create a reference beam. Additionally, a transmission-type liquid crystal device, placed at the objective back-aperture plane, eliminates the specular reflection noise arising from all the "off" state DMD micromirrors, which is common in all DMD-based illuminations. We have demonstrated high sensitivity QPI, which has a measured spatial and temporal noise of 4.92 nm and 2.16 nm, respectively. Experiments with calibrated polystyrene beads illustrate the desired phase measurement accuracy. In addition, we have measured the dynamic height maps of red blood cell membrane fluctuations, showing the efficacy of the proposed system for live cell imaging. Most importantly, the DMD grants the system convenience in varying the interference fringe period on the camera to easily satisfy the pixel sampling conditions. This feature also alleviates the pinhole alignment complexity. We envision that the proposed DMD-based common-path QPI system will allow for system miniaturization and automation for broader adoption.

  20. Digital micromirror device-based common-path quantitative phase imaging

    PubMed Central

    Zheng, Cheng; Zhou, Renjie; Kuang, Cuifang; Zhao, Guangyuan; Yaqoob, Zahid; So, Peter T. C.

    2017-01-01

    We propose a novel common-path quantitative phase imaging (QPI) method based on a digital micromirror device (DMD). The DMD is placed in a plane conjugate to the objective back-aperture plane for the purpose of generating two plane waves that illuminate the sample. A pinhole is used in the detection arm to filter one of the beams after the sample to create a reference beam. Additionally, a transmission-type liquid crystal device, placed at the objective back-aperture plane, eliminates the specular reflection noise arising from all the “off” state DMD micromirrors, which is common in all DMD-based illuminations. We have demonstrated high sensitivity QPI, which has a measured spatial and temporal noise of 4.92 nm and 2.16 nm, respectively. Experiments with calibrated polystyrene beads illustrate the desired phase measurement accuracy. In addition, we have measured the dynamic height maps of red blood cell membrane fluctuations, showing the efficacy of the proposed system for live cell imaging. Most importantly, the DMD grants the system convenience in varying the interference fringe period on the camera to easily satisfy the pixel sampling conditions. This feature also alleviates the pinhole alignment complexity. We envision that the proposed DMD-based common-path QPI system will allow for system miniaturization and automation for broader adoption. PMID:28362789

  1. Zero-Slack, Noncritical Paths

    ERIC Educational Resources Information Center

    Simons, Jacob V., Jr.

    2017-01-01

    The critical path method/program evaluation and review technique (CPM/PERT) approach to project scheduling is based on the importance of managing a project's critical path(s). Although a critical path is the longest path through a network, its location in large projects is facilitated by the computation of activity slack. However, logical fallacies in…
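    The forward/backward pass that yields activity slack, and hence the zero-slack critical path, can be sketched on a hypothetical four-activity network:

```python
# CPM forward/backward pass on a tiny activity-on-node network.
# Durations and precedence relations are hypothetical illustration values.
durations = {"A": 3, "B": 2, "C": 4, "D": 2}
preds = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
order = ["A", "B", "C", "D"]  # topological order

# Forward pass: earliest start (ES) and earliest finish (EF).
ES, EF = {}, {}
for a in order:
    ES[a] = max((EF[p] for p in preds[a]), default=0)
    EF[a] = ES[a] + durations[a]

# Backward pass: latest finish (LF) and latest start (LS).
project_end = max(EF.values())
succs = {a: [b for b in preds if a in preds[b]] for a in preds}
LF, LS = {}, {}
for a in reversed(order):
    LF[a] = min((LS[s] for s in succs[a]), default=project_end)
    LS[a] = LF[a] - durations[a]

slack = {a: LS[a] - ES[a] for a in order}
critical = [a for a in order if slack[a] == 0]
print(slack, critical)  # B has slack 2; A, C, D form the critical path
```

Here the A-C-D chain is longest (9 time units), so those activities have zero slack, while B can slip by 2 without delaying the project.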

  2. The Relations among Cumulative Risk, Parenting, and Behavior Problems during Early Childhood

    ERIC Educational Resources Information Center

    Trentacosta, Christopher J.; Hyde, Luke W.; Shaw, Daniel S.; Dishion, Thomas J.; Gardner, Frances; Wilson, Melvin

    2008-01-01

    Background: This study examined relations among cumulative risk, nurturant and involved parenting, and behavior problems across early childhood. Methods: Cumulative risk, parenting, and behavior problems were measured in a sample of low-income toddlers participating in a family-centered program to prevent conduct problems. Results: Path analysis…

  3. Bidirectional amplifier

    DOEpatents

    Wright, James T.

    1986-01-01

    A bilateral circuit is operable for transmitting signals in two directions without generation of ringing due to feedback caused by the insertion of the circuit. The circuit may include gain for each of the signals to provide a bidirectional amplifier. The signals are passed through two separate paths, with a unidirectional amplifier in each path. A controlled sampling device is provided in each path for sampling the two signals. Any feedback loop between the two signals is disrupted by providing a phase displacement between the control signals for the two sampling devices.

  4. Bidirectional amplifier

    DOEpatents

    Wright, J.T.

    1984-02-02

    A bilateral circuit is operable for transmitting signals in two directions without generation of ringing due to feedback caused by the insertion of the circuit. The circuit may include gain for each of the signals to provide a bidirectional amplifier. The signals are passed through two separate paths, with a unidirectional amplifier in each path. A controlled sampling device is provided in each path for sampling the two signals. Any feedback loop between the two signals is disrupted by providing a phase displacement between the control signals for the two sampling devices.

  5. Reconfigurable radio receiver with fractional sample rate converter and multi-rate ADC based on LO-derived sampling clock

    NASA Astrophysics Data System (ADS)

    Park, Sungkyung; Park, Chester Sungchung

    2018-03-01

    A composite radio receiver back-end and digital front-end, made up of a delta-sigma analogue-to-digital converter (ADC) with a high-speed low-noise sampling clock generator, and a fractional sample rate converter (FSRC), is proposed and designed for a multi-mode reconfigurable radio. The proposed radio receiver architecture contributes to saving chip area and thus lowering the design cost. To enable inter-radio access technology handover and ultimately software-defined radio reception, a reconfigurable radio receiver consisting of a multi-rate ADC with its sampling clock derived from a local oscillator, followed by a rate-adjustable FSRC for decimation, is designed. Clock phase noise and timing jitter are examined to support the effectiveness of the proposed radio receiver. The FSRC is modelled and simulated with a cubic polynomial interpolator based on the Lagrange method, and its spectral-domain view is examined in order to verify its effect on aliasing, nonlinearity and signal-to-noise ratio, giving insight into the design of the decimation chain. The sampling clock path and the radio receiver back-end data path are designed in a 90-nm CMOS process technology with a 1.2 V supply.
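    A cubic Lagrange interpolator of the kind used in such an FSRC can be sketched as follows; the four-point formula (sample points at -1, 0, 1, 2, evaluated at fraction mu in [0, 1)) is a standard form, and the resampling loop is a minimal illustration without the anti-aliasing filtering a real decimation chain requires:

```python
import numpy as np

def lagrange_cubic(y, mu):
    """Cubic Lagrange interpolation between y[1] and y[2] at fraction mu."""
    ym1, y0, y1, y2 = y
    return (-mu * (mu - 1) * (mu - 2) / 6 * ym1
            + (mu + 1) * (mu - 1) * (mu - 2) / 2 * y0
            - (mu + 1) * mu * (mu - 2) / 2 * y1
            + (mu + 1) * mu * (mu - 1) / 6 * y2)

def resample(x, ratio):
    """Fractional sample-rate conversion, ratio = out_rate / in_rate."""
    out = []
    t = 1.0  # start where a full 4-point neighbourhood exists
    while t < len(x) - 2:
        n = int(t)
        mu = t - n
        out.append(lagrange_cubic(x[n - 1:n + 3], mu))
        t += 1.0 / ratio
    return np.array(out)

x = np.arange(10, dtype=float)   # a linear test signal
y = resample(x, ratio=2.0)       # cubic Lagrange is exact for polynomials up to degree 3
```

Because the four Lagrange weights sum to one and reproduce cubics exactly, a linear ramp resampled at twice the rate simply yields the intermediate sample instants.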

  6. A Comparison of Two Path Planners for Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Tarokh, M.; Shiller, Z.; Hayati, S.

    1999-01-01

    The paper presents two path planners suitable for planetary rovers. The first is based on a fuzzy description of the terrain and a genetic algorithm that finds a traversable path through rugged terrain. The second planner uses a global optimization method with a cost function equal to the path distance divided by the velocity limit obtained from consideration of the rover's static and dynamic stability. A description of both methods is provided, and results are given that show the effectiveness of the path planners in finding near-optimal paths. The features of the methods and their suitability for rover path planning are compared.
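    The second planner's cost function (distance divided by the local velocity limit, i.e. traversal time) can be sketched as a shortest-time graph search over a grid of hypothetical velocity limits; this illustrative Dijkstra search is not the authors' global optimizer:

```python
import heapq

# Hypothetical per-cell velocity limits derived from terrain stability;
# low values mark cells the rover must cross slowly.
vmax = [
    [1.0, 1.0, 0.2],
    [1.0, 0.2, 1.0],
    [1.0, 1.0, 1.0],
]

def plan(start, goal):
    """Minimum traversal time between grid cells: cost = distance / velocity."""
    rows, cols = len(vmax), len(vmax[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                # unit-distance step, traversed at the slower cell's limit
                cost = 1.0 / min(vmax[r][c], vmax[nr][nc])
                if d + cost < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = d + cost
                    heapq.heappush(pq, (d + cost, (nr, nc)))
    return float("inf")

print(plan((0, 0), (2, 2)))  # detours around the slow cells
```

The time-optimal route here hugs the left and bottom edges (four unit steps at full speed), avoiding the slow 0.2 m/s cells even though geometrically shorter alternatives exist.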

  7. Path analyses of cross-sectional and longitudinal data suggest that variability in natural communities of blood-associated parasites is derived from host characteristics and not interspecific interactions.

    PubMed

    Cohen, Carmit; Einav, Monica; Hawlena, Hadas

    2015-08-19

    The parasite composition of wild host individuals often impacts their behavior and physiology, and the transmission dynamics of pathogenic species thereby determines disease risk in natural communities. Yet, the determinants of parasite composition in natural communities are still obscure. In particular, three fundamental questions remain open: (1) what are the relative roles of host and environmental characteristics compared with direct interactions between parasites in determining the community composition of parasites? (2) do these determinants affect parasites belonging to the same guild and those belonging to different guilds in similar manners? and (3) can cross-sectional and longitudinal analyses work interchangeably in detecting community determinants? Our study was designed to answer these three questions in a natural community of rodents and their fleas, ticks, and two vector-borne bacteria. We sampled a natural population of Gerbillus andersoni rodents and their blood-associated parasites on two occasions. By combining path analysis and model selection approaches, we then explored multiple direct and indirect paths that connect (i) the environmental and host-related characteristics to the infection probability of a host by each of the four parasite species, and (ii) the infection probabilities of the four species by each other. Our results suggest that the majority of paths shaping the blood-associated communities are indirect, mostly determined by host characteristics and not by interspecific interactions or environmental conditions. The exact effects of host characteristics on infection probability by a given parasite depend on its life history and on the method of sampling, in which the cross-sectional and longitudinal methods are complementary. 
Despite awareness of the need for ecological investigations into natural host-vector-parasite communities in light of the emergence and re-emergence of vector-borne diseases, we lack sampling methods that are both practical and reliable. Here we illustrated how comprehensive patterns can be revealed from observational data by applying path analysis and model selection approaches and combining cross-sectional and longitudinal analyses. By employing this combined approach on blood-associated parasites, we were able to distinguish between direct and indirect effects and to predict the causal relationships between host-related characteristics and the parasite composition over time and space. We concluded that direct interactions within the community play only a minor role in determining community composition relative to host characteristics and the life history of the community members.

  8. CPMC-Lab: A MATLAB package for Constrained Path Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Nguyen, Huy; Shi, Hao; Xu, Jie; Zhang, Shiwei

    2014-12-01

We describe CPMC-Lab, a MATLAB program for the constrained-path and phaseless auxiliary-field Monte Carlo methods. These methods have allowed applications ranging from the study of strongly correlated models, such as the Hubbard model, to ab initio calculations in molecules and solids. The present package implements the full ground-state constrained-path Monte Carlo (CPMC) method in MATLAB with a graphical interface, using the Hubbard model as an example. The package can perform calculations in finite supercells in any dimension, under periodic or twist boundary conditions. Importance sampling and all other algorithmic details of a total energy calculation are included and illustrated. This open-source tool allows users to experiment with various model and run parameters and visualize the results. It provides a direct and interactive environment for learning the method and studying the code with minimal setup overhead. Furthermore, the package can be easily generalized for auxiliary-field quantum Monte Carlo (AFQMC) calculations in many other models for correlated electron systems, and can serve as a template for developing a production code for AFQMC total energy calculations in real materials. Several illustrative studies are carried out in one- and two-dimensional lattices on total energy, kinetic energy, potential energy, and charge and spin gaps.

  9. Development of a visible light transmission (VLT) measurement system using an open-path optical method

    NASA Astrophysics Data System (ADS)

    Nurulain, S.; Manap, H.

    2017-09-01

This paper describes a visible light transmission (VLT) measurement system based on an optical method. The VLT rate plays an important role in determining the visibility of a medium. Current instruments for measuring visibility are bulky and costly, and most fail to function in low-light environments. This research focuses on the development of a VLT measurement system with a simple, low-cost experimental set-up. An open-path optical technique is used to measure a series of thin films of known VLT that act as samples of different visibilities. The system measures the light intensity transmitted through these thin films within the visible region (535-540 nm), with a response time of less than 1 s.

  10. Proposal of ultrasonic-assisted mid-infrared spectroscopy for incorporating into daily life like smart-toilet and non-invasive blood glucose sensor

    NASA Astrophysics Data System (ADS)

    Kitazaki, Tomoya; Mori, Keita; Yamamoto, Naoyuki; Wang, Congtao; Kawashima, Natsumi; Ishimaru, Ichiro

    2017-07-01

We propose an extremely compact, bean-sized snapshot mid-infrared spectrometer that could be built into smartphones, together with a simple method of preparing thin-film samples using ultrasonic standing waves. Mid-infrared spectroscopy can identify material components and quantitatively estimate component concentrations from absorption spectra, but conventional spectral instruments are large and too expensive to incorporate into daily life, and the preparation of thin-film samples is a troublesome task. Because water absorption of mid-infrared light is very strong, moisture-containing samples must be thinner than 100 μm; mid-infrared spectroscopy has therefore been utilized only by analytical experts in their laboratories. Because an ultrasonic standing wave is a compressional wave, it generates a periodic refractive-index distribution inside the sample, and each high-refractive-index plane corresponds to a reflection boundary. When a several-MHz ultrasonic transducer is used, the distance between the sample surface and the first node becomes several tens of μm, and twice this distance corresponds to the sample thickness. By combining these two proposed methods, urinary albumin and glucose concentrations of liquid samples could be measured inside a toilet; for solid samples, attaching the apparatus to the earlobe to enhance the reflection from near the skin surface will create a new path toward a non-invasive blood glucose sensor. Using a small ultrasonic transducer with a diameter of 10 mm and an applied voltage of 8 V, we detected the internal reflection from colored water as a liquid sample and an acrylic board as a solid sample.

  11. Diffusing-wave spectroscopy in a standard dynamic light scattering setup

    NASA Astrophysics Data System (ADS)

    Fahimi, Zahra; Aangenendt, Frank J.; Voudouris, Panayiotis; Mattsson, Johan; Wyss, Hans M.

    2017-12-01

Diffusing-wave spectroscopy (DWS) extends dynamic light scattering measurements to samples with strong multiple scattering. DWS treats the transport of photons through turbid samples as a diffusion process, thereby making it possible to extract the dynamics of scatterers from measured correlation functions. The analysis of DWS data requires knowledge of the path length distribution of photons traveling through the sample. While for flat sample cells this path length distribution can be readily calculated and expressed in analytical form, no such expression is available for cylindrical sample cells. DWS measurements have therefore typically relied on dedicated setups that use flat sample cells. Here we show how DWS measurements, in particular DWS-based microrheology measurements, can be performed in standard dynamic light scattering setups that use cylindrical sample cells. To do so we perform simple random-walk simulations that yield numerical predictions of the path length distribution as a function of both the transport mean free path and the detection angle. This information is used in experiments to extract the mean-square displacement of tracer particles in the material, as well as the corresponding frequency-dependent viscoelastic response. An important advantage of our approach is that by performing measurements at different detection angles, the average path length through the sample can be varied. For measurements performed on a single sample cell, this gives access to a wider range of length and time scales than obtained in a conventional DWS setup. Such angle-dependent measurements also offer an important consistency check, as for all detection angles the DWS analysis should yield the same tracer dynamics, even though the respective path length distributions are very different. 
We validate our approach by performing measurements both on aqueous suspensions of tracer particles and on solidlike gelatin samples, for which we find our DWS-based microrheology data to be in good agreement with rheological measurements performed on the same samples.
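    The random-walk idea above can be sketched in a few lines. The following is a hypothetical, simplified 2D version (a circular cell rather than a cylinder, isotropic scattering, no absorption); the cell radius, transport mean free path, and angular detection windows are illustrative assumptions, not values from the paper.

```python
import math
import random

def simulate_path_lengths(R=1.0, lstar=0.05, n_photons=2000, seed=1):
    """Random-walk sketch of photon transport in a circular (2D) cell.

    Photons enter at (-R, 0); each step has an exponentially distributed
    length (mean lstar, the transport mean free path) and an isotropic
    direction.  When a photon leaves the cell we record its total path
    length s together with the angular position of its exit point,
    mimicking angle-resolved detection.
    """
    rng = random.Random(seed)
    records = []  # (exit_angle_deg, total_path_length)
    for _ in range(n_photons):
        x, y, s = -R, 0.0, 0.0
        while True:
            step = rng.expovariate(1.0 / lstar)
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x += step * math.cos(theta)
            y += step * math.sin(theta)
            s += step
            if x * x + y * y >= R * R:  # photon has left the cell
                records.append((math.degrees(math.atan2(y, x)) % 360.0, s))
                break
    return records

paths = simulate_path_lengths()
# transmission-side detection (exit angle near 0 deg) vs backscattering (near 180 deg)
trans = [s for a, s in paths if a < 45 or a > 315]
back = [s for a, s in paths if 135 < a < 225]
```

Histogramming `s` separately for each angular window approximates the angle-dependent path length distributions that enter the DWS analysis; far more photons exit on the backscattering side, while those that reach the transmission side accumulate much longer paths.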

  12. An improved reaction path optimization method using a chain of conformations

    NASA Astrophysics Data System (ADS)

    Asada, Toshio; Sawada, Nozomi; Nishikawa, Takuya; Koseki, Shiro

    2018-05-01

An efficient fast path optimization (FPO) method is proposed to optimize reaction paths on energy surfaces using chains of conformations. No artificial spring force is used in the FPO method to ensure equal spacing of adjacent conformations. The FPO method is applied to optimize the reaction path on two model potential surfaces. Its use enabled optimization of the reaction paths with a drastically reduced number of optimization cycles for both potentials. The method was also successfully used to determine the minimum energy path (MEP) of the isomerization of the glycine molecule in water.

  13. Group refractive index reconstruction with broadband interferometric confocal microscopy

    PubMed Central

    Marks, Daniel L.; Schlachter, Simon C.; Zysk, Adam M.; Boppart, Stephen A.

    2010-01-01

    We propose a novel method of measuring the group refractive index of biological tissues at the micrometer scale. The technique utilizes a broadband confocal microscope embedded into a Mach–Zehnder interferometer, with which spectral interferograms are measured as the sample is translated through the focus of the beam. The method does not require phase unwrapping and is insensitive to vibrations in the sample and reference arms. High measurement stability is achieved because a single spectral interferogram contains all the information necessary to compute the optical path delay of the beam transmitted through the sample. Included are a physical framework defining the forward problem, linear solutions to the inverse problem, and simulated images of biologically relevant phantoms. PMID:18451922

  14. Statistical Symbolic Execution with Informed Sampling

    NASA Technical Reports Server (NTRS)

    Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco

    2014-01-01

Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose informed sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis, and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
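    The statistical core of the record above, Monte Carlo sampling of program paths combined with Bayesian estimation of the probability of reaching a target event, can be illustrated on a toy program. The program, its branch probabilities, and the Beta(1,1) prior below are illustrative assumptions; this sketch is not Symbolic PathFinder.

```python
import random

def run_program(x):
    """Toy program: the target event ('assert violation') is reached iff x < 3."""
    if x < 30:       # outer branch: taken with probability 0.30 for x ~ U[0, 100)
        if x < 3:    # inner branch: the target event, true probability 0.03
            return True
    return False

def estimate_event_probability(n_samples=20000, seed=7):
    """Monte Carlo over inputs plus a Bayesian (Beta-posterior) estimate."""
    rng = random.Random(seed)
    hits = sum(run_program(rng.uniform(0.0, 100.0)) for _ in range(n_samples))
    # Beta(1, 1) (uniform) prior; the posterior is Beta(1 + hits, 1 + n - hits),
    # whose mean is (hits + 1) / (n + 2)
    return hits, (hits + 1) / (n_samples + 2)

hits, p_hat = estimate_event_probability()
```

Informed sampling would additionally analyze the dominant `x >= 30` path exactly, prune it, and renormalize sampling over the remaining paths, shrinking the posterior variance for the rare event.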

  15. Apparatus and method for quantitative measurement of small differences in optical absorptivity between two samples using differential interferometry and the thermooptic effect

    DOEpatents

    Cremers, D.A.; Keller, R.A.

    1984-05-08

An apparatus and method for the measurement of small differences in optical absorptivity of weakly absorbing solutions using differential interferometry and the thermooptic effect have been developed. Two sample cells, one in each arm of an interferometer, are traversed by collinear probe and heating laser beams. The interrogating probe beams are recombined to form a fringe pattern, the intensity of which can be related to changes in the optical path length of these laser beams through the cells. This in turn can be related to small differences in optical absorptivity, which result in different amounts of sample heating when the heating laser beams are turned on, through the temperature dependence of the index of refraction of a liquid. A critical feature of this invention is the stabilization of the optical path of the probe beams against drift. Background (solvent) absorption can then be suppressed by a factor of approximately 400. Solute absorptivities of about 10⁻⁵ cm⁻¹ can then be determined in the presence of background absorptions in excess of 10⁻³ cm⁻¹. In addition, the smallest absorption measured with the instant apparatus and method is about 5 × 10⁻⁶ cm⁻¹. 6 figs.

  16. Resolving the problem of trapped water in binding cavities: prediction of host-guest binding free energies in the SAMPL5 challenge by funnel metadynamics

    NASA Astrophysics Data System (ADS)

    Bhakat, Soumendranath; Söderhjelm, Pär

    2017-01-01

    The funnel metadynamics method enables rigorous calculation of the potential of mean force along an arbitrary binding path and thereby evaluation of the absolute binding free energy. A problem of such physical paths is that the mechanism characterizing the binding process is not always obvious. In particular, it might involve reorganization of the solvent in the binding site, which is not easily captured with a few geometrically defined collective variables that can be used for biasing. In this paper, we propose and test a simple method to resolve this trapped-water problem by dividing the process into an artificial host-desolvation step and an actual binding step. We show that, under certain circumstances, the contribution from the desolvation step can be calculated without introducing further statistical errors. We apply the method to the problem of predicting host-guest binding free energies in the SAMPL5 blind challenge, using two octa-acid hosts and six guest molecules. For one of the hosts, well-converged results are obtained and the prediction of relative binding free energies is the best among all the SAMPL5 submissions. For the other host, which has a narrower binding pocket, the statistical uncertainties are slightly higher; longer simulations would therefore be needed to obtain conclusive results.

  17. Investigation of real tissue water equivalent path lengths using an efficient dose extinction method

    NASA Astrophysics Data System (ADS)

    Zhang, Rongxiao; Baer, Esther; Jee, Kyung-Wook; Sharp, Gregory C.; Flanz, Jay; Lu, Hsiao-Ming

    2017-07-01

    For proton therapy, an accurate conversion of CT HU to relative stopping power (RSP) is essential. Validation of the conversion based on real tissue samples is more direct than the current practice solely based on tissue substitutes and can potentially address variations over the population. Based on a novel dose extinction method, we measured water equivalent path lengths (WEPL) on animal tissue samples to evaluate the accuracy of CT HU to RSP conversion and potential variations over a population. A broad proton beam delivered a spread out Bragg peak to the samples sandwiched between a water tank and a 2D ion-chamber detector. WEPLs of the samples were determined from the transmission dose profiles measured as a function of the water level in the tank. Tissue substitute inserts and Lucite blocks with known WEPLs were used to validate the accuracy. A large number of real tissue samples were measured. Variations of WEPL over different batches of tissue samples were also investigated. The measured WEPLs were compared with those computed from CT scans with the Stoichiometric calibration method. WEPLs were determined within  ±0.5% percentage deviation (% std/mean) and  ±0.5% error for most of the tissue surrogate inserts and the calibration blocks. For biological tissue samples, percentage deviations were within  ±0.3%. No considerable difference (<1%) in WEPL was observed for the same type of tissue from different sources. The differences between measured WEPLs and those calculated from CT were within 1%, except for some bony tissues. Depending on the sample size, each dose extinction measurement took around 5 min to produce ~1000 WEPL values to be compared with calculations. This dose extinction system measures WEPL efficiently and accurately, which allows the validation of CT HU to RSP conversions based on the WEPL measured for a large number of samples and real tissues.

  18. Study on the measuring distance for blood glucose infrared spectral measuring by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Li, Xiang

    2016-10-01

Blood glucose monitoring is of great importance for managing diabetes and preventing its complications. At present, clinical blood glucose measurement is invasive, and could be replaced by noninvasive spectroscopic analytical techniques. Among the parameters of the optical fiber probe used in spectral measurement, the measurement distance is the key one. The Monte Carlo technique is a flexible method for simulating light propagation in tissue. The simulation is based on the random walks that photons make as they travel through tissue, which are chosen by statistically sampling the probability distributions for step size and angular deflection per scattering event. The traditional method for determining the optimal distance between the transmitting fiber and the detector is to use Monte Carlo simulation to find the point where most photons emerge. But there is a problem: the epidermal layer contains no arteries, veins, or capillary vessels, so photons propagating and interacting with tissue in the epidermal layer acquire no glucose information. A new criterion, named the effective path length in this paper, is proposed to determine the optimal distance. The path length of each photon travelling in the dermis is recorded during the Monte Carlo simulation; this is the effective path length defined above. The sum of the effective path lengths of all photons at each point is calculated, and the detector should be placed at the point with the greatest total effective path length. The optimal measuring distance between the transmitting fiber and the detector is thereby determined.
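    The effective-path-length criterion described above can be sketched with a toy 2D random walk. The layer thicknesses, mean free path, and isotropic scattering below are illustrative assumptions, not tissue optics from the paper.

```python
import math
import random

def effective_path_profile(n_photons=5000, mfp=0.1, epidermis=0.3,
                           max_depth=3.0, bin_width=0.25, n_bins=12, seed=3):
    """2D random-walk sketch of the effective-path-length criterion.

    Photons enter the skin at x = 0 heading downward (+y).  Each scattering
    step has an exponentially distributed length (mean mfp) and an isotropic
    direction.  Path length travelled below the epidermis, i.e. inside the
    dermis where glucose information is acquired, is accumulated; when a
    photon re-emerges through the surface (y < 0), its accumulated effective
    length is added to the bin of its exit distance |x|.  The detector would
    be placed at the bin with the largest summed effective path length.
    """
    rng = random.Random(seed)
    bins = [0.0] * n_bins
    for _ in range(n_photons):
        x, y, eff = 0.0, 0.0, 0.0
        theta = math.pi / 2.0  # launched straight down
        while 0.0 <= y <= max_depth:
            step = rng.expovariate(1.0 / mfp)
            ny = y + step * math.sin(theta)
            x += step * math.cos(theta)
            if (y + ny) / 2.0 > epidermis:  # step lies mostly in the dermis
                eff += step
            y = ny
            theta = rng.uniform(0.0, 2.0 * math.pi)
            if y < 0.0:  # photon re-emerges at the surface
                bins[min(int(abs(x) / bin_width), n_bins - 1)] += eff
        # photons deeper than max_depth are treated as lost
    return bins

profile = effective_path_profile()
best_bin = profile.index(max(profile))  # candidate source-detector distance
```

Photons exiting very close to the source mostly never reached the dermis, so their effective contribution is small even though they are numerous; binning by exit distance exposes the trade-off the abstract describes.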

  19. Effect of deformation path on microstructure, microhardness and texture evolution of interstitial free steel fabricated by differential speed rolling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamad, Kotiba; Chung, Bong Kwon; Ko, Young Gun, E-mail: younggun@ynu.ac.kr

    2014-08-15

This paper reports the effect of the deformation path on the microstructure, microhardness, and texture evolution of interstitial free (IF) steel processed by the differential speed rolling (DSR) method. For this purpose, total height reductions of 50% and 75% were imposed on the samples by a series of differential speed rolling operations with various height reductions per pass (deformation levels) ranging from 10 to 50% under a fixed roll speed ratio of 1:4 for the upper and lower rolls, respectively. Microstructural observations using transmission electron microscopy and electron backscattered diffraction measurements showed that the samples rolled at a deformation level of 50% had the finest mean grain size (∼ 0.5 μm) compared to the other counterparts, and also showed a more uniform microstructure. Based on the microhardness measurements along the thickness direction of the deformed samples, gradual evolution of the microhardness value and its homogeneity was observed with increasing deformation level per pass. Texture analysis showed that, as the deformation level per pass increased, the fraction of alpha fiber and gamma fiber in the deformed samples increased. The textures obtained by the differential speed rolling process under the lubricated condition would be equivalent to those obtained by conventional rolling. - Highlights: • Effect of DSR deformation path on microstructure of IF steel is significant. • IF steel rolled at deformation level of 50% has the ultrafine grains of ∼ 0.5 μm. • Rolling texture components are pronounced with increasing deformation level.

  20. Hybrid quantum and classical methods for computing kinetic isotope effects of chemical reactions in solutions and in enzymes.

    PubMed

    Gao, Jiali; Major, Dan T; Fan, Yao; Lin, Yen-Lin; Ma, Shuhua; Wong, Kin-Yiu

    2008-01-01

    A method for incorporating quantum mechanics into enzyme kinetics modeling is presented. Three aspects are emphasized: 1) combined quantum mechanical and molecular mechanical methods are used to represent the potential energy surface for modeling bond forming and breaking processes, 2) instantaneous normal mode analyses are used to incorporate quantum vibrational free energies to the classical potential of mean force, and 3) multidimensional tunneling methods are used to estimate quantum effects on the reaction coordinate motion. Centroid path integral simulations are described to make quantum corrections to the classical potential of mean force. In this method, the nuclear quantum vibrational and tunneling contributions are not separable. An integrated centroid path integral-free energy perturbation and umbrella sampling (PI-FEP/UM) method along with a bisection sampling procedure was summarized, which provides an accurate, easily convergent method for computing kinetic isotope effects for chemical reactions in solution and in enzymes. In the ensemble-averaged variational transition state theory with multidimensional tunneling (EA-VTST/MT), these three aspects of quantum mechanical effects can be individually treated, providing useful insights into the mechanism of enzymatic reactions. These methods are illustrated by applications to a model process in the gas phase, the decarboxylation reaction of N-methyl picolinate in water, and the proton abstraction and reprotonation process catalyzed by alanine racemase. These examples show that the incorporation of quantum mechanical effects is essential for enzyme kinetics simulations.

  1. Photothermal method of determining calorific properties of coal

    DOEpatents

    Amer, N.M.

    1983-05-16

Predetermined amounts of heat are generated within a coal sample by directing pump light pulses of predetermined energy content into a small surface region of the sample. A beam of probe light is directed along the sample surface, and deflection of the probe beam by thermally induced changes of the index of refraction in the fluid medium adjacent to the heated region is detected. The deflection amplitude and the phase lag of the deflection, relative to the initiating pump light pulse, are indicative of the calorific value and the porosity of the sample. The method provides rapid, accurate and nondestructive analysis of the heat producing capabilities of coal samples. In the preferred form, sequences of pump light pulses of increasing durations are directed into the sample at each of a series of minute regions situated along a raster scan path, enabling detailed analysis of variations of thermal properties at different areas of the sample and at different depths.

  2. CMPF: class-switching minimized pathfinding in metabolic networks.

    PubMed

    Lim, Kevin; Wong, Limsoon

    2012-01-01

The metabolic network is an aggregation of enzyme catalyzed reactions that convert one compound to another. Paths in a metabolic network are a sequence of enzymes that describe how a chemical compound of interest can be produced in a biological system. As the number of such paths is quite large, many methods have been developed to score paths so that the k-shortest paths represent the set of paths that are biologically meaningful or efficient. However, these approaches do not consider whether the sequence of enzymes can be manufactured in the same pathway/species/localization. As a result, a predicted sequence might consist of groups of enzymes that operate in distinct pathways/species/localizations and may not truly reflect the events occurring within the cell. We propose a path weighting method, CMPF (Class-switching Minimized Pathfinder), to search for routes in a metabolic network which minimize pathway switching. In biological terms, a pathway is a series of chemical reactions which define a specific function (e.g. glycolysis). We conjecture that routes that cross many pathways are inefficient, since different pathways define different metabolic functions. In addition, native routes are well characterized within pathways, suggesting that reasonable paths should not involve too many pathway switches. Our method can be generalized when reactions participate in a class set (e.g., pathways, species or cellular localization) so that the predicted paths have minimal class crossings. We show that our method generates k-paths that involve the least number of class switchings. In addition, we also show that native paths are recoverable and that alternative paths deviate less from native paths compared to other methods. This suggests that paths ranked by our method could be a way to predict paths that are likely to occur in biological systems.
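    The class-switch-minimizing idea can be sketched as a shortest-path search in which traversing an edge costs nothing when it stays in the previous edge's pathway and one unit when it switches. The toy network and the Dijkstra formulation below are illustrative assumptions, not the CMPF scoring function itself.

```python
import heapq
import itertools

def min_switch_path(edges, start, goal):
    """Dijkstra over (node, current-pathway) states, counting pathway switches.

    `edges` maps (u, v) to the set of pathway labels in which the reaction
    u -> v participates.  Traversing an edge costs 0 if it can be assigned
    the same pathway label as the previous edge, else 1 (a class switch).
    """
    adj = {}
    for (u, v), classes in edges.items():
        adj.setdefault(u, []).append((v, classes))
    tie = itertools.count()  # tie-breaker so the heap never compares paths
    pq = [(0, next(tie), start, None, [start])]
    best = {}
    while pq:
        cost, _, node, pathway, path = heapq.heappop(pq)
        if node == goal:
            return cost, path  # first goal pop is optimal (Dijkstra)
        if best.get((node, pathway), float("inf")) <= cost:
            continue
        best[(node, pathway)] = cost
        for v, classes in adj.get(node, []):
            for c in classes:
                step = 0 if pathway in (None, c) else 1
                heapq.heappush(pq, (cost + step, next(tie), v, c, path + [v]))
    return None

# Toy network: the pure-"P1" route is longer but needs no pathway switch.
edges = {
    ("A", "B"): {"P1"}, ("B", "C"): {"P1"}, ("C", "D"): {"P1"},
    ("A", "X"): {"P2"}, ("X", "D"): {"P3"},
}
switches, route = min_switch_path(edges, "A", "D")
```

On this toy network the three-step route A-B-C-D is preferred over the two-step route A-X-D because the latter requires a pathway switch; generalizing the labels to species or localization gives the class-set variant described in the abstract.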

  3. Analysis of explicit model predictive control for path-following control

    PubMed Central

    2018-01-01

In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as the key to handling such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target application to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming technique (mp-QP). The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing weighting matrices in an optimization problem and the range of horizons for path-following control are described through simulations. For the verification of the proposed controller, simulation results obtained using other control methods such as MPC, Linear-Quadratic Regulator (LQR), and driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration. PMID:29534080

  4. Analysis of explicit model predictive control for path-following control.

    PubMed

    Lee, Junho; Chang, Hyuk-Jun

    2018-01-01

In this paper, explicit Model Predictive Control (MPC) is employed for automated lane-keeping systems. MPC has been regarded as the key to handling such constrained systems. However, the massive computational complexity of MPC, which employs online optimization, has been a major drawback that limits the range of its target application to relatively small and/or slow problems. Explicit MPC can reduce this computational burden using a multi-parametric quadratic programming technique (mp-QP). The control objective is to derive an optimal front steering wheel angle at each sampling time so that autonomous vehicles travel along desired paths, including straight, circular, and clothoid parts, at high entry speeds. In terms of the design of the proposed controller, a method of choosing weighting matrices in an optimization problem and the range of horizons for path-following control are described through simulations. For the verification of the proposed controller, simulation results obtained using other control methods such as MPC, Linear-Quadratic Regulator (LQR), and driver model are employed, and CarSim, which reflects the features of a vehicle more realistically than MATLAB/Simulink, is used for reliable demonstration.
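    One of the baselines mentioned above, the Linear-Quadratic Regulator, can be sketched with a backward Riccati recursion. The discrete double-integrator lane-keeping model and all weights below are illustrative assumptions, not the vehicle model or tuning used in the paper.

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def matadd(A, B):
    return [[x + y for x, y in zip(r1, r2)] for r1, r2 in zip(A, B)]

def transpose(A):
    return [list(r) for r in zip(*A)]

def lqr_gain(A, B, Q, R, horizon):
    """Backward Riccati recursion; returns the gain for the first step.

    K_k = (R + B'P B)^-1 B'P A,  P = Q + A'P(A - B K_k),
    iterated from P_N = Q.  With a long horizon K approaches the
    stationary LQR gain (the single-input case keeps the inverse scalar).
    """
    P = [row[:] for row in Q]
    K = None
    for _ in range(horizon):
        BtP = matmul(transpose(B), P)
        s = R[0][0] + matmul(BtP, B)[0][0]  # scalar R + B'PB
        K = [[e / s for e in matmul(BtP, A)[0]]]
        AmBK = matadd(A, [[-B[i][0] * K[0][j] for j in range(len(A))]
                          for i in range(len(A))])
        P = matadd(Q, matmul(transpose(A), matmul(P, AmBK)))
    return K

dt = 0.1
A = [[1.0, dt], [0.0, 1.0]]   # lateral error and its rate (double integrator)
B = [[0.0], [dt]]             # steering-like input enters the rate equation
Q = [[1.0, 0.0], [0.0, 0.1]]  # penalize lateral error, lightly its rate
R = [[0.1]]                   # input penalty
K = lqr_gain(A, B, Q, R, horizon=50)

# closed-loop simulation from 1 m of lateral error
x = [[1.0], [0.0]]
for _ in range(200):
    u = -(K[0][0] * x[0][0] + K[0][1] * x[1][0])
    x = matadd(matmul(A, x), [[B[0][0] * u], [B[1][0] * u]])
```

Explicit MPC adds input and state constraints to this unconstrained problem and precomputes the resulting piecewise-affine control law offline via mp-QP, rather than solving an optimization at every sampling time.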

  5. Hayabusa Re-Entry: Trajectory Analysis and Observation Mission Design

    NASA Technical Reports Server (NTRS)

    Cassell, Alan M.; Winter, Michael W.; Allen, Gary A.; Grinstead, Jay H.; Antimisiaris, Manny E.; Albers, James; Jenniskens, Peter

    2011-01-01

On June 13th, 2010, the Hayabusa sample return capsule successfully re-entered Earth's atmosphere over the Woomera Prohibited Area in southern Australia in its quest to return fragments from the asteroid Itokawa (1998 SF36). The sample return capsule entered at a super-orbital velocity of 12.04 km/sec (inertial), making it the second fastest human-made object to traverse the atmosphere. The NASA DC-8 airborne observatory was utilized as an instrument platform to record the luminous portion of the sample return capsule re-entry (60 sec) with a variety of on-board spectroscopic imaging instruments. The predicted sample return capsule's entry state information at 200 km altitude was propagated through the atmosphere to generate aerothermodynamic and trajectory data used for initial observation flight path design and planning. The DC-8 flight path was designed by considering safety, optimal sample return capsule viewing geometry and aircraft capabilities in concert with key aerothermodynamic events along the predicted trajectory. Subsequent entry state vector updates provided by the Deep Space Network team at NASA's Jet Propulsion Laboratory were analyzed after the planned trajectory correction maneuvers to further refine the DC-8 observation flight path. Primary and alternate observation flight paths were generated during the mission planning phase, which required coordination with Australian authorities for pre-mission approval. The final observation flight path was chosen based upon trade-offs between optimal viewing requirements, ground based observer locations (to facilitate post-flight trajectory reconstruction), predicted weather in the Woomera Prohibited Area and constraints imposed by flight path filing deadlines. 
To facilitate sample return capsule tracking by the instrument operators, a series of two racetrack flight path patterns were performed prior to the observation leg so the instruments could be pointed towards the region in the star background where the sample return capsule was expected to become visible. An overview of the design methodologies and trade-offs used in the Hayabusa re-entry observation campaign is presented.

  6. Gregg T. Beckham | NREL

    Science.gov Websites

(Fragmentary website excerpt: molecular dynamics and a suite of free energy methods, such as umbrella sampling and equilibrium path sampling, applied to cellulose; free energy results are reported for edge, middle, and corner chains of all four types of cellulose, as a function of chain position on the crystal surface and the degree of crystallinity in the substrate.)

  7. Feller processes: the next generation in modeling. Brownian motion, Lévy processes and beyond.

    PubMed

    Böttcher, Björn

    2010-12-03

We present a simple construction method for Feller processes and a framework for the generation of sample paths of Feller processes. The construction is based on state space dependent mixing of Lévy processes. Brownian motion is one of the most frequently used continuous time Markov processes in applications. In recent years also Lévy processes, of which Brownian motion is a special case, have become increasingly popular. Lévy processes are spatially homogeneous, but empirical data often suggest the use of spatially inhomogeneous processes. Thus it seems necessary to go to the next level of generalization: Feller processes. These include Lévy processes and in particular Brownian motion as special cases but allow spatial inhomogeneities. Many properties of Feller processes are known, but proving their very existence is, in general, very technical. Moreover, an applicable framework for the generation of sample paths of a Feller process was missing. We explain, with practitioners in mind, how to overcome both of these obstacles. In particular, our simulation technique makes it possible to apply Monte Carlo methods to Feller processes.

  8. Feller Processes: The Next Generation in Modeling. Brownian Motion, Lévy Processes and Beyond

    PubMed Central

    Böttcher, Björn

    2010-01-01

We present a simple construction method for Feller processes and a framework for the generation of sample paths of Feller processes. The construction is based on state space dependent mixing of Lévy processes. Brownian motion is one of the most frequently used continuous time Markov processes in applications. In recent years also Lévy processes, of which Brownian motion is a special case, have become increasingly popular. Lévy processes are spatially homogeneous, but empirical data often suggest the use of spatially inhomogeneous processes. Thus it seems necessary to go to the next level of generalization: Feller processes. These include Lévy processes and in particular Brownian motion as special cases but allow spatial inhomogeneities. Many properties of Feller processes are known, but proving their very existence is, in general, very technical. Moreover, an applicable framework for the generation of sample paths of a Feller process was missing. We explain, with practitioners in mind, how to overcome both of these obstacles. In particular, our simulation technique makes it possible to apply Monte Carlo methods to Feller processes. PMID:21151931
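    The construction by state-space-dependent mixing of Lévy processes can be sketched with an Euler-type scheme. The particular state-dependent volatility and jump-intensity functions below are illustrative assumptions chosen for the sketch, not taken from the paper.

```python
import math
import random

def feller_sample_path(x0=0.0, T=1.0, n=1000, seed=11):
    """Euler-type sketch of a sample path of a simple Feller-type process.

    At each step the increment is drawn from a state-dependent mixture of
    two Lévy components: a Brownian part with volatility sigma(x) and a
    compound Poisson jump part whose intensity lam(x) also depends on the
    current state.  The state dependence (spatial inhomogeneity) is what
    distinguishes this from a plain Lévy process.
    """
    rng = random.Random(seed)
    dt = T / n
    sigma = lambda x: 0.5 + 0.4 * math.tanh(x)  # state-dependent volatility
    lam = lambda x: 2.0 / (1.0 + x * x)         # state-dependent jump rate
    path = [x0]
    x = x0
    for _ in range(n):
        dx = sigma(x) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if rng.random() < lam(x) * dt:          # a jump occurs in this step
            dx += rng.gauss(0.0, 0.3)           # jump size ~ N(0, 0.3^2)
        x += dx
        path.append(x)
    return path

path = feller_sample_path()
```

Because `sigma` and `lam` vary with the current state, the simulated increments are spatially inhomogeneous; repeated calls with different seeds give the sample-path ensemble to which Monte Carlo methods can be applied, as the abstract suggests.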

  9. Disentangling the stochastic behavior of complex time series

    NASA Astrophysics Data System (ADS)

    Anvari, Mehrnaz; Tabar, M. Reza Rahimi; Peinke, Joachim; Lehnertz, Klaus

    2016-10-01

    Complex systems involving a large number of degrees of freedom generally exhibit non-stationary dynamics, which can result in either continuous or discontinuous sample paths of the corresponding time series. The latter sample paths may be caused by discontinuous events, or jumps, with some distributed amplitudes, and disentangling effects caused by such jumps from effects caused by normal diffusion processes is a main problem for a detailed understanding of the stochastic dynamics of complex systems. Here we introduce a non-parametric method to address this general problem. By means of a stochastic dynamical jump-diffusion modelling, we separate deterministic drift terms from different stochastic behaviors, namely diffusive and jumpy ones, and show that all of the unknown functions and coefficients of this modelling can be derived directly from measured time series. We demonstrate the applicability of our method to empirical observations by a data-driven inference of the deterministic drift term and of the diffusive and jumpy behavior in brain dynamics from ten epilepsy patients. In particular, these different stochastic behaviors provide extra information that can be regarded as valuable for diagnostic purposes.
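    A reduced, jump-free illustration of this kind of data-driven inference: the drift and diffusion of a simulated Ornstein-Uhlenbeck series can be recovered from conditional moments of its increments (the first two Kramers-Moyal coefficients). Everything below is an invented sketch, not the authors' jump-diffusion estimator:

```python
import numpy as np

# Simulate dx = -x dt + sqrt(2 D) dW, then recover drift D1(x) ~ -x and
# diffusion D2(x) ~ D from conditional moments of the increments.
rng = np.random.default_rng(1)
dt, n, D = 1e-3, 200_000, 0.5
x = np.empty(n)
x[0] = 0.0
for k in range(n - 1):
    x[k + 1] = x[k] - x[k] * dt + np.sqrt(2 * D * dt) * rng.standard_normal()

dx = np.diff(x)
bins = np.linspace(-1.0, 1.0, 21)
idx = np.digitize(x[:-1], bins)
centers, drift, diff2 = [], [], []
for b in range(1, len(bins)):
    sel = idx == b                                  # samples landing in bin b
    if sel.sum() > 500:
        centers.append(0.5 * (bins[b - 1] + bins[b]))
        drift.append(dx[sel].mean() / dt)           # D1(x): expect about -x
        diff2.append((dx[sel] ** 2).mean() / (2 * dt))  # D2(x): expect about D
```

    In the jump-diffusion setting, higher-order conditional moments are additionally needed to separate the jump contribution from the diffusive one.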

  10. Tracing Technological Development Trajectories: A Genetic Knowledge Persistence-Based Main Path Approach.

    PubMed

    Park, Hyunseok; Magee, Christopher L

    2017-01-01

    The aim of this paper is to propose a new method to identify main paths in a technological domain using patent citations. Previous approaches for using main path analysis have greatly improved our understanding of actual technological trajectories but nonetheless have some limitations: they have a high potential to miss some dominant patents from the identified main paths, and the high network complexity of their main paths makes qualitative tracing of trajectories problematic. The proposed method searches backward and forward paths from the high-persistence patents, which are identified by a standard genetic knowledge-persistence algorithm. We tested the new method by applying it to the desalination and solar photovoltaic domains and compared the results to output from the same domains using a prior method. The empirical results show that the proposed method can dramatically reduce network complexity without missing any dominantly important patents. The main paths identified by our approach for the two test cases are almost 10x less complex than the main paths identified by the existing approach. The proposed approach identifies all dominantly important patents on the main paths, whereas the main paths identified by the existing approach miss about 20% of dominantly important patents.

  11. Tracing Technological Development Trajectories: A Genetic Knowledge Persistence-Based Main Path Approach

    PubMed Central

    2017-01-01

    The aim of this paper is to propose a new method to identify main paths in a technological domain using patent citations. Previous approaches for using main path analysis have greatly improved our understanding of actual technological trajectories but nonetheless have some limitations: they have a high potential to miss some dominant patents from the identified main paths, and the high network complexity of their main paths makes qualitative tracing of trajectories problematic. The proposed method searches backward and forward paths from the high-persistence patents, which are identified by a standard genetic knowledge-persistence algorithm. We tested the new method by applying it to the desalination and solar photovoltaic domains and compared the results to output from the same domains using a prior method. The empirical results show that the proposed method can dramatically reduce network complexity without missing any dominantly important patents. The main paths identified by our approach for the two test cases are almost 10x less complex than the main paths identified by the existing approach. The proposed approach identifies all dominantly important patents on the main paths, whereas the main paths identified by the existing approach miss about 20% of dominantly important patents. PMID:28135304
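    For contrast with the persistence-based approach, conventional main path analysis can be sketched with search path count (SPC) weights on a toy citation DAG. Everything below is an illustrative Hummon-Doreian-style baseline, not the genetic knowledge-persistence algorithm of the paper:

```python
from collections import defaultdict
from functools import lru_cache

def spc_main_path(edges):
    """Greedy main path through a citation DAG: weight each edge by its
    search path count (paths from any source through the edge to any sink),
    then follow the highest-weight edges from the best source."""
    succ, pred, nodes = defaultdict(list), defaultdict(list), set()
    for u, v in edges:              # edge u -> v: knowledge flows from u to v
        succ[u].append(v)
        pred[v].append(u)
        nodes |= {u, v}

    @lru_cache(maxsize=None)
    def n_from(n):                  # number of paths from n to any sink
        return sum(n_from(m) for m in succ[n]) if succ[n] else 1

    @lru_cache(maxsize=None)
    def n_to(n):                    # number of paths from any source to n
        return sum(n_to(m) for m in pred[n]) if pred[n] else 1

    spc = {(u, v): n_to(u) * n_from(v) for u in list(nodes) for v in succ[u]}
    sources = [n for n in nodes if not pred[n]]
    node = max(sources, key=lambda s: max(spc[(s, v)] for v in succ[s]))
    path = [node]
    while succ[node]:               # follow the highest-SPC outgoing edge
        node = max(succ[node], key=lambda v: spc[(path[-1], v)])
        path.append(node)
    return path

# toy citation network: A is the oldest patent, E the newest
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E"), ("C", "E")]
main_path = spc_main_path(edges)
```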

  12. Computing the optimal path in stochastic dynamical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauver, Martha; Forgoston, Eric, E-mail: eric.forgoston@montclair.edu; Billings, Lora

    2016-08-15

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.

  13. An improved multi-paths optimization method for video stabilization

    NASA Astrophysics Data System (ADS)

    Qin, Tao; Zhong, Sheng

    2018-03-01

    For video stabilization, the difference between the original camera motion path and the optimized one is proportional to the cropping ratio and warping ratio. A good optimized path should preserve the moving tendency of the original one while the cropping ratio and warping ratio of each frame are kept in a proper range. In this paper we use an improved warping-based motion representation model and propose a Gauss-based multi-path optimization method to obtain a smooth path and a stabilized video. The proposed video stabilization method consists of two parts: camera motion path estimation and path smoothing. We estimate the perspective transform of adjacent frames according to the warping-based motion representation model. It works well on some challenging videos where most previous 2D or 3D methods fail for lack of long feature trajectories. The multi-path optimization method deals well with parallax: we calculate the space-time correlation of adjacent grid cells, and a Gaussian kernel is used to weight the motion of adjacent grid cells. The multiple paths are then smoothed while minimizing the cropping ratio and the distortion. We test our method on a large variety of consumer videos, which have casual jitter and parallax, and achieve good results.
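    The Gaussian-weighted smoothing step can be illustrated on a one-dimensional camera trajectory. The window size and sigma below are invented for illustration; the actual method optimizes many grid paths jointly under cropping and distortion constraints:

```python
import numpy as np

def gaussian_smooth_path(path, sigma=5.0, radius=15):
    """Smooth a 1D camera-motion path with normalized Gaussian weights over
    a temporal window (edges use a truncated, renormalized window)."""
    t = np.arange(-radius, radius + 1)
    w = np.exp(-t**2 / (2 * sigma**2))
    smoothed = np.empty_like(path)
    n = len(path)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        ww = w[lo - i + radius: hi - i + radius]
        smoothed[i] = np.dot(ww, path[lo:hi]) / ww.sum()
    return smoothed

rng = np.random.default_rng(3)
shaky = np.cumsum(rng.standard_normal(200))   # jittery camera trajectory
stable = gaussian_smooth_path(shaky)
```

    The smoothed trajectory has much smaller frame-to-frame motion, which is what keeps the required cropping small.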

  14. UAV path planning using artificial potential field method updated by optimal control theory

    NASA Astrophysics Data System (ADS)

    Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long

    2016-04-01

    The unmanned aerial vehicle (UAV) path planning problem is an important assignment in UAV mission planning. Starting from the artificial potential field (APF) UAV path planning method, the problem is reconstructed as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is translated into an unconstrained optimisation problem with the help of slack variables in this paper. The functional optimisation method is applied to reformulate this problem as an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. Then, the path planning problem is solved with the help of the optimal control method. A path following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of this method. Finally, the simulation results show that the improved method is more effective in path planning. In the planning space, the calculated path is shorter and smoother than that produced by the traditional APF method. In addition, the improved method solves the dead-point problem effectively.
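    The traditional APF baseline that the paper improves on can be sketched in a few lines: an attractive force toward the goal plus a repulsive force inside an influence range around each obstacle. Gains and ranges below are illustrative, and the sketch retains the dead-point (local-minimum) weakness that the optimal-control reformulation is designed to remove:

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.05):
    """One step of the classical attractive/repulsive potential field:
    move a fixed distance along the normalized net force."""
    force = k_att * (goal - pos)                       # attractive pull to goal
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 1e-9 < d < d0:                              # repulsion inside range d0
            force += k_rep * (1 / d - 1 / d0) / d**3 * (pos - obs)
    return pos + step * force / (np.linalg.norm(force) + 1e-9)

pos, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacles = [np.array([5.0, 5.2])]
path = [pos]
for _ in range(600):
    pos = apf_step(pos, goal, obstacles)
    path.append(pos)
    if np.linalg.norm(pos - goal) < 0.1:
        break
```

    When the attractive and repulsive forces balance before the goal is reached, the plain APF planner stalls at a dead point; this is the failure mode the additional control force addresses.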

  15. Comparison of Decisions Quality of Heuristic Methods with Limited Depth-First Search Techniques in the Graph Shortest Path Problem

    NASA Astrophysics Data System (ADS)

    Vatutin, Eduard

    2017-12-01

    The article analyzes the effectiveness of heuristic methods with limited depth-first search techniques on the test problem of finding the shortest path in a graph. It briefly describes the group of methods based on limiting the number of branches of the combinatorial search tree and limiting the depth of the analyzed subtree. The methodology for comparing experimental data to estimate solution quality is based on computational experiments, run on the BOINC platform, with samples of pseudo-random graphs of selected vertex and arc counts. The article also describes the experimental results, which identify the areas where the selected subset of heuristic methods is preferable, depending on the size of the problem and the strength of the constraints. It is shown that the considered pair of methods is ineffective on the selected problem and yields solutions significantly inferior to those of the ant colony optimization method and its modification with combinatorial returns.
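    The quality trade-off being measured can be reproduced on a toy graph: a branch- and depth-limited DFS against an exact Dijkstra baseline. The graph and limits below are invented for illustration:

```python
import heapq

def dijkstra(graph, s, t):
    """Exact shortest-path cost, used as the quality baseline."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return float("inf")

def limited_dfs(graph, s, t, max_branches=2, max_depth=10):
    """Heuristic DFS keeping only the max_branches cheapest edges per node
    and truncating at max_depth; returns the best cost it finds."""
    best = [float("inf")]
    def rec(u, cost, depth, seen):
        if cost >= best[0] or depth > max_depth:
            return
        if u == t:
            best[0] = cost
            return
        for v, w in sorted(graph.get(u, []), key=lambda e: e[1])[:max_branches]:
            if v not in seen:
                rec(v, cost + w, depth + 1, seen | {v})
    rec(s, 0, 0, {s})
    return best[0]

# the cheapest-looking first edge (a -> x) leads to an expensive detour,
# so a tight branch limit misses the optimum
g = {"a": [("x", 1), ("b", 2)], "x": [("c", 100)], "b": [("c", 1)], "c": []}
```

    With `max_branches=1` the heuristic commits to the locally cheapest edge and returns a cost of 101, against the true optimum of 3; loosening the branch limit recovers the optimum on this instance.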

  16. Multiple pass gas absorption cell utilizing a spherical mirror opposite one or more pair of obliquely disposed flat mirrors

    NASA Technical Reports Server (NTRS)

    Pearson, Richard (Inventor); Lynch, Dana H. (Inventor); Gunter, William D. (Inventor)

    1995-01-01

    A method and apparatus for passing light bundles through a multiple pass sampling cell is disclosed. The multiple pass sampling cell includes a sampling chamber having first and second ends positioned along a longitudinal axis of the sampling cell. The sampling cell further includes an entrance opening, located adjacent the first end of the sampling cell at a first azimuthal angular position. The entrance opening permits a light bundle to pass into the sampling cell. The sampling cell also includes an exit opening at a second azimuthal angular position. The exit opening permits a light bundle to pass out of the sampling cell after the light bundle has followed a predetermined path.

  17. (Un)Folding Mechanisms of the FBP28 WW Domain in Explicit Solvent Revealed by Multiple Rare Event Simulation Methods

    PubMed Central

    Juraszek, Jarek; Bolhuis, Peter G.

    2010-01-01

    We report a numerical study of the (un)folding routes of the truncated FBP28 WW domain at ambient conditions using a combination of four advanced rare event molecular simulation techniques. We explore the free energy landscape of the native state, the unfolded state, and possible intermediates, with replica exchange molecular dynamics. Subsequent application of bias-exchange metadynamics yields three tentative unfolding pathways at room temperature. Using these paths to initiate a transition path sampling simulation reveals the existence of two major folding routes, differing in the formation order of the two main hairpins, and in hydrophobic side-chain interactions. Having established that the hairpin strand separation distances can act as reasonable reaction coordinates, we employ metadynamics to compute the unfolding barriers and find that the barrier with the lowest free energy corresponds with the most likely pathway found by transition path sampling. The unfolding barrier at 300 K is ∼17 kBT ≈ 42 kJ/mol, in agreement with the experimental unfolding rate constant. This work shows that combining several powerful simulation techniques provides a more complete understanding of the kinetic mechanism of protein folding. PMID:20159161

  18. Locally enhanced sampling molecular dynamics study of the dioxygen transport in human cytoglobin.

    PubMed

    Orlowski, Slawomir; Nowak, Wieslaw

    2007-07-01

    Cytoglobin (Cyg), a new member of the vertebrate heme globin family, is expressed in many tissues of the human body, but its physiological role is still unclear. It may deliver oxygen under hypoxia, serve as a scavenger of reactive species, or be involved in collagen synthesis. This protein is usually six-coordinated and binds oxygen by a displacement of the distal HisE7 imidazole. In this paper, the results of 60 ns molecular dynamics (MD) simulations of dioxygen diffusion inside the Cyg matrix are discussed. In addition to a classical MD trajectory, an approximate Locally Enhanced Sampling (LES) method has been employed. Classical diffusion paths were carefully analyzed, five cavities in dynamical structures were determined, and at least four distinct ligand exit paths were identified. The most probable exit/entry path is connected with a large tunnel present in Cyg. Several residues that are perhaps critical for the kinetics of small gaseous ligand diffusion were discovered. A comparison of gaseous ligand transport in Cyg and in the most studied heme protein, myoglobin, is presented. Implications of the efficient oxygen transport found in Cyg for its possible physiological role are discussed.

  19. Infiltration and hydraulic connections from the Niagara River to a fractured-dolomite aquifer in Niagara Falls, New York

    USGS Publications Warehouse

    Yager, R.M.; Kappel, W.M.

    1998-01-01

    The spatial distribution of hydrogen and oxygen stable-isotope values in groundwater can be used to distinguish different sources of recharge and to trace groundwater flow directions from recharge boundaries. This method can be particularly useful in fractured-rock settings where multiple lines of evidence are required to delineate preferential flow paths that result from heterogeneity within fracture zones. Flow paths delineated with stable isotopes can be combined with hydraulic data to form a more complete picture of the groundwater flow system. In this study values of δD and δ18O were used to delineate paths of river-water infiltration into the Lockport Group, a fractured-dolomite aquifer, and to compute the percentage of river water in groundwater samples from shallow bedrock wells. Flow paths were correlated with areas of high hydraulic diffusivity in the shallow bedrock that were delineated from water-level fluctuations induced by diurnal stage fluctuations in man-made hydraulic structures. Flow paths delineated with the stable-isotope and hydraulic data suggest that river infiltration reaches an unlined storm sewer in the bedrock through a drainage system that surrounds aqueducts carrying river water to hydroelectric power plants. This finding is significant because the storm sewer is the discharge point for contaminated groundwater from several chemical waste-disposal sites, and the cost of treating the storm sewer's discharge could be reduced if the volume of infiltration from the river were decreased.
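    The percentage of river water in a sample follows from simple two-end-member mixing of a conservative tracer. The δ18O values below are hypothetical, not the study's data:

```python
def river_water_fraction(delta_sample, delta_river, delta_groundwater):
    """Two-end-member mixing for a conservative tracer such as delta-18O:
    fraction of river water in a groundwater sample."""
    return (delta_sample - delta_groundwater) / (delta_river - delta_groundwater)

# hypothetical per-mil values: river -7.0, locally recharged groundwater -11.0
f = river_water_fraction(-9.0, -7.0, -11.0)   # -> 0.5, i.e. half river water
```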

  20. Stochastic sediment property inversion in Shallow Water 06.

    PubMed

    Michalopoulou, Zoi-Heleni

    2017-11-01

    Time series received at a short distance from the source allow the identification of distinct paths; four of these are the direct path, surface and bottom reflections, and the sediment reflection. In this work, a Gibbs sampling method is used for the estimation of the arrival times of these paths and the corresponding probability density functions. The arrival times for the first three paths are then employed, along with linearization, for the estimation of source range and depth, water column depth, and sound speed in the water. Propagating densities of arrival times through the linearized inverse problem, densities are also obtained for the above parameters, providing maximum a posteriori estimates. These estimates are employed to calculate densities and point estimates of sediment sound speed and thickness using a non-linear, grid-based model. Density computation is an important aspect of this work, because these densities express the uncertainty in the inversion for sediment properties.
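    Gibbs sampling, the estimation engine used here, draws each variable in turn from its full conditional given the others. A minimal self-contained example on a bivariate normal target (a standard textbook case, not the arrival-time model of the paper):

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples=20000, burn=2000, seed=None):
    """Gibbs sampler for a zero-mean, unit-variance bivariate normal with
    correlation rho: each full conditional is N(rho * other, 1 - rho**2)."""
    rng = np.random.default_rng(seed)
    x = y = 0.0
    s = np.sqrt(1 - rho**2)
    out = np.empty((n_samples, 2))
    for i in range(burn + n_samples):
        x = rho * y + s * rng.standard_normal()   # draw x | y
        y = rho * x + s * rng.standard_normal()   # draw y | x
        if i >= burn:
            out[i - burn] = (x, y)
    return out

samples = gibbs_bivariate_normal(0.8, seed=0)
```

    The retained draws approximate the joint density, so marginal histograms and posterior point estimates can be read off directly, as is done for the arrival-time densities in the paper.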

  1. Flow paths in the Edwards aquifer, northern Medina and northeastern Uvalde counties, Texas, based on hydrologic identification and geochemical characterization and simulation

    USGS Publications Warehouse

    Clark, Allan K.; Journey, Celeste A.

    2006-01-01

    The U.S. Geological Survey, in cooperation with the San Antonio Water System, conducted a 4-year study during 2001–04 to identify major ground-water flow paths in the Edwards aquifer in northern Medina and northeastern Uvalde Counties, Texas. The study involved use of geologic structure, surface-water and ground-water data, and geochemistry to identify ground-water flow paths. Relay ramps and associated faulting in northern Medina County appear to channel ground-water flow along four distinct flow paths that move water toward the southwest. The northwestern Medina flow path is bounded on the north by the Woodard Cave fault and on the south by the Parkers Creek fault. Water moves downdip toward the southwest until the flow encounters a cross fault along Seco Creek. This barrier to flow might force part or most of the flow to the south. Departure hydrographs for two wells and discharge departure for a streamflow-gaging station provide evidence for flow in the northwestern Medina flow path. The north-central Medina flow path (northern part) is bounded by the Parkers Creek fault on the north and the Medina Lake fault on the south. The adjacent north-central Medina flow path (southern part) is bounded on the north by the Medina Lake fault and on the south by the Diversion Lake fault. The north-central Medina flow path is separated into a northern and southern part because of water-level differences. Ground water in both parts of the north-central Medina flow path moves downgradient (and down relay ramp) from eastern Medina County toward the southwest. The north-central Medina flow path is hypothesized to turn south in the vicinity of Seco Creek as it begins to be influenced by structural features. Departure hydrographs for four wells and Medina Lake and discharge departure for a streamflow-gaging station provide evidence for flow in the north-central Medina flow path.
    The south-central Medina flow path is bounded on the north by the Seco Creek and Diversion Lake faults and on the south by the Haby Crossing fault. Because of bounding faults oriented northeast-southwest and adjacent flow paths directed south by other geologic structures, the south-central Medina flow path follows the configuration of the adjacent flow paths—oriented initially southwest and then south. Immediately after turning south, the south-central Medina flow path turns sharply east. Departure hydrographs for four wells and discharge departure for a streamflow-gaging station provide evidence for flow in the south-central Medina flow path. Statistical correlations between water-level departures for 11 continuously monitored wells provide additional evidence for the hypothesized flow paths. Of the 55 combinations of departure dataset pairs, the stronger correlations (those greater than 0.6) are all among wells in the same flow path, with one exception. Simulations of compositional differences in water chemistry along a hypothesized flow path in the Edwards aquifer and between ground-water and surface-water systems near Medina Lake were developed using the geochemical model PHREEQC. Ground-water chemistry for samples from five wells in the Edwards aquifer in the northwestern Medina flow path were used to evaluate the evolution of ground-water chemistry in the northwestern Medina flow path. Seven simulations were done for samples from pairs of these wells collected during 2001–03; three of the seven yielded plausible models. Ground-water samples from 13 wells were used to evaluate the evolution of ground-water chemistry in the north-central Medina flow path (northern and southern parts). Five of the wells in the most upgradient part of the flow path were completed in the Trinity aquifer; the remaining eight were completed in the Edwards aquifer.
Nineteen simulations were done for samples from well pairs collected during 1995–2003; eight of the 19 yielded plausible models. Ground-water samples from seven wells were used to evaluate the evolution of ground-water chemistry in the south-central Medina flow path. One well was the Trinity aquifer end-member well upgradient from all flow paths, and another was a Trinity aquifer well in the most upgradient part of the flow path; all other wells were completed in the Edwards aquifer. Nine simulations were done for samples from well pairs collected during 1996–2003; seven of the nine yielded plausible models. The plausible models demonstrate that the four hypothesized flow paths can be partially supported geochemically. 

  2. Accelerated sampling by infinite swapping of path integral molecular dynamics with surface hopping

    NASA Astrophysics Data System (ADS)

    Lu, Jianfeng; Zhou, Zhennan

    2018-02-01

    To accelerate the thermal equilibrium sampling of multi-level quantum systems, the infinite swapping limit of a recently proposed multi-level ring polymer representation is investigated. In the infinite swapping limit, the ring polymer evolves according to an averaged Hamiltonian with respect to all possible surface index configurations of the ring polymer and thus connects the surface hopping approach to the mean-field path-integral molecular dynamics. A multiscale integrator for the infinite swapping limit is also proposed to enable efficient sampling based on the limiting dynamics. Numerical results demonstrate the huge improvement of sampling efficiency of the infinite swapping compared with the direct simulation of path-integral molecular dynamics with surface hopping.

  3. Reasoning on the Self-Organizing Incremental Associative Memory for Online Robot Path Planning

    NASA Astrophysics Data System (ADS)

    Kawewong, Aram; Honda, Yutaro; Tsuboyama, Manabu; Hasegawa, Osamu

    Robot path planning is one of the important issues in robotic navigation. This paper presents a novel robot path-planning approach based on associative memory using Self-Organizing Incremental Neural Networks (SOINN). In the proposed method, an environment is first autonomously divided into a set of path-fragments by junctions. Each fragment is represented by a sequence of preliminarily generated common patterns (CPs). In an online manner, a robot regards the current path as associative path-fragments, each connected by junctions. A reasoning technique is additionally proposed for decision making at each junction to speed up exploration. Distinct from other methods, our method does not ignore the important information about the regions between junctions (path-fragments). The resultant number of path-fragments is also smaller than with other methods. Evaluation is done via Webots physical 3D-simulated and real robot experiments, where only distance sensors are available. Results show that our method can represent the environment effectively; it enables the robot to solve the goal-oriented navigation problem in only one episode, which is fewer than most Reinforcement Learning (RL) based methods require. The running time is proved finite and scales well with the environment. The resultant number of path-fragments matches the environment well.

  4. Fast exploration of an optimal path on the multidimensional free energy surface

    PubMed Central

    Chen, Changjun

    2017-01-01

    In a reaction, determination of an optimal path with a high reaction rate (or a low free energy barrier) is important for the study of the reaction mechanism. This is a complicated problem that involves many degrees of freedom. For simple models, one can build an initial path in the collective variable space by an interpolation method and then update the whole path constantly in the optimization. However, such an interpolation method can be risky in the high-dimensional space of large molecules. On the path, steric clashes between neighboring atoms could cause extremely high energy barriers and thus cause the optimization to fail. Moreover, performing simulations for all the snapshots on the path is also time-consuming. In this paper, we build and optimize the path by a growing method on the free energy surface. The method grows a path from the reactant and extends its length in the collective variable space step by step. The growing direction is determined by both the free energy gradient at the end of the path and the direction vector pointing at the product. With fewer snapshots on the path, this strategy lets the path avoid high-energy states in the growing process and saves precious simulation time at each iteration step. Applications show that the presented method is efficient enough to produce optimal paths on either the two-dimensional or the twelve-dimensional free energy surfaces of different small molecules. PMID:28542475
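    The growing strategy described above can be sketched on a toy two-dimensional double-well surface: each increment mixes the unit vector toward the product with the normalized downhill free-energy direction at the current path end. The weights, step size, and surface below are invented for illustration:

```python
import numpy as np

def grow_path(grad, start, product, step=0.05, w=0.7, max_steps=500):
    """Grow a path from the reactant toward the product: the growing
    direction mixes the direction to the product (weight w) with the
    negative free-energy gradient at the path end (weight 1 - w)."""
    path = [np.asarray(start, float)]
    product = np.asarray(product, float)
    for _ in range(max_steps):
        x = path[-1]
        to_goal = product - x
        if np.linalg.norm(to_goal) < step:        # close enough: snap to product
            path.append(product)
            break
        d = w * to_goal / np.linalg.norm(to_goal)
        g = grad(x)
        gn = np.linalg.norm(g)
        if gn > 1e-9:
            d -= (1 - w) * g / gn                 # drift toward low free energy
        path.append(x + step * d / np.linalg.norm(d))
    return np.array(path)

# toy double-well free energy F(x, y) = (x**2 - 1)**2 + 2 * y**2
grad = lambda p: np.array([4 * p[0] * (p[0]**2 - 1), 4 * p[1]])
path = grow_path(grad, start=[-1.0, 0.0], product=[1.0, 0.0])
```

    With the goal weight dominating, the path climbs over the barrier at x = 0 rather than stalling against the uphill gradient; in the real method each new point additionally triggers a biased simulation to estimate the local gradient.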

  5. Aircraft path planning for optimal imaging using dynamic cost functions

    NASA Astrophysics Data System (ADS)

    Christie, Gordon; Chaudhry, Haseeb; Kochersberger, Kevin

    2015-05-01

    Unmanned aircraft development has accelerated with recent technological improvements in sensing and communications, which has resulted in an "applications lag" for how these aircraft can best be utilized. The aircraft are becoming smaller, more maneuverable and have longer endurance to perform sensing and sampling missions, but operating them aggressively to exploit these capabilities has not been a primary focus in unmanned systems development. This paper addresses a means of aerial vehicle path planning to provide a realistic optimal path in acquiring imagery for structure from motion (SfM) reconstructions and performing radiation surveys. This method will allow SfM reconstructions to occur accurately and with minimal flight time so that the reconstructions can be executed efficiently. An assumption is made that we have 3D point cloud data available prior to the flight. A discrete set of scan lines are proposed for the given area that are scored based on visibility of the scene. Our approach finds a time-efficient path and calculates trajectories between scan lines and over obstacles encountered along those scan lines. Aircraft dynamics are incorporated into the path planning algorithm as dynamic cost functions to create optimal imaging paths in minimum time. Simulations of the path planning algorithm are shown for an urban environment. We also present our approach for image-based terrain mapping, which is able to efficiently perform a 3D reconstruction of a large area without the use of GPS data.

  6. Optical path switching based differential absorption radiometry for substance detection

    NASA Technical Reports Server (NTRS)

    Sachse, Glen W. (Inventor)

    2005-01-01

    An optical path switch divides sample path radiation into a time series of alternating first polarized components and second polarized components. The first polarized components are transmitted along a first optical path and the second polarized components along a second optical path. A first gasless optical filter train filters the first polarized components to isolate at least a first wavelength band thereby generating first filtered radiation. A second gasless optical filter train filters the second polarized components to isolate at least a second wavelength band thereby generating second filtered radiation. A beam combiner combines the first and second filtered radiation to form a combined beam of radiation. A detector is disposed to monitor magnitude of at least a portion of the combined beam alternately at the first wavelength band and the second wavelength band as an indication of the concentration of the substance in the sample path.

  7. Optical path switching based differential absorption radiometry for substance detection

    NASA Technical Reports Server (NTRS)

    Sachse, Glen W. (Inventor)

    2003-01-01

    An optical path switch divides sample path radiation into a time series of alternating first polarized components and second polarized components. The first polarized components are transmitted along a first optical path and the second polarized components along a second optical path. A first gasless optical filter train filters the first polarized components to isolate at least a first wavelength band thereby generating first filtered radiation. A second gasless optical filter train filters the second polarized components to isolate at least a second wavelength band thereby generating second filtered radiation. A beam combiner combines the first and second filtered radiation to form a combined beam of radiation. A detector is disposed to monitor magnitude of at least a portion of the combined beam alternately at the first wavelength band and the second wavelength band as an indication of the concentration of the substance in the sample path.

  8. Path-Following Solutions Of Nonlinear Equations

    NASA Technical Reports Server (NTRS)

    Barger, Raymond L.; Walters, Robert W.

    1989-01-01

    The report describes some path-following techniques for the solution of nonlinear equations and compares them with other methods. The use of multipurpose techniques applicable at more than one stage of the path-following computation results in a system that is relatively simple to understand, program, and use. Comparison with the method of parametric differentiation (MPD) reveals definite advantages for path-following methods. The emphasis of the investigation is on multiuse techniques applied at more than one stage of the path-following computation; incorporating such techniques yields concise computer code that is relatively simple to use.
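    The simplest member of this family, natural-parameter continuation with a Newton corrector, can be sketched as follows (a generic textbook scheme, not the report's specific techniques):

```python
import numpy as np

def continuation(f, dfdx, x0, lambdas, tol=1e-10, max_newton=50):
    """Natural-parameter path following: step the parameter lam along the
    path and Newton-correct x so that f(x, lam) = 0, using the previous
    solution as the predictor for the next one."""
    xs, x = [], float(x0)
    for lam in lambdas:
        for _ in range(max_newton):
            fx = f(x, lam)
            if abs(fx) < tol:
                break
            x -= fx / dfdx(x, lam)                # Newton corrector
        xs.append(x)
    return np.array(xs)

# follow the positive root of x**2 - lam = 0 from lam = 1 to lam = 4
f = lambda x, lam: x**2 - lam
dfdx = lambda x, lam: 2 * x
branch = continuation(f, dfdx, 1.0, np.linspace(1, 4, 31))
```

    Natural-parameter stepping fails at turning points where dfdx vanishes along the branch; pseudo-arclength variants, which parameterize by arclength instead of lam, handle those cases.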

  9. Path integral Monte Carlo ground state approach: formalism, implementation, and applications

    NASA Astrophysics Data System (ADS)

    Yan, Yangqian; Blume, D.

    2017-11-01

    Monte Carlo techniques have played an important role in understanding strongly correlated systems across many areas of physics, covering a wide range of energy and length scales. Among the many Monte Carlo methods applicable to quantum mechanical systems, the path integral Monte Carlo approach with its variants has been employed widely. Since semi-classical or classical approaches will not be discussed in this review, path integral based approaches can for our purposes be divided into two categories: approaches applicable to quantum mechanical systems at zero temperature and approaches applicable to quantum mechanical systems at finite temperature. While these two approaches are related to each other, the underlying formulation and aspects of the algorithm differ. This paper reviews the path integral Monte Carlo ground state (PIGS) approach, which solves the time-independent Schrödinger equation. Specifically, the PIGS approach allows for the determination of expectation values with respect to eigenstates of the few- or many-body Schrödinger equation provided the system Hamiltonian is known. The theoretical framework behind the PIGS algorithm, implementation details, and sample applications for fermionic systems are presented.
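    The imaginary-time projection at the heart of PIGS can be sketched without any Monte Carlo sampling (a deterministic grid illustration in assumed units hbar = m = omega = 1, not the stochastic algorithm the review describes): repeatedly applying an approximation of exp(-dtau*H) to a trial state filters out excited-state components, leaving the ground state.

```python
import math

def pigs_ground_state_energy(L=5.0, n=201, dtau=0.001, steps=6000):
    # Imaginary-time projection sketch: repeated application of an Euler
    # approximation of exp(-dtau*H) to a trial state suppresses excited
    # states, leaving the ground state of H. Here H is the 1D harmonic
    # oscillator, H = -0.5 d^2/dx^2 + 0.5 x^2.
    dx = 2 * L / (n - 1)
    x = [-L + i * dx for i in range(n)]
    psi = [math.exp(-xi * xi) for xi in x]  # deliberately imperfect trial state

    def apply_h(p):
        # H p with a central-difference Laplacian; endpoints pinned to zero.
        out = [0.0] * n
        for i in range(1, n - 1):
            lap = (p[i - 1] - 2 * p[i] + p[i + 1]) / dx ** 2
            out[i] = -0.5 * lap + 0.5 * x[i] ** 2 * p[i]
        return out

    for _ in range(steps):
        hpsi = apply_h(psi)
        psi = [p - dtau * hp for p, hp in zip(psi, hpsi)]  # (1 - dtau*H) psi
        norm = math.sqrt(sum(p * p for p in psi) * dx)
        psi = [p / norm for p in psi]

    # Rayleigh quotient <psi|H|psi> estimates the ground-state energy.
    hpsi = apply_h(psi)
    return sum(p * hp for p, hp in zip(psi, hpsi)) * dx

energy = pigs_ground_state_energy()
```

    For the harmonic oscillator used here the exact ground-state energy is 0.5, which the projected Rayleigh quotient approaches as the projection time grows; the PIGS algorithm reviewed above evaluates the same kind of projected expectation value stochastically for interacting few- and many-body systems.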

  10. Robotic path-finding in inverse treatment planning for stereotactic radiosurgery with continuous dose delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vandewouw, Marlee M., E-mail: marleev@mie.utoronto

    Purpose: Continuous dose delivery in radiation therapy treatments has been shown to decrease total treatment time while improving the dose conformity and distribution homogeneity over the conventional step-and-shoot approach. The authors develop an inverse treatment planning method for Gamma Knife® Perfexion™ that continuously delivers dose along a path in the target. Methods: The authors’ method comprises two steps: find a path within the target, then solve a mixed integer optimization model to find the optimal collimator configurations and durations along the selected path. Robotic path-finding techniques, specifically simultaneous localization and mapping (SLAM) using an extended Kalman filter, are used to obtain a path that travels sufficiently close to selected isocentre locations. SLAM is extended, in a novel way, to explore a 3D, discrete environment, namely the target discretized into voxels. Further novel extensions are incorporated into the steering mechanism to account for target geometry. Results: The SLAM method was tested on seven clinical cases and compared to clinical, Hamiltonian path continuous delivery, and inverse step-and-shoot treatment plans. The SLAM approach improved dose metrics compared to the clinical plans and Hamiltonian path continuous delivery plans. Beam-on times improved over clinical plans, and had mixed performance compared to Hamiltonian path continuous plans. The SLAM method is also shown to be robust to path selection inaccuracies, isocentre selection, and dose distribution. Conclusions: The SLAM method for continuous delivery provides decreased total treatment time and increased treatment quality compared to both clinical and inverse step-and-shoot plans, and outperforms existing path methods in treatment quality. It also accounts for uncertainty in treatment planning by accommodating inaccuracies.

  11. Phonon Scattering and Confinement in Crystalline Films

    NASA Astrophysics Data System (ADS)

    Parrish, Kevin D.

    The operating temperature of energy conversion and electronic devices affects their efficiency and efficacy. In many devices, however, the reference values of the thermal properties of the materials used are no longer applicable due to processing techniques performed. This leads to challenges in thermal management and thermal engineering that demand accurate predictive tools and high-fidelity measurements. The thermal conductivity of strained, nanostructured, and ultra-thin dielectrics is predicted computationally using solutions to the Boltzmann transport equation, and experimental measurements of thermal diffusivity are performed using transient grating spectroscopy. The thermal conductivities of argon, modeled using the Lennard-Jones potential, and silicon, modeled using density functional theory, are predicted under compressive and tensile strain from lattice dynamics calculations. The thermal conductivity of silicon is found to be invariant with compression, a result that is in disagreement with previous computational efforts. This difference is attributed to the more accurate force constants calculated from density functional theory. The invariance is found to be a result of competing effects of increased phonon group velocities and decreased phonon lifetimes, demonstrating how the anharmonic contribution of the atomic potential can scale differently than the harmonic contribution. Using three Monte Carlo techniques, the phonon-boundary scattering and the subsequent thermal conductivity reduction are predicted for nanoporous silicon thin films. The Monte Carlo techniques used are free path sampling, isotropic ray-tracing, and a new technique, modal ray-tracing. The thermal conductivity predictions from all three techniques are comparable to previous experimental measurements on nanoporous silicon films. The phonon mean free paths predicted from isotropic ray-tracing, however, are unphysical compared to those predicted by free path sampling.
    Removing the isotropic assumption, leading to the formulation of modal ray-tracing, corrects the mean free path distribution. The effect of phonon line-of-sight is investigated in nanoporous silicon films using free path sampling. When the line-of-sight is cut off, there is a distinct change in thermal conductivity versus porosity. By analyzing the free paths of an obstructed phonon mode, it is concluded that the trend change is due to a hard upper limit on the free paths imposed by the nanopore geometry of the material. The transient grating technique is a contactless, laser-based optical experiment for measuring the in-plane thermal diffusivity of thin films and membranes. The theory of operation and physical setup of a transient grating experiment are detailed. The procedure for extracting the thermal diffusivity from the raw experimental signal is improved by removing arbitrary user choice in the fitting parameters and constructing a parameterless error-minimizing procedure. The thermal conductivity of ultra-thin argon films modeled with the Lennard-Jones potential is calculated both from the Monte Carlo free path sampling technique and from explicit reduced-dimensionality lattice dynamics calculations. In these ultra-thin films, the phonon properties are altered in more than a perturbative manner, a situation referred to as the confinement regime. The free path sampling technique, which is a perturbative method, is compared to a reduced-dimensionality lattice dynamics calculation in which the entire film thickness is taken as the unit cell. Divergence in thermal conductivity magnitude and trend is found for argon films a few unit cells thick. Although the phonon group velocities and lifetimes are affected, it is found that alterations to the phonon density of states are the primary cause of the deviation in thermal conductivity in the confinement regime.
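    The free path sampling idea referenced above reduces to a few lines in its simplest form (a sketch with invented parameters that ignores the modal detail of the thesis): draw an intrinsic free path from an exponential distribution, then truncate it at the distance to the film boundary along a random direction.

```python
import random

def effective_mfp(intrinsic_mfp, thickness, n_samples=200_000, seed=0):
    # Free path sampling sketch for a film with diffuse boundaries:
    # draw an intrinsic free path from an exponential distribution, then
    # truncate it at the distance to the nearest film surface along a
    # random direction from a random starting depth.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        z = rng.uniform(0.0, thickness)   # starting depth in the film
        mu = rng.uniform(-1.0, 1.0)       # cosine of the polar angle
        path = rng.expovariate(1.0 / intrinsic_mfp)
        if mu > 0.0:
            boundary = (thickness - z) / mu
        elif mu < 0.0:
            boundary = -z / mu
        else:
            boundary = float("inf")
        total += min(path, boundary)
    return total / n_samples

# Thinner films truncate more free paths, lowering the average free path
# and hence the predicted thermal conductivity.
eff_thin = effective_mfp(1.0, 1.0)
eff_thick = effective_mfp(1.0, 50.0)
```

    The perturbative character of the method is visible here: the intrinsic phonon properties are assumed unchanged, and only the free paths are shortened, which is exactly the assumption that breaks down in the confinement regime discussed above.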

  12. High-precision diode-laser-based temperature measurement for air refractive index compensation.

    PubMed

    Hieta, Tuomas; Merimaa, Mikko; Vainio, Markku; Seppä, Jeremias; Lassila, Antti

    2011-11-01

    We present a laser-based system to measure the refractive index of air over a long path length. In optical distance measurements, it is essential to know the refractive index of air with high accuracy. Commonly, the refractive index of air is calculated from the properties of the ambient air using either Ciddor or Edlén equations, where the dominant uncertainty component is in most cases the air temperature. The method developed in this work utilizes direct absorption spectroscopy of oxygen to measure the average temperature of air and of water vapor to measure relative humidity. The method allows measurement of temperature and humidity over the same beam path as in optical distance measurement, providing spatially well-matching data. Indoor and outdoor measurements demonstrate the effectiveness of the method. In particular, we demonstrate an effective compensation of the refractive index of air in an interferometric length measurement at a time-variant and spatially nonhomogeneous temperature over a long time period. Further, we were able to demonstrate 7 mK RMS noise over a 67 m path length using a 120 s sample time. To our knowledge, this is the best temperature precision reported for a spectroscopic temperature measurement. © 2011 Optical Society of America

  13. Monitoring urban land cover change by updating the national land cover database impervious surface products

    USGS Publications Warehouse

    Xian, George Z.; Homer, Collin G.

    2009-01-01

    The U.S. Geological Survey (USGS) National Land Cover Database (NLCD) 2001 is widely used as a baseline for national land cover and impervious conditions. To ensure timely and relevant data, it is important to update this base to a more recent time period. A prototype method was developed to update the land cover and impervious surface by individual Landsat path and row. This method updates NLCD 2001 to a nominal date of 2006 by using both Landsat imagery and data from NLCD 2001 as the baseline. Pairs of Landsat scenes in the same season from both 2001 and 2006 were acquired according to satellite paths and rows and normalized to allow calculation of change vectors between the two dates. Conservative thresholds based on Anderson Level I land cover classes were used to segregate the change vectors and determine areas of change and no-change. Once change areas had been identified, impervious surface was estimated for areas of change by sampling from NLCD 2001 in unchanged areas. Methods were developed and tested across five Landsat path/row study sites that contain a variety of metropolitan areas. Results from the five study areas show that the vast majority of impervious surface changes associated with urban developments were accurately captured and updated. The approach optimizes mapping efficiency and can provide users a flexible method to generate updated impervious surface at national and regional scales.
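    The core update logic (flag change by spectral change-vector magnitude, then keep the 2001 baseline label wherever no change is found) can be sketched as follows; the pixel values and threshold are invented for illustration, and the actual NLCD procedure uses conservative class-specific thresholds:

```python
import math

def change_mask(pixels_2001, pixels_2006, threshold):
    # Change-vector analysis: per-pixel Euclidean magnitude of the
    # spectral difference between the two dates; magnitudes above a
    # conservative threshold flag the pixel as "changed".
    mask = []
    for p01, p06 in zip(pixels_2001, pixels_2006):
        mag = math.sqrt(sum((a - b) ** 2 for a, b in zip(p01, p06)))
        mask.append(mag > threshold)
    return mask

def update_labels(baseline_labels, new_labels, mask):
    # Keep the baseline (2001) label where no change was detected; take
    # the newly mapped label only in changed pixels.
    return [new if changed else base
            for base, new, changed in zip(baseline_labels, new_labels, mask)]

# Two toy pixels with three spectral bands each (values invented).
mask = change_mask([(10, 20, 30), (50, 50, 50)],
                   [(11, 20, 30), (90, 90, 90)], threshold=5.0)
updated = update_labels(["forest", "grass"], ["forest", "urban"], mask)
```

    Restricting new estimation to the changed pixels is what makes the approach efficient at national scale: the bulk of each scene is carried forward from the NLCD 2001 baseline untouched.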

  14. The influence of parenting style on academic achievement and career path.

    PubMed

    Zahed Zahedani, Zahra; Rezaee, Rita; Yazdani, Zahra; Bagheri, Sina; Nabeiei, Parisa

    2016-07-01

    Several factors affect the academic performance of college students, and parenting style is one significant factor. The current study investigated the relationship between parenting styles, academic achievement, and career path among students at Shiraz University of Medical Sciences. This is a correlation study carried out at Shiraz University of Medical Sciences. Among 1600 students, 310 were selected randomly as the sample. Baumrind's Parenting Style and Moqimi's Career Path questionnaires were administered, and the obtained scores were correlated with the students' transcripts. The Pearson correlation coefficient was used to study the relations between variables. There was a significant relationship between authoritarian parenting style and educational success (p=0.03). The findings also showed significant relationships between firm parenting style and career path, between authoritarian parenting style and career path, and between educational success and career path (p=0.001). Parents play an important role in identifying children's talents and guiding them; mutual understanding and a close relationship between parents and children are recommended. It is therefore recommended that sound methods of parent-child interaction be given greater value, and that parents familiarize their children with the roles of businesses in society and the need for employment in legitimate work; mass media and family training classes should place greater emphasis on this matter.

  15. Neck Muscle Moment Arms Obtained In-Vivo from MRI: Effect of Curved and Straight Modeled Paths.

    PubMed

    Suderman, Bethany L; Vasavada, Anita N

    2017-08-01

    Musculoskeletal models of the cervical spine commonly represent neck muscles with straight paths. However, straight lines do not best represent the natural curvature of muscle paths in the neck, because the paths are constrained by bone and soft tissue. The purpose of this study was to estimate moment arms of curved and straight neck muscle paths using different moment arm calculation methods: tendon excursion, geometric, and effective torque. Curved and straight muscle paths were defined for two subject-specific cervical spine models derived from in vivo magnetic resonance images (MRI). Modeling neck muscle paths with curvature provides significantly different moment arm estimates than straight paths for 10 of 15 neck muscles (p < 0.05, repeated measures two-way ANOVA). Moment arm estimates were also found to be significantly different among moment arm calculation methods for 11 of 15 neck muscles (p < 0.05, repeated measures two-way ANOVA). In particular, using straight lines to model muscle paths can lead to overestimating neck extension moment. However, moment arm methods for curved paths should be investigated further, as different methods of calculating moment arm can provide different estimates.
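    The tendon excursion method mentioned above has a compact numerical form: the moment arm is the negative derivative of musculotendon length with respect to joint angle, r(theta) = -dL/dtheta. A sketch with an invented toy geometry (not the MRI-derived muscle paths of the study):

```python
def moment_arm_tendon_excursion(length_fn, theta, h=1e-5):
    # Tendon-excursion method: moment arm r(theta) = -dL/dtheta,
    # estimated by a central finite difference on musculotendon length.
    return -(length_fn(theta + h) - length_fn(theta - h)) / (2.0 * h)

# Toy geometry (an assumption for illustration): a muscle wrapping a
# circular joint of radius R has length L(theta) = L0 - R * theta,
# so the computed moment arm should come out equal to R.
R, L0 = 0.02, 0.15
arm = moment_arm_tendon_excursion(lambda th: L0 - R * th, theta=0.3)
```

    The method's sensitivity to the modeled path is clear from the formula: a curved path changes L(theta), and therefore r(theta), relative to a straight-line approximation, which is the source of the differences reported above.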

  16. Development of a short path thermal desorption-gas chromatography/mass spectrometry method for the determination of polycyclic aromatic hydrocarbons in indoor air.

    PubMed

    Li, Yingjie; Xian, Qiming; Li, Li

    2017-05-12

    Polycyclic aromatic hydrocarbons (PAHs) are present in petroleum-based products and are combustion by-products of organic matter. Determination of levels of PAHs in the indoor environment is important for assessing human exposure to these chemicals. A new short path thermal desorption (SPTD) gas chromatography/mass spectrometry (GC/MS) method for determining levels of PAHs in indoor air was developed. Thermal desorption (TD) tubes packed with glass beads, Carbopack C, and Carbopack B in sequence were used for sample collection. Indoor air was sampled using a small portable pump over 7 days at 100 ml/min. Target PAHs were thermally released and introduced into the GC/MS for analysis through the SPTD unit. During tube desorption, PAHs were cold-trapped (-20°C) at the front end of the GC column. Thermal desorption efficiencies were 100% for PAHs with 2 and 3 rings, and 97-99% for PAHs with 4-6 rings. Relative standard deviation (RSD) values among replicate samples spiked at three different levels were around 10-20%. The detection limit of this method was at or below 0.1 μg/m³, except for naphthalene (0.61 μg/m³), fluorene (0.28 μg/m³), and phenanthrene (0.35 μg/m³). This method was applied to measure PAHs in indoor air in nine residential homes. The levels of PAHs in indoor air found in these nine homes are similar to indoor air values reported by others. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Microfabricated capillary array electrophoresis device and method

    DOEpatents

    Simpson, Peter C.; Mathies, Richard A.; Woolley, Adam T.

    2000-01-01

    A capillary array electrophoresis (CAE) micro-plate with an array of separation channels connected to an array of sample reservoirs on the plate. The sample reservoirs are organized into one or more sample injectors. One or more waste reservoirs are provided to collect wastes from reservoirs in each of the sample injectors. Additionally, a cathode reservoir is also multiplexed with one or more separation channels. To complete the electrical path, an anode reservoir which is common to some or all separation channels is also provided on the micro-plate. Moreover, the channel layout keeps the distance from the anode to each of the cathodes approximately constant.

  18. Microfabricated capillary array electrophoresis device and method

    DOEpatents

    Simpson, Peter C.; Mathies, Richard A.; Woolley, Adam T.

    2004-06-15

    A capillary array electrophoresis (CAE) micro-plate with an array of separation channels connected to an array of sample reservoirs on the plate. The sample reservoirs are organized into one or more sample injectors. One or more waste reservoirs are provided to collect wastes from reservoirs in each of the sample injectors. Additionally, a cathode reservoir is also multiplexed with one or more separation channels. To complete the electrical path, an anode reservoir which is common to some or all separation channels is also provided on the micro-plate. Moreover, the channel layout keeps the distance from the anode to each of the cathodes approximately constant.

  19. HAI, a new airborne, absolute, twin dual-channel, multi-phase TDLAS-hygrometer: background, design, setup, and first flight data

    NASA Astrophysics Data System (ADS)

    Buchholz, Bernhard; Afchine, Armin; Klein, Alexander; Schiller, Cornelius; Krämer, Martina; Ebert, Volker

    2017-01-01

    The novel Hygrometer for Atmospheric Investigation (HAI) realizes a unique concept for simultaneous gas-phase and total (gas-phase + evaporated cloud particles) water measurements. It has been developed and successfully deployed for the first time on the German HALO research aircraft. This new instrument combines direct tunable diode laser absorption spectroscopy (dTDLAS) with a first-principle evaluation method to allow absolute water vapor measurements without any initial or repetitive sensor calibration using a reference gas or a reference humidity generator. HAI contains two completely independent dual-channel (closed-path, open-path) spectrometers, one at 1.4 and one at 2.6 µm, which together allow us to cover the entire atmospheric H2O range from 1 to 40 000 ppmv with a single instrument. Both spectrometers each comprise a separate, wavelength-individual extractive, closed-path cell for total water (ice and gas-phase) measurements. Additionally, both spectrometers couple light into a common open-path cell outside of the aircraft fuselage for a direct, sampling-free, and contactless determination of the gas-phase water content. This novel twin dual-channel setup allows for the first time multiple self-validation functions, in particular a reliable, direct, in-flight validation of the open-path channels. During the first field campaigns, the in-flight deviations between the independent and calibration-free channels (i.e., closed-path to closed-path and open-path to closed-path) were on average in the 2 % range. Further, the fully autonomous HAI hygrometer allows measurements up to 240 Hz with a minimal integration time of 1.4 ms. The best precision is achieved by the 1.4 µm closed-path cell at 3.8 Hz (0.18 ppmv) and by the 2.6 µm closed-path cell at 13 Hz (0.055 ppmv). The requirements, design, operation principle, and first in-flight performance of the hygrometer are described and discussed in this work.

  20. Adaptive web sampling.

    PubMed

    Thompson, Steven K

    2006-12-01

    A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In the designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and of the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.
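    The adaptive selection step can be caricatured in a few lines (a loose sketch with invented rules; the actual designs, their mixture distributions, and the design-unbiased inference are considerably more involved): with some probability, follow a link out of the current active set, and otherwise select an unsampled unit at random.

```python
import random

def adaptive_web_sample(adjacency, values, n_sample, p_link=0.8, seed=1):
    # Sketch of adaptive web sampling: at each step, with probability
    # p_link select a unit linked from the current active set (here,
    # edges out of already-sampled units with positive values);
    # otherwise select uniformly at random from the unsampled units.
    rng = random.Random(seed)
    units = list(adjacency)
    sample = [rng.choice(units)]
    while len(sample) < n_sample:
        active_links = {nbr for u in sample if values[u] > 0
                        for nbr in adjacency[u] if nbr not in sample}
        unsampled = [u for u in units if u not in sample]
        if active_links and rng.random() < p_link:
            sample.append(rng.choice(sorted(active_links)))
        else:
            sample.append(rng.choice(unsampled))
    return sample

# Toy network: a linked cluster ("a"-"d") containing the positive values
# and an isolated pair ("e", "f"); link-tracing steers effort toward the
# cluster while the random component keeps every unit reachable.
toy_adjacency = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"],
                 "d": ["b"], "e": ["f"], "f": ["e"]}
toy_values = {"a": 1, "b": 1, "c": 0, "d": 0, "e": 0, "f": 0}
sample = adaptive_web_sample(toy_adjacency, toy_values, n_sample=4)
```

    The p_link parameter corresponds to the design's control over the proportion of effort allocated to adaptive selections, one of the advantages highlighted in the abstract.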

  1. Analytic solution of the Spencer-Lewis angular-spatial moments equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filippone, W.L.

    A closed-form solution for the angular-spatial moments of the Spencer-Lewis equation is presented that is valid for infinite homogeneous media. From the moments, the electron density distribution as a function of position and path length (energy) is reconstructed for several sample problems involving plane isotropic sources of electrons in aluminium. The results are in excellent agreement with those determined numerically using the streaming ray method. The primary use of the closed-form solution will most likely be to generate accurate electron transport benchmark solutions. In principle, the electron density as a function of space, path length, and direction can be determined for planar sources of arbitrary angular distribution.

  2. A randomized controlled trial of acupuncture and moxibustion to treat Bell's palsy according to different stages: design and protocol.

    PubMed

    Chen, Xiaoqin; Li, Ying; Zheng, Hui; Hu, Kaming; Zhang, Hongxing; Zhao, Ling; Li, Yan; Liu, Lian; Mang, Lingling; Yu, Shuyuan

    2009-07-01

    Acupuncture is one of the most commonly used treatments for Bell's palsy in China, and a variety of acupuncture treatment options exist in clinical practice. Because Bell's palsy passes through three different path-stages (acute, resting, and restoration), whether acupuncture is effective in each path-stage, and which acupuncture treatment is best, are major issues in acupuncture clinical trials on Bell's palsy. In this article, we report the design and protocol of a large-sample, multi-center randomized controlled trial of acupuncture for Bell's palsy. There are five acupuncture groups, four staged according to path-stage and one not. In total, 900 patients with Bell's palsy are enrolled in this study and randomly assigned to one of five groups: 1) staging acupuncture, 2) staging acupuncture and moxibustion, 3) staging electro-acupuncture, 4) staging acupuncture along the yangming musculature, or 5) a non-staging acupuncture control group. The outcome measures are comparisons of effect among these five groups in terms of the House-Brackmann scale (Global Score and Regional Score), the Facial Disability Index scale, the Classification scale of Facial Paralysis, and the WHOQOL-BREF scale, assessed before randomization (baseline phase) and after randomization. The results of this trial will assess the efficacy of staging acupuncture and moxibustion for Bell's palsy and help identify the best acupuncture treatment among these five methods.

  3. Method for detection of dental caries and periodontal disease using optical imaging

    DOEpatents

    Nathel, Howard; Kinney, John H.; Otis, Linda L.

    1996-01-01

    A method for detecting the presence of active and inactive caries in teeth and diagnosing periodontal disease uses non-ionizing radiation with techniques for reducing interference from scattered light. A beam of non-ionizing radiation is divided into sample and reference beams. The region to be examined is illuminated by the sample beam, and reflected or transmitted radiation from the sample is recombined with the reference beam to form an interference pattern on a detector. The length of the reference beam path is adjustable, allowing the operator to select the reflected or transmitted sample photons that recombine with the reference photons. Thus radiation scattered by the dental or periodontal tissue can be prevented from obscuring the interference pattern. A series of interference patterns may be generated and interpreted to locate dental caries and periodontal tissue interfaces.

  4. Annealed importance sampling with constant cooling rate

    NASA Astrophysics Data System (ADS)

    Giovannelli, Edoardo; Cardini, Gianni; Gellini, Cristina; Pietraperzia, Giangaetano; Chelli, Riccardo

    2015-02-01

    Annealed importance sampling is a simulation method devised by Neal [Stat. Comput. 11, 125 (2001)] to assign weights to configurations generated by simulated annealing trajectories. In particular, the equilibrium average of a generic physical quantity can be computed by a weighted average exploiting weights and estimates of this quantity associated to the final configurations of the annealed trajectories. Here, we review annealed importance sampling from the perspective of nonequilibrium path-ensemble averages [G. E. Crooks, Phys. Rev. E 61, 2361 (2000)]. The equivalence of Neal's and Crooks' treatments highlights the generality of the method, which goes beyond the mere thermal-based protocols. Furthermore, we show that a temperature schedule based on a constant cooling rate outperforms stepwise cooling schedules and that, for a given elapsed computer time, performances of annealed importance sampling are, in general, improved by increasing the number of intermediate temperatures.
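    Neal's procedure is concrete enough to sketch in full (a minimal illustration with an assumed one-dimensional Gaussian target and a geometric bridge between densities; the "constant cooling rate" appears here as equal increments of the annealing parameter): each trajectory accumulates a log-weight from ratios of successive intermediate densities, and a weighted average over trajectories estimates equilibrium quantities.

```python
import math, random

def log_f(x, beta, mu=3.0):
    # Geometric bridge f_beta ∝ f0^(1-beta) * f1^beta between
    # f0 ∝ exp(-x^2/2) (the starting density) and the toy target
    # f1 ∝ exp(-(x-mu)^2/2) (an assumption for illustration).
    return -(1.0 - beta) * x * x / 2.0 - beta * (x - mu) ** 2 / 2.0

def ais_run(n_temps, rng, mu=3.0, step=1.0):
    # One annealed trajectory with a constant cooling rate (equal beta
    # increments), accumulating the importance weight as in Neal (2001).
    x = rng.gauss(0.0, 1.0)  # exact draw from f0
    logw = 0.0
    for k in range(1, n_temps + 1):
        b_prev = (k - 1) / n_temps
        b = k / n_temps
        logw += log_f(x, b, mu) - log_f(x, b_prev, mu)
        # One Metropolis update targeting the current intermediate density.
        prop = x + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_f(prop, b, mu) - log_f(x, b, mu):
            x = prop
    return x, logw

def ais_mean(n_runs=4000, n_temps=50, seed=2):
    # Weighted average over annealed trajectories estimates E[x] under f1.
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_runs):
        x, logw = ais_run(n_temps, rng)
        w = math.exp(logw)
        num += w * x
        den += w
    return num / den

estimate = ais_mean()  # the target mean is mu = 3
```

    Increasing n_temps while holding the total computational budget fixed is the trade-off the paper analyzes; the equal-increment schedule used here is the "constant cooling rate" variant it recommends.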

  5. Multiparallel Three-Dimensional Optical Microscopy

    NASA Technical Reports Server (NTRS)

    Nguyen, Lam K.; Price, Jeffrey H.; Kellner, Albert L.; Bravo-Zanoquera, Miguel

    2010-01-01

    Multiparallel three-dimensional optical microscopy is a method of forming an approximate three-dimensional image of a microscope sample as a collection of images from different depths through the sample. The imaging apparatus includes a single microscope plus an assembly of beam splitters and mirrors that divide the output of the microscope into multiple channels. An imaging array of photodetectors in each channel is located at a different distance along the optical path from the microscope, corresponding to a focal plane at a different depth within the sample. The optical path leading to each photodetector array also includes lenses to compensate for the variation of magnification with distance so that the images ultimately formed on all the photodetector arrays are of the same magnification. The use of optical components common to multiple channels in a simple geometry makes it possible to obtain high light-transmission efficiency with an optically and mechanically simple assembly. In addition, because images can be read out simultaneously from all the photodetector arrays, the apparatus can support three-dimensional imaging at a high scanning rate.

  6. An axisymmetric single-path model for gas transport in the conducting airways.

    PubMed

    Madasu, Srinath; Borhan, Ali; Ultman, James S

    2006-02-01

    In conventional one-dimensional single-path models, radially averaged concentration is calculated as a function of time and longitudinal position in the lungs, and coupled convection and diffusion are accounted for with a dispersion coefficient. The axisymmetric single-path model developed in this paper is a two-dimensional model that incorporates convective-diffusion processes in a more fundamental manner by simultaneously solving the Navier-Stokes and continuity equations with the convection-diffusion equation. A single airway path was represented by a series of straight tube segments interconnected by leaky transition regions that provide for flow loss at the airway bifurcations. As a sample application, the model equations were solved by a finite element method to predict the unsteady state dispersion of an inhaled pulse of inert gas along an airway path having dimensions consistent with Weibel's symmetric airway geometry. Assuming steady, incompressible, and laminar flow, a finite element analysis was used to solve for the axisymmetric pressure, velocity and concentration fields. The dispersion calculated from these numerical solutions exhibited good qualitative agreement with the experimental values, but quantitatively was in error by 20%-30% due to the assumption of axial symmetry and the inability of the model to capture the complex recirculatory flows near bifurcations.

  7. Speech Understanding with a New Implant Technology: A Comparative Study with a New Nonskin Penetrating Baha System

    PubMed Central

    Caversaccio, Marco

    2014-01-01

    Objective. To compare hearing and speech understanding between a new, nonskin penetrating Baha system (Baha Attract) and the current Baha system using a skin-penetrating abutment. Methods. Hearing and speech understanding were measured in 16 experienced Baha users. The transmission path via the abutment was compared to a simulated Baha Attract transmission path by attaching the implantable magnet to the abutment and then by adding a sample of artificial skin and the external parts of the Baha Attract system. Four different measurements were performed: bone conduction thresholds directly through the sound processor (BC Direct), aided sound field thresholds, aided speech understanding in quiet, and aided speech understanding in noise. Results. The simulated Baha Attract transmission path introduced an attenuation starting from approximately 5 dB at 1000 Hz, increasing to 20–25 dB above 6000 Hz. However, aided sound field thresholds show smaller differences, and aided speech understanding in quiet and in noise does not differ significantly between the two transmission paths. Conclusion. The Baha Attract system transmission path introduces predominantly high frequency attenuation. This attenuation can be partially compensated by adequate fitting of the speech processor. No significant decrease in speech understanding in either quiet or noise was found. PMID:25140314

  8. Path Sampling Methods for Enzymatic Quantum Particle Transfer Reactions

    PubMed Central

    Dzierlenga, M.W.; Varga, M.J.

    2016-01-01

    The mechanisms of enzymatic reactions are studied via a host of computational techniques. While previous methods have been used successfully, many fail to incorporate the full dynamical properties of enzymatic systems. This can lead to misleading results in cases where enzyme motion plays a significant role in the reaction coordinate, which is especially relevant in particle transfer reactions where nuclear tunneling may occur. In this chapter, we outline previous methods, as well as discuss newly developed dynamical methods to interrogate mechanisms of enzymatic particle transfer reactions. These new methods allow for the calculation of free energy barriers and kinetic isotope effects (KIEs) with the incorporation of quantum effects through centroid molecular dynamics (CMD) and the full complement of enzyme dynamics through transition path sampling (TPS). Recent work, summarized in this chapter, applied the method for calculation of free energy barriers to reaction in lactate dehydrogenase (LDH) and yeast alcohol dehydrogenase (YADH). It was found that tunneling plays an insignificant role in YADH but plays a more significant role in LDH, though not dominant over classical transfer. Additionally, we summarize the application of a TPS algorithm for the calculation of reaction rates in tandem with CMD to calculate the primary H/D KIE of YADH from first principles. It was found that the computationally obtained KIE is within the margin of error of experimentally determined KIEs, and corresponds to the KIE of particle transfer in the enzyme. These methods provide new ways to investigate enzyme mechanism with the inclusion of protein and quantum dynamics. PMID:27497161
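    The TPS ingredient can be illustrated on a standard toy system (a one-dimensional double well with overdamped Langevin dynamics and a one-way shooting move; an assumption-laden sketch, not the CMD/TPS machinery applied to the enzymes above): trial paths are generated by regrowing part of the current transition path with fresh noise and accepted only if they still connect the reactant and product basins.

```python
import math, random

def force(x):
    # Deterministic force for the double well V(x) = (x^2 - 1)^2.
    return -4.0 * x * (x * x - 1.0)

def propagate(x0, n_steps, rng, dt=0.002, temp=0.4):
    # Overdamped Langevin trajectory segment with fresh noise.
    xs = [x0]
    for _ in range(n_steps):
        x = xs[-1]
        xs.append(x + force(x) * dt
                  + math.sqrt(2.0 * temp * dt) * rng.gauss(0.0, 1.0))
    return xs

def in_A(x): return x < -0.8   # reactant basin
def in_B(x): return x > 0.8    # product basin

def tps(n_moves=200, path_len=600, seed=3):
    rng = random.Random(seed)
    # Initial reactive path: an unphysical straight ramp from A to B.
    # TPS only requires some valid connecting path to start; shooting
    # moves then relax it toward properly weighted transition paths.
    path = [-1.0 + 2.0 * i / path_len for i in range(path_len + 1)]
    accepted = 0
    for _ in range(n_moves):
        i = rng.randrange(1, path_len)  # random shooting time slice
        trial = path[:i + 1] + propagate(path[i], path_len - i, rng)[1:]
        if in_A(trial[0]) and in_B(trial[-1]):
            path = trial
            accepted += 1
    return path, accepted

transition_path, n_accepted = tps()
```

    Because acceptance depends only on the endpoint basins, the harvested ensemble samples reactive trajectories without presupposing a reaction coordinate, which is what lets TPS capture the full complement of enzyme dynamics described above.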

  9. Photothermal method of determining calorific properties of coal

    DOEpatents

    Amer, Nabil M.

    1985-01-01

    Predetermined amounts of heat are generated within a coal sample (11) by directing pump light pulses (14) of predetermined energy content into a small surface region (16) of the sample (11). A beam (18) of probe light is directed along the sample surface (19) and deflection of the probe beam (18) from thermally induced changes of index of refraction in the fluid medium adjacent to the heated region (16) is detected. Deflection amplitude and the phase lag of the deflection, relative to the initiating pump light pulse (14), are indicative of the calorific value and the porosity of the sample (11). The method provides rapid, accurate and non-destructive analysis of the heat producing capabilities of coal samples (11). In the preferred form, sequences of pump light pulses (14) of increasing durations are directed into the sample (11) at each of a series of minute regions (16) situated along a raster scan path (21), enabling detailed analysis of variations of thermal properties at different areas of the sample (11) and at different depths.

  10. Path Following in the Exact Penalty Method of Convex Programming.

    PubMed

    Zhou, Hua; Lange, Kenneth

    2015-07-01

    Classical penalty methods solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. In practice, the kinks in the penalty and the unknown magnitude of the penalty constant prevent wide application of the exact penalty method in nonlinear programming. In this article, we examine a strategy of path following consistent with the exact penalty method. Instead of performing optimization at a single penalty constant, we trace the solution as a continuous function of the penalty constant. Thus, path following starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. For quadratic programming, the solution path is piecewise linear and takes large jumps from constraint to constraint. For a general convex program, the solution path is piecewise smooth, and path following operates by numerically solving an ordinary differential equation segment by segment. Our diverse applications to a) projection onto a convex set, b) nonnegative least squares, c) quadratically constrained quadratic programming, d) geometric programming, and e) semidefinite programming illustrate the mechanics and potential of path following. The final detour to image denoising demonstrates the relevance of path following to regularized estimation in inverse problems. In regularized estimation, one follows the solution path as the penalty constant decreases from a large value.
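
    The piecewise-linear solution path described above can be seen in a one-dimensional toy problem: minimize (x-2)^2 subject to x ≤ 1 via the exact penalty φ(x) = (x-2)^2 + ρ·max(0, x-1). The sketch below is illustrative only, using brute-force grid search rather than the article's ODE segment-following; it traces x(ρ) moving linearly from the unconstrained solution, hitting the constraint at ρ = 2, and then sliding along it:

```python
import numpy as np

def exact_penalty_min(rho, grid=np.linspace(-1.0, 3.0, 40001)):
    """Minimize (x-2)^2 + rho*max(0, x-1) by dense grid search (toy)."""
    phi = (grid - 2.0) ** 2 + rho * np.maximum(0.0, grid - 1.0)
    return grid[np.argmin(phi)]

rhos = [0.0, 1.0, 2.0, 4.0]
path = [exact_penalty_min(r) for r in rhos]
# solution path: x(rho) = 2 - rho/2 until it hits the constraint x = 1 at
# rho = 2, then it slides along the constraint -- piecewise linear in rho
```

    This is the behavior the article exploits in reverse for regularized estimation: following the path from large ρ toward zero.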

  12. Research on the Calculation Method of Optical Path Difference of the Shanghai Tian Ma Telescope

    NASA Astrophysics Data System (ADS)

    Dong, J.; Fu, L.; Jiang, Y. B.; Liu, Q. H.; Gou, W.; Yan, F.

    2016-03-01

    Based on the Shanghai Tian Ma Telescope (TM), an optical path difference calculation method for the shaped Cassegrain antenna is presented in this paper. Firstly, the mathematical model of the TM optics is established based on the antenna reciprocity theorem. Secondly, the TM sub-reflector and main reflector are fitted by Non-Uniform Rational B-Splines (NURBS). Finally, the optical path difference calculation is implemented, and the extension of the Ruze optical path difference formulas to the TM is investigated. The method can be used to calculate the optical path difference distributions across the aperture field of the TM due to misalignments such as axial and lateral displacements of the feed and sub-reflector, or tilt of the sub-reflector. When the misalignment is small, the extended Ruze optical path difference formulas can be used to calculate the optical path difference quickly. The paper supports the real-time measurement and adjustment of the TM structure. The method is general and can serve as a reference for optical path difference calculations in other radio telescopes with shaped surfaces.

  13. Planning Under Uncertainty: Methods and Applications

    DTIC Science & Technology

    2010-06-09

    begun research into fundamental algorithms for optimization and re-optimization of continuous optimization problems (such as linear and quadratic… algorithm yields a 14.3% improvement over the original design while saving 68.2% of the simulation evaluations compared to standard sample-path… They provide tools for building and justifying computational algorithms for such problems.

  14. Effects of Caregiver Status, Coping Styles, and Social Support on the Physical Health of Korean American Caregivers

    ERIC Educational Resources Information Center

    Kim, Jung-Hyun; Knight, Bob G.

    2008-01-01

    Purpose: This study investigated direct and indirect effects of caregiver status on the physical health of Korean American caregivers in terms of caregiver coping styles and the quantity and the quality of informal social support. Design and Methods: Using a sample of 87 caregivers and 87 matched noncaregivers, we analyzed a path model, employing…

  15. Calculating Path-Dependent Travel Time Prediction Variance and Covariance from a Global Tomographic P-Velocity Model

    NASA Astrophysics Data System (ADS)

    Ballard, S.; Hipp, J. R.; Encarnacao, A.; Young, C. J.; Begnaud, M. L.; Phillips, W. S.

    2012-12-01

    Seismic event locations can be made more accurate and precise by computing predictions of seismic travel time through high fidelity 3D models of the wave speed in the Earth's interior. Given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix and show examples of path-dependent travel time prediction uncertainty computed from SALSA3D, our global, seamless 3D tomographic P-velocity model. Typical global 3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes Tikhonov regularization terms) is multiplied by its transpose (GTG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GTG)-1 by assigning blocks to individual processing nodes for matrix decomposition update and scaling operations. We first find the Cholesky decomposition of GTG, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (GTG)-1 and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray-paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel time prediction uncertainty for the single path.
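
    The final step, summing the model covariance along two ray paths, reduces to a quadratic form. A minimal sketch (a toy 4-node model with illustrative covariance values and ray-length sensitivity vectors, not SALSA3D data):

```python
import numpy as np

# toy 4-node model covariance matrix (symmetric positive definite; illustrative)
C = np.array([[0.04, 0.01, 0.00, 0.00],
              [0.01, 0.05, 0.01, 0.00],
              [0.00, 0.01, 0.04, 0.01],
              [0.00, 0.00, 0.01, 0.03]])

# path sensitivity vectors: ray length (km) accumulated in each model node
g1 = np.array([10.0, 25.0, 5.0, 0.0])   # ray path 1
g2 = np.array([0.0, 20.0, 30.0, 8.0])   # ray path 2

tt_cov = g1 @ C @ g2        # travel-time covariance between the two paths
tt_var = g1 @ C @ g1        # setting the paths equal ...
tt_sigma = np.sqrt(tt_var)  # ... and taking the square root gives the
                            # single-path travel-time prediction uncertainty
```

    In the actual application the quadratic form is evaluated against the out-of-core blocked covariance matrix rather than an in-memory array.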

  16. A dynamic Brownian bridge movement model to estimate utilization distributions for heterogeneous animal movement.

    PubMed

    Kranstauber, Bart; Kays, Roland; Lapoint, Scott D; Wikelski, Martin; Safi, Kamran

    2012-07-01

    1. The recently developed Brownian bridge movement model (BBMM) has advantages over traditional methods because it quantifies the utilization distribution of an animal based on its movement path rather than individual points and accounts for temporal autocorrelation and high data volumes. However, the BBMM assumes unrealistic homogeneous movement behaviour across all data. 2. Accurate quantification of the utilization distribution is important for identifying the way animals use the landscape. 3. We improve the BBMM by allowing for changes in behaviour, using likelihood statistics to determine change points along the animal's movement path. 4. This novel extension outperforms the current BBMM, as indicated by simulations and examples of a territorial mammal and a migratory bird. The unique ability of our model to work with tracks that are not sampled regularly is especially important for GPS tags that have frequent failed fixes or dynamic sampling schedules. Moreover, our model extension provides a useful one-dimensional measure of behavioural change along animal tracks. 5. This new method provides a more accurate utilization distribution that better describes the space use of realistic, behaviourally heterogeneous tracks. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.
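
    For reference, the positional variance of a single homogeneous Brownian bridge between two location fixes follows a standard closed form; the dynamic extension described above additionally estimates the mobility variance locally along the track via likelihood. A sketch of the bridge variance with illustrative parameter values (not data from the paper):

```python
import numpy as np

def bb_variance(t, T, sigma2_m, delta1_sq, delta2_sq):
    """Positional variance of a Brownian bridge at time t between fixes at 0 and T.

    sigma2_m             -- Brownian motion (mobility) variance parameter
    delta1_sq, delta2_sq -- location-error variances of the two fixes
    """
    a = t / T
    return (T * a * (1.0 - a) * sigma2_m
            + (1.0 - a) ** 2 * delta1_sq
            + a ** 2 * delta2_sq)

# variance is largest mid-way between fixes and shrinks to the fix error at the ends
T = 600.0                        # e.g. 10 min between GPS fixes
mid = bb_variance(300.0, T, sigma2_m=0.5, delta1_sq=25.0, delta2_sq=25.0)
end = bb_variance(0.0, T, sigma2_m=0.5, delta1_sq=25.0, delta2_sq=25.0)
```

    The dynamic BBMM replaces the single global sigma2_m with window-wise estimates, selected by likelihood change-point statistics along the track.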

  17. Metadynamics for training neural network model chemistries: A competitive assessment

    NASA Astrophysics Data System (ADS)

    Herr, John E.; Yao, Kun; McIntyre, Ryker; Toth, David W.; Parkhill, John

    2018-06-01

    Neural network model chemistries (NNMCs) promise to facilitate the accurate exploration of chemical space and simulation of large reactive systems. One important path to improving these models is to add layers of physical detail, especially long-range forces. At short range, however, these models are data driven and data limited. Little is systematically known about how data should be sampled, and "test data" chosen randomly from some sampling techniques can provide poor information about generality. If the sampling method is narrow, "test error" can appear encouragingly tiny while the model fails catastrophically elsewhere. In this manuscript, we competitively evaluate two common sampling methods, molecular dynamics (MD) and normal-mode sampling, and one uncommon alternative, Metadynamics (MetaMD), for preparing training geometries. We show that MD is an inefficient sampling method in the sense that additional samples do not improve generality. We also show that MetaMD is easily implemented in any NNMC software package with cost that scales linearly with the number of atoms in a sample molecule. MetaMD is a black-box way to ensure samples always reach out to new regions of chemical space, while remaining relevant to chemistry near kbT. It is a cheap tool to address the issue of generalization.
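
    The core metadynamics idea, periodically depositing repulsive Gaussian hills so the walker is pushed out of already-visited regions, can be sketched on a one-dimensional double well. This is a toy stand-in, not the authors' NNMC setup; all parameters are illustrative:

```python
import numpy as np

def metadynamics_1d(n_steps=20000, dt=1e-3, beta=2.0, w=0.05, s=0.15, stride=100):
    """Metadynamics on U(x) = (x^2 - 1)^2: deposit Gaussian hills to escape wells."""
    rng = np.random.default_rng(1)
    centers = []                          # deposited Gaussian hill centers
    x = -1.0                              # start in the left well
    visited = [x]
    for step in range(n_steps):
        # force = -dU/dx - d(bias)/dx, bias(x) = sum_i w*exp(-(x-c_i)^2/(2 s^2))
        f = -4.0 * x * (x * x - 1.0)
        c = np.array(centers) if centers else np.zeros(0)
        if c.size:
            f += np.sum(w * (x - c) / s**2 * np.exp(-((x - c) ** 2) / (2 * s**2)))
        x += f * dt + np.sqrt(2.0 * dt / beta) * rng.normal()
        if step % stride == 0:
            centers.append(x)             # drop a repulsive hill at the current x
        visited.append(x)
    return np.array(visited)

traj = metadynamics_1d()
# the accumulated bias pushes the walker out of the starting well, so the
# trajectory eventually visits both metastable basins
```

    In the NNMC context the collective variable would be a molecular geometry descriptor rather than a bare coordinate, but the fill-and-escape mechanism is the same.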

  18. Dynamic path planning for autonomous driving on various roads with avoidance of static and moving obstacles

    NASA Astrophysics Data System (ADS)

    Hu, Xuemin; Chen, Long; Tang, Bo; Cao, Dongpu; He, Haibo

    2018-02-01

    This paper presents a real-time dynamic path planning method for autonomous driving that avoids both static and moving obstacles. The proposed path planning method determines not only an optimal path, but also the appropriate acceleration and speed for a vehicle. In this method, we first construct a center line from a set of predefined waypoints, which are usually obtained from a lane-level map. A series of path candidates are generated by the arc length and offset to the center line in the s - ρ coordinate system. Then, all of these candidates are converted into Cartesian coordinates. The optimal path is selected considering the total cost of static safety, comfortability, and dynamic safety; meanwhile, the appropriate acceleration and speed for the optimal path are also identified. Various types of roads, including single-lane roads and multi-lane roads with static and moving obstacles, are designed to test the proposed method. The simulation results demonstrate the effectiveness of the proposed method, and indicate its wide practical application to autonomous driving.
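
    The candidate-generation step can be sketched for the simplest case of a straight centerline, where conversion from the s-ρ frame to Cartesian coordinates is trivial, with a toy cost combining static safety and comfort. All names, weights, and geometry below are illustrative, not the paper's formulation:

```python
import numpy as np

# centerline from waypoints: here a straight lane along x, sampled by arc length s
s = np.linspace(0.0, 50.0, 101)

def candidate(rho_end):
    """Path candidate: lateral offset ramps smoothly from 0 to rho_end (s-rho frame)."""
    u = s / s[-1]
    rho = rho_end * u**2 * (3.0 - 2.0 * u)        # smoothstep ramp in s
    # convert s-rho to Cartesian: for a straight centerline, x = s, y = rho
    return np.column_stack([s, rho])

obstacle = np.array([25.0, 0.0])                  # static obstacle on the center line

def cost(path, w_safe=10.0, w_comfort=1.0):
    clearance = np.min(np.linalg.norm(path - obstacle, axis=1))
    safety = np.inf if clearance < 1.0 else w_safe / clearance
    comfort = w_comfort * np.abs(path[:, 1]).max()  # penalize large lateral offsets
    return safety + comfort

candidates = [candidate(r) for r in np.linspace(-3.0, 3.0, 13)]
best = min(candidates, key=cost)                  # total-cost selection
```

    The full method additionally scores dynamic safety against moving obstacles and selects an acceleration/speed profile along the chosen path.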

  19. Effective absorption correction for energy dispersive X-ray mapping in a scanning transmission electron microscope: analysing the local indium distribution in rough samples of InGaN alloy layers.

    PubMed

    Wang, X; Chauvat, M-P; Ruterana, P; Walther, T

    2017-12-01

    We have applied our previous method of self-consistent k*-factors for absorption correction in energy-dispersive X-ray spectroscopy to quantify the indium content in X-ray maps of thick compound InGaN layers. The method allows us to quantify the indium concentration without measuring the sample thickness, density or beam current, and works even if there is a drastic local thickness change due to sample roughness or preferential thinning. The method is shown to select, point-by-point in a two-dimensional spectrum image or map, the k*-factor from the local Ga K/L intensity ratio that is most appropriate for the corresponding sample geometry, demonstrating it is not the sample thickness measured along the electron beam direction but the optical path length the X-rays have to travel through the sample that is relevant for the absorption correction. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  20. A path planning method used in fluid jet polishing eliminating lightweight mirror imprinting effect

    NASA Astrophysics Data System (ADS)

    Li, Wenzong; Fan, Bin; Shi, Chunyan; Wang, Jia; Zhuo, Bin

    2014-08-01

    With the development of space technology, optical system designs tend toward large-aperture lightweight mirrors with a high dimension-to-thickness ratio. However, when a lightweight mirror is polished to a PV value below λ/10, the surface clearly shows a wavy imprinting effect. The imprinting effect introduced by head-tool pressure has become a technological barrier in high-precision lightweight mirror manufacturing. Fluid jet polishing can eliminate this external pressure. The machining tracks commonly used at present are the grating-type path, the screw-type path, and the pseudo-random path. At the edge of the imprinting error, the speed of adjacent path points changes too fast for the machine to respond quickly, which introduces new path errors and increases the polishing time due to superfluous path segments. This paper presents a new path planning method to eliminate the imprinting effect. Simulation results show that the improved grating path eliminates the imprinting effect better than the general path.

  1. Path Planning for Robot based on Chaotic Artificial Potential Field Method

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng

    2018-03-01

    Robot path planning in unknown environments is one of the hot research topics in the field of robot control. Aiming at the shortcomings of traditional artificial potential field methods, we propose a new path planning method for robots based on a chaotic artificial potential field. The method adopts the potential function as the objective function and introduces the robot's direction of movement as the control variable, combining the improved artificial potential field method with a chaotic optimization algorithm. Simulations have been carried out, and the results demonstrate the superior practicality and high efficiency of the proposed method.
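
    A minimal sketch of the underlying idea: gradient descent on attractive-plus-repulsive potentials, with a logistic-map term perturbing the direction of movement in the spirit of chaotic optimization. The gains, geometry, and chaos coupling are all illustrative, not the authors' implementation:

```python
import numpy as np

goal = np.array([10.0, 10.0])
obstacles = [np.array([5.0, 5.2])]

def potential_gradient(p, k_att=1.0, k_rep=50.0, d0=2.0):
    """Gradient of attractive (0.5*k_att*|p-goal|^2) + repulsive potentials at p."""
    grad = k_att * (p - goal)
    for obs in obstacles:
        d = np.linalg.norm(p - obs)
        if d < d0:                                  # repulsion acts only within d0
            grad += k_rep * (1.0 / d0 - 1.0 / d) / d**3 * (p - obs)
    return grad

def plan(start, step=0.05, chaos=0.01, n_max=2000):
    """Descend the potential; a logistic-map term perturbs the direction of
    movement to help escape local minima (the chaotic-APF idea)."""
    p, z, path = start.astype(float), 0.4, [start.astype(float)]
    for _ in range(n_max):
        z = 4.0 * z * (1.0 - z)                     # logistic map in [0, 1]
        direction = -potential_gradient(p) + chaos * (2.0 * z - 1.0)
        p = p + step * direction / (np.linalg.norm(direction) + 1e-12)
        path.append(p.copy())
        if np.linalg.norm(p - goal) < 0.2:
            break
    return np.array(path)

path = plan(np.array([0.0, 0.0]))
```

    With a plain APF the robot can stall in local minima of the combined potential; the chaotic perturbation injects bounded, deterministic noise that breaks such deadlocks.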

  2. Linear and nonlinear dynamic analysis of redundant load path bearingless rotor systems

    NASA Technical Reports Server (NTRS)

    Murthy, V. R.

    1985-01-01

    The bearingless rotorcraft offers reduced weight, less complexity, and superior flying qualities. Almost all current industrial structural dynamics programs for conventional rotors, which consist of single-load-path rotor blades, employ the transfer matrix method to determine natural vibration characteristics, because this method is ideally suited for one-dimensional chain-like structures. This method is extended here to multiple-load-path rotor blades without resorting to an equivalent single-load-path approximation. Unlike for conventional blades, it is necessary to introduce the axial degree of freedom into the solution process to account for the differential axial displacements in the different load paths. With the present extension, current rotor dynamics programs can be modified with relative ease to account for multiple load paths without resorting to equivalent single-load-path modeling. The results obtained by the transfer matrix method are validated by comparison with finite element solutions. A differential stiffness matrix due to blade rotation is derived to facilitate the finite element solutions.

  3. Positron lifetime spectrometer using a DC positron beam

    DOEpatents

    Xu, Jun; Moxom, Jeremy

    2003-10-21

    An entrance grid is positioned in the incident beam path of a DC beam positron lifetime spectrometer. The electrical potential difference between the sample and the entrance grid provides simultaneous acceleration of both the primary positrons and the secondary electrons. The result is a reduction in the time spread induced by the energy distribution of the secondary electrons. In addition, the sample, sample holder, entrance grid, and entrance face of the multichannel plate electron detector assembly are made parallel to each other, and are arranged at a tilt angle to the axis of the positron beam to effectively separate the path of the secondary electrons from the path of the incident positrons.

  4. Parallel replica dynamics method for bistable stochastic reaction networks: Simulation and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Wang, Ting; Plecháč, Petr

    2017-12-01

    Stochastic reaction networks that exhibit bistable behavior are common in systems biology, materials science, and catalysis. Sampling of stationary distributions is crucial for understanding and characterizing the long-time dynamics of bistable stochastic dynamical systems. However, simulations are often hindered by the insufficient sampling of rare transitions between the two metastable regions. In this paper, we apply the parallel replica method for a continuous time Markov chain in order to improve sampling of the stationary distribution in bistable stochastic reaction networks. The proposed method uses parallel computing to accelerate the sampling of rare transitions. Furthermore, it can be combined with the path-space information bounds for parametric sensitivity analysis. With the proposed methodology, we study three bistable biological networks: the Schlögl model, the genetic switch network, and the enzymatic futile cycle network. We demonstrate the algorithmic speedup achieved in these numerical benchmarks. More significant acceleration is expected when multi-core or graphics processing unit computer architectures and programming tools such as CUDA are employed.
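
    The underlying continuous-time Markov chain sampling can be sketched with a plain Gillespie SSA on the Schlögl model. The propensities below use a commonly cited bistable parameter set (metastable states near X ≈ 85 and X ≈ 565), not necessarily the paper's exact values, and the parallel replica layer itself is omitted:

```python
import numpy as np

def schlogl_ssa(x0, t_end, rng):
    """Gillespie SSA for the Schlogl model (bistable birth-death chain in X)."""
    x, t, samples = x0, 0.0, [x0]
    while t < t_end:
        a = np.array([0.015 * x * (x - 1),                   # B1 + 2X -> 3X
                      (1e-4 / 6.0) * x * (x - 1) * (x - 2),  # 3X -> B1 + 2X
                      200.0,                                 # B2 -> X
                      3.5 * x])                              # X -> B2
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)                       # time to next event
        r = np.searchsorted(np.cumsum(a), rng.uniform() * a0)
        x += (1, -1, 1, -1)[r]                               # net change in X
        samples.append(x)
    return np.array(samples)

rng = np.random.default_rng(2)
low = schlogl_ssa(85, 2.0, rng)    # started in the lower metastable basin
high = schlogl_ssa(565, 2.0, rng)  # started in the upper metastable basin
```

    Transitions between the two basins are rare on this timescale, which is exactly the sampling bottleneck the parallel replica method is designed to accelerate.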

  5. Computational Approaches to Simulation and Analysis of Large Conformational Transitions in Proteins

    NASA Astrophysics Data System (ADS)

    Seyler, Sean L.

    In a typical living cell, millions to billions of proteins--nanomachines that fluctuate and cycle among many conformational states--convert available free energy into mechanochemical work. A fundamental goal of biophysics is to ascertain how 3D protein structures encode specific functions, such as catalyzing chemical reactions or transporting nutrients into a cell. Protein dynamics span femtosecond timescales (i.e., covalent bond oscillations) to large conformational transition timescales in, and beyond, the millisecond regime (e.g., glucose transport across a phospholipid bilayer). Actual transition events are fast but rare, occurring orders of magnitude faster than typical metastable equilibrium waiting times. Equilibrium molecular dynamics (EqMD) can capture atomistic detail and solute-solvent interactions, but even microseconds of sampling attainable nowadays still falls orders of magnitude short of transition timescales, especially for large systems, rendering observations of such "rare events" difficult or effectively impossible. Advanced path-sampling methods exploit reduced physical models or biasing to produce plausible transitions while balancing accuracy and efficiency, but quantifying their accuracy relative to other numerical and experimental data has been challenging. Indeed, new horizons in elucidating protein function necessitate that present methodologies be revised to more seamlessly and quantitatively integrate a spectrum of methods, both numerical and experimental. In this dissertation, experimental and computational methods are put into perspective using the enzyme adenylate kinase (AdK) as an illustrative example. We introduce Path Similarity Analysis (PSA)--an integrative computational framework developed to quantify transition path similarity. PSA not only reliably distinguished AdK transitions by the originating method, but also traced pathway differences between two methods back to charge-charge interactions (neglected by the stereochemical model, but not the all-atom force field) in several conserved salt bridges. Cryo-electron microscopy maps of the transporter Bor1p are directly incorporated into EqMD simulations using MD flexible fitting to produce viable structural models and infer a plausible transport mechanism. Conforming to the theme of integration, a short compendium of an exploratory project--developing a hybrid atomistic-continuum method--is presented, including initial results and a novel fluctuating hydrodynamics model and corresponding numerical code.

  6. Infrared (IR) photon-sensitive spectromicroscopy in a cryogenic environment

    DOEpatents

    Pereverzev, Sergey

    2016-06-14

    A system designed to suppress thermal radiation background and to allow IR single-photon-sensitive spectromicroscopy of small samples using absorption, reflection, and emission/luminescence measurements. The system in one embodiment includes: a light source; a plurality of cold mirrors configured to direct light along a beam path; a cold or warm sample holder in the beam path, whose windows (or the whole sample holder) are transparent in a spectral region of interest so that they do not emit thermal radiation in that same region; a cold monochromator or other cold spectral device configured to direct a selected fraction of light onto a cold detector; a system of cold apertures and shields positioned along the beam path to prevent unwanted thermal radiation from arriving at the cold monochromator and/or the detector; a plurality of optical, IR, and microwave filters positioned along the beam path and configured to adjust the spectral composition of light incident upon the sample under investigation and/or on the detector; a refrigerator configured to maintain the detector at a temperature below 1.0 K; and an enclosure configured to thermally insulate the light source, the plurality of mirrors, the sample holder, the cold monochromator, and the refrigerator.

  7. Dynamic path planning for mobile robot based on particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Cai, Feng; Wang, Ying

    2017-08-01

    Today, robots are used in many fields, such as cleaning, medical treatment, space exploration, and disaster relief. Collision-free dynamic path planning for robots is attracting increasing attention. A new path planning method is proposed in this paper. Firstly, the motion space model of the robot is established using the MAKLINK graph method, and the A* algorithm is used to obtain the shortest path from the start point to the end point. Secondly, an effective method to detect and avoid obstacles is proposed: when an obstacle is detected on the shortest path, the robot moves to the nearest safe point and then selects the next point closest to the target. Finally, the particle swarm optimization algorithm is used to optimize the path. The experimental results show that the proposed method is effective.
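
    A minimal PSO path-optimization sketch, tuning a single via-point around a circular obstacle. The cost function, coefficients, and geometry are toy stand-ins, not the paper's MAKLINK/A* pipeline:

```python
import numpy as np

start, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
obstacle, radius = np.array([5.0, 0.0]), 1.5

def path_cost(via):
    """Cost of the two-segment path start -> via -> goal (length + collision penalty)."""
    length = np.linalg.norm(via - start) + np.linalg.norm(goal - via)
    penalty = 0.0
    for a, b in ((start, via), (via, goal)):
        for u in np.linspace(0.0, 1.0, 20):        # sample points along the segment
            d = np.linalg.norm((1 - u) * a + u * b - obstacle)
            if d < radius:
                penalty += 100.0 * (radius - d)    # penalize obstacle intrusion
    return length + penalty

rng = np.random.default_rng(3)
n, dim = 30, 2
pos = rng.uniform(-2.0, 12.0, (n, dim))            # particle positions (via-points)
vel = np.zeros((n, dim))
pbest, pbest_cost = pos.copy(), np.array([path_cost(p) for p in pos])

for _ in range(100):
    g = pbest[pbest_cost.argmin()]                 # global best via-point
    vel = (0.7 * vel + 1.5 * rng.random((n, dim)) * (pbest - pos)
                     + 1.5 * rng.random((n, dim)) * (g - pos))
    pos = pos + vel
    cost = np.array([path_cost(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]

best = pbest[pbest_cost.argmin()]
```

    The swarm converges on a via-point that skirts the obstacle, trading a slightly longer path for zero collision penalty.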

  8. Microscopic optical path length difference and polarization measurement system for cell analysis

    NASA Astrophysics Data System (ADS)

    Satake, H.; Ikeda, K.; Kowa, H.; Hoshiba, T.; Watanabe, E.

    2018-03-01

    In recent years, noninvasive, nonstaining, and nondestructive quantitative cell measurement techniques have become increasingly important in the medical field. These cell measurement techniques enable the quantitative analysis of living cells, and are therefore applied to various cell identification processes, such as those determining the passage number limit during cell culturing in regenerative medicine. To enable cell measurement, we developed a quantitative microscopic phase imaging system based on a Mach-Zehnder interferometer that measures the optical path length difference distribution without phase unwrapping using optical phase locking. The applicability of our phase imaging system was demonstrated by successful identification of breast cancer cells amongst normal cells. However, the cell identification method using this phase imaging system exhibited a false identification rate of approximately 7%. In this study, we implemented a polarimetric imaging system by introducing a polarimetric module to one arm of the Mach-Zehnder interferometer of our conventional phase imaging system. This module consisted of a quarter wave plate and a rotational polarizer on the illumination side of the sample, and a linear polarizer on the optical detector side. In addition, we developed correction methods for the measurement errors of the optical path length and birefringence phase differences that arose through the influence of elements other than cells, such as the Petri dish. As the Petri dish holding the fluid specimens was transparent, it did not affect the amplitude information; however, the optical path length and birefringence phase differences were affected. Therefore, we proposed correction of the optical path length and birefringence phase for the influence of elements other than cells, as a prerequisite for obtaining highly precise phase and polarimetric images.

  9. Method for detection of dental caries and periodontal disease using optical imaging

    DOEpatents

    Nathel, H.; Kinney, J.H.; Otis, L.L.

    1996-10-29

    A method is disclosed for detecting the presence of active and inactive caries in teeth and diagnosing periodontal disease using non-ionizing radiation with techniques for reducing interference from scattered light. A beam of non-ionizing radiation is divided into sample and reference beams. The region to be examined is illuminated by the sample beam, and reflected or transmitted radiation from the sample is recombined with the reference beam to form an interference pattern on a detector. The length of the reference beam path is adjustable, allowing the operator to select the reflected or transmitted sample photons that recombine with the reference photons. Thus radiation scattered by the dental or periodontal tissue can be prevented from obscuring the interference pattern. A series of interference patterns may be generated and interpreted to locate dental caries and periodontal tissue interfaces. 7 figs.

  10. Statistical multi-path exposure method for assessing the whole-body SAR in a heterogeneous human body model in a realistic environment.

    PubMed

    Vermeeren, Günter; Joseph, Wout; Martens, Luc

    2013-04-01

    Assessing the whole-body absorption in a human in a realistic environment requires a statistical approach covering all possible exposure situations. This article describes the development of a statistical multi-path exposure method for heterogeneous realistic human body models. The method is applied to the 6-year-old Virtual Family boy (VFB) exposed to the GSM downlink at 950 MHz. It is shown that the whole-body SAR does not differ significantly over the different environments at an operating frequency of 950 MHz. Furthermore, the whole-body SAR in the VFB for multi-path exposure exceeds the whole-body SAR for worst-case single-incident plane wave exposure by 3.6%. Moreover, the ICNIRP reference levels are not conservative with respect to the basic restrictions in 0.3% of the exposure samples for the VFB at the GSM downlink of 950 MHz. The homogeneous spheroid with the dielectric properties of the head suggested by the IEC underestimates the absorption compared to realistic human body models. Moreover, the variation in the whole-body SAR for realistic human body models is larger than for homogeneous spheroid models. This is mainly due to the heterogeneity of the tissues and the irregular shape of the realistic human body model compared to homogeneous spheroid human body models. Copyright © 2012 Wiley Periodicals, Inc.

  11. A Path Model of Job Stress Using Thai Job Content Questionnaire (Thai-JCQ) among Thai Immigrant Employees at the Central Region of Thailand

    PubMed Central

    KAEWANUCHIT, Chonticha; SAWANGDEE, Yothin

    2016-01-01

    Background: The aim of this study was to verify a path model of job stress using the Thai-JCQ. Methods: The population of this cross-sectional study was 800 immigrant employees in the central region of Thailand in 2015, selected by stratified random sampling. Both applied and standard questionnaires were used as instruments. Job stress was measured using the Thai-JCQ, which deals with psychosocial work factors. A path model of job stress using the Thai-JCQ was verified using M-plus. Results: The variables explained 22.2% of the change in job stress. Working conditions, job security, and workload had direct effects on job stress, while workload had an indirect effect as well. Wages did not have any significant effect. Conclusion: The results of this study have implications for public health under occupational health research and practice by making public health and occupational health professionals aware of the importance of a comprehensive approach to job stress prevention in this vulnerable population. PMID:27928528
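
    The direct/indirect decomposition used in such path models can be sketched with simulated data: the indirect effect of a predictor through a mediator is the product of the two component regression slopes. The variable names and coefficients below are synthetic illustrations, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 800
workload = rng.normal(size=n)
conditions = 0.5 * workload + rng.normal(scale=0.8, size=n)            # mediator
stress = 0.4 * workload + 0.3 * conditions + rng.normal(scale=0.7, size=n)

def ols_slopes(X, y):
    """Least-squares coefficients of y on the columns of X (intercept dropped)."""
    A = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(A, y, rcond=None)[0][1:]

a = ols_slopes(workload, conditions)[0]            # predictor -> mediator path
b, direct = ols_slopes(np.column_stack([conditions, workload]), stress)
indirect = a * b                                   # indirect effect via the mediator
total = direct + indirect                          # total effect on the outcome
```

    Dedicated SEM software such as M-plus fits all paths simultaneously with fit indices and standard errors; the two-regression product shown here is only the conceptual core.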

  12. The prediction of postpartum depression: The role of the PRECEDE model and health locus of control

    PubMed Central

    Moshki, Mahdi; Kharazmi, Akram; Cheravi, Khadijeh; Beydokhti, Tahereh Baloochi

    2015-01-01

    Background: The main purpose of this study was to investigate the effect of the PRECEDE model and health locus of control (HLC) on postpartum depression. This study used path analysis to test the pattern of causal relations through the correlation coefficients. Materials and Method: The participants included 230 pregnant women in the north-east of Iran who were selected by convenience sampling. To analyze the data, Pearson correlation and path analysis were applied to examine the relationships between variables using SPSS 20 and LISREL 8.50 software. Results: The path analysis showed that a positive correlation exists between predisposing (knowledge, internal HLC, powerful others HLC, chance HLC), enabling, and reinforcing factors and postpartum depression as measured by GHQ score (GFI = 1, RMSEA = 0.000). Conclusion: The current study supported the application of the PRECEDE model and HLC in understanding health-promoting behaviors in mental health and demonstrated their relationships with postpartum depression. PMID:26288792

  13. An updated system for guidance of heterogeneous platforms used for multiple gliders in a real-time experiment

    NASA Astrophysics Data System (ADS)

    Smedstad, L.; Barron, C. N.; Book, J. W.; Osborne, J. J.; Souopgui, I.; Rice, A. E.; Linzell, R. S.

    2017-12-01

    The Guidance of Heterogeneous Observation Systems (GHOST) is a tool designed to sample ocean model outputs to determine a suite of possible path options for unmanned platforms. The system is built around a Runge-Kutta method to determine all possible paths, followed by a cost function calculation, enforcement of a safe operating area, and an analysis that retains and ranks the paths falling within the top 10% of cost function values. A field experiment took place from 16 May until 5 June 2017 aboard the R/V Savannah operating out of the Duke University Marine Laboratory (DUML) in Beaufort, NC. Gliders were deployed in alternating groups with missions defined by one of two possible categories: a station-keeping array and a moving array. Unlike previous versions of the software, which monitored platforms individually, these gliders were placed in groups of 2-5 with the same tasks. Daily runs of the GHOST software were performed for each mission category and for two different 1 km orientations of the Navy Coastal Ocean Model (NCOM). By limiting the number of trial solutions and sorting through the best results, a quick turnaround was possible for glider operators to determine waypoints in order to remain in desired areas or to move along paths that sampled areas of highest thermohaline variability. Limiting risk by restricting solutions to defined areas with statistically less likely occurrences of high ocean currents was an important consideration in this study area, located just inshore of the Gulf Stream.
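    The filter-and-rank step described in the abstract can be sketched as follows. This is an illustrative reconstruction, not GHOST's actual code; `paths`, `cost_fn`, and `is_safe` are placeholder inputs standing in for the candidate trajectories, the cost function, and the safe-operating-area check.

```python
import math

def rank_candidate_paths(paths, cost_fn, is_safe, keep_fraction=0.10):
    """Drop candidate paths that leave the safe operating area, then keep
    the best `keep_fraction` of the remainder by cost function value
    (lower cost = better). Placeholder names, not the GHOST API."""
    safe = [p for p in paths if is_safe(p)]
    scored = sorted(safe, key=cost_fn)
    n_keep = max(1, math.ceil(keep_fraction * len(scored)))
    return scored[:n_keep]
```

Operators would then pick waypoints from the small surviving set, which is what makes the daily turnaround quick.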

  14. Dynamic compression of copper to over 450 GPa: A high-pressure standard

    DOE PAGES

    Kraus, R. G.; Davis, J. -P.; Seagle, C. T.; ...

    2016-04-12

    We obtained an absolute stress-density path for shocklessly compressed copper to over 450 GPa. A magnetic pressure drive is temporally tailored to generate shockless compression waves through over 2.5-mm-thick copper samples. Furthermore, the free-surface velocity data are analyzed for Lagrangian sound velocity using the iterative Lagrangian analysis (ILA) technique, which relies upon the method of characteristics. We correct for the effects of strength and plastic work heating to determine an isentropic compression path. By assuming a Debye model for the heat capacity, we can further correct the isentrope to an isotherm. Finally, our determination of the isentrope and isotherm of copper represents a highly accurate pressure standard for copper to over 450 GPa.

  15. Measurement of refractive index of photopolymer for holographic gratings

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Mizuno, Jun; Fujikawa, Chiemi; Kodate, Kashiko

    2007-02-01

    We have made attempts to directly measure the small-scale variation of optical path lengths in photopolymer samples. For samples with uniform thickness, the measured quantity is expected to be proportional to the refractive index of the photopolymer. The system is based on a Mach-Zehnder interferometer using a phase-locking technique and measures the change in optical path length while the sample is scanned across the optical axis. The spatial resolution is estimated to be 2 μm, which is limited by the sample thickness. The path length resolution is estimated to be 6 nm, which corresponds to a change in refractive index of less than 10^-3 for a sample 10 μm thick. The measurement results showed clearly that the refractive index of photopolymer is not simply proportional to the exposure energy, in contrast to conventional photosensitive materials such as silver halide emulsion and dichromated gelatine. They also revealed refractive index fluctuations in uniformly exposed photopolymer samples, which explain the milky appearance sometimes observed in thick samples.

  16. Assessment of Fecal Exposure Pathways in Low-Income Urban Neighborhoods in Accra, Ghana: Rationale, Design, Methods, and Key Findings of the SaniPath Study

    PubMed Central

    Robb, Katharine; Null, Clair; Teunis, Peter; Yakubu, Habib; Armah, George; Moe, Christine L.

    2017-01-01

    Abstract. Rapid urbanization has contributed to an urban sanitation crisis in low-income countries. Residents in low-income, urban neighborhoods often have poor sanitation infrastructure and services and may experience frequent exposure to fecal contamination through a range of pathways. There are few data to prioritize strategies to decrease exposure to fecal contamination in these complex and highly contaminated environments, and public health priorities are rarely considered when planning urban sanitation investments. The SaniPath Study addresses this need by characterizing pathways of exposure to fecal contamination. Over a 16-month period, an in-depth, interdisciplinary exposure assessment was conducted in both public and private domains of four neighborhoods in Accra, Ghana. Microbiological analyses of environmental samples and behavioral data collection techniques were used to quantify fecal contamination in the environment and characterize the behaviors of adults and children associated with exposure to fecal contamination. Environmental samples (n = 1,855) were collected and analyzed for fecal indicators and enteric pathogens. A household survey with 800 respondents and over 500 hours of structured observation of young children were conducted. Approximately 25% of environmental samples were collected in conjunction with structured observations (n = 441 samples). The results of the study highlight widespread and often high levels of fecal contamination in both public and private domains and the food supply. The dominant fecal exposure pathway for young children in the household was through consumption of uncooked produce. The SaniPath Study provides critical information on exposure to fecal contamination in low-income, urban environments and ultimately can inform investments and policies to reduce these public health risks. PMID:28722599

  17. The Relationship between Experiences of Discrimination and Mental Health among Lesbians and Gay Men: An Examination of Internalized Homonegativity and Rejection Sensitivity as Potential Mechanisms

    ERIC Educational Resources Information Center

    Feinstein, Brian A.; Goldfried, Marvin R.; Davila, Joanne

    2012-01-01

    Objective: The current study used path analysis to examine potential mechanisms through which experiences of discrimination influence depressive and social anxiety symptoms. Method: The sample included 218 lesbians and 249 gay men (total N = 467) who participated in an online survey about minority stress and mental health. The proposed model…

  18. A Statistical Method for Synthesizing Mediation Analyses Using the Product of Coefficient Approach Across Multiple Trials

    PubMed Central

    Huang, Shi; MacKinnon, David P.; Perrino, Tatiana; Gallo, Carlos; Cruden, Gracelyn; Brown, C Hendricks

    2016-01-01

    Mediation analysis often requires larger sample sizes than main effect analysis to achieve the same statistical power. Combining results across similar trials may be the only practical option for increasing statistical power for mediation analysis in some situations. In this paper, we propose a method to estimate: 1) marginal means for mediation path a, the relation of the independent variable to the mediator; 2) marginal means for path b, the relation of the mediator to the outcome, across multiple trials; and 3) the between-trial level variance-covariance matrix based on a bivariate normal distribution. We present the statistical theory and an R computer program to combine regression coefficients from multiple trials to estimate a combined mediated effect and confidence interval under a random effects model. Values of coefficients a and b, along with their standard errors from each trial are the input for the method. This marginal likelihood based approach with Monte Carlo confidence intervals provides more accurate inference than the standard meta-analytic approach. We discuss computational issues, apply the method to two real-data examples and make recommendations for the use of the method in different settings. PMID:28239330
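    A minimal sketch of the pooling idea described above, with hypothetical inputs. The paper's full method is a marginal-likelihood random-effects model that also estimates the between-trial variance-covariance matrix; the sketch below keeps only two ingredients it shares with that method: inverse-variance pooling of the per-trial a and b coefficients, and a MacKinnon-style Monte Carlo confidence interval for the pooled product a*b.

```python
import random

def combined_mediated_effect(a_list, se_a, b_list, se_b,
                             n_draws=100_000, seed=0):
    """Pool path-a and path-b coefficients across trials by inverse-variance
    weighting, then form a Monte Carlo CI for the mediated effect a*b.
    Simplified: ignores between-trial heterogeneity and a-b covariance."""
    wa = [1.0 / s ** 2 for s in se_a]
    wb = [1.0 / s ** 2 for s in se_b]
    a_hat = sum(w * a for w, a in zip(wa, a_list)) / sum(wa)
    b_hat = sum(w * b for w, b in zip(wb, b_list)) / sum(wb)
    se_a_hat = (1.0 / sum(wa)) ** 0.5
    se_b_hat = (1.0 / sum(wb)) ** 0.5
    rng = random.Random(seed)
    # Monte Carlo CI: sample a and b from their pooled sampling
    # distributions and take quantiles of the product
    prods = sorted(rng.gauss(a_hat, se_a_hat) * rng.gauss(b_hat, se_b_hat)
                   for _ in range(n_draws))
    lo = prods[int(0.025 * n_draws)]
    hi = prods[int(0.975 * n_draws)]
    return a_hat * b_hat, (lo, hi)
```

The product-of-normals CI is asymmetric, which is why a Monte Carlo quantile interval is preferred over a symmetric normal approximation here.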

  19. Communication: importance sampling including path correlation in semiclassical initial value representation calculations for time correlation functions.

    PubMed

    Pan, Feng; Tao, Guohua

    2013-03-07

    Full semiclassical (SC) initial value representation (IVR) for time correlation functions involves a double phase space average over a set of two phase points, each of which evolves along a classical path. Conventionally, the two initial phase points are sampled independently for all degrees of freedom (DOF) in the Monte Carlo procedure. Here, we present an efficient importance sampling scheme by including the path correlation between the two initial phase points for the bath DOF, which greatly improves the performance of the SC-IVR calculations for large molecular systems. Satisfactory convergence in the study of quantum coherence in vibrational relaxation has been achieved for a benchmark system-bath model with up to 21 DOF.
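    The core sampling idea, correlated rather than independent draws of the two initial phase points for the bath degrees of freedom, can be illustrated with a simple Gaussian sketch. This is not the actual SC-IVR code; the distribution, `rho`, and `sigma` are illustrative assumptions.

```python
import math
import random

def sample_phase_pair(n_bath, rho, sigma=1.0, rng=None):
    """Draw two sets of bath initial conditions with correlation `rho`
    between the pair, instead of sampling the two phase points
    independently. Illustrative sketch of path-correlated sampling."""
    rng = rng or random.Random(0)
    p1 = [rng.gauss(0.0, sigma) for _ in range(n_bath)]
    noise = [rng.gauss(0.0, sigma) for _ in range(n_bath)]
    # p2 is marginally N(0, sigma^2) but correlated with p1
    p2 = [rho * x + math.sqrt(1.0 - rho ** 2) * e
          for x, e in zip(p1, noise)]
    return p1, p2
```

Because the two phase-space averages involve nearly cancelling oscillatory phases, correlated draws like this can dramatically reduce Monte Carlo variance relative to independent sampling.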

  20. Simulating Mission Command for Planning and Analysis

    DTIC Science & Technology

    2015-06-01

    mission plan. 14. SUBJECT TERMS: Mission Planning, CPM, PERT, Simulation, DES, Simkit, Triangle Distribution, Critical Path. 15. NUMBER OF... Battalion Task Force; CO Company; CPM Critical Path Method; DES Discrete Event Simulation; FA BAT Field Artillery Battalion; FEL Future Event List; FIST... management tools that can be utilized to find the critical path in military projects. These are the Critical Path Method (CPM) and the Program Evaluation and
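    The Critical Path Method named in this record reduces to a forward pass (earliest start/finish) and a backward pass (latest start/finish) over the activity network; activities with zero total float form the critical path. A minimal sketch (illustrative activity data, positive durations, acyclic network assumed):

```python
def critical_path(activities):
    """Minimal CPM sketch. `activities` maps name -> (duration, [predecessors]).
    Returns (project_duration, names of critical activities)."""
    # forward pass: earliest start (es) / earliest finish (ef)
    es, ef = {}, {}
    remaining = dict(activities)
    while remaining:
        for name, (dur, preds) in list(remaining.items()):
            if all(p in ef for p in preds):
                es[name] = max((ef[p] for p in preds), default=0)
                ef[name] = es[name] + dur
                del remaining[name]
    duration = max(ef.values())
    # backward pass: latest finish (lf) / latest start (ls),
    # processing activities in decreasing earliest-finish order
    lf, ls = {}, {}
    for name in sorted(ef, key=ef.get, reverse=True):
        dur, _ = activities[name]
        succs = [s for s, (_, ps) in activities.items() if name in ps]
        lf[name] = min((ls[s] for s in succs), default=duration)
        ls[name] = lf[name] - dur
    # zero total float (ls == es) marks the critical path
    critical = [n for n in activities if ls[n] == es[n]]
    return duration, critical
```

For example, with A(3d) feeding B(2d) and C(4d), both feeding D(1d), the critical path is A-C-D with an 8-day duration.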

  1. Quantifying the fate of agricultural nitrogen in an unconfined aquifer: Stream-based observations at three measurement scales

    NASA Astrophysics Data System (ADS)

    Gilmore, Troy E.; Genereux, David P.; Solomon, D. Kip; Solder, John E.; Kimball, Briant A.; Mitasova, Helena; Birgand, François

    2016-03-01

    We compared three stream-based sampling methods to study the fate of nitrate in groundwater in a coastal plain watershed: point measurements beneath the streambed, seepage blankets (novel seepage-meter design), and reach mass-balance. The methods gave similar mean groundwater seepage rates into the stream (0.3-0.6 m/d) during two 3-4 day field campaigns despite an order of magnitude difference in stream discharge between the campaigns. At low flow, estimates of flow-weighted mean nitrate concentrations in groundwater discharge ([NO3-]FWM) and nitrate flux from groundwater to the stream decreased with increasing degree of channel influence and measurement scale, i.e., [NO3-]FWM was 654, 561, and 451 µM for point, blanket, and reach mass-balance sampling, respectively. At high flow the trend was reversed, likely because reach mass-balance captured inputs from shallow transient high-nitrate flow paths while point and blanket measurements did not. Point sampling may be better suited to estimating aquifer discharge of nitrate, while reach mass-balance reflects full nitrate inputs into the channel (which at high flow may be more than aquifer discharge due to transient flow paths, and at low flow may be less than aquifer discharge due to channel-based nitrate removal). Modeling dissolved N2 from streambed samples suggested (1) about half of groundwater nitrate was denitrified prior to discharge from the aquifer, and (2) both extent of denitrification and initial nitrate concentration in groundwater (700-1300 µM) were related to land use, suggesting these forms of streambed sampling for groundwater can reveal watershed spatial relations relevant to nitrate contamination and fate in the aquifer.
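    The flow-weighted mean concentration used above is the discharge-weighted average over the sampling points, [NO3-]_FWM = Σ(q_i · c_i) / Σ(q_i). A one-line sketch (the seepage rates and concentrations below are illustrative, not the study's data):

```python
def flow_weighted_mean(seepage_rates, concentrations):
    """Flow-weighted mean concentration of groundwater discharge:
    each point's concentration weighted by its seepage rate q_i,
    [NO3-]_FWM = sum(q_i * c_i) / sum(q_i)."""
    total_q = sum(seepage_rates)
    return sum(q * c for q, c in zip(seepage_rates, concentrations)) / total_q
```

Weighting by seepage rate matters because high-flux points contribute disproportionately to the nitrate load delivered to the stream.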

  2. Spatial connectivity in a highly heterogeneous aquifer: From cores to preferential flow paths

    USGS Publications Warehouse

    Bianchi, M.; Zheng, C.; Wilson, C.; Tick, G.R.; Liu, Gaisheng; Gorelick, S.M.

    2011-01-01

    This study investigates connectivity in a small portion of the extremely heterogeneous aquifer at the Macrodispersion Experiment (MADE) site in Columbus, Mississippi. A total of 19 fully penetrating soil cores were collected from a rectangular grid of 4 m by 4 m. Detailed grain size analysis was performed on 5 cm segments of each core, yielding 1740 hydraulic conductivity (K) estimates. Three different geostatistical simulation methods were used to generate 3-D conditional realizations of the K field for the sampled block. Particle tracking calculations showed that the fastest particles, as represented by the first 5% to arrive, converge along preferential flow paths and exit the model domain within preferred areas. These 5% fastest flow paths accounted for about 40% of the flow. The distribution of preferential flow paths and particle exit locations is clearly influenced by the occurrence of clusters formed by interconnected cells with K equal to or greater than the 0.9 decile of the data distribution (10% of the volume). The fraction of particle paths within the high-K clusters ranges from 43% to 69%. In variogram-based K fields, some of the fastest paths are through media with lower K values, suggesting that transport connectivity may not require fully connected zones of relatively homogenous K. The high degree of flow and transport connectivity was confirmed by the values of two groups of connectivity indicators. In particular, the ratio between effective and geometric mean K (on average, about 2) and the ratio between the average arrival time and the arrival time of the fastest particles (on average, about 9) are consistent with flow and advective transport behavior characterized by channeling along preferential flow paths. © 2011 by the American Geophysical Union.
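    The two indicator ratios quoted at the end of this abstract can be computed directly from a K data set and particle arrival times. A simplified sketch (illustrative inputs; the study's own definitions may differ in detail, e.g. in how the effective K is obtained from the flow solution):

```python
import math

def connectivity_indicators(k_values, k_effective, arrival_times,
                            fastest_fraction=0.05):
    """Two simplified connectivity indicators: (1) ratio of effective K to
    the geometric mean of the K data, and (2) ratio of the mean particle
    arrival time to the mean arrival time of the fastest 5% of particles.
    Values well above 1 are consistent with channeling along
    preferential flow paths."""
    k_geom = math.exp(sum(math.log(k) for k in k_values) / len(k_values))
    times = sorted(arrival_times)
    n_fast = max(1, int(fastest_fraction * len(times)))
    t_fast = sum(times[:n_fast]) / n_fast
    t_mean = sum(times) / len(times)
    return k_effective / k_geom, t_mean / t_fast
```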

  3. ADAPTIVE METHODS FOR STOCHASTIC DIFFERENTIAL EQUATIONS VIA NATURAL EMBEDDINGS AND REJECTION SAMPLING WITH MEMORY.

    PubMed

    Rackauckas, Christopher; Nie, Qing

    2017-01-01

    Adaptive time-stepping with high-order embedded Runge-Kutta pairs and rejection sampling provides efficient approaches for solving differential equations. While many such methods exist for solving deterministic systems, little progress has been made for stochastic variants. One challenge in developing adaptive methods for stochastic differential equations (SDEs) is the construction of embedded schemes with direct error estimates. We present a new class of embedded stochastic Runge-Kutta (SRK) methods with strong order 1.5 which have a natural embedding of strong order 1.0 methods. This allows for the derivation of an error estimate which requires no additional function evaluations. Next we derive a general method to reject the time steps without losing information about the future Brownian path termed Rejection Sampling with Memory (RSwM). This method utilizes a stack data structure to do rejection sampling, costing only a few floating point calculations. We show numerically that the methods generate statistically-correct and tolerance-controlled solutions. Lastly, we show that this form of adaptivity can be applied to systems of equations, and demonstrate that it solves a stiff biological model 12.28x faster than common fixed timestep algorithms. Our approach only requires the solution to a bridging problem and thus lends itself to natural generalizations beyond SDEs.

  4. ADAPTIVE METHODS FOR STOCHASTIC DIFFERENTIAL EQUATIONS VIA NATURAL EMBEDDINGS AND REJECTION SAMPLING WITH MEMORY

    PubMed Central

    Rackauckas, Christopher

    2017-01-01

    Adaptive time-stepping with high-order embedded Runge-Kutta pairs and rejection sampling provides efficient approaches for solving differential equations. While many such methods exist for solving deterministic systems, little progress has been made for stochastic variants. One challenge in developing adaptive methods for stochastic differential equations (SDEs) is the construction of embedded schemes with direct error estimates. We present a new class of embedded stochastic Runge-Kutta (SRK) methods with strong order 1.5 which have a natural embedding of strong order 1.0 methods. This allows for the derivation of an error estimate which requires no additional function evaluations. Next we derive a general method to reject the time steps without losing information about the future Brownian path termed Rejection Sampling with Memory (RSwM). This method utilizes a stack data structure to do rejection sampling, costing only a few floating point calculations. We show numerically that the methods generate statistically-correct and tolerance-controlled solutions. Lastly, we show that this form of adaptivity can be applied to systems of equations, and demonstrate that it solves a stiff biological model 12.28x faster than common fixed timestep algorithms. Our approach only requires the solution to a bridging problem and thus lends itself to natural generalizations beyond SDEs. PMID:29527134

  5. Heterogeneous path ensembles for conformational transitions in semi-atomistic models of adenylate kinase

    PubMed Central

    Bhatt, Divesh; Zuckerman, Daniel M.

    2010-01-01

    We performed "weighted ensemble" path-sampling simulations of adenylate kinase, using several semi-atomistic protein models. The models have an all-atom backbone with various levels of residue interactions. The primary result is that full statistically rigorous path sampling required only a few weeks of single-processor computing time with these models, indicating that the addition of further chemical detail should be readily feasible. Our semi-atomistic path ensembles are consistent with previous biophysical findings: the presence of two distinct pathways, identification of intermediates, and symmetry of forward and reverse pathways. PMID:21660120

  6. Effect of geometrical parameters on pressure distributions of impulse manufacturing technologies

    NASA Astrophysics Data System (ADS)

    Brune, Ryan Carl

    Impulse manufacturing techniques constitute a growing field of methods that utilize high-intensity pressure events to conduct useful mechanical operations. As interest in applying this technology continues to grow, greater understanding must be achieved with respect to output pressure events in both magnitude and distribution. To address this need, a novel pressure measurement method has been developed, the Profile Indentation Pressure Evaluation (PIPE) method, which systematically analyzes indentation patterns created with impulse events. Correlation with quasi-static test data and use of software-assisted analysis techniques allow colorized pressure maps to be generated for both electromagnetic and vaporizing foil actuator (VFA) impulse forming events. Development of this technique aided the introduction of a design method for electromagnetic path actuator systems, where key geometrical variables are considered using a newly developed analysis method called the Path Actuator Proximal Array (PAPA) pressure model. This model considers key current distribution and proximity effects and interprets generated pressure by treating the adjacent conductor surfaces as proximal arrays of individual conductors. According to PIPE output pressure analysis, the PAPA model provides a reliable prediction of generated pressure for path actuator systems as local geometry is changed. Associated mechanical calculations allow pressure requirements to be calculated for shearing, flanging, and hemming operations, providing a design process for such cases. Additionally, the effect of geometry is investigated through a formability enhancement study using VFA metalworking techniques. A conical die assembly is utilized with both VFA high velocity and traditional quasi-static test methods on varied Hasek-type sample geometries to elicit strain states consistent with different locations on a forming limit diagram. 
Digital image correlation techniques are utilized to measure major and minor strains for each sample type to compare limit strain results. Overall testing indicated decreased formability at high velocity for 304 DDQ stainless steel and increased formability at high velocity for 3003-H14 aluminum. Microstructural and fractographic analysis helped dissect and analyze the observed differences in these cases. Overall, these studies comprehensively explore the effects of geometrical parameters on magnitude and distribution of impulse manufacturing generated pressure, establishing key guidelines and models for continued development and implementation in commercial applications.

  7. A True Eddy Accumulation - Eddy Covariance hybrid for measurements of turbulent trace gas fluxes

    NASA Astrophysics Data System (ADS)

    Siebicke, Lukas

    2016-04-01

    Eddy covariance (EC) is state-of-the-art in directly and continuously measuring turbulent fluxes of carbon dioxide and water vapor. However, low signal-to-noise ratios, high flow rates and missing or complex gas analyzers limit its application to a few scalars. True eddy accumulation, based on conditional sampling ideas by Desjardins in 1972, requires no fast response analyzers and is therefore potentially applicable to a wider range of scalars. Recently we showed possibly the first successful implementation of True Eddy Accumulation (TEA) measuring net ecosystem exchange of carbon dioxide of a grassland. However, most accumulation systems share the complexity of having to store discrete air samples in physical containers representing entire flux averaging intervals. The current study investigates merging principles of eddy accumulation and eddy covariance, which we here refer to as "true eddy accumulation in transient mode" (TEA-TM). This direct flux method TEA-TM combines true eddy accumulation with continuous sampling. The TEA-TM setup is simpler than discrete accumulation methods while avoiding the need for fast response gas analyzers and high flow rates required for EC. We implemented the proposed TEA-TM method and measured fluxes of carbon dioxide (CO2), methane (CH4) and water vapor (H2O) above a mixed beech forest at the Hainich Fluxnet and ICOS site, Germany, using a G2301 laser spectrometer (Picarro Inc., USA). We further simulated a TEA-TM sampling system using measured high frequency CO2 time series from an open-path gas analyzer. We operated TEA-TM side-by-side with open-, enclosed- and closed-path EC flux systems for CO2, H2O and CH4 (LI-7500, LI-7200, LI-6262, LI-7700, Licor, USA, and FGGA LGR, USA). First results show that TEA-TM CO2 fluxes were similar to EC fluxes. Remaining differences were similar to those between the three eddy covariance setups (open-, enclosed- and closed-path gas analyzers). 
Measured TEA-TM CO2 fluxes from our physical sampling system closely reproduced dynamics of simulated TEA-TM fluxes. In conclusion this study introduces a new approach to trace gas flux measurements using transient-mode true eddy accumulation. First TEA-TM CO2 fluxes compared favorably with side-by-side EC fluxes, in agreement with our previous experiments comparing discrete TEA to EC. True eddy accumulation has thus potential for measuring turbulent fluxes of a range of atmospheric tracers using slow response analyzers.

  8. Updating the 2001 National Land Cover Database land cover classification to 2006 by using Landsat imagery change detection methods

    USGS Publications Warehouse

    Xian, George; Homer, Collin G.; Fry, Joyce

    2009-01-01

    The recent release of the U.S. Geological Survey (USGS) National Land Cover Database (NLCD) 2001, which represents the nation's land cover status based on a nominal date of 2001, is widely used as a baseline for national land cover conditions. To enable the updating of this land cover information in a consistent and continuous manner, a prototype method was developed to update land cover by an individual Landsat path and row. This method updates NLCD 2001 to a nominal date of 2006 by using both Landsat imagery and data from NLCD 2001 as the baseline. Pairs of Landsat scenes in the same season in 2001 and 2006 were acquired according to satellite paths and rows and normalized to allow calculation of change vectors between the two dates. Conservative thresholds based on Anderson Level I land cover classes were used to segregate the change vectors and determine areas of change and no-change. Once change areas had been identified, land cover classifications at the full NLCD resolution for 2006 areas of change were completed by sampling from NLCD 2001 in unchanged areas. Methods were developed and tested across five Landsat path/row study sites that contain several metropolitan areas including Seattle, Washington; San Diego, California; Sioux Falls, South Dakota; Jackson, Mississippi; and Manchester, New Hampshire. Results from the five study areas show that the vast majority of land cover change was captured and updated with overall land cover classification accuracies of 78.32%, 87.5%, 88.57%, 78.36%, and 83.33% for these areas. The method optimizes mapping efficiency and has the potential to provide users a flexible method to generate updated land cover at national and regional scales by using NLCD 2001 as the baseline.
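    The change-detection core of the method above, comparing normalized Landsat scenes from the two dates and flagging pixels whose change vector exceeds a conservative threshold, can be sketched as follows. This is a simplified illustration: the actual method segregates change vectors with class-dependent thresholds per Anderson Level I class, while the sketch uses a single threshold, and the pixel layout is a placeholder.

```python
import math

def change_mask(img_2001, img_2006, threshold):
    """Flag potential land cover change pixels. Each image is a list of
    pixels, each pixel a tuple of (already normalized) band values; a
    pixel is flagged when the Euclidean length of its spectral change
    vector between the two dates exceeds the threshold."""
    return [math.dist(p01, p06) > threshold
            for p01, p06 in zip(img_2001, img_2006)]
```

Flagged pixels would then be reclassified for 2006, while unflagged pixels simply carry their NLCD 2001 label forward.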

  9. Time signal distribution in communication networks based on synchronous digital hierarchy

    NASA Technical Reports Server (NTRS)

    Imaoka, Atsushi; Kihara, Masami

    1993-01-01

    A new method that uses round-trip paths to accurately measure transmission delay for time synchronization is proposed. The performance of the method in Synchronous Digital Hierarchy (SDH) networks is discussed. The key feature of this method is that it separately measures the initial round-trip path delay and the variations in round-trip path delay. The delay generated in SDH equipment is determined by measuring the initial round-trip path delay. In an experiment with actual SDH equipment, the error of the initial delay measurement was suppressed to 30 ns.
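    As background for the round-trip idea, the generic symmetric-path calculation (the same arithmetic NTP uses, not the paper's SDH-specific procedure) recovers both one-way delay and clock offset from four timestamps:

```python
def round_trip_sync(t1, t2, t3, t4):
    """Generic round-trip time-transfer sketch: t1/t4 are local send and
    receive times, t2/t3 are remote receive and send times. Assuming the
    forward and return paths have equal delay, returns
    (one_way_delay, clock_offset) of the remote clock vs. the local one."""
    round_trip = (t4 - t1) - (t3 - t2)   # total path delay, both directions
    delay = round_trip / 2.0             # symmetric-path assumption
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    return delay, offset
```

The paper's refinement is to measure the full round-trip delay once at setup and thereafter track only its variations, which keeps the measurement error small (30 ns in the reported experiment).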

  10. Structure-guided Protein Transition Modeling with a Probabilistic Roadmap Algorithm.

    PubMed

    Maximova, Tatiana; Plaku, Erion; Shehu, Amarda

    2016-07-07

    Proteins are macromolecules in perpetual motion, switching between structural states to modulate their function. A detailed characterization of the precise yet complex relationship between protein structure, dynamics, and function requires elucidating transitions between functionally-relevant states. Doing so challenges both wet and dry laboratories, as protein dynamics involves disparate temporal scales. In this paper we present a novel, sampling-based algorithm to compute transition paths. The algorithm exploits two main ideas. First, it leverages known structures to initialize its search and define a reduced conformation space for rapid sampling. This is key to address the insufficient sampling issue suffered by sampling-based algorithms. Second, the algorithm embeds samples in a nearest-neighbor graph where transition paths can be efficiently computed via queries. The algorithm adapts the probabilistic roadmap framework that is popular in robot motion planning. In addition to efficiently computing lowest-cost paths between any given structures, the algorithm allows investigating hypotheses regarding the order of experimentally-known structures in a transition event. This novel contribution is likely to open up new avenues of research. Detailed analysis is presented on multiple-basin proteins of relevance to human disease. Multiscaling and the AMBER ff14SB force field are used to obtain energetically-credible paths at atomistic detail.
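    The roadmap-query idea, embed samples in a nearest-neighbor graph and answer path queries over it, can be sketched in a few lines. This is an illustrative toy (points in R^n rather than protein conformations, Euclidean k-NN, a caller-supplied edge cost), not the paper's algorithm:

```python
import heapq
import math

def lowest_cost_path(samples, cost, k=4):
    """Probabilistic-roadmap-style sketch: connect each sample to its k
    nearest neighbors (graph symmetrized), weight edges with `cost`, and
    run Dijkstra from sample 0 to the last sample. Returns the list of
    sample indices along the lowest-cost path."""
    n = len(samples)
    # k-NN lists (index 0 of the sort is the sample itself, so skip it)
    nbrs = {i: sorted(range(n),
                      key=lambda j: math.dist(samples[i], samples[j]))[1:k + 1]
            for i in range(n)}
    best, prev = {0: 0.0}, {}
    pq = [(0.0, 0)]
    while pq:
        d, i = heapq.heappop(pq)
        if i == n - 1:
            break
        if d > best.get(i, math.inf):
            continue  # stale queue entry
        # symmetrize: neighbors of i, plus anyone listing i as a neighbor
        for j in set(nbrs[i]) | {m for m in nbrs if i in nbrs[m]}:
            nd = d + cost(samples[i], samples[j])
            if nd < best.get(j, math.inf):
                best[j], prev[j] = nd, i
                heapq.heappush(pq, (nd, j))
    path = [n - 1]
    while path[-1] != 0:
        path.append(prev[path[-1]])
    return path[::-1]
```

In the paper's setting the samples are conformations, distances are computed in a reduced conformation space, and edge costs reflect energetic credibility rather than geometry alone.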

  11. Computationally efficient characterization of potential energy surfaces based on fingerprint distances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaefer, Bastian; Goedecker, Stefan, E-mail: stefan.goedecker@unibas.ch

    2016-07-21

    An analysis of the network defined by the potential energy minima of multi-atomic systems and their connectivity via reaction pathways that go through transition states allows us to understand important characteristics like thermodynamic, dynamic, and structural properties. Unfortunately, computing the transition states and reaction pathways, in addition to the significant energetically low-lying local minima, is a computationally demanding task. We here introduce a computationally efficient method that is based on a combination of the minima hopping global optimization method and the insight that uphill barriers tend to increase with increasing structural distance between the educt and product states. This method allows us to replace the exact connectivity information and transition state energies with alternative and approximate concepts. Without adding any significant additional cost to the minima hopping global optimization approach, this method allows us to generate an approximate network of the minima, their connectivity, and a rough measure of the energy needed for their interconversion. This can be used to obtain a first qualitative idea of important physical and chemical properties by means of a disconnectivity graph analysis. Besides the physical insight obtained by such an analysis, the gained knowledge can be used to decide whether or not it is worthwhile to invest computational resources in an exact computation of the transition states and the reaction pathways. Furthermore, it is demonstrated that the method presented here can be used to find physically reasonable interconversion pathways that are promising input pathways for methods like transition path sampling or discrete path sampling.

  12. APPLYING OPEN-PATH OPTICAL SPECTROSCOPY TO HEAVY-DUTY DIESEL EMISSIONS

    EPA Science Inventory

    Non-dispersive infrared absorption has been used to measure gaseous emissions for both stationary and mobile sources. Fourier transform infrared spectroscopy has been used for stationary sources as both extractive and open-path methods. We have applied the open-path method for bo...

  13. Extended Phase-Space Methods for Enhanced Sampling in Molecular Simulations: A Review.

    PubMed

    Fujisaki, Hiroshi; Moritsugu, Kei; Matsunaga, Yasuhiro; Morishita, Tetsuya; Maragliano, Luca

    2015-01-01

    Molecular Dynamics simulations are a powerful approach to study biomolecular conformational changes or protein-ligand, protein-protein, and protein-DNA/RNA interactions. Straightforward applications, however, are often hampered by incomplete sampling, since in a typical simulated trajectory the system will spend most of its time trapped by high energy barriers in restricted regions of the configuration space. Over the years, several techniques have been designed to overcome this problem and enhance space sampling. Here, we review a class of methods that rely on the idea of extending the set of dynamical variables of the system by adding extra ones associated to functions describing the process under study. In particular, we illustrate the Temperature Accelerated Molecular Dynamics (TAMD), Logarithmic Mean Force Dynamics (LogMFD), and Multiscale Enhanced Sampling (MSES) algorithms. We also discuss combinations with techniques for searching reaction paths. We show the advantages presented by this approach and how it makes it possible to quickly sample important regions of the free-energy landscape via automatic exploration.

  14. Make the most of your samples: Bayes factor estimators for high-dimensional models of sequence evolution.

    PubMed

    Baele, Guy; Lemey, Philippe; Vansteelandt, Stijn

    2013-03-06

    Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model's marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. We here assess the original 'model-switch' path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model's marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. 
Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational effort to approximate the true value of the (log) Bayes factor than the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation.
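The stepping-stone idea above can be illustrated with a minimal sketch on a toy conjugate model where the power posteriors can be sampled exactly, so no MCMC is needed. All numbers, the cubic temperature schedule, and the function names here are illustrative choices, not the paper's settings:

```python
import math
import random

def log_norm_pdf(x, mu, var):
    """Log density of a normal distribution N(mu, var) at x."""
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

def stepping_stone_log_ml(y, K=50, n=2000, seed=1):
    """Stepping-stone estimate of the log marginal likelihood for the toy
    model: theta ~ N(0, 1), y | theta ~ N(theta, 1). The power posterior
    at temperature beta is N(beta*y/(1+beta), 1/(1+beta)), so it can be
    sampled directly. Each step averages importance ratios
    L(theta)^(b1 - b0) over draws from the colder power posterior."""
    rng = random.Random(seed)
    betas = [(k / K) ** 3 for k in range(K + 1)]  # skewed toward the prior
    log_ml = 0.0
    for k in range(K):
        b0, b1 = betas[k], betas[k + 1]
        mu = b0 * y / (1 + b0)
        sd = math.sqrt(1 / (1 + b0))
        logs = [(b1 - b0) * log_norm_pdf(y, rng.gauss(mu, sd), 1.0)
                for _ in range(n)]
        m = max(logs)  # log-sum-exp for numerical stability
        log_ml += m + math.log(sum(math.exp(v - m) for v in logs) / n)
    return log_ml

y = 1.3
est = stepping_stone_log_ml(y)
exact = log_norm_pdf(y, 0.0, 2.0)  # the marginal is N(0, 2) analytically
print(round(est, 3), round(exact, 3))
```

Because the marginal likelihood of this toy model is available in closed form, the estimate can be checked directly; a log Bayes factor between two models would be the difference of two such estimates, or one direct model-switch path.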

  15. Make the most of your samples: Bayes factor estimators for high-dimensional models of sequence evolution

    PubMed Central

    2013-01-01

    Background Accurate model comparison requires extensive computation times, especially for parameter-rich models of sequence evolution. In the Bayesian framework, model selection is typically performed through the evaluation of a Bayes factor, the ratio of two marginal likelihoods (one for each model). Recently introduced techniques to estimate (log) marginal likelihoods, such as path sampling and stepping-stone sampling, offer increased accuracy over the traditional harmonic mean estimator at an increased computational cost. Most often, each model’s marginal likelihood will be estimated individually, which leads the resulting Bayes factor to suffer from errors associated with each of these independent estimation processes. Results We here assess the original ‘model-switch’ path sampling approach for direct Bayes factor estimation in phylogenetics, as well as an extension that uses more samples, to construct a direct path between two competing models, thereby eliminating the need to calculate each model’s marginal likelihood independently. Further, we provide a competing Bayes factor estimator using an adaptation of the recently introduced stepping-stone sampling algorithm and set out to determine appropriate settings for accurately calculating such Bayes factors, with context-dependent evolutionary models as an example. While we show that modest efforts are required to roughly identify the increase in model fit, only drastically increased computation times ensure the accuracy needed to detect more subtle details of the evolutionary process. Conclusions We show that our adaptation of stepping-stone sampling for direct Bayes factor calculation outperforms the original path sampling approach as well as an extension that exploits more samples. Our proposed approach for Bayes factor estimation also has preferable statistical properties over the use of individual marginal likelihood estimates for both models under comparison. 
Assuming a sigmoid function to determine the path between two competing models, we provide evidence that a single well-chosen sigmoid shape value requires less computational effort to approximate the true value of the (log) Bayes factor than the original approach. We show that the (log) Bayes factors calculated using path sampling and stepping-stone sampling differ drastically from those estimated using either of the harmonic mean estimators, supporting earlier claims that the latter systematically overestimate the performance of high-dimensional models, which we show can lead to erroneous conclusions. Based on our results, we argue that highly accurate estimation of differences in model fit for high-dimensional models requires much more computational effort than suggested in recent studies on marginal likelihood estimation. PMID:23497171

  16. Efficient path-based computations on pedigree graphs with compact encodings

    PubMed Central

    2012-01-01

    A pedigree is a diagram of family relationships, and it is often used to determine the mode of inheritance (dominant, recessive, etc.) of genetic diseases. Along with rapidly growing knowledge of genetics and accumulation of genealogy information, pedigree data is becoming increasingly important. In large pedigree graphs, path-based methods for efficiently computing genealogical measurements, such as inbreeding and kinship coefficients of individuals, depend on efficient identification and processing of paths. In this paper, we propose a new compact path encoding scheme on large pedigrees, accompanied by an efficient algorithm for identifying paths. We demonstrate the utilization of our proposed method by applying it to the inbreeding coefficient computation. We present time and space complexity analysis, and also demonstrate the efficiency of our method for evaluating inbreeding coefficients, compared to previous methods, through experiments using pedigree graphs with real and synthetic data. Both theoretical and experimental results demonstrate that our method is more scalable and efficient than previous methods in terms of time and space requirements. PMID:22536898
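The quantity being computed above can be sketched without the paper's compact encoding, using the classical recursive kinship formulation (each path through a common ancestor contributes a factor of 1/2 per edge). The toy pedigree and its ids are hypothetical, and individuals are assumed topologically ordered (parents have smaller ids):

```python
from functools import lru_cache

# Hypothetical toy pedigree: id -> (sire, dam); founders have (None, None).
PED = {1: (None, None), 2: (None, None),
       3: (1, 2), 4: (1, 2),   # full sibs
       5: (3, 4)}              # offspring of a full-sib mating

@lru_cache(maxsize=None)
def kinship(a, b):
    """Recursive kinship coefficient f(a, b)."""
    if a is None or b is None:
        return 0.0
    if a == b:
        return 0.5 * (1.0 + inbreeding(a))
    if a < b:            # recurse through the younger individual's parents
        a, b = b, a
    sire, dam = PED[a]
    return 0.5 * (kinship(sire, b) + kinship(dam, b))

def inbreeding(x):
    """F(x) is the kinship coefficient of x's parents."""
    sire, dam = PED[x]
    return kinship(sire, dam)

print(inbreeding(5))  # offspring of full sibs -> prints 0.25
```

The memoization via `lru_cache` is what path-encoding schemes improve upon at scale: the recursion implicitly enumerates all ancestor paths, which becomes the bottleneck on large pedigrees.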

  17. Sampling of temporal networks: Methods and biases

    NASA Astrophysics Data System (ADS)

    Rocha, Luis E. C.; Masuda, Naoki; Holme, Petter

    2017-11-01

    Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure and thus caution is necessary to generalize results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common in a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling methods be problem oriented to minimize the potential biases for the specific research questions at hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
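Uniform node sampling, the strategy the study found most robust, is easy to sketch for a temporal network stored as a list of timestamped contact events. The event list and sampling fraction below are illustrative:

```python
import random

def uniform_node_sample(events, frac, seed=0):
    """Induce a temporal subnetwork on a uniformly sampled node subset.
    events: list of (t, u, v) contact events. Only events whose two
    endpoints are both in the sampled subset are retained."""
    nodes = {n for _, u, v in events for n in (u, v)}
    rng = random.Random(seed)
    keep = set(rng.sample(sorted(nodes), int(len(nodes) * frac)))
    return [e for e in events if e[1] in keep and e[2] in keep]

events = [(1, 'a', 'b'), (2, 'b', 'c'), (3, 'c', 'd'), (4, 'a', 'd')]
sub = uniform_node_sample(events, 0.5)
print(len(sub), 'of', len(events), 'events retained')
```

Statistics such as link activity or temporal path counts can then be recomputed on `sub` and compared against the full network to quantify the sampling bias, which is essentially the experiment the paper performs at scale.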

  18. Dual stage potential field method for robotic path planning

    NASA Astrophysics Data System (ADS)

    Singh, Pradyumna Kumar; Parida, Pramod Kumar

    2018-04-01

    Path planning for autonomous mobile robots is at the root of all autonomous mobile systems. Various methods are used to optimize the path to be followed by an autonomous mobile robot, and artificial-potential-field-based path planning is one of the most widely used among researchers. Various algorithms have been proposed using the potential field approach, but most encounter common problems while heading toward the goal or target: the local minima problem, the zero-potential-region problem, the complex-shaped-obstacle problem, and the target-near-obstacle problem. In this paper we provide a new algorithm in which two types of potential functions are used one after another: the former is used to obtain the probable points, and the latter to obtain the optimum path. In this algorithm we consider only static obstacles and a static goal.
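The classic single-stage potential field that such algorithms build on can be sketched as gradient descent on an attractive goal field plus a repulsive obstacle field. This is the textbook formulation, not the paper's dual-stage variant, and all gains, the influence radius `d0`, and the scenario are illustrative:

```python
import math

def apf_step(q, goal, obstacles, k_att=1.0, k_rep=0.3, d0=1.0, step=0.05):
    """One normalized gradient-descent step on the classic attractive +
    repulsive potential field. q, goal: (x, y); obstacles: point obstacles."""
    fx = k_att * (goal[0] - q[0])            # attractive force toward goal
    fy = k_att * (goal[1] - q[1])
    for ox, oy in obstacles:
        dx, dy = q[0] - ox, q[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < d0:                    # repulsion only inside radius d0
            mag = k_rep * (1 / d - 1 / d0) / d ** 2
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy) or 1.0
    return (q[0] + step * fx / norm, q[1] + step * fy / norm)

def plan(start, goal, obstacles, max_steps=400):
    """Follow the field until the goal neighborhood is reached."""
    q, path = start, [start]
    for _ in range(max_steps):
        q = apf_step(q, goal, obstacles)
        path.append(q)
        if math.hypot(q[0] - goal[0], q[1] - goal[1]) < 0.1:
            break
    return path

path = plan((0.0, 0.0), (3.0, 0.0), [(1.5, 0.2)])
print(len(path), 'steps; final point', path[-1])
```

With the obstacle placed directly between start and goal but slightly offset, the repulsive term bends the path around it; placing the obstacle exactly on the line, or behind the goal, reproduces the local-minima and target-near-obstacle failure modes the abstract lists.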

  19. Determination of lysine content based on an in situ pretreatment and headspace gas chromatographic measurement technique.

    PubMed

    Wan, Xiao-Fang; Liu, Bao-Lian; Yu, Teng; Yan, Ning; Chai, Xin-Sheng; Li, You-Ming; Chen, Guang-Xue

    2018-05-01

    This work reports a simple method for the determination of lysine content by an in situ sample pretreatment and headspace gas chromatographic (HS-GC) measurement technique, based on carbon dioxide (CO2) formation from the pretreatment reaction (between lysine and ninhydrin solution) in a closed vial. It was observed that complete lysine conversion to CO2 could be achieved within 60 min at 60 °C in a phosphate buffer medium (pH = 4.0), with a minimum ninhydrin/lysine molar ratio of 16. The results showed that the method had good precision (RSD < 5.23%) and accuracy (within 6.80%) compared to the results measured by a reference method (the ninhydrin spectroscopic method). Owing to the in situ sample pretreatment and headspace measurement, the present method is very simple and particularly suitable for batch sample analysis in lysine-related research and applications. Graphical abstract: The flow path of the reaction and the HS-GC measurement for the lysine analysis.

  20. The efficiency of parameter estimation of latent path analysis using summated rating scale (SRS) and method of successive interval (MSI) for transformation of score to scale

    NASA Astrophysics Data System (ADS)

    Solimun, Fernandes, Adji Achmad Rinaldo; Arisoesilaningsih, Endang

    2017-12-01

    Research in various fields generally investigates systems involving latent variables. One method to analyze a model representing such a system is path analysis. Latent variables measured using questionnaires with an attitude-scale model yield data in the form of scores, which should be transformed into scale data before analysis. The path coefficients, which are the parameter estimators, are calculated from scale data obtained using the method of successive intervals (MSI) or the summated rating scale (SRS). This research identifies which data transformation method is better: path coefficients with smaller variance are said to be more efficient, so the transformation method that produces scale data yielding path coefficients with smaller variance is the better one. Analysis using real data shows that for the influence of the Attitude variable on Entrepreneurship Intention, the relative efficiency is ER = 1, indicating that analyses using MSI- and SRS-transformed data are equally efficient. On the other hand, for simulated data with high correlation between items (0.7-0.9), the MSI method is 1.3 times more efficient than the SRS method.
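The MSI score-to-scale transformation can be sketched as the standard Thurstone construction: cumulative response proportions are mapped to normal quantiles, and each category's scale value is the mean of the standard normal truncated to its interval. The example frequencies are illustrative, and positive counts in every category are assumed:

```python
import math
from statistics import NormalDist

def msi(freqs):
    """Method of successive intervals: turn ordinal category frequencies
    (e.g. Likert counts for categories 1..k, all positive) into
    interval-scale values, conventionally rebased so the minimum is 1."""
    n = sum(freqs)
    props = [f / n for f in freqs]
    cums, c = [], 0.0
    for p in props:
        c += p
        cums.append(c)
    nd = NormalDist()
    scale, lo_z = [], None          # lo_z = lower interval boundary (z-score)
    for p, c in zip(props, cums):
        hi_z = nd.inv_cdf(c) if c < 1.0 else None
        dens_lo = nd.pdf(lo_z) if lo_z is not None else 0.0  # phi(-inf) = 0
        dens_hi = nd.pdf(hi_z) if hi_z is not None else 0.0  # phi(+inf) = 0
        scale.append((dens_lo - dens_hi) / p)  # truncated-normal mean
        lo_z = hi_z
    shift = 1.0 - min(scale)
    return [s + shift for s in scale]

print(msi([5, 10, 40, 30, 15]))
```

The resulting values are strictly increasing with category, which is what makes them usable as interval-scale inputs for the path analysis described above.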

  1. SALSA3D: A Tomographic Model of Compressional Wave Slowness in the Earth’s Mantle for Improved Travel-Time Prediction and Travel-Time Prediction Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballard, Sanford; Hipp, James R.; Begnaud, Michael L.

    The task of monitoring the Earth for nuclear explosions relies heavily on seismic data to detect, locate, and characterize suspected nuclear tests. In this study, motivated by the need to locate suspected explosions as accurately and precisely as possible, we developed a tomographic model of the compressional wave slowness in the Earth’s mantle with primary focus on the accuracy and precision of travel-time predictions for P and Pn ray paths through the model. Path-dependent travel-time prediction uncertainties are obtained by computing the full 3D model covariance matrix and then integrating slowness variance and covariance along ray paths from source to receiver. Path-dependent travel-time prediction uncertainties reflect the amount of seismic data that was used in tomography with very low values for paths represented by abundant data in the tomographic data set and very high values for paths through portions of the model that were poorly sampled by the tomography data set. The pattern of travel-time prediction uncertainty is a direct result of the off-diagonal terms of the model covariance matrix and underscores the importance of incorporating the full model covariance matrix in the determination of travel-time prediction uncertainty. In addition, the computed pattern of uncertainty differs significantly from that of 1D distance-dependent travel-time uncertainties computed using traditional methods, which are only appropriate for use with travel times computed through 1D velocity models.
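The uncertainty propagation described above reduces, for a discretized model, to the quadratic form l^T C l: the travel time is t = Σ l_i s_i over the path lengths l_i in each model cell, so its variance follows from the slowness covariance matrix C. A miniature sketch with made-up numbers (three cells, units of s²/km² for C and km for l):

```python
import math

def travel_time_variance(lengths, cov):
    """Variance of t = sum_i l_i * s_i given slowness covariance cov:
    sigma_t^2 = l^T C l, summing both diagonal (variance) and
    off-diagonal (covariance) contributions along the ray path."""
    n = len(lengths)
    return sum(lengths[i] * cov[i][j] * lengths[j]
               for i in range(n) for j in range(n))

lengths = [120.0, 80.0, 200.0]            # km traversed in each model cell
cov_diag_only = [[4e-6, 0.0,  0.0],
                 [0.0,  9e-6, 0.0],
                 [0.0,  0.0,  1e-6]]      # variances only
cov_full = [[4e-6, 2e-6, 0.0],
            [2e-6, 9e-6, 1e-6],
            [0.0,  1e-6, 1e-6]]           # with positive covariances

print(math.sqrt(travel_time_variance(lengths, cov_diag_only)), 's')
print(math.sqrt(travel_time_variance(lengths, cov_full)), 's')
```

Comparing the two runs makes the abstract's point concrete: ignoring the off-diagonal covariance terms understates the path-dependent travel-time uncertainty whenever neighboring cells were constrained by the same data.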

  2. SALSA3D: A Tomographic Model of Compressional Wave Slowness in the Earth’s Mantle for Improved Travel-Time Prediction and Travel-Time Prediction Uncertainty

    DOE PAGES

    Ballard, Sanford; Hipp, James R.; Begnaud, Michael L.; ...

    2016-10-11

    The task of monitoring the Earth for nuclear explosions relies heavily on seismic data to detect, locate, and characterize suspected nuclear tests. In this study, motivated by the need to locate suspected explosions as accurately and precisely as possible, we developed a tomographic model of the compressional wave slowness in the Earth’s mantle with primary focus on the accuracy and precision of travel-time predictions for P and Pn ray paths through the model. Path-dependent travel-time prediction uncertainties are obtained by computing the full 3D model covariance matrix and then integrating slowness variance and covariance along ray paths from source to receiver. Path-dependent travel-time prediction uncertainties reflect the amount of seismic data that was used in tomography with very low values for paths represented by abundant data in the tomographic data set and very high values for paths through portions of the model that were poorly sampled by the tomography data set. The pattern of travel-time prediction uncertainty is a direct result of the off-diagonal terms of the model covariance matrix and underscores the importance of incorporating the full model covariance matrix in the determination of travel-time prediction uncertainty. In addition, the computed pattern of uncertainty differs significantly from that of 1D distance-dependent travel-time uncertainties computed using traditional methods, which are only appropriate for use with travel times computed through 1D velocity models.

  3. Accurate cell counts in live mouse embryos using optical quadrature and differential interference contrast microscopy

    NASA Astrophysics Data System (ADS)

    Warger, William C., II; Newmark, Judith A.; Zhao, Bing; Warner, Carol M.; DiMarzio, Charles A.

    2006-02-01

    Present imaging techniques used in in vitro fertilization (IVF) clinics are unable to produce accurate cell counts in developing embryos past the eight-cell stage. We have developed a method that has produced accurate cell counts in live mouse embryos ranging from 13 to 25 cells by combining Differential Interference Contrast (DIC) and Optical Quadrature Microscopy. Optical Quadrature Microscopy is an interferometric imaging modality that measures the amplitude and phase of the signal beam that travels through the embryo. The phase is transformed into an image of optical path length difference, which is used to determine the maximum optical path length deviation of a single cell. DIC microscopy gives distinct cell boundaries for cells within the focal plane when other cells do not lie in the path to the objective. Fitting an ellipse to the boundary of a single cell in the DIC image and combining it with the maximum optical path length deviation of a single cell creates an ellipsoidal model cell of optical path length deviation. Subtracting the model cell from the Optical Quadrature image will either show the optical path length deviation of the culture medium or reveal another cell underneath. Once all the boundaries are used in the DIC image, the subtracted Optical Quadrature image is analyzed to determine the cell boundaries of the remaining cells. The final cell count is produced when no more cells can be subtracted. We have produced exact cell counts on 5 samples, which have been validated by Epi-Fluorescence images of Hoechst stained nuclei.

  4. The optimum measurement precision evaluation for blood components using near-infrared spectra on 1000-2500 nm

    NASA Astrophysics Data System (ADS)

    Zhang, Ziyang; Sun, Di; Han, Tongshuai; Guo, Chao; Liu, Jin

    2016-10-01

    In non-invasive blood component measurement using near-infrared spectroscopy, the useful signals caused by concentration variations in the components of interest, such as glucose, hemoglobin, and albumin, are relatively weak, and they can be heavily disturbed by noise from many sources. We improved the signals by using the optimum path-length for each wavelength, which maximizes the variation of transmitted light intensity when the concentration of a component varies. After optimizing the path-length for every wavelength in 1000-2500 nm, we present the detection limits for glucose, hemoglobin, and albumin when measuring them in a tissue phantom. The evaluated detection limits represent the best reachable precision level, since they assume a measurement with a high signal-to-noise ratio (SNR) and the optimum path-length. From the results, available wavelengths in 1000-2500 nm for measuring the three components can be screened by comparing their detection limits with the measurement requirements; detection limits for other blood components could likewise be evaluated using the method proposed in this paper. Moreover, we use an equation to estimate the absorbance at the optimum path-length for every wavelength in 1000-2500 nm caused by the three components. This makes the evaluation easy to realize, because adjusting the sample cell's size to the precise path-length value for every wavelength is not necessary. This equation could also be applied to measurements of other blood components using the optimum path-length at each used wavelength.
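The optimum-path-length idea can be made concrete with the Beer-Lambert law: for transmitted intensity I = I0·exp(-μl), the sensitivity of I to a small concentration change is proportional to l·exp(-μl), which peaks at l = 1/μ. The sketch below checks this numerically; the absorption coefficient and grid are illustrative, not values from the paper:

```python
import math

def sensitivity(l, mu, eps=1.0, i0=1.0):
    """|dI/dc| for I = I0*exp(-mu*l), with the analyte's contribution to mu
    proportional to its concentration c (absorptivity eps): the change in
    transmitted light per unit concentration change, at path length l."""
    return eps * l * i0 * math.exp(-mu * l)

mu = 2.3  # assumed total absorption coefficient of the sample, 1/mm
grid = [i / 1000 for i in range(1, 3000)]           # path lengths, mm
l_best = max(grid, key=lambda l: sensitivity(l, mu))
print(l_best, 1 / mu)  # numerical optimum vs. analytic l_opt = 1/mu
```

Because μ varies with wavelength, the optimum path length is wavelength-dependent, which is why the paper evaluates it separately for every wavelength in 1000-2500 nm.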

  5. Understanding Associations between Neighborhood Socioeconomic Status and Negative Consequences of Drinking: A Moderated Mediation Analysis

    PubMed Central

    Karriker-Jaffe, Katherine J.; Liu, HuiGuo; Kaplan, Lauren M.

    2016-01-01

    Aims We explored how neighborhood socioeconomic status (SES) is related to negative consequences of drinking to explain why racial/ethnic minority group members are more at risk than Whites for adverse alcohol outcomes. We tested direct and indirect effects of neighborhood SES on alcohol problems and examined differences by gender and race. Methods We used data from the 2000 and 2005 National Alcohol Surveys (N=7,912 drinkers aged 18 and older; 49% female) linked with data from the 2000 Decennial Census in multivariate path models adjusting for individual demographics. Results In the full sample, neighborhood disadvantage had a significant direct path to increased negative consequences, with no indirect paths through depression, positive affect or pro-drinking attitudes. Neighborhood affluence had significant indirect paths to increased negative consequences through greater pro-drinking attitudes and increased heavy drinking. Sub-group analyses showed the indirect path from affluence to consequences held for White men, with no effects of neighborhood disadvantage. For racial/ethnic minority men, significant indirect paths emerged from both neighborhood disadvantage and affluence to increased consequences through greater pro-drinking attitudes and more heavy drinking. For minority women, there was an indirect effect of neighborhood affluence through reduced depression to fewer drinking consequences. There were limited neighborhood effects on alcohol outcomes for White women. Conclusions Interventions targeting pro-drinking attitudes in both affluent and disadvantaged areas may help reduce alcohol-related problems among men. Initiatives to improve neighborhood conditions could enhance mental health of minority women and reduce alcohol-related health disparities. PMID:26898509

  6. The use of 137Cs to establish longer-term soil erosion rates on footpaths in the UK.

    PubMed

    Rodway-Dyer, S J; Walling, D E

    2010-10-01

    There is increasing awareness of the damage caused to valuable and often unique sensitive habitats by visitor pressure, as degradation causes a loss of plant species, disturbance to wildlife, on-site and off-site impacts of soil movement and loss, and visual destruction of pristine environments. This research developed a new perspective on the problem of recreation-induced environmental degradation by assessing the physical aspects of soil erosion using the fallout radionuclide caesium-137 (137Cs). Temporal sampling problems have not successfully been overcome by traditional research methods monitoring footpath erosion and, to date, the 137Cs technique has not been used to estimate longer-term soil erosion in sensitive recreational habitats. The research was based on sites within Dartmoor National Park (DNP) and the South West Coast Path (SWCP) in south-west England. 137Cs inventories were reduced on the paths relative to the reference inventory (control), indicating loss of soil from the path areas. The Profile Distribution Model estimated longer-term erosion rates (ca. 40 years) based on the 137Cs data and showed that the combined mean soil loss for all the sites on 'paths' was 1.41 kg m⁻² yr⁻¹ whereas the combined 'off path' soil loss was 0.79 kg m⁻² yr⁻¹, where natural (non-recreational) soil redistribution processes occur. Recreational pressure was shown to increase erosion in the long term, as greater soil erosion occurred on the paths, especially where there was higher visitor pressure. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
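The Profile Distribution Model used above converts a measured 137Cs inventory reduction into a mean erosion rate by assuming the 137Cs depth profile in undisturbed soil falls off exponentially with mass depth. The sketch below follows that commonly cited form; the relaxation depth h0, the inventories, and the sampling year are illustrative values, not the paper's:

```python
import math

def erosion_rate(inv_path, inv_ref, h0=4.0, year=2008):
    """Profile distribution model for uncultivated soils: with a depth
    profile proportional to exp(-x/h0) in mass depth x, an inventory
    reduced by the fraction X implies an eroded mass depth of
    -h0*ln(1 - X), accumulated since the 1963 fallout peak.
    inv_path, inv_ref: 137Cs inventories (same units, e.g. Bq m^-2);
    h0: profile shape factor (kg m^-2). Returns kg m^-2 yr^-1."""
    reduction = 1.0 - inv_path / inv_ref
    mass_depth = -h0 * math.log(1.0 - reduction)
    return mass_depth / (year - 1963)

# e.g. a path inventory 40% below the reference (control) inventory:
print(round(erosion_rate(inv_path=1500.0, inv_ref=2500.0), 3), 'kg m^-2 yr^-1')
```

Because the inventory reduction integrates roughly four decades of soil loss, the resulting rate is a longer-term mean, which is exactly the temporal-sampling advantage over short-term footpath monitoring noted in the abstract.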

  7. MASS SPECTROMETRY

    DOEpatents

    Friedman, L.

    1962-01-01

    A method is described for operating a mass spectrometer to improve its resolution and to extend its period of use substantially between cleanings. In this method, a small amount of a beta-emitting gas, such as hydrogen tritide or carbon-14 methane, is added to the sample being supplied to the spectrometer for investigation. The additive establishes leakage paths on the surface of the non-conducting film that accumulates within the vacuum chamber of the spectrometer, thereby reducing the effect of an accumulated static charge on the electrostatic and magnetic fields established within the instrument. (AEC)

  8. Tissue characterization with ballistic photons: counting scattering and/or absorption centres

    NASA Astrophysics Data System (ADS)

    Corral, F.; Strojnik, M.; Paez, G.

    2015-03-01

    We describe a new method to separate ballistic photons from scattered photons for optical tissue characterization. It is based on the hypothesis that the scattered photons acquire a phase delay. The photons passing through the sample without scattering or absorption preserve their coherence, so they may participate in interference. We implement a Mach-Zehnder experimental setup where the ballistic photons pass through the sample with the delay caused uniquely by the sample's indices of refraction. We incorporate a movable mirror on a piezoelectric actuator in the sample arm to detect the amplitude of the modulation term. We present the theory that predicts the path-integrated (or total) concentration of the scattering and absorption centres. The proposed technique may characterize samples in which the transmission of ballistic photons is attenuated by a factor of 10⁻¹⁴.

  9. Development of a new integrated local trajectory planning and tracking control framework for autonomous ground vehicles

    NASA Astrophysics Data System (ADS)

    Li, Xiaohui; Sun, Zhenping; Cao, Dongpu; Liu, Daxue; He, Hangen

    2017-03-01

    This study proposes a novel integrated local trajectory planning and tracking control (ILTPTC) framework for autonomous vehicles driving along a reference path with obstacles avoidance. For this ILTPTC framework, an efficient state-space sampling-based trajectory planning scheme is employed to smoothly follow the reference path. A model-based predictive path generation algorithm is applied to produce a set of smooth and kinematically-feasible paths connecting the initial state with the sampling terminal states. A velocity control law is then designed to assign a speed value at each of the points along the generated paths. An objective function considering both safety and comfort performance is carefully formulated for assessing the generated trajectories and selecting the optimal one. For accurately tracking the optimal trajectory while overcoming external disturbances and model uncertainties, a combined feedforward and feedback controller is developed. Both simulation analyses and vehicle testing are performed to verify the effectiveness of the proposed ILTPTC framework, and future research is also briefly discussed.

  10. Conservative Diffusions: a Constructive Approach to Nelson's Stochastic Mechanics.

    NASA Astrophysics Data System (ADS)

    Carlen, Eric Anders

    In Nelson's stochastic mechanics, quantum phenomena are described in terms of diffusions instead of wave functions; this thesis is a study of that description. We emphasize that we are concerned here with the possibility of describing, as opposed to explaining, quantum phenomena in terms of diffusions. In this direction, the following questions arise: "Do the diffusions of stochastic mechanics--which are formally given by stochastic differential equations with extremely singular coefficients--really exist?" Given that they exist, one can ask, "Do these diffusions have physically reasonable sample path behavior, and can we use information about sample paths to study the behavior of physical systems?" These are the questions we treat in this thesis. In Chapter I we review stochastic mechanics and diffusion theory, using the Guerra-Morato variational principle to establish the connection with the Schroedinger equation. This chapter is largely expository; however, there are some novel features and proofs. In Chapter II we settle the first of the questions raised above. Using PDE methods, we construct the diffusions of stochastic mechanics. Our result is sufficiently general to be of independent mathematical interest. In Chapter III we treat potential scattering in stochastic mechanics and discuss direct probabilistic methods of studying quantum scattering problems. Our results provide a solid "Yes" in answer to the second question raised above.

  11. Sampling the kinetic pathways of a micelle fusion and fission transition.

    PubMed

    Pool, René; Bolhuis, Peter G

    2007-06-28

    The mechanism and kinetics of micellar breakup and fusion in a dilute solution of a model surfactant are investigated by path sampling techniques. Analysis of the path ensemble gives insight in the mechanism of the transition. For larger, less stable micelles the fission/fusion occurs via a clear neck formation, while for smaller micelles the mechanism is more direct. In addition, path analysis yields an appropriate order parameter to evaluate the fusion and fission rate constants using stochastic transition interface sampling. For the small, stable micelle (50 surfactants) the computed fission rate constant is a factor of 10 lower than the fusion rate constant. The procedure opens the way for accurate calculation of free energy and kinetics for, e.g., membrane fusion, and wormlike micelle endcap formation.

  12. Measurement of thermo-optic properties of Y3Al5O12, Lu3Al5O12, YAlO3, LiYF4, LiLuF4, BaY2F8, KGd(WO4)2, and KY(WO4)2 laser crystals in the 80-300 K temperature range

    NASA Astrophysics Data System (ADS)

    Aggarwal, R. L.; Ripin, D. J.; Ochoa, J. R.; Fan, T. Y.

    2005-11-01

    Thermo-optic materials properties of laser host materials have been measured to enable solid-state laser performance modeling. The thermo-optic properties include thermal diffusivity (β), specific heat at constant pressure (Cp), thermal conductivity (κ), coefficient of thermal expansion (α), thermal coefficient of the optical path length (γ) equal to (dO/dT)/L, and thermal coefficient of refractive index (dn/dT) at 1064 nm; O denotes the optical path length, which is equal to the product of the refractive index (n) and sample length (L). Thermal diffusivity and specific heat were measured using the laser-flash method. Thermal conductivity was deduced using measured values of β, Cp, and the density (ρ). Thermal expansion was measured using a Michelson laser interferometer. Thermal coefficient of the optical path length was measured at 1064 nm, using interference between light reflected from the front and rear facets of the sample. Thermal coefficient of the refractive index was determined using the measured values of γ, α, and n. β and κ of Y3Al5O12, YAlO3, and LiYF4 were found to decrease, as expected, upon doping with Yb.

  13. Social network analysis using k-Path centrality method

    NASA Astrophysics Data System (ADS)

    Taniarza, Natya; Adiwijaya; Maharani, Warih

    2018-03-01

    k-Path centrality is deemed one of the effective methods of centrality measurement, in which the influential node is estimated as the node that information paths pass through frequently. In this paper, k-Path centrality is employed, adapting a random-algorithm approach, in order to (1) determine the ranking of influential users on the social medium Twitter and (2) ascertain the influence of the parameter α on the computation of k-Path centrality. According to the analysis, the findings show that the k-Path centrality method with a random-algorithm approach can be used to rank the users who influence the dissemination of information on Twitter. Furthermore, the findings also show that the parameter α influences both the runtime and the ranking results: the smaller the α value, the longer the runtime, yet the more stable the ranking results.
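The randomized flavor of k-Path centrality can be sketched as a Monte-Carlo traversal count: repeatedly start a random simple path of length at most k from a random node, and rank nodes by how often the paths pass through them. This is a simplified sketch (it omits the α-weighted trial count of the published algorithm), and the toy graph is hypothetical:

```python
import random
from collections import defaultdict

def k_path_centrality(adj, k=3, trials=2000, seed=7):
    """Monte-Carlo k-path centrality estimate. adj: node -> neighbor list.
    Each trial walks a random simple path of length <= k from a random
    start node; nodes traversed by many short paths rank highest."""
    rng = random.Random(seed)
    nodes = list(adj)
    hits = defaultdict(int)
    for _ in range(trials):
        v, visited = rng.choice(nodes), set()
        for _ in range(k):
            visited.add(v)
            nxt = [u for u in adj[v] if u not in visited]  # simple path only
            if not nxt:
                break
            v = rng.choice(nxt)
            hits[v] += 1
    return sorted(hits, key=hits.get, reverse=True)

# toy follower graph: node 0 is a hub connected to everyone
adj = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0, 4], 4: [0, 3]}
print(k_path_centrality(adj))  # the hub should rank first
```

Increasing `trials` plays the role the abstract ascribes to smaller α values: longer runtime in exchange for a more stable ranking.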

  14. Lateral position detection and control for friction stir systems

    DOEpatents

    Fleming, Paul [Boulder, CO; Lammlein, David H [Houston, TX; Cook, George E [Brentwood, TN; Wilkes, Don Mitchell [Nashville, TN; Strauss, Alvin M [Nashville, TN; Delapp, David R [Ashland City, TN; Hartman, Daniel A [Fairhope, AL

    2011-11-08

    Friction stir methods are disclosed for processing at least one workpiece using a rotary tool with rotating member for contacting and processing the workpiece. The methods include oscillating the rotary tool laterally with respect to a selected propagation path for the rotating member with respect to the workpiece to define an oscillation path for the rotating member. The methods further include obtaining force signals or parameters related to the force experienced by the rotary tool at least while the rotating member is disposed at the extremes of the oscillation. The force signals or parameters associated with the extremes can then be analyzed to determine a lateral position of the selected path with respect to a target path and a lateral offset value can be determined based on the lateral position. The lateral distance between the selected path and the target path can be decreased based on the lateral offset value.

  15. Selection of test paths for solder joint intermittent connection faults under DC stimulus

    NASA Astrophysics Data System (ADS)

    Huakang, Li; Kehong, Lv; Jing, Qiu; Guanjun, Liu; Bailiang, Chen

    2018-06-01

    Test paths for solder joint intermittent connection faults under direct-current stimulus are examined in this paper. According to the physical structure of the circuit, a network model is established first: a network node represents a test node, and a path edge carries the number of intermittent connection faults in the path. Then, selection criteria for test paths based on a node-degree index are proposed, so that the solder joint intermittent connection faults are covered using fewer test paths. Finally, three circuits are selected to verify the method; to test whether an intermittent fault is covered by the test paths, the fault is simulated by a switch. The results show that the proposed method can detect solder joint intermittent connection faults using fewer test paths. Additionally, the number of detection steps is greatly reduced without compromising fault coverage.

  16. Snapshot polarization-sensitive plug-in optical module for a Fourier-domain optical coherence tomography system

    NASA Astrophysics Data System (ADS)

    Marques, Manuel J.; Rivet, Sylvain; Bradu, Adrian; Podoleanu, Adrian

    2018-02-01

    In this communication, we present a proof-of-concept polarization-sensitive optical coherence tomography (PS-OCT) module which can be used to characterize the retardance and the axis orientation of a linear birefringent sample. This module configuration is an improvement on our previous work [1, 2], since it encodes the two polarization channels on the optical path difference, effectively carrying out the polarization measurements simultaneously (snapshot measurement), whilst retaining all the advantages (namely the insensitivity to environmental parameters when using SM fibers) of the two previous configurations. A further improvement consists in employing Master Slave OCT technology [3], which is used to automatically compensate for the dispersion mismatch introduced by the elements in the module. This is essential given the encoding of the polarization states on two different optical path lengths, each of them having dissimilar dispersive properties. By using this method instead of the commonly used re-linearization and numerical dispersion compensation methods, an improvement in the required calculation time can be achieved.

  17. Evaluation of a new APTIMA specimen collection and transportation kit for high-risk human papillomavirus E6/E7 messenger RNA in cervical and vaginal samples.

    PubMed

    Chernesky, Max; Jang, Dan; Gilchrist, Jodi; Elit, Laurie; Lytwyn, Alice; Smieja, Marek; Dockter, Janel; Getman, Damon; Reid, Jennifer; Hill, Craig

    2014-06-01

    An APTIMA specimen collection and transportation (SCT) kit was developed by Hologic/Gen-Probe. The objectives were to compare cervical SCT samples with PreservCyt and SurePath samples, to compare self-collected vaginal samples with physician-collected vaginal and cervical SCT samples, and to determine the ease and comfort of self-collection with the kit. Each woman (n = 580) self-collected a vaginal SCT sample, then filled out a questionnaire (n = 563) to determine the ease and comfort of self-collection. Colposcopy physicians collected a vaginal SCT sample and cervical PreservCyt, SCT, and SurePath samples. Samples were tested by the APTIMA HPV (AHPV) assay. Agreement between testing of cervical SCT and PreservCyt samples was 91.1% (κ = 0.82), and that with SurePath samples was 86.7% (κ = 0.72). Agreement of self-collected vaginal SCT with physician-collected SCT was 84.7% (κ = 0.68), and that of self-collected vaginal with cervical SCT was 82.0% (κ = 0.63). For 30 patients with CIN2+, AHPV testing of cervical SCT was 100% sensitive and 59.8% specific, compared with PreservCyt (96.6% and 66.2%) and SurePath (93.3% and 70.9%). Vaginal SCT sensitivity was 86.7% for self-collection and 80.0% for physician collection. Most patients found that vaginal self-collection was easy; 5.3% reported some difficulty, and 87.6% expressed no discomfort. Cervical samples collected with the new SCT kit compared well to traditional liquid-based samples tested by AHPV. Although there was good agreement between self-collected and physician-collected samples with the SCT, in this limited sample of 30 women vaginal sampling identified fewer CIN2+ precancerous cervical lesions than cervical SCT sampling. Comfort, ease of use, and detection of high-risk HPV demonstrated that the kit could be used for cervical and vaginal sampling.
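
The agreement figures quoted above pair raw percent agreement with Cohen's κ, which discounts the agreement expected by chance alone. A sketch with a hypothetical 2×2 concordance table (not the study's data):

```python
def percent_agreement_and_kappa(table):
    """table = [[a, b], [c, d]]: rows = assay 1 (+/-), cols = assay 2 (+/-).
    Returns (percent agreement, Cohen's kappa)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    po = (a + d) / n                                       # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2    # chance agreement
    kappa = (po - pe) / (1 - pe)
    return 100 * po, kappa

# Hypothetical counts, not taken from the study:
agree, kappa = percent_agreement_and_kappa([[180, 20], [25, 275]])
print(round(agree, 1), round(kappa, 2))  # -> 91.0 0.81
```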

  18. Multilaser Herriott Cell for Planetary Tunable Laser Spectrometers

    NASA Technical Reports Server (NTRS)

    Tarsitano, Christopher G.; Webster, Christopher R.

    2007-01-01

    Geometric optics and matrix methods are used to mathematically model multilaser Herriott cells for tunable laser absorption spectrometers for planetary missions. The Herriott cells presented accommodate several laser sources that follow independent optical paths but probe a single gas cell. Strategically placed output holes in the far mirrors of the Herriott cells reduce the size of the spectrometers. A four-channel Herriott cell configuration is presented for the specific application as the sample cell of the tunable laser spectrometer instrument selected for the Sample Analysis at Mars (SAM) analytical suite on the 2009 Mars Science Laboratory mission.
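
The "geometric optics and matrix methods" referred to are ray-transfer (ABCD) matrices: one Herriott-cell pass is free-space propagation followed by reflection from a curved mirror, and the spot pattern follows from repeated matrix multiplication. A one-dimensional sketch with illustrative dimensions, not the flight design:

```python
import numpy as np

def free_space(d):
    """Ray-transfer matrix for free-space propagation over distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def mirror(R):
    """Ray-transfer matrix for reflection from a mirror of radius of curvature R."""
    return np.array([[1.0, 0.0], [-2.0 / R, 1.0]])

def herriott_heights(y0, slope0, d, R, n_passes):
    """Ray height at the mirror after each pass of a symmetric Herriott cell
    (mirror separation d, both radii R); one transverse dimension only."""
    ray = np.array([y0, slope0])
    one_pass = mirror(R) @ free_space(d)
    heights = []
    for _ in range(n_passes):
        ray = one_pass @ ray
        heights.append(float(ray[0]))
    return heights

# Illustrative cell: d = 0.5 m, R = 1.0 m gives a reentrant spot pattern.
print([round(h, 3) for h in herriott_heights(0.01, 0.0, 0.5, 1.0, 4)])
```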

  19. SIEST-A-RT: a study of vacancy diffusion in crystalline silicon using a local-basis first-principle (SIESTA) activation technique (ART).

    NASA Astrophysics Data System (ADS)

    El Mellouhi, Fedwa; Mousseau, Normand; Ordejón, Pablo

    2003-03-01

    We report on a first-principles study of vacancy-induced self-diffusion in crystalline silicon. Our simulations are performed on supercells containing 63 and 215 atoms. We generate the diffusion paths using the activation-relaxation technique (ART) [1], which can efficiently sample the energy landscape of complex systems. The forces and energies are evaluated using SIESTA [2], a self-consistent density functional method using standard norm-conserving pseudopotentials and a flexible numerical linear-combination-of-atomic-orbitals basis set. Combining these two methods allows us to identify diffusion paths that would not be reachable with this degree of accuracy using other methods. After a full relaxation of the neutral vacancy, we proceed to search for local diffusion paths. We identify various mechanisms, such as the formation of the fourfold-coordinated defect and the recombination of dangling bonds by the Wooten-Winer-Weaire (WWW) process. The diffusion of the vacancy proceeds by hops to the first nearest neighbor with an energy barrier of 0.69 eV. This work is funded in part by NSERC and NATEQ. NM is a Cottrell Scholar of the Research Corporation. [1] G. T. Barkema and N. Mousseau, Event-based relaxation of continuous disordered systems, Phys. Rev. Lett. 77, 4358 (1996); N. Mousseau and G. T. Barkema, Traveling through potential energy landscapes of disordered materials: ART, Phys. Rev. E 57, 2419 (1998). [2] Density functional method for very large systems with LCAO basis sets, D. Sánchez-Portal, P. Ordejón, E. Artacho and J. M. Soler, Int. J. Quant. Chem. 65, 453 (1997).

  20. Investigation on electrical tree propagation in polyethylene based on etching method

    NASA Astrophysics Data System (ADS)

    Shi, Zexiang; Zhang, Xiaohong; Wang, Kun; Gao, Junguo; Guo, Ning

    2017-11-01

    To investigate the characteristics of electrical tree propagation in semi-crystalline polymers, low-density polyethylene (LDPE) samples containing electrical trees are cut into slices using an ultramicrotome. The slices are then etched with potassium permanganate etchant. Finally, the crystalline structure and the electrical tree propagation path in the samples are observed by polarized light microscopy (PLM). Based on these observations, an LDPE spherocrystal structure model is established from the crystallization kinetics and morphology of polymers, and the electrical tree growth process in LDPE is discussed in terms of free volume breakdown theory, molecular chain relaxation theory, electromechanical force theory, the thermal expansion effect, and the space charge shielding effect.

  1. Self-consistent collective coordinate for reaction path and inertial mass

    NASA Astrophysics Data System (ADS)

    Wen, Kai; Nakatsukasa, Takashi

    2016-11-01

    We propose a numerical method to determine the optimal collective reaction path for a nucleus-nucleus collision, based on the adiabatic self-consistent collective coordinate (ASCC) method. We use an iterative method, combining the imaginary-time evolution and the finite amplitude method, for the solution of the ASCC coupled equations. It is applied to the simplest case, α-α scattering. We determine the collective path, the potential, and the inertial mass. The results are compared with other methods, such as the constrained Hartree-Fock method, Inglis's cranking formula, and the adiabatic time-dependent Hartree-Fock (ATDHF) method.

  2. Explanation of Two Anomalous Results in Statistical Mediation Analysis.

    PubMed

    Fritz, Matthew S; Taylor, Aaron B; Mackinnon, David P

    2012-01-01

    Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special concern as the bias-corrected bootstrap is often recommended and used due to its higher statistical power compared with other tests. The second result is statistical power reaching an asymptote far below 1.0 and in some conditions even declining slightly as the size of the relationship between X and M, a, increased. Two computer simulations were conducted to examine these findings in greater detail. Results from the first simulation found that the increased Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap are a function of an interaction between the size of the individual paths making up the mediated effect and the sample size, such that elevated Type I error rates occur when the sample size is small and the effect size of the nonzero path is medium or larger. Results from the second simulation found that stagnation and decreases in statistical power as a function of the effect size of the a path occurred primarily when the path between M and Y, b, was small. Two empirical mediation examples are provided using data from a steroid prevention and health promotion program aimed at high school football players (Athletes Training and Learning to Avoid Steroids; Goldberg et al., 1996), one to illustrate a possible Type I error for the bias-corrected bootstrap test and a second to illustrate a loss in power related to the size of a. Implications of these findings are discussed.
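
The bias-corrected bootstrap under discussion can be sketched for the indirect effect a·b: resample cases, recompute the product of slopes, and shift the percentile interval by a bias-correction factor z0. This simple-regression sketch (no covariate adjustment; data and names are hypothetical) illustrates the mechanics, not the simulation conditions of the paper:

```python
import random
from statistics import NormalDist

def slope(x, y):
    """Simple OLS slope of y on x."""
    n = len(x); mx = sum(x) / n; my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def bc_bootstrap_ab(x, m, y, n_boot=2000, alpha=0.05, seed=7):
    """Bias-corrected bootstrap CI for the mediated effect a*b
    (a = X->M slope, b = M->Y slope; simple-regression sketch)."""
    rng = random.Random(seed)
    n = len(x)
    ab_hat = slope(x, m) * slope(m, y)
    boots = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]       # resample cases
        xs = [x[i] for i in idx]; ms = [m[i] for i in idx]; ys = [y[i] for i in idx]
        boots.append(slope(xs, ms) * slope(ms, ys))
    boots.sort()
    nd = NormalDist()
    prop = sum(b < ab_hat for b in boots) / n_boot       # bias of the boot distribution
    z0 = nd.inv_cdf(min(max(prop, 1e-6), 1 - 1e-6))      # bias-correction factor
    zlo, zhi = nd.inv_cdf(alpha / 2), nd.inv_cdf(1 - alpha / 2)
    lo = boots[int(nd.cdf(2 * z0 + zlo) * (n_boot - 1))]  # shifted percentiles
    hi = boots[int(nd.cdf(2 * z0 + zhi) * (n_boot - 1))]
    return ab_hat, (lo, hi)

# Synthetic mediation data: X -> M -> Y with a = 0.5, b = 0.4 (illustrative)
rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(50)]
m = [0.5 * xi + rng.gauss(0, 1) for xi in x]
y = [0.4 * mi + rng.gauss(0, 1) for mi in m]
ab, (lo, hi) = bc_bootstrap_ab(x, m, y)
print(lo < hi)
```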

  3. Path-Dependent Travel Time Prediction Variance and Covariance for a Global Tomographic P- and S-Velocity Model

    NASA Astrophysics Data System (ADS)

    Hipp, J. R.; Ballard, S.; Begnaud, M. L.; Encarnacao, A. V.; Young, C. J.; Phillips, W. S.

    2015-12-01

    Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P- and S-velocity model (SALSA3D) that provides superior first-P and first-S travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we describe a methodology for accomplishing this by exploiting the full model covariance matrix, and show examples of path-dependent travel time prediction uncertainty computed from our latest tomographic model. Typical global 3D SALSA3D models have on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB of storage for 1/2 of a symmetric matrix, necessitating an out-of-core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G, which includes a prior model covariance constraint) is multiplied by its transpose (GᵀG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GᵀG)⁻¹ by assigning blocks to individual processing nodes for matrix decomposition, update, and scaling operations. We first find the Cholesky decomposition of GᵀG, which is subsequently inverted. Next, we employ OOC matrix multiplication methods to calculate the model covariance matrix from (GᵀG)⁻¹ and an assumed data covariance matrix. Given the model covariance matrix, we solve for the travel-time covariance associated with arbitrary ray paths by summing the model covariance along both ray paths. Setting the paths equal and taking the square root yields the travel-time prediction uncertainty for a single path.
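
The final step above reduces to a quadratic form: with g holding a ray path's lengths through each model cell and Cm the model covariance matrix, the covariance of two predicted travel times is g1·Cm·g2ᵀ, and setting the two paths equal gives the variance. A toy three-cell sketch with illustrative numbers:

```python
import numpy as np

def travel_time_cov(g1, g2, Cm):
    """Covariance of two predicted travel times t_i = g_i . m, where g_i holds
    the ray-path length through each model cell and Cm is the model covariance."""
    return g1 @ Cm @ g2

# Toy 3-cell model covariance (slowness units; illustrative numbers only)
Cm = np.array([[0.04, 0.01, 0.00],
               [0.01, 0.09, 0.02],
               [0.00, 0.02, 0.16]])
g1 = np.array([10.0, 5.0, 0.0])   # path 1: lengths through cells 1-3
g2 = np.array([0.0, 5.0, 8.0])    # path 2

var1 = travel_time_cov(g1, g1, Cm)   # setting the paths equal...
print(np.sqrt(var1))                 # ...and taking the square root: uncertainty
print(travel_time_cov(g1, g2, Cm))   # cross-path covariance
```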

  4. A horse’s locomotor signature: COP path determined by the individual limb

    PubMed Central

    Hobbs, Sarah Jane; Back, Willem

    2017-01-01

    Introduction Ground reaction forces in sound horses with asymmetric hooves show systematic differences in the horizontal braking force and relative timing of break-over. The center of pressure (COP) path quantifies the dynamic load distribution under the hoof in a moving horse. The objective was to test whether anatomical asymmetry, quantified by the difference in dorsal wall angle between the left and right forelimbs, correlates with asymmetry in the COP path between these limbs. In addition, the repeatability of the COP path was investigated. Methods A group (n = 31) of visually sound horses with various degrees of dorsal hoof wall asymmetry trotted three times over a pressure mat. The COP path was determined in a hoof-bound coordinate system, and the relationship between left-right COP path correlation and degree of asymmetry was investigated. Results Using a hoof-bound coordinate system made the COP path highly repeatable and unique for each limb. The craniocaudal patterns were usually highly correlated between left and right, but the mediolateral patterns were not. Some patterns were found between COP path and dorsal wall angle, but asymmetry in dorsal wall angle did not necessarily result in asymmetry in COP path, and the same could be stated for symmetry. Conclusion This is a highly sensitive method for quantifying the net result of the interaction between all of the forces and torques that occur in the limb and its inertial properties. We argue that changes in motor control, muscle force, inertial properties, kinematics and kinetics can potentially be picked up at an early stage using this method, which could therefore serve as an early detection method for changes in the musculoskeletal apparatus. PMID:28196073

  5. A Path Algorithm for Constrained Estimation

    PubMed Central

    Zhou, Hua; Lange, Kenneth

    2013-01-01

    Many least-squares problems involve affine equality and inequality constraints. Although there are a variety of methods for solving such problems, most statisticians find constrained estimation challenging. The current article proposes a new path-following algorithm for quadratic programming that replaces hard constraints by what are called exact penalties. Similar penalties arise in l1 regularization in model selection. In the regularization setting, penalties encapsulate prior knowledge, and penalized parameter estimates represent a trade-off between the observed data and the prior knowledge. Classical penalty methods of optimization, such as the quadratic penalty method, solve a sequence of unconstrained problems that put greater and greater stress on meeting the constraints. In the limit as the penalty constant tends to ∞, one recovers the constrained solution. In the exact penalty method, squared penalties are replaced by absolute value penalties, and the solution is recovered for a finite value of the penalty constant. The exact path-following method starts at the unconstrained solution and follows the solution path as the penalty constant increases. In the process, the solution path hits, slides along, and exits from the various constraints. Path following in Lasso penalized regression, in contrast, starts with a large value of the penalty constant and works its way downward. In both settings, inspection of the entire solution path is revealing. Just as with the Lasso and generalized Lasso, it is possible to plot the effective degrees of freedom along the solution path. For a strictly convex quadratic program, the exact penalty algorithm can be framed entirely in terms of the sweep operator of regression analysis. A few well-chosen examples illustrate the mechanics and potential of path following. This article has supplementary materials available online. PMID:24039382
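
The contrast between quadratic and exact penalties can be seen in one dimension: minimize ½(x − d)² subject to x ≥ 0 with d < 0. The quadratic penalty yields x(ρ) = d/(1 + ρ), reaching the constrained solution only as ρ → ∞, while the absolute-value (exact) penalty yields x(ρ) = min(d + ρ, 0), which hits x = 0 exactly once ρ ≥ −d. A sketch of this one-variable case (not the article's sweep-operator algorithm):

```python
def quadratic_penalty_argmin(d, rho):
    """argmin_x 0.5*(x - d)**2 + 0.5*rho*min(x, 0)**2, with d < 0:
    the constrained solution x = 0 is reached only as rho -> infinity."""
    return d / (1 + rho)

def exact_penalty_argmin(d, rho):
    """argmin_x 0.5*(x - d)**2 + rho*max(-x, 0), with d < 0:
    the constrained solution x = 0 is recovered exactly once rho >= -d."""
    return min(d + rho, 0.0)

for rho in [1.0, 2.0, 10.0]:
    print(rho, quadratic_penalty_argmin(-2.0, rho), exact_penalty_argmin(-2.0, rho))
```

At ρ = 2 the exact penalty already returns 0.0 while the quadratic penalty is still at −2/3, which is the finite-penalty-constant property the abstract describes.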

  6. Probing dimensionality using a simplified 4-probe method.

    PubMed

    Kjeldby, Snorre B; Evenstad, Otto M; Cooil, Simon P; Wells, Justin W

    2017-10-04

    4-probe electrical measurements have been in existence for many decades. One of the most useful aspects of the 4-probe method is that it is not only possible to find the resistivity of a sample (independently of the contact resistances), but also to probe the dimensionality of the sample. In theory, this is straightforward to achieve by measuring the 4-probe resistance as a function of probe separation. In practice, it is challenging to move all four probes with sufficient precision over the necessary range. Here, we present an alternative approach. We demonstrate that the dimensionality of the conductive path within a sample can be directly probed using a modified 4-probe method in which an unconventional geometry is exploited: three of the probes are rigidly fixed, and the position of only one probe is changed. This allows 2D and 3D (and other) contributions to the resistivity to be readily disentangled. The required experimental instrumentation can be vastly simplified relative to traditional variable-spacing 4-probe instruments.
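
For the standard collinear, equally spaced geometry, the 4-probe resistance falls off as ρ/(2πs) in a semi-infinite 3D sample but is independent of the probe spacing s for an infinite 2D sheet (R = Rs·ln 2/π), which is what makes spacing-dependent measurements a dimensionality probe. A sketch of that textbook contrast (not the paper's modified fixed-probe geometry):

```python
import math

def r_3d(rho, s):
    """Collinear, equally spaced 4-probe resistance of a semi-infinite 3D sample."""
    return rho / (2 * math.pi * s)

def r_2d(sheet_resistance):
    """Infinite 2D sheet: independent of the probe spacing s."""
    return sheet_resistance * math.log(2) / math.pi

def looks_two_dimensional(resistances, tol=0.05):
    """Crude dimensionality probe: spacing-independent R suggests 2D transport."""
    mean = sum(resistances) / len(resistances)
    return all(abs(r - mean) <= tol * mean for r in resistances)

spacings = [1e-6, 2e-6, 4e-6]                                # probe spacings in m
print(looks_two_dimensional([r_3d(1.0, s) for s in spacings]))  # False: R ~ 1/s
print(looks_two_dimensional([r_2d(100.0) for _ in spacings]))   # True: R constant
```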

  7. Optical phase conjugation (OPC)-assisted isotropic focusing.

    PubMed

    Jang, Mooseok; Sentenac, Anne; Yang, Changhuei

    2013-04-08

    Isotropic optical focusing - the focusing of light with axial confinement that matches its lateral confinement, is important for a broad range of applications. Conventionally, such focusing is achieved by overlapping the focused beams from a pair of opposite-facing microscope objective lenses. However the exacting requirements for the alignment of the objective lenses and the method's relative intolerance to sample turbidity have significantly limited its utility. In this paper, we present an optical phase conjugation (OPC)-assisted isotropic focusing method that can address both challenges. We exploit the time-reversal nature of OPC playback to naturally guarantee the overlap of the two focused beams even when the objective lenses are significantly misaligned (up to 140 microns transversely and 80 microns axially demonstrated). The scattering correction capability of OPC also enabled us to accomplish isotropic focusing through thick scattering samples (demonstrated with samples of ~7 scattering mean free paths). This method can potentially improve 4Pi microscopy and 3D microstructure patterning.

  8. Estimating Brownian motion dispersal rate, longevity and population density from spatially explicit mark-recapture data on tropical butterflies.

    PubMed

    Tufto, Jarle; Lande, Russell; Ringsby, Thor-Harald; Engen, Steinar; Saether, Bernt-Erik; Walla, Thomas R; DeVries, Philip J

    2012-07-01

    1. We develop a Bayesian method for analysing mark-recapture data in continuous habitat using a model in which individuals' movement paths are Brownian motions, life spans are exponentially distributed and capture events occur at given instants in time if individuals are within a certain attractive distance of the traps. 2. The joint posterior distribution of the dispersal rate, longevity, trap attraction distances and a number of latent variables representing the unobserved movement paths and times of death of all individuals is computed using Gibbs sampling. 3. An estimate of absolute local population density is obtained simply by dividing the Poisson counts of individuals captured at given points in time by the estimated total attraction area of all traps. Our approach for estimating population density in continuous habitat avoids the need to define an arbitrary effective trapping area that characterized previous mark-recapture methods in continuous habitat. 4. We applied our method to estimate spatial demography parameters in nine species of neotropical butterflies. Path analysis of interspecific variation in demographic parameters and mean wing length revealed a simple network of strong causation. Larger wing length increases dispersal rate, which in turn increases trap attraction distance. However, higher dispersal rate also decreases longevity, thus explaining the surprising observation of a negative correlation between wing length and longevity. © 2012 The Authors. Journal of Animal Ecology © 2012 British Ecological Society.
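
Point 3 above amounts to a one-line estimator: divide the Poisson count of individuals captured at a census instant by the summed attraction area of the traps. A minimal sketch, assuming non-overlapping circular attraction zones and hypothetical numbers:

```python
import math

def density_estimate(captures_at_census, trap_radii):
    """Absolute population density: individuals captured at one census instant
    divided by the total attraction area of all traps. Assumes non-overlapping
    circular attraction zones; radii are the estimated attraction distances."""
    total_area = sum(math.pi * r * r for r in trap_radii)
    return captures_at_census / total_area

# Hypothetical census: 18 butterflies caught by 5 traps, 12 m attraction radius
print(density_estimate(18, [12.0] * 5))  # individuals per m^2
```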

  9. Hierarchical Motion Planning for Autonomous Aerial and Terrestrial Vehicles

    NASA Astrophysics Data System (ADS)

    Cowlagi, Raghvendra V.

    Autonomous mobile robots---both aerial and terrestrial vehicles---have gained immense importance due to the broad spectrum of their potential military and civilian applications. One of the indispensable requirements for the autonomy of a mobile vehicle is the vehicle's capability of planning and executing its motion, that is, finding appropriate control inputs for the vehicle such that the resulting vehicle motion satisfies the requirements of the vehicular task. The motion planning and control problem is inherently complex because it involves two disparate sub-problems: (1) satisfaction of the vehicular task requirements, which requires tools from combinatorics and/or formal methods, and (2) design of the vehicle control laws, which requires tools from dynamical systems and control theory. Accordingly, this problem is usually decomposed and solved over two levels of hierarchy. The higher level, called the geometric path planning level, finds a geometric path that satisfies the vehicular task requirements, e.g., obstacle avoidance. The lower level, called the trajectory planning level, involves sufficient smoothening of this geometric path followed by a suitable time parametrization to obtain a reference trajectory for the vehicle. Although simple and efficient, such hierarchical decomposition suffers a serious drawback: the geometric path planner has no information of the kinematical and dynamical constraints of the vehicle. Consequently, the geometric planner may produce paths that the trajectory planner cannot transform into a feasible reference trajectory. Two main ideas appear in the literature to remedy this problem: (a) randomized sampling-based planning, which eliminates the geometric planner altogether by planning in the vehicle state space, and (b) geometric planning supported by feedback control laws. 
The former class of methods suffers from a lack of optimality of the resultant trajectory, while the latter class makes a restrictive assumption concerning the vehicle kinematical model. We propose a hierarchical motion planning framework based on a novel mode of interaction between these two levels of planning. This interaction rests on the solution of a special shortest-path problem on graphs, namely, one using costs defined on multiple edge transitions in the path instead of the usual single edge transition costs. These costs are provided by a local trajectory generation algorithm, which we implement using model predictive control (MPC) and the concept of effective target sets for simplifying the non-convex constraints involved in the problem. The proposed motion planner ensures "consistency" between the two levels of planning, i.e., a guarantee that the higher level geometric path is always associated with a kinematically and dynamically feasible trajectory. The main contributions of this thesis are:
1. A motion planning framework based on history-dependent costs (H-costs) in cell decomposition graphs for incorporating vehicle dynamical constraints; this framework offers distinct advantages in comparison with the competing approaches of discretization of the state space, of randomized sampling-based motion planning, and of local feedback-based, decoupled hierarchical motion planning.
2. An efficient and flexible algorithm for finding optimal H-cost paths.
3. A precise and general formulation of a local trajectory problem (the tile motion planning problem) that allows independent development of the discrete planner and the trajectory planner, while maintaining "compatibility" between the two planners.
4. A local trajectory generation algorithm using MPC, and the application of the concept of effective target sets for a significant simplification of the local trajectory generation problem.
5. The geometric analysis of curvature-bounded traversal of rectangular channels, leading to less conservative results in comparison with a result reported in the literature, and also to the efficient construction of effective target sets for the solution of the tile motion planning problem.
6. A wavelet-based multi-resolution path planning scheme, and a proof of completeness of the proposed scheme; such proofs are altogether absent from other works on multi-resolution path planning.
7. A technique for extracting all information about cells (the locations, the sizes, and the associated image intensities) directly from the set of significant detail coefficients considered for path planning at a given iteration.
8. The extension of the multi-resolution path planning scheme to include vehicle dynamical constraints using the aforementioned history-dependent-costs approach.
Future work includes an implementation of the proposed framework involving a discrete planner that solves classical planning problems more general than the single-query path planning problem considered thus far, and involving trajectory generation schemes for realistic vehicle dynamical models such as the bicycle model.
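
The special shortest-path problem described, with costs on multiple edge transitions rather than single edges, can be illustrated in its simplest (two-edge-history) form: lift the search to states that are directed edges, after which ordinary Dijkstra applies. A sketch of this reduction, not the thesis's H-cost algorithm (the graph and costs below are hypothetical):

```python
import heapq

def h_cost_shortest_path(edges, pair_cost, start, goal):
    """Shortest path where cost is defined on consecutive edge pairs.
    States are directed edges (u, v); plain Dijkstra then applies.
    edges: dict node -> list of successor nodes.
    pair_cost(u, v, w): cost of traversing v->w after arriving via u->v.
    The very first edge is seeded with cost 0 in this sketch."""
    heap = [(0.0, start, v, (start, v)) for v in edges.get(start, [])]
    heapq.heapify(heap)
    best = {}
    paths = {(start, v): [start, v] for v in edges.get(start, [])}
    while heap:
        cost, u, v, key = heapq.heappop(heap)
        if key in best and best[key] < cost:
            continue                       # stale queue entry
        if v == goal:
            return cost, paths[key]
        for w in edges.get(v, []):
            c = cost + pair_cost(u, v, w)
            k = (v, w)
            if k not in best or c < best[k]:
                best[k] = c
                paths[k] = paths[key] + [w]
                heapq.heappush(heap, (c, v, w, k))
    return None

edges = {0: [1, 2], 1: [3], 2: [3], 3: [4]}

def pair_cost(u, v, w):
    # Hypothetical: every transition costs 1, except one "sharp turn".
    return 1.0 + (2.0 if (u, v, w) == (0, 1, 3) else 0.0)

print(h_cost_shortest_path(edges, pair_cost, 0, 4))  # (2.0, [0, 2, 3, 4])
```

Because the planner avoids the penalized transition (0, 1, 3), it routes through node 2, which is exactly the behavior that lets multi-edge costs encode curvature-like constraints a single-edge cost cannot express.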

  10. Reconstruction for proton computed tomography by tracing proton trajectories: A Monte Carlo study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Tianfang; Liang Zhengrong; Singanallur, Jayalakshmi V.

    Proton computed tomography (pCT) has been explored in the past decades because of its unique imaging characteristics, low radiation dose, and its possible use for treatment planning and on-line target localization in proton therapy. However, reconstruction of pCT images is challenging because the proton path within the object to be imaged is statistically affected by multiple Coulomb scattering. In this paper, we employ GEANT4-based Monte Carlo simulations of the two-dimensional pCT reconstruction of an elliptical phantom to investigate the possible use of the algebraic reconstruction technique (ART) with three different path-estimation methods for pCT reconstruction. The first method assumes a straight-line path (SLP) connecting the proton entry and exit positions, the second method adapts the most-likely path (MLP) theoretically determined for a uniform medium, and the third method employs a cubic spline path (CSP). The ART reconstructions showed progressive improvement of spatial resolution when going from the SLP [2 line pairs (lp) cm⁻¹] to the curved CSP and MLP path estimates (5 lp cm⁻¹). The MLP-based ART algorithm had the fastest convergence and smallest residual error of all three estimates. This work demonstrates the advantage of tracking curved proton paths in conjunction with the ART algorithm and curved path estimates.

  11. Dissemination of veterinary antibiotics and corresponding resistance genes from a concentrated swine feedlot along the waste treatment paths.

    PubMed

    Wang, Jian; Ben, Weiwei; Yang, Min; Zhang, Yu; Qiang, Zhimin

    2016-01-01

    Swine feedlots are an important pollution source of antibiotics and antibiotic resistance genes (ARGs) to the environment. This study investigated the dissemination of two classes of commonly used veterinary antibiotics, namely tetracyclines (TCs) and sulfonamides (SAs), and their corresponding ARGs along the waste treatment paths of a concentrated swine feedlot located in Beijing, China. The highest total TC and total SA concentrations detected were 166.7 mg kg⁻¹ and 64.5 μg kg⁻¹ in swine manure, and 388.7 and 7.56 μg L⁻¹ in swine wastewater, respectively. Fourteen tetracycline resistance genes (TRGs) encoding ribosomal protection proteins (RPPs), efflux proteins (EFPs) and enzymatic inactivation proteins, three sulfonamide resistance genes (SRGs), and two integrase genes were detected along the waste treatment paths, with detection frequencies of 33.3-75.0%. The relative abundances of the target ARGs ranged from 2.74×10⁻⁶ to 1.19. The antibiotics and ARGs generally declined along both waste treatment paths, but the reduction was more significant along the manure treatment path. The RPP TRGs dominated in the upstream samples and then decreased continuously along both waste treatment paths, whilst the EFP TRGs and SRGs remained relatively stable. Strong correlations between antibiotic concentrations and ARGs were observed in both manure and wastewater samples. In addition, seasonal temperature, integrase genes, and the moisture content and nutrient level of the tested samples could all affect the relative abundances of ARGs along the swine waste treatment paths. This study helps in understanding the evolution and spread of ARGs from swine feedlots to the environment, as well as in assessing the environmental risk arising from swine waste treatment. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. A Vision-Aided 3D Path Teaching Method before Narrow Butt Joint Welding

    PubMed Central

    Zeng, Jinle; Chang, Baohua; Du, Dong; Peng, Guodong; Chang, Shuhe; Hong, Yuxiang; Wang, Li; Shan, Jiguo

    2017-01-01

    For better welding quality, accurate path teaching for actuators must be achieved before welding. Due to machining errors, assembly errors, deformations, etc., the actual groove position may be different from the predetermined path. Therefore, it is significant to recognize the actual groove position using machine vision methods and perform an accurate path teaching process. However, during the teaching process of a narrow butt joint, the existing machine vision methods may fail because of poor adaptability, low resolution, and lack of 3D information. This paper proposes a 3D path teaching method for narrow butt joint welding. This method obtains two kinds of visual information nearly at the same time, namely 2D pixel coordinates of the groove in uniform lighting condition and 3D point cloud data of the workpiece surface in cross-line laser lighting condition. The 3D position and pose between the welding torch and groove can be calculated after information fusion. The image resolution can reach 12.5 μm. Experiments are carried out at an actuator speed of 2300 mm/min and groove width of less than 0.1 mm. The results show that this method is suitable for groove recognition before narrow butt joint welding and can be applied in path teaching fields of 3D complex components. PMID:28492481

  13. A Vision-Aided 3D Path Teaching Method before Narrow Butt Joint Welding.

    PubMed

    Zeng, Jinle; Chang, Baohua; Du, Dong; Peng, Guodong; Chang, Shuhe; Hong, Yuxiang; Wang, Li; Shan, Jiguo

    2017-05-11

    For better welding quality, accurate path teaching for actuators must be achieved before welding. Due to machining errors, assembly errors, deformations, etc., the actual groove position may differ from the predetermined path. Therefore, it is essential to recognize the actual groove position using machine vision methods and perform an accurate path teaching process. However, during the teaching process of a narrow butt joint, existing machine vision methods may fail because of poor adaptability, low resolution, and lack of 3D information. This paper proposes a 3D path teaching method for narrow butt joint welding. The method obtains two kinds of visual information nearly at the same time, namely 2D pixel coordinates of the groove under uniform lighting conditions and 3D point cloud data of the workpiece surface under cross-line laser lighting. The 3D position and pose between the welding torch and groove can be calculated after information fusion. The image resolution can reach 12.5 μm. Experiments were carried out at an actuator speed of 2300 mm/min and a groove width of less than 0.1 mm. The results show that this method is suitable for groove recognition before narrow butt joint welding and can be applied to path teaching for complex 3D components.

  14. On edge-aware path-based color spatial sampling for Retinex: from Termite Retinex to Light Energy-driven Termite Retinex

    NASA Astrophysics Data System (ADS)

    Simone, Gabriele; Cordone, Roberto; Serapioni, Raul Paolo; Lecca, Michela

    2017-05-01

    Retinex theory estimates the human color sensation at any observed point by correcting its color based on the spatial arrangement of the colors in proximate regions. We revise two recent path-based, edge-aware Retinex implementations: Termite Retinex (TR) and Energy-driven Termite Retinex (ETR). Like the original Retinex implementation, TR and ETR scan the neighborhood of any image pixel by paths and rescale their chromatic intensities using intensity levels computed by reworking the colors of the pixels on the paths. Our interest in TR and ETR is due to their unique, content-based scanning scheme, which uses the image edges to define the paths and exploits a swarm intelligence model for guiding the spatial exploration of the image. The exploration scheme of ETR has been shown to be particularly effective: its paths are local minima of an energy functional designed to favor the sampling of image pixels highly relevant to color sensation. Nevertheless, since its computational complexity makes ETR impractical, here we present a light version of it, named Light Energy-driven TR, obtained from ETR by implementing a modified, optimized minimization procedure and by exploiting parallel computing.
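
    The path-based sampling idea behind TR and ETR can be illustrated with a minimal sketch (not the authors' swarm-guided, edge-aware implementation): a pixel's lightness is estimated by averaging, over many exploration paths, its intensity divided by the maximum intensity met along each path. The plain random-walk exploration and all parameters below are simplifying assumptions.

```python
import random

def path_retinex(image, start, n_paths=200, path_len=20, seed=7):
    """Estimate a Retinex-style lightness at `start` as the average, over
    random walks, of the pixel value divided by the maximum value met on
    the walk. `image` is a 2D list of intensities in (0, 1]."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    total = 0.0
    for _ in range(n_paths):
        y, x = start
        peak = image[y][x]
        for _ in range(path_len):
            dy, dx = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
            y = min(max(y + dy, 0), h - 1)  # clamp walk to the image
            x = min(max(x + dx, 0), w - 1)
            peak = max(peak, image[y][x])
        total += image[start[0]][start[1]] / peak
    return total / n_paths
```

    On a uniform image every ratio is 1, so the estimated lightness is 1; a brighter patch elsewhere pulls the estimate of darker pixels below 1 as paths reach it.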

  15. Apparatus, system, and method for laser-induced breakdown spectroscopy

    DOEpatents

    Effenberger, Jr., Andrew J; Scott, Jill R; McJunkin, Timothy R

    2014-11-18

    In laser-induced breakdown spectroscopy (LIBS), an apparatus includes a pulsed laser configured to generate a pulsed laser signal toward a sample, a constructive interference object and an optical element, each located in a path of light from the sample. The constructive interference object is configured to generate constructive interference patterns of the light. The optical element is configured to disperse the light. A LIBS system includes a first and a second optical element, and a data acquisition module. The data acquisition module is configured to determine an isotope measurement based, at least in part, on light received by an image sensor from the first and second optical elements. A method for performing LIBS includes generating a pulsed laser on a sample to generate light from a plasma, generating constructive interference patterns of the light, and dispersing the light into a plurality of wavelengths.

  16. POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS

    PubMed Central

    Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.

    2013-01-01

    Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well-known technique of generating a large number of samples in a Monte Carlo study, and estimating power as the percentage of cases in which an estimate of interest is significantly different from zero. Examples of power calculation for commonly used mediational models are provided. Power analyses for the single mediator, multiple mediators, three-path mediation, mediation with latent variables, moderated mediation, and mediation in longitudinal designs are described. Annotated sample syntax for Mplus is appended and tabled values of required sample sizes are shown for some models. PMID:23935262
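
    The Monte Carlo idea described above can be sketched for the simplest single-mediator case (a minimal illustration, not the authors' Mplus-based procedure; the Sobel z test, the effect sizes, and the sample size below are assumptions): simulate many data sets under the assumed model, test the indirect effect in each, and take the rejection rate as the power estimate.

```python
import random, math

def ols_slope(x, y):
    """Simple-regression slope and its standard error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    rss = sum((yi - my - b * (xi - mx)) ** 2 for xi, yi in zip(x, y))
    return b, math.sqrt(rss / (n - 2) / sxx)

def mediation_power(a=0.39, b=0.39, n=100, reps=1000, seed=1):
    """Monte Carlo power for the indirect effect a*b in the model
    M = a*X + e, Y = b*M + e (no direct effect, an assumed simplification),
    using the Sobel z test at the 5% level."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        X = [rng.gauss(0, 1) for _ in range(n)]
        M = [a * xi + rng.gauss(0, 1) for xi in X]
        Y = [b * mi + rng.gauss(0, 1) for mi in M]
        ah, sa = ols_slope(X, M)
        bh, sb = ols_slope(M, Y)
        z = (ah * bh) / math.sqrt(ah * ah * sb * sb + bh * bh * sa * sa)
        if abs(z) > 1.96:
            hits += 1
    return hits / reps
```

    With both paths set to a medium effect (0.39) and n = 100, the estimated power is high, while setting a = 0 drives the rejection rate down toward the test's (conservative) type-I level.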

  17. Research on Rigid Body Motion Tracing in Space based on NX MCD

    NASA Astrophysics Data System (ADS)

    Wang, Junjie; Dai, Chunxiang; Shi, Karen; Qin, Rongkang

    2018-03-01

    In MCD (Mechatronics Concept Designer), a module of the SIEMENS industrial design software UG (Unigraphics NX), users can define rigid bodies and kinematic joints to make objects move according to an existing plan in simulation. At this stage, users may wish to see the path of selected points on the moving object intuitively. In response to this requirement, this paper computes the pose from the transformation matrix available from the solver engine, and then fits the sampled points with a B-spline curve. In addition, taking the actual constraints on the rigid bodies into account, the traditional equal-interval sampling strategy was optimized. The results show that this method satisfies the requirement and makes up for the deficiencies of the traditional sampling method. Users can still edit and model on the resulting 3D curve. The expected result has been achieved.
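
    The B-spline step can be illustrated with a minimal sketch (not the NX MCD implementation): evaluating a uniform cubic B-spline through 3D control points, here standing in for the sampled pose positions, which are an assumed input.

```python
def cubic_bspline(points, samples_per_seg=10):
    """Trace a uniform cubic B-spline defined by 3D control `points`
    (list of (x, y, z) tuples); returns a list of sampled curve points."""
    curve = []
    for i in range(1, len(points) - 2):          # one segment per interior point
        for s in range(samples_per_seg + 1):
            t = s / samples_per_seg
            # Uniform cubic B-spline basis functions (sum to 1 for all t).
            b0 = (1 - t) ** 3 / 6
            b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6
            b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6
            b3 = t ** 3 / 6
            curve.append(tuple(
                b0 * points[i - 1][k] + b1 * points[i][k] +
                b2 * points[i + 1][k] + b3 * points[i + 2][k]
                for k in range(3)))
    return curve
```

    Because the basis functions form a partition of unity, control points sampled along a straight line produce curve points on that same line, which is a quick sanity check for an implementation.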

  18. Dynamical mechanism in aero-engine gas path system using minimum spanning tree and detrended cross-correlation analysis

    NASA Astrophysics Data System (ADS)

    Dong, Keqiang; Zhang, Hong; Gao, You

    2017-01-01

    Identifying the mutual interactions in an aero-engine gas path system is a crucial problem that facilitates the understanding of emerging structures in complex systems. By applying the multiscale multifractal detrended cross-correlation analysis method to the aero-engine gas path system, the cross-correlation characteristics between gas path system parameters are established. Further, we apply a multiscale multifractal detrended cross-correlation distance matrix and the minimum spanning tree to investigate the mutual interactions of the gas path variables. The results indicate that the low-spool rotor speed (N1) and engine pressure ratio (EPR) are the main gas path parameters. The application of the proposed method contributes to promoting our understanding of the internal mechanisms and structures of aero-engine dynamics.

  19. Path-integral method for the source apportionment of photochemical pollutants

    NASA Astrophysics Data System (ADS)

    Dunker, A. M.

    2015-06-01

    A new, path-integral method is presented for apportioning the concentrations of pollutants predicted by a photochemical model to emissions from different sources. A novel feature of the method is that it can apportion the difference in a species concentration between two simulations. For example, the anthropogenic ozone increment, which is the difference between a simulation with all emissions present and another simulation with only the background (e.g., biogenic) emissions included, can be allocated to the anthropogenic emission sources. The method is based on an existing, exact mathematical equation. This equation is applied to relate the concentration difference between simulations to line or path integrals of first-order sensitivity coefficients. The sensitivities describe the effects of changing the emissions and are accurately calculated by the decoupled direct method. The path represents a continuous variation of emissions between the two simulations, and each path can be viewed as a separate emission-control strategy. The method does not require auxiliary assumptions, e.g., whether ozone formation is limited by the availability of volatile organic compounds (VOCs) or nitrogen oxides (NOx), and can be used for all the species predicted by the model. A simplified configuration of the Comprehensive Air Quality Model with Extensions (CAMx) is used to evaluate the accuracy of different numerical integration procedures and the dependence of the source contributions on the path. A Gauss-Legendre formula using three or four points along the path gives good accuracy for apportioning the anthropogenic increments of ozone, nitrogen dioxide, formaldehyde, and nitric acid. Source contributions to these increments were obtained for paths representing proportional control of all anthropogenic emissions together, control of NOx emissions before VOC emissions, and control of VOC emissions before NOx emissions. 
There are similarities in the source contributions from the three paths but also differences due to the different chemical regimes resulting from the emission-control strategies.
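
    The path-integral apportionment can be illustrated on a toy concentration function (a sketch, not CAMx; the two-source function and its analytic sensitivities below are assumptions). Each source's contribution is a line integral of its first-order sensitivity along a proportional emission path, evaluated here by three-point Gauss-Legendre quadrature, and the contributions sum exactly to the concentration difference between the two endpoint simulations.

```python
import math

# Three-point Gauss-Legendre nodes and weights mapped from [-1, 1] to [0, 1].
_T = [-math.sqrt(3 / 5), 0.0, math.sqrt(3 / 5)]
NODES = [(t + 1) / 2 for t in _T]
WEIGHTS = [5 / 18, 8 / 18, 5 / 18]

def apportion(e, grad, n_sources):
    """Apportion C(e) - C(0) among sources along the proportional path
    E(lam) = lam * e, where `grad(E)` returns the list of first-order
    sensitivities dC/dE_i at emission vector E."""
    contrib = [0.0] * n_sources
    for lam, w in zip(NODES, WEIGHTS):
        g = grad([lam * ei for ei in e])
        for i in range(n_sources):
            contrib[i] += w * e[i] * g[i]  # quadrature of e_i * dC/dE_i along path
    return contrib
```

    For the assumed toy model C(E1, E2) = E1*E2 + E1^2 with gradient (E2 + 2*E1, E1) and emissions (2, 3), the contributions are 7 and 3, summing to C = 10, since the quadrature is exact for the linear integrands.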

  20. Path-integral method for the source apportionment of photochemical pollutants

    NASA Astrophysics Data System (ADS)

    Dunker, A. M.

    2014-12-01

    A new, path-integral method is presented for apportioning the concentrations of pollutants predicted by a photochemical model to emissions from different sources. A novel feature of the method is that it can apportion the difference in a species concentration between two simulations. For example, the anthropogenic ozone increment, which is the difference between a simulation with all emissions present and another simulation with only the background (e.g., biogenic) emissions included, can be allocated to the anthropogenic emission sources. The method is based on an existing, exact mathematical equation. This equation is applied to relate the concentration difference between simulations to line or path integrals of first-order sensitivity coefficients. The sensitivities describe the effects of changing the emissions and are accurately calculated by the decoupled direct method. The path represents a continuous variation of emissions between the two simulations, and each path can be viewed as a separate emission-control strategy. The method does not require auxiliary assumptions, e.g., whether ozone formation is limited by the availability of volatile organic compounds (VOCs) or nitrogen oxides (NOx), and can be used for all the species predicted by the model. A simplified configuration of the Comprehensive Air Quality Model with Extensions is used to evaluate the accuracy of different numerical integration procedures and the dependence of the source contributions on the path. A Gauss-Legendre formula using three or four points along the path gives good accuracy for apportioning the anthropogenic increments of ozone, nitrogen dioxide, formaldehyde, and nitric acid. Source contributions to these increments were obtained for paths representing proportional control of all anthropogenic emissions together, control of NOx emissions before VOC emissions, and control of VOC emissions before NOx emissions. 
There are similarities in the source contributions from the three paths but also differences due to the different chemical regimes resulting from the emission-control strategies.

  1. Persistent Psychopathology in the Wake of Civil War: Long-Term Posttraumatic Stress Disorder in Nimba County, Liberia

    PubMed Central

    Rockers, Peter C.; Saydee, Geetor; Macauley, Rose; Varpilah, S. Tornorlah; Kruk, Margaret E.

    2010-01-01

    Objectives. We assessed the geographical distribution of posttraumatic stress disorder (PTSD) in postconflict Nimba County, Liberia, nearly 2 decades after the end of primary conflict in the area, and we related this pattern to the history of conflict. Methods. We administered individual surveys to a population-based sample of 1376 adults aged 19 years or older. In addition, we conducted a historical analysis of conflict in Nimba County, Liberia, where the civil war started in 1989. Results. The prevalence of PTSD in Nimba County was high at 48.3% (95% confidence interval = 45.7, 50.9; n = 664). The geographical patterns of traumatic event experiences and of PTSD were consistent with the best available information about the path of the intranational conflict that Nimba County experienced in 1989–1990. Conclusions. The demonstration of a “path of PTSD” coincident with the decades-old path of violence dramatically underscores the direct link between population burden of psychopathology and the experience of violent conflict. Persistent postconflict disruptions of social and physical context may explain some of the observed patterns. PMID:20634461

  2. Spreading paths in partially observed social networks

    NASA Astrophysics Data System (ADS)

    Onnela, Jukka-Pekka; Christakis, Nicholas A.

    2012-03-01

    Understanding how and how far information, behaviors, or pathogens spread in social networks is an important problem, having implications for both predicting the size of epidemics, as well as for planning effective interventions. There are, however, two main challenges for inferring spreading paths in real-world networks. One is the practical difficulty of observing a dynamic process on a network, and the other is the typical constraint of only partially observing a network. Using static, structurally realistic social networks as platforms for simulations, we juxtapose three distinct paths: (1) the stochastic path taken by a simulated spreading process from source to target; (2) the topologically shortest path in the fully observed network, and hence the single most likely stochastic path, between the two nodes; and (3) the topologically shortest path in a partially observed network. In a sampled network, how closely does the partially observed shortest path (3) emulate the unobserved spreading path (1)? Although partial observation inflates the length of the shortest path, the stochastic nature of the spreading process also frequently derails the dynamic path from the shortest path. We find that the partially observed shortest path does not necessarily give an inflated estimate of the length of the process path; in fact, partial observation may, counterintuitively, make the path seem shorter than it actually is.

  3. Spreading paths in partially observed social networks.

    PubMed

    Onnela, Jukka-Pekka; Christakis, Nicholas A

    2012-03-01

    Understanding how and how far information, behaviors, or pathogens spread in social networks is an important problem, having implications for both predicting the size of epidemics, as well as for planning effective interventions. There are, however, two main challenges for inferring spreading paths in real-world networks. One is the practical difficulty of observing a dynamic process on a network, and the other is the typical constraint of only partially observing a network. Using static, structurally realistic social networks as platforms for simulations, we juxtapose three distinct paths: (1) the stochastic path taken by a simulated spreading process from source to target; (2) the topologically shortest path in the fully observed network, and hence the single most likely stochastic path, between the two nodes; and (3) the topologically shortest path in a partially observed network. In a sampled network, how closely does the partially observed shortest path (3) emulate the unobserved spreading path (1)? Although partial observation inflates the length of the shortest path, the stochastic nature of the spreading process also frequently derails the dynamic path from the shortest path. We find that the partially observed shortest path does not necessarily give an inflated estimate of the length of the process path; in fact, partial observation may, counterintuitively, make the path seem shorter than it actually is.
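
    The comparison between the dynamic spreading path (1) and the shortest path in the fully observed network (2) can be sketched on a toy graph (a minimal illustration; the SI-type spreading model, the cycle graph, and the transmission probability below are assumptions): simulate a spread, record each node's infector, and compare the hop count of the resulting transmission path with the BFS distance.

```python
import random
from collections import deque

def bfs_distance(graph, src, dst):
    """Topologically shortest path length (hops) in an adjacency-list graph."""
    seen, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            return seen[u]
        for v in graph[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                q.append(v)
    return None

def spreading_path_length(graph, src, dst, beta=0.5, rng=None):
    """Simulate a discrete-time SI spread with per-contact transmission
    probability `beta`; return the hop count of the transmission path
    from src to dst, or None if dst was never reached."""
    rng = rng or random.Random()
    parent = {src: None}
    frontier = [src]
    while frontier and dst not in parent:
        nxt = []
        for u in frontier:
            for v in graph[u]:
                if v not in parent and rng.random() < beta:
                    parent[v] = u          # record the infector
                    nxt.append(v)
        frontier = nxt
    if dst not in parent:
        return None
    hops, node = 0, dst
    while parent[node] is not None:
        node, hops = parent[node], hops + 1
    return hops
```

    The dynamic path can never be shorter than the BFS distance, and with beta = 1 the spread reduces to breadth-first search, so the two lengths coincide.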

  4. A novel multi-segment path analysis based on a heterogeneous velocity model for the localization of acoustic emission sources in complex propagation media.

    PubMed

    Gollob, Stephan; Kocur, Georg Karl; Schumacher, Thomas; Mhamdi, Lassaad; Vogel, Thomas

    2017-02-01

    In acoustic emission analysis, common source location algorithms assume, independently of the nature of the propagation medium, a straight (shortest) wave path between the source and the sensors. For heterogeneous media such as concrete, the wave travels in complex paths due to the interaction with the dissimilar material contents and with the possible geometrical and material irregularities present in these media. For instance, cracks and large air voids present in concrete influence significantly the way the wave travels, by causing wave path deviations. Neglecting these deviations by assuming straight paths can introduce significant errors to the source location results. In this paper, a novel source localization method called FastWay is proposed. It accounts, contrary to most available shortest path-based methods, for the different effects of material discontinuities (cracks and voids). FastWay, based on a heterogeneous velocity model, uses the fastest rather than the shortest travel paths between the source and each sensor. The method was evaluated both numerically and experimentally and the results from both evaluation tests show that, in general, FastWay was able to locate sources of acoustic emissions more accurately and reliably than the traditional source localization methods. Copyright © 2016 Elsevier B.V. All rights reserved.
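
    The fastest-path idea behind FastWay can be sketched with Dijkstra's algorithm over a grid of per-cell wave speeds (a minimal illustration, not the FastWay implementation; the grid model and the half-cell travel-time edge cost are assumptions). A slow or blocked cell, standing in for a crack or void, forces the minimum-travel-time path to detour even though the straight path is geometrically shorter.

```python
import heapq

def fastest_path_time(speed, start, goal):
    """Dijkstra over a 2D grid of per-cell wave speeds; the cost of moving
    between adjacent cells is the travel time across the two half-cells
    (unit spacing). Cells with speed 0 are impassable. Returns the minimum
    travel time from start to goal, or None if unreachable."""
    h, w = len(speed), len(speed[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if (y, x) == goal:
            return d
        if d > dist[(y, x)]:
            continue                      # stale queue entry
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and speed[ny][nx] > 0:
                t = d + 0.5 / speed[y][x] + 0.5 / speed[ny][nx]
                if t < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = t
                    heapq.heappush(pq, (t, (ny, nx)))
    return None
```

    On a uniform-speed grid the fastest path is the straight one; making the middle row ten times slower makes the detour around it faster than the direct crossing.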

  5. Low cost label-free live cell imaging for biological samples

    NASA Astrophysics Data System (ADS)

    Seniya, C.; Towers, C. E.; Towers, D. P.

    2017-02-01

    This paper reports progress toward developing a practical phase-measuring microscope offering new capabilities in terms of phase measurement accuracy and quantification of cell:cell interactions over the longer term. A novel, low cost phase interference microscope for label-free imaging of live cells is described. The method combines the Zernike phase contrast approach with a dual mirror design to enable phase modulation between the scattered and un-scattered optical fields. Two designs are proposed and demonstrated, one of which retains the common path nature of Zernike's original microscopy concept. In both setups the phase shift is simple to control via a piezoelectric-driven mirror in the back focal plane of the imaging system. The approach is significantly cheaper to implement than those based on spatial light modulators (SLMs), at approximately 20% of the cost. A quantitative assessment of the performance of a set of phase shifting algorithms is also presented, specifically with regard to broad-bandwidth illumination in phase contrast microscopy. The simulation results show that the phase measurement accuracy is strongly dependent on the algorithm selected and the optical path difference in the sample.
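
    A standard four-step phase-shifting algorithm of the kind assessed in such work can be sketched as follows (a generic textbook illustration; the paper evaluates a set of algorithms and does not necessarily use this one). With four intensity frames shifted in phase by pi/2, the wrapped phase follows from a single arctangent.

```python
import math

def four_step_phase(i1, i2, i3, i4):
    """Recover the wrapped phase phi from four interferogram intensities
    I_k = A + B*cos(phi + k*pi/2), k = 0..3. Then
    i4 - i2 = 2B*sin(phi) and i1 - i3 = 2B*cos(phi)."""
    return math.atan2(i4 - i2, i1 - i3)
```

    A quick check: synthesizing the four frames for a known phase and background recovers that phase exactly for any phi in (-pi, pi].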

  6. Development of a detector in order to investigate (n,γ)-cross sections by ToF method with a very short flight path

    NASA Astrophysics Data System (ADS)

    Wolf, C.; Glorius, J.; Reifarth, R.; Weigand, M.

    2018-01-01

    The determination of neutron capture cross sections of some radioactive isotopes like 85Kr is very important to improve the knowledge about the s process. Because of their own radioactive decay, these isotopes can only be used as small samples inside a ToF facility, which is why the neutron flux of such facilities has to be very high. Unfortunately, the neutron flux of the FRANZ setup at Goethe University Frankfurt, which will offer the highest neutron flux in astrophysical energy regions (keV region) [1], is still too low to investigate isotopes like 85Kr. Therefore, a new setup called NAUTILUS is under development, which will reduce the flight path from 80 cm to a few centimeters to enhance the angular coverage of the sample and thereby increase the neutron flux by a factor of nearly 100. This implies a higher intensity of the γ-flash inside the detector and a higher neutron-induced background. Hence, the geometry, the scintillator material and the moderator were optimized with GEANT3 simulations.

  7. The application of compressive sampling in rapid ultrasonic computerized tomography (UCT) technique of steel tube slab (STS).

    PubMed

    Jiang, Baofeng; Jia, Pengjiao; Zhao, Wen; Wang, Wentao

    2018-01-01

    This paper explores a new method for rapid structural damage inspection of steel tube slab (STS) structures along randomly measured paths based on a combination of compressive sampling (CS) and ultrasonic computerized tomography (UCT). In the measurement stage, using fewer randomly selected paths rather than the whole measurement net is proposed to detect the underlying damage of a concrete-filled steel tube. In the imaging stage, the ℓ1-minimization algorithm is employed to recover the information of the microstructures based on the measurement data related to the internal condition of the STS structure. A numerical concrete tube model, with various levels of damage, was studied to demonstrate the performance of the rapid UCT technique. Real-world concrete-filled steel tubes in the Shenyang Metro stations were inspected using the proposed UCT technique in a CS framework. Both the numerical and experimental results show that the rapid UCT technique is capable of detecting damage in an STS structure with a high level of accuracy and with fewer required measurements, making it more convenient and efficient than the traditional UCT technique.
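
    The ℓ1-minimization recovery step can be sketched with the iterative shrinkage-thresholding algorithm, ISTA (a generic sparse-recovery solver sketch; the paper does not specify this particular algorithm, and the toy measurement matrix in the usage note is an assumption).

```python
def mat_vec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def mat_t_vec(A, v):
    return [sum(A[i][j] * v[i] for i in range(len(A))) for j in range(len(A[0]))]

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return [max(abs(u) - t, 0.0) * (1 if u >= 0 else -1) for u in v]

def ista(A, y, lam, step, iters=300):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1. `step` must be at
    most 1 / (largest eigenvalue of A^T A) for guaranteed convergence."""
    x = [0.0] * len(A[0])
    for _ in range(iters):
        # Gradient of the smooth term: A^T (Ax - y), then shrink.
        r = [ri - yi for ri, yi in zip(mat_vec(A, x), y)]
        grad = mat_t_vec(A, r)
        x = soft([xi - step * g for xi, g in zip(x, grad)], step * lam)
    return x
```

    With A the identity (step 1 is then exact), the minimizer is the soft-thresholded data, so small entries are zeroed and large ones are shrunk by lam, which is the sparsifying behavior CS imaging relies on.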

  8. Importance sampling large deviations in nonequilibrium steady states. I.

    PubMed

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T

    2018-03-28

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
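
    The benefit of biasing sampling toward a rare event can be illustrated with simple exponential tilting for a random walker (a minimal importance-sampling sketch, not the transition path sampling or diffusion Monte Carlo methods evaluated in the paper; the Gaussian-step walker and tilt choice are assumptions). Trajectories are drawn from a drifted walk, and likelihood-ratio weights undo the bias exactly.

```python
import random, math

def tilted_tail_prob(n, a, samples=5000, seed=3):
    """Importance-sampling estimate of P(S_n >= a) for S_n a sum of n
    standard normal steps, drawing each step from N(theta, 1) with
    theta = a/n and weighting by the density ratio exp(-theta*x + theta^2/2)
    per step."""
    rng = random.Random(seed)
    theta = a / n
    total = 0.0
    for _ in range(samples):
        s, logw = 0.0, 0.0
        for _ in range(n):
            x = rng.gauss(theta, 1.0)
            s += x
            logw += -theta * x + 0.5 * theta * theta
        if s >= a:
            total += math.exp(logw)     # weighted indicator of the rare event
    return total / samples
```

    For n = 100 and a = 30 the target is the 3-sigma tail, about 1.35e-3; direct sampling would see the event only a handful of times in 5000 trajectories, while the tilted walk hits it on roughly half of them with small weights.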

  9. Photothermal method for in situ microanalysis of the chemical composition of coal samples

    DOEpatents

    Amer, Nabil M.

    1986-01-01

    Successive minute regions (13) along a scan path on a coal sample (11) are individually analyzed, at a series of different depths if desired, to determine chemical composition including the locations, sizes and distributions of different maceral inclusions (12). A sequence of infrared light pulses (17) of progressively changing wavelengths is directed into each minute region (13) and a probe light beam (22) is directed along the sample surface (21) adjacent the region (13). Infrared wavelengths at which strong absorption occurs in the region (13) are identified by detecting the resulting deflections (φ) of the probe beam (22) caused by thermally induced index of refraction changes in the air or other medium (19) adjacent the region (13). The detected peak absorption wavelengths are correlated with known characteristic peak absorption wavelengths of specific coal constituents to identify the composition of each such minute region (13) of the sample (11). The method enables rapid, convenient and non-destructive analyses of coal specimens to facilitate mining, processing and utilization of coals.

  10. Photothermal method for in situ microanalysis of the chemical composition of coal samples

    DOEpatents

    Amer, N.M.

    1983-10-25

    Successive minute regions along a scan path on a coal sample are individually analyzed, at a series of different depths if desired, to determine chemical composition including the locations, sizes and distributions of different maceral inclusions. A sequence of infrared light pulses of progressively changing wavelengths is directed into each minute region and a probe light beam is directed along the sample surface adjacent the region. Infrared wavelengths at which strong absorption occurs in the region are identified by detecting the resulting deflections of the probe beam caused by thermally induced index of refraction changes in the air or other medium adjacent the region. The detected peak absorption wavelengths are correlated with known characteristic peak absorption wavelengths of specific coal constituents to identify the composition of each such minute region of the sample. The method enables rapid, convenient and non-destructive analyses of coal specimens to facilitate mining, processing and utilization of coals. 2 figures.

  11. Importance sampling large deviations in nonequilibrium steady states. I

    NASA Astrophysics Data System (ADS)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2018-03-01

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  12. Biomarker Identification for Prostate Cancer and Lymph Node Metastasis from Microarray Data and Protein Interaction Network Using Gene Prioritization Method

    PubMed Central

    Arias, Carlos Roberto; Yeh, Hsiang-Yuan; Soo, Von-Wun

    2012-01-01

    Finding a genetic disease-related gene is not a trivial task. Therefore, computational methods are needed to present clues to the biomedical community about genes that are more likely to be related to a specific disease as biomarkers. We address the biomarker identification problem using a gene prioritization method called gene prioritization from microarray data based on shortest paths, extended with structural and biological properties and edge flux using a voting scheme (GP-MIDAS-VXEF). The method is based on finding relevant interactions in protein interaction networks, scoring the genes using shortest paths and topological analysis, and integrating the results using a voting scheme and a biological boosting. We performed two experiments, one with prostate primary and normal samples and the other with prostate primary tumors with and without lymph node metastasis, using 137 true prostate cancer genes as the benchmark. In the first experiment, GP-MIDAS-VXEF outperforms all the other state-of-the-art methods in the benchmark by retrieving the most truly related genes from the candidate set within the top 50 scores. We applied the same technique to infer significant biomarkers in prostate cancer with lymph node metastasis, which is not yet well established. PMID:22654636

  13. Mass spectrometer with electron source for reducing space charge effects in sample beam

    DOEpatents

    Houk, Robert S.; Praphairaksit, Narong

    2003-10-14

    A mass spectrometer includes an ion source which generates a beam including positive ions, a sampling interface which extracts a portion of the beam from the ion source to form a sample beam that travels along a path and has an excess of positive ions over at least part of the path, thereby causing space charge effects to occur in the sample beam due to the excess of positive ions in the sample beam, an electron source which adds electrons to the sample beam to reduce space charge repulsion between the positive ions in the sample beam, thereby reducing the space charge effects in the sample beam and producing a sample beam having reduced space charge effects, and a mass analyzer which analyzes the sample beam having reduced space charge effects.

  14. Path Planning Method in Multi-obstacle Marine Environment

    NASA Astrophysics Data System (ADS)

    Zhang, Jinpeng; Sun, Hanxv

    2017-12-01

    In this paper, an improved particle swarm optimization algorithm is proposed for the application of underwater robots in complex marine environments. The path planning considers not only obstacle avoidance but also the effects of the current's direction and magnitude on the robot's dynamic performance. The algorithm uses a trunk binary tree structure to construct the path search space, and an A* heuristic search is applied in this space to find a reference path. The particle swarm algorithm then optimizes the path by adjusting the evaluation function, which makes the underwater robot easier to control when navigating in currents and reduces its energy consumption.
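
    The A* search stage can be sketched on a 4-connected occupancy grid (a minimal illustration; the paper searches a trunk binary tree space and adds swarm optimization on top, so the grid model here is an assumption). A Manhattan-distance heuristic is admissible for unit-cost moves, so the returned path is shortest.

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid (0 = free, 1 = obstacle);
    returns the shortest path as a list of (row, col) cells, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    tick = count()                       # tie-breaker so heap tuples compare
    g = {start: 0}
    came = {}
    pq = [(h(start), next(tick), start)]
    closed = set()
    while pq:
        _, _, cur = heapq.heappop(pq)
        if cur == goal:
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        if cur in closed:
            continue
        closed.add(cur)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nxt = (cur[0] + dy, cur[1] + dx)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid[nxt[0]][nxt[1]] == 1:
                continue
            ng = g[cur] + 1
            if ng < g.get(nxt, float("inf")):
                g[nxt] = ng
                came[nxt] = cur
                heapq.heappush(pq, (ng + h(nxt), next(tick), nxt))
    return None
```

    On a small grid with a wall across the middle, the search routes around the obstacle; if no route exists it returns None.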

  15. Texture developed during deformation of Transformation Induced Plasticity (TRIP) steels

    NASA Astrophysics Data System (ADS)

    Bhargava, M.; Shanta, C.; Asim, T.; Sushil, M.

    2015-04-01

    The automotive industry is currently focusing on advanced high strength steels (AHSS) for closure applications due to their high strength and formability. Among AHSS, Transformation Induced Plasticity (TRIP) steel is a promising material for this application. The present work focuses on the microstructure development during deformation of TRIP steel sheets. To mimic the complex strain path conditions encountered when forming automotive bodies, Limit Dome Height (LDH) tests were conducted and samples were deformed in a servo-hydraulic press to obtain different strain paths. FEM simulations were performed to predict the strain path diagrams and compared with experimental results. There is a significant difference between the experimental and simulation results, as the existing material models are not applicable to TRIP steels. Micro-texture studies were performed on the samples using EBSD and XRD techniques. It was observed that austenite transformed to martensite and that the texture developed during deformation had a strong impact on the limit strain and strain path.

  16. Path Analysis and Residual Plotting as Methods of Environmental Scanning in Higher Education: An Illustration with Applications and Enrollments.

    ERIC Educational Resources Information Center

    Morcol, Goktug; McLaughlin, Gerald W.

    1990-01-01

    The study proposes using path analysis and residual plotting as methods supporting environmental scanning in strategic planning for higher education institutions. Path models of three levels of independent variables are developed. Dependent variables measuring applications and enrollments at Virginia Polytechnic Institute and State University are…

  17. PyRETIS: A well-done, medium-sized python library for rare events.

    PubMed

    Lervik, Anders; Riccardi, Enrico; van Erp, Titus S

    2017-10-30

    Transition path sampling techniques are becoming common approaches in the study of rare events at the molecular scale. More efficient methods, such as transition interface sampling (TIS) and replica exchange transition interface sampling (RETIS), allow the investigation of rare events, for example, chemical reactions and structural/morphological transitions, in a reasonable computational time. Here, we present PyRETIS, a Python library for performing TIS and RETIS simulations. PyRETIS directs molecular dynamics (MD) simulations in order to sample rare events with unbiased dynamics. PyRETIS is designed to be easily interfaced with any molecular simulation package and in the present release, it has been interfaced with GROMACS and CP2K, for classical and ab initio MD simulations, respectively. © 2017 Wiley Periodicals, Inc.

  18. Cesium isotope ratios as indicators of nuclear power plant operations.

    PubMed

    Delmore, James E; Snyder, Darin C; Tranter, Troy; Mann, Nick R

    2011-11-01

    There are multiple paths by which radioactive cesium can reach the effluent from reactor operations. The radioactive (135)Cs/(137)Cs ratios are controlled by these paths. In an effort to better understand the origin of this radiation, the (135)Cs/(137)Cs ratios in effluents from three power reactor sites have been measured in offsite samples. These ratios differ from global fallout by up to sixfold and as such cannot include a significant component from that source. A cesium ratio for a sample collected outside the plant boundary provides integration over the operating life of the reactor; a sample collected inside the plant at any given time can differ considerably from this lifetime ratio. The measured cesium ratios vary significantly among the three reactors and indicate that the multiple paths have widely varying levels of contribution. There are too many ways these isotopes can fractionate for the ratios to be useful for quantitative evaluation of operating parameters in an offsite sample, although it may be possible to obtain limited qualitative information from an onsite sample. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Link prediction based on local weighted paths for complex networks

    NASA Astrophysics Data System (ADS)

    Yao, Yabing; Zhang, Ruisheng; Yang, Fan; Yuan, Yongna; Hu, Rongjing; Zhao, Zhili

    As a significant problem in complex networks, link prediction aims to find missing and future links between two unconnected nodes by estimating the existence likelihood of potential links. It plays an important role in understanding the evolution mechanism of networks and has broad applications in practice. In order to improve prediction performance, a variety of structural similarity-based methods that rely on different topological features have been put forward. As one topological feature, the path information between node pairs is utilized to calculate node similarity. However, many path-dependent methods neglect the different contributions of different paths for a pair of nodes. In this paper, a local weighted path (LWP) index is proposed to differentiate the contributions between paths. The LWP index considers the effect of the degrees of intermediate links and the connectivity influence of intermediate nodes on paths to quantify the path weight in the prediction procedure. The experimental results on 12 real-world networks show that the LWP index outperforms seven other prediction baselines.
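    The paper's exact LWP weighting is not reproduced here; the sketch below illustrates the general idea of degree-penalized counting of length-2 and length-3 paths, in the spirit of such an index. The toy graph, the `eps` damping factor, and the helper name are invented for the example.

```python
from collections import defaultdict

# Toy undirected graph as adjacency sets (names are illustrative).
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"),
         ("b", "e"), ("e", "d")]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def weighted_path_score(u, v, eps=0.01):
    """Each path contributes the product of 1/degree over its
    intermediate nodes, so paths through hubs count for less; longer
    paths are damped by eps."""
    # Length-2 paths: u - w - v over common neighbors w.
    s2 = sum(1.0 / len(adj[w]) for w in adj[u] & adj[v])
    # Length-3 paths: u - w - x - v with w, x distinct intermediates.
    s3 = sum(1.0 / (len(adj[w]) * len(adj[x]))
             for w in adj[u] for x in adj[v]
             if x in adj[w] and w != v and x != u and w != x)
    return s2 + eps * s3

score_ad = weighted_path_score("a", "d")   # unconnected pair (a, d)
```

    For the pair (a, d) the two length-2 paths run through b (degree 3) and c (degree 2), and one length-3 path runs through b and e, so the score is 1/3 + 1/2 + eps/6.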

  20. Effects of eHealth Literacy on General Practitioner Consultations: A Mediation Analysis

    PubMed Central

    Fitzpatrick, Mary Anne; Hess, Alexandra; Sudbury-Riley, Lynn; Hartung, Uwe

    2017-01-01

    Background: Most, though not all, evidence points in the direction that individuals with a higher level of health literacy utilize the health care system less frequently than individuals with lower levels of health literacy. The underlying reasons for this effect are largely unclear, though people's ability to seek health information independently, at a time when such information is widely available on the Internet, has been cited in this context. Objective: We propose and test two potential mediators of the negative effect of eHealth literacy on health care utilization: (1) health information seeking and (2) gain in empowerment from information seeking. Methods: Data were collected in New Zealand, the United Kingdom, and the United States using a Web-based survey administered by a company specializing in online panels. Combined, the three samples comprised 996 baby boomers born between 1946 and 1965 who had used the Internet to search for and share health information in the previous 6 months. Measured variables included eHealth literacy, Internet health information seeking, the self-perceived gain in empowerment from that information, and the number of consultations with one's general practitioner (GP). Path analysis was employed for data analysis. Results: We found a bundle of indirect paths showing a positive relationship between health literacy and health care utilization: via health information seeking (Path 1), via gain in empowerment (Path 2), and via both (Path 3). With the emergence of these indirect effects, the direct effect of health literacy on health care utilization disappeared. Conclusions: The indirect paths from health literacy via information seeking and empowerment to GP consultations can be interpreted as a dynamic process and an expression of the ability to find, process, and understand relevant information when that is necessary. PMID:28512081
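    The mediation logic underlying such a path analysis (an indirect effect is the product of the path coefficients) can be sketched on synthetic data. The chain literacy → information seeking → consultations and all coefficients below are invented for illustration, with the direct effect set to zero, echoing the pattern reported in the Results.

```python
import random

rng = random.Random(42)
n = 5000

# Synthetic data following a simple single-mediator chain.
x = [rng.gauss(0, 1) for _ in range(n)]          # "eHealth literacy"
m = [0.5 * xi + rng.gauss(0, 1) for xi in x]     # "information seeking"
y = [0.8 * mi + 0.0 * xi + rng.gauss(0, 1)       # direct effect set to zero
     for xi, mi in zip(x, m)]

def mean(v):
    return sum(v) / len(v)

def slope(xs, ys):
    mx, my = mean(xs), mean(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

a = slope(x, m)   # path a: mediator regressed on predictor

def two_var_ols(ys, m1, m2):
    """Coefficients of ys ~ m1 + m2 via the 2x2 normal equations."""
    c1 = [v - mean(m1) for v in m1]
    c2 = [v - mean(m2) for v in m2]
    cy = [v - mean(ys) for v in ys]
    s11 = sum(v * v for v in c1)
    s22 = sum(v * v for v in c2)
    s12 = sum(p * q for p, q in zip(c1, c2))
    s1y = sum(p * q for p, q in zip(c1, cy))
    s2y = sum(p * q for p, q in zip(c2, cy))
    det = s11 * s22 - s12 * s12
    return (s1y * s22 - s2y * s12) / det, (s2y * s11 - s1y * s12) / det

b, c_direct = two_var_ols(y, m, x)   # path b and the direct effect c'
indirect = a * b                     # mediated (indirect) effect of x on y
```

    On this synthetic chain, the fitted `a` and `b` recover the generating coefficients, `c_direct` is near zero, and the indirect effect is approximately their product.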

  1. Statistical Analysis of the Links between Blocking and Nor'easters

    NASA Astrophysics Data System (ADS)

    Booth, J. F.; Pfahl, S.

    2015-12-01

    Nor'easters can be loosely defined as extratropical cyclones that develop as they progress northward along the eastern coast of North America. This path makes it possible for these storms to generate storm surge along the coastline and/or heavy precipitation or snow inland. In the present analysis, the paths of the storms are investigated relative to the behavior of upstream blocking events over the North Atlantic Ocean. Two separate Lagrangian tracking methods are used to identify the extratropical cyclone paths and the blocking events. Using the cyclone paths, Nor'easters are identified and blocking statistics are calculated for the days prior to, during, and following the occurrence of the Nor'easters. The path, strength, and intensification rates of the cyclones are compared with the strength and location of the blocks. When a Nor'easter occurs, the likelihood of a block at the southeast tip of Greenland is statistically significantly increased, i.e., a block concurrent with a Nor'easter happens more often than by random coincidence. However, no significant link between the strength of the storms and the strength of the blocks is identified. These results suggest that the presence of the block mainly affects the path of the Nor'easters. On the other hand, in the event of blocking at the southeast tip of Greenland, the likelihood of a Nor'easter, as opposed to a different type of storm, is no greater than what one might expect from randomly sampling cyclone tracks. The results confirm a long-held understanding in forecast meteorology that upstream blocking is a necessary but not sufficient condition for generating a Nor'easter.
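    The "more often than by random coincidence" claim is the kind of statement a simple permutation test makes precise. The sketch below uses synthetic daily 0/1 indicators; the real analysis works from Lagrangian cyclone tracks and a blocking index, and all rates here are invented.

```python
import random

rng = random.Random(0)
n_days = 3000

# Synthetic daily indicators (1 = event day); storms are made genuinely
# more likely on blocked days so the test has something to detect.
block = [1 if rng.random() < 0.15 else 0 for _ in range(n_days)]
storm = [1 if rng.random() < (0.10 if b else 0.02) else 0 for b in block]

observed = sum(b & s for b, s in zip(block, storm))   # co-occurrence days

def permutation_p_value(n_perm=2000):
    """Fraction of shuffled storm series whose co-occurrence with the
    blocking series is at least as large as observed."""
    count, shuffled = 0, storm[:]
    for _ in range(n_perm):
        rng.shuffle(shuffled)   # breaks any block/storm association
        if sum(b & s for b, s in zip(block, shuffled)) >= observed:
            count += 1
    return count / n_perm

p_value = permutation_p_value()
```

    A small p-value indicates that blocks and storms co-occur more often than chance alignment of the two series would produce, which is the statistical content of the abstract's claim.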

  2. Health Literacy Scale and Causal Model of Childhood Overweight.

    PubMed

    Intarakamhang, Ungsinun; Intarakamhang, Patrawut

    2017-01-28

    WHO focuses on developing health literacy (HL), referring to cognitive and social skills. Our objectives were to develop a scale for evaluating the HL level of overweight Thai children and to develop a path model of health behavior (HB) for preventing obesity. This cross-sectional study used a mixed method. Overall, 2,000 school students aged 9 to 14 yr were recruited by stratified random sampling from all parts of Thailand in 2014. Data were analyzed by CFA in LISREL. Reliability of the HL and HB scales ranged from 0.62 to 0.82 and factor loadings ranged from 0.33 to 0.80; the subjects had a low level of HL (60.0%) and a fair level of HB (58.4%). In the path model, HB could be influenced by HL through three paths. Path 1 started from health knowledge and understanding, which directly influenced eating behavior (effect size β=0.13, P<0.05). In Path 2, health knowledge and understanding influenced managing one's health conditions, media literacy, and making appropriate health-related decisions (β=0.07, 0.98, and 0.05, respectively). In Path 3, accessing information and services influenced communicating for added skills, media literacy, and making appropriate health-related decisions (β=0.63, 0.93, 0.98, and 0.05). Finally, the basic level of HL, measured from health knowledge and understanding and from accessing information and services, influenced HB through the interactive and critical levels (β=0.76, 0.97, and 0.55, respectively). The HL scale for overweight Thai children should be implemented as a screening tool for developing HL through public health-promotion policy.

  3. Common path endoscopic probes for optical coherence tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Singh, Kanwarpal; Gardecki, Joseph A.; Tearney, Guillermo J.

    2017-02-01

    Background: Dispersion imbalance and polarization mismatch between the reference and sample arm signals can lead to image quality degradation in optical coherence tomography (OCT). One approach to reduce these image artifacts is to employ a common-path geometry in fiber-based probes. In this work, we report an 800 µm diameter all-fiber common-path monolithic probe for coronary artery imaging in which the reference signal is generated using an inline fiber partial reflector. Methods: Our common-path probe was designed for swept-source Fourier-domain OCT at 1310 nm wavelength. A face of a coreless fiber was coated with gold and spliced to a standard SMF-28 single-mode fiber, creating an inline partial reflector that acted as the reference surface. The other face of the coreless fiber was shaped into a ball lens for focusing. The optical elements were assembled within a 560 µm diameter drive shaft, which was attached to a rotary junction. The drive shaft was placed inside a transparent sheath having an outer diameter of 800 µm. Results: With a source input power of 30 mW, the inline common-path probe achieved a sensitivity of 104 dB. Images of human finger skin showed the characteristic layers of skin as well as features such as sweat ducts. Images of coronary arteries ex vivo obtained with this probe enabled visualization of the characteristic architectural morphology of the normal artery wall and known features of atherosclerotic plaque. Conclusion: In this work, we have demonstrated a common-path OCT probe for cardiovascular imaging. The probe is easy to fabricate and will reduce system complexity and overall cost. We believe that this design will be helpful in endoscopic applications that require high resolution and a compact form factor.

  4. Laser-Induced Damage to Thin Film Dielectric Coatings.

    DTIC Science & Technology

    1980-10-01

    magnify and reimage the laser spot in the diagnostic Path B. Location [5] (see Figure (9)) is the equivalent focal plane in Path B to that in Path A at...the thin film sample, [3]. The object distance is between the focal plane and the lens at [6] and the image distance is between the lens [6] and the...the equivalent focal plane in the diagnostic path and positioned so that the peak of the beam spatial profile falls on the pinhole. The diameter of the

  5. Broad-band Lg Attenuation Tomography in Eastern Eurasia and the Resolution, Uncertainty and Data Prediction

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Xu, X.

    2017-12-01

    The broad-band Lg 1/Q tomographic models in eastern Eurasia are inverted from source- and site-corrected path 1/Q data. The path 1/Q values are measured between stations (or events) by the two-station (TS), reverse two-station (RTS) and reverse two-event (RTE) methods, respectively. Because path 1/Q values are computed using the logarithm of the product of observed spectral ratios and a simplified 1D geometrical-spreading correction, they are subject to "modeling errors" dominated by uncompensated 3D structural effects. We found in Chen and Xie [2017] that these errors closely follow a normal distribution after the long-tailed outliers are screened out (similar to teleseismic travel-time residuals). We therefore rigorously analyze the statistics of these errors, collected from repeated samplings of station (and event) pairs from 1.0 to 10.0 Hz, and reject about 15% of the data as outliers at each frequency band. The resultant variance of Δ/Q decreases with frequency as 1/f². The 1/Q tomography using screened data is then a stochastic inverse problem whose solutions approximate the means of Gaussian random variables, and the model covariance matrix is that of Gaussian variables with well-known statistical behavior. We adopt a new SVD-based tomographic method to solve for the 2D Q image together with its resolution and covariance matrices. The RTS and RTE methods yield the most reliable 1/Q data, free of source and site effects, but their path coverage is rather sparse owing to very strict requirements on recording geometry. The TS method absorbs the effects of non-unit site-response ratios into the 1/Q data. The RTS method also yields site responses, which can then be corrected out of the TS path 1/Q to make those data free of site effects as well. The site-corrected TS data substantially improve path coverage, allowing us to solve for 1/Q tomography up to 6.0 Hz. The model resolution and uncertainty are first quantitatively assessed via spread functions (derived from the resolution matrix) and the covariance matrix. The reliably retrieved Q models correlate well with the distinct tectonic blocks shaped by the most recent major deformations, and they vary with frequency. With the 1/Q tomographic model and its covariance matrix, we can formally estimate the uncertainty of any path-specific Lg 1/Q prediction. This new capability significantly benefits source estimation, for which a reliable uncertainty estimate is especially important.
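    The SVD machinery behind resolution and covariance matrices can be sketched on a toy linear tomography problem. Everything below (matrix sizes, values, threshold) is a synthetic stand-in, not the study's data: for a truncated SVD inverse G⁻ᵖ = VₚΛₚ⁻¹Uₚᵀ, the resolution matrix is R = VₚVₚᵀ and the model covariance under independent Gaussian data errors is C = σ²VₚΛₚ⁻²Vₚᵀ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear tomography d = G m + noise: G holds path lengths through
# model cells (synthetic stand-ins throughout).
G = rng.uniform(0.0, 1.0, size=(40, 10))    # 40 paths x 10 cells
m_true = rng.uniform(1e-3, 5e-3, size=10)   # per-cell "1/Q"
sigma = 1e-4                                # data error std deviation
d = G @ m_true + rng.normal(0.0, sigma, size=40)

# Truncated SVD inverse: keep singular values above a threshold.
U, s, Vt = np.linalg.svd(G, full_matrices=False)
p = int(np.sum(s > 1e-6 * s[0]))
Up, sp, Vp = U[:, :p], s[:p], Vt[:p, :].T

m_est = Vp @ ((Up.T @ d) / sp)              # generalized-inverse solution

# Resolution matrix (identity when fully resolved) and model covariance.
R = Vp @ Vp.T
C = sigma**2 * Vp @ np.diag(1.0 / sp**2) @ Vp.T
```

    With this well-conditioned toy G, all singular values survive truncation, so R is the identity and the estimate recovers the true model to within the noise level; in a real, unevenly covered tomography problem, R departs from the identity exactly where path coverage is poor.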

  6. Computing thermal Wigner densities with the phase integration method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beutier, J.; Borgis, D.; Vuilleumier, R.

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  7. Numerical tilting compensation in microscopy based on wavefront sensing using transport of intensity equation method

    NASA Astrophysics Data System (ADS)

    Hu, Junbao; Meng, Xin; Wei, Qi; Kong, Yan; Jiang, Zhilong; Xue, Liang; Liu, Fei; Liu, Cheng; Wang, Shouyu

    2018-03-01

    Wide-field microscopy is commonly used for sample observation in biological research and medical diagnosis. However, the tilting error induced by oblique placement of the image recorder or the sample, as well as inclination of the optical path, often deteriorates the imaging quality. In order to eliminate tilting in microscopy, a numerical tilting compensation technique based on wavefront sensing using the transport of intensity equation method is proposed in this paper. Both the provided numerical simulations and practical experiments prove that the proposed technique not only accurately determines the tilting angle with a simple setup and procedure, but also compensates the tilting error to improve imaging quality even in cases of large tilt. Considering its simple system and operation, as well as its image quality improvement capability, we believe the proposed method can be applied for tilting compensation in optical microscopy.
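    The compensation step itself (once a wavefront or phase map is in hand) amounts to fitting and subtracting a plane. The sketch below is a generic illustration of that step, not the paper's method: the paper recovers the wavefront via the transport of intensity equation, whereas the grid, names, and synthetic data here are invented.

```python
# Least-squares plane removal from a phase map stored row-major in a
# flat list. With coordinates centered on the grid mean, the plane-fit
# normal equations decouple on a full rectangular grid.
def remove_tilt(phase, nx, ny):
    xc, yc = (nx - 1) / 2.0, (ny - 1) / 2.0
    sxx = syy = sxz = syz = sz = 0.0
    for y in range(ny):
        for x in range(nx):
            dx, dy, z = x - xc, y - yc, phase[y * nx + x]
            sxx += dx * dx
            syy += dy * dy
            sxz += dx * z
            syz += dy * z
            sz += z
    a, b, c = sxz / sxx, syz / syy, sz / (nx * ny)   # fitted plane
    return [phase[y * nx + x] - (a * (x - xc) + b * (y - yc) + c)
            for y in range(ny) for x in range(nx)]

# Synthetic tilted phase map: a plane plus one small localized feature.
nx, ny = 16, 16
tilted = [0.3 * x - 0.2 * y + 5.0 + (1.0 if (x, y) == (8, 8) else 0.0)
          for y in range(ny) for x in range(nx)]
flat = remove_tilt(tilted, nx, ny)   # tilt removed, feature preserved
```

    The fitted slopes recover the synthetic tilt exactly on a pure plane, while genuine sample features (here the single-pixel bump) survive the subtraction almost unchanged.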

  8. Computing thermal Wigner densities with the phase integration method.

    PubMed

    Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  9. Hot gas path component having near wall cooling features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miranda, Carlos Miguel; Kottilingam, Srikanth Chandrudu; Lacy, Benjamin Paul

    A method for providing micro-channels in a hot gas path component includes forming a first micro-channel in an exterior surface of a substrate of the hot gas path component. A second micro-channel is formed in the exterior surface of the hot gas path component such that it is separated from the first micro-channel by a surface gap having a first width. The method also includes disposing a braze sheet onto the exterior surface of the hot gas path component such that the braze sheet covers at least a portion of the first and second micro-channels, and heating the braze sheet to bond it to at least a portion of the exterior surface of the hot gas path component.

  10. Finding False Paths in Sequential Circuits

    NASA Astrophysics Data System (ADS)

    Matrosova, A. Yu.; Andreeva, V. V.; Chernyshov, S. V.; Rozhkova, S. V.; Kudin, D. V.

    2018-02-01

    A method of finding false paths in sequential circuits is developed. In contrast with the heuristic approaches currently used abroad, a precise method is suggested, based on applying operations to Reduced Ordered Binary Decision Diagrams (ROBDDs) extracted from the combinational part of a sequential controlling logic circuit. The method finds false paths when the transfer-sequence length is not more than a given value, and it obviates the need to examine combinational-circuit equivalents of the given lengths. The possibility of applying the developed method to more complicated circuits is discussed.
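    Setting the ROBDD machinery aside, the notion of a false (unsensitizable) path can be illustrated by brute force on a tiny combinational circuit. The two-multiplexer example below is a standard textbook construction, not taken from the paper: both multiplexers share one select line, so the structural path d1 → inner → out requires s = 1 and s = 0 simultaneously and is therefore false.

```python
from itertools import product

# Two-level multiplexer circuit sharing one select line:
#   inner = d1 if s else d0
#   out   = d2 if s else inner
# The path d1 -> inner -> out exists structurally but cannot be
# sensitized: inner passes d1 only when s = 1, yet out passes inner
# only when s = 0.
def circuit(s, d0, d1, d2):
    inner = d1 if s else d0
    return d2 if s else inner

def input_affects_output(name):
    """Brute-force check: does toggling `name` ever change the output?"""
    names = ["s", "d0", "d1", "d2"]
    i = names.index(name)
    for bits in product([0, 1], repeat=4):
        flipped = list(bits)
        flipped[i] ^= 1
        if circuit(*bits) != circuit(*flipped):
            return True
    return False
```

    Since d1 reaches the output only through the contradictory path, toggling it never changes the output; a BDD-based method reaches the same conclusion symbolically, without enumerating all input assignments.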

  11. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    NASA Astrophysics Data System (ADS)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of hydrological simulation at large scale and high precision has elaborated the spatial description of hydrological behavior. This trend, however, is accompanied by increasing model complexity and numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo approach coupled with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms employing iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets of large likelihood. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
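    The GLUE workflow (sample parameters, keep "behavioural" sets above a likelihood threshold, weight predictions by likelihood) can be sketched end to end. Everything below is illustrative: a differential-evolution-style search stands in for the heuristic samplers discussed above, and a linear toy relation stands in for a hydrological model.

```python
import random

rng = random.Random(7)
xs = [float(i) for i in range(20)]
obs = [2.0 * x + 1.0 + rng.gauss(0, 0.5) for x in xs]   # synthetic "observations"

def nse(params):
    """Nash-Sutcliffe efficiency, a common GLUE likelihood measure."""
    a, b = params
    sim = [a * x + b for x in xs]
    mo = sum(obs) / len(obs)
    sse = sum((s - o) ** 2 for s, o in zip(sim, obs))
    sst = sum((o - mo) ** 2 for o in obs)
    return 1.0 - sse / sst

# Differential-evolution-style sampling of the prior parameter space.
pop = [(rng.uniform(0, 4), rng.uniform(-2, 4)) for _ in range(40)]
for _ in range(60):
    for i in range(len(pop)):
        r1, r2, r3 = rng.sample(pop, 3)
        trial = tuple(r1[k] + 0.8 * (r2[k] - r3[k]) for k in range(2))
        if nse(trial) > nse(pop[i]):   # greedy DE selection
            pop[i] = trial

# GLUE step: keep behavioural sets, weight estimates by likelihood.
behavioural = [(p, nse(p)) for p in pop if nse(p) > 0.9]
wsum = sum(w for _, w in behavioural)
a_hat = sum(p[0] * w for p, w in behavioural) / wsum
b_hat = sum(p[1] * w for p, w in behavioural) / wsum
```

    The heuristic search concentrates samples in the high-likelihood region, so far fewer model runs are wasted on non-behavioural parameter sets than with uniform random sampling, which is the efficiency gain the abstract describes.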

  12. Microvolume protein concentration determination using the NanoDrop 2000c spectrophotometer.

    PubMed

    Desjardins, Philippe; Hansen, Joel B; Allen, Michael

    2009-11-04

    Traditional spectrophotometry requires placing samples into cuvettes or capillaries. This is often impractical given the limited sample volumes typically available for protein analysis. The Thermo Scientific NanoDrop 2000c Spectrophotometer solves this issue with an innovative sample retention system that holds microvolume samples between two measurement surfaces using the surface tension properties of liquids, enabling the quantification of samples in volumes as low as 0.5-2 microL. The elimination of cuvettes or capillaries allows real-time changes in path length, which reduces the measurement time while greatly increasing the dynamic range of protein concentrations that can be measured. The need for dilutions is also eliminated, and preparation for sample quantification is simple, as the measurement surfaces can be wiped clean with a laboratory wipe. This video article presents modifications to traditional protein concentration determination methods for quantification of microvolume amounts of protein using A280 absorbance readings or the BCA colorimetric assay.
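    The variable-path-length trick rests on the Beer-Lambert law, A = ε·c·l: a reading at a short path is normalized to the conventional 1-cm path before applying an extinction coefficient. The sketch below is a generic illustration; the function name and the generic percent extinction coefficient of 10.0 are assumptions, not instrument specifications.

```python
# Beer-Lambert: A = epsilon * c * l, hence c = A / (epsilon * l).
def protein_conc_mg_ml(a280_raw, path_cm, e1pct=10.0):
    """Protein concentration (mg/mL) from a raw A280 reading.

    e1pct is the percent extinction coefficient (absorbance of a
    10 mg/mL solution at a 1-cm path); 10.0 is a generic assumption,
    so use the protein-specific value when it is known.
    """
    a280_1cm = a280_raw / path_cm      # normalize to the 1-cm path
    return a280_1cm * 10.0 / e1pct     # mg/mL

# A 0.05 absorbance reading at a 1-mm (0.1-cm) path is equivalent to
# 0.5 at 1 cm, i.e. 0.5 mg/mL with the generic coefficient.
c = protein_conc_mg_ml(0.05, 0.1)
```

    Shortening the path by 10x extends the measurable concentration range by the same factor, which is why eliminating the fixed cuvette geometry widens the instrument's dynamic range.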

  13. A Method to Analyze and Optimize the Load Sharing of Split Path Transmissions

    NASA Technical Reports Server (NTRS)

    Krantz, Timothy L.

    1996-01-01

    Split-path transmissions are promising alternatives to the common planetary transmissions for rotorcraft. Heretofore, split-path designs proposed for or used in rotorcraft have featured load-sharing devices that add undesirable weight and complexity to the designs. A method was developed to analyze and optimize the load sharing in split-path transmissions without load-sharing devices. The method uses the clocking angle as a design parameter to optimize for equal load sharing. In addition, the clocking angle tolerance necessary to maintain acceptable load sharing can be calculated. The method evaluates the effects of gear-shaft twisting and bending, tooth bending, Hertzian deformations within bearings, and movement of bearing supports on load sharing. It was used to study the NASA split-path test gearbox and the U.S. Army's Comanche helicopter main rotor gearbox. Acceptable load sharing was found to be achievable and maintainable by using proven manufacturing processes. The analytical results compare favorably to available experimental data.

  14. Quantum mechanical free energy profiles with post-quantization restraints: Binding free energy of the water dimer over a broad range of temperatures

    NASA Astrophysics Data System (ADS)

    Bishop, Kevin P.; Roy, Pierre-Nicholas

    2018-03-01

    Free energy calculations are a crucial part of understanding chemical systems but are often computationally expensive for all but the simplest of systems. Various enhanced sampling techniques have been developed to improve the efficiency of these calculations in numerical simulations. However, the majority of these approaches have been applied using classical molecular dynamics. There are many situations where nuclear quantum effects impact the system of interest and a classical description fails to capture these details. In this work, path integral molecular dynamics has been used in conjunction with umbrella sampling, and it has been observed that correct results are only obtained when the umbrella sampling potential is applied to a single path integral bead post quantization. This method has been validated against a Lennard-Jones benchmark system before being applied to the more complicated water dimer system over a broad range of temperatures. Free energy profiles are obtained, and these are utilized in the calculation of the second virial coefficient as well as the change in free energy from the separated water monomers to the dimer. Comparisons to experimental and ground state calculation values from the literature are made for the second virial coefficient at higher temperature and the dissociation energy of the dimer in the ground state.
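    The second virial coefficient mentioned above has a simple classical limit, B2(T) = -2π ∫₀^∞ (e^(-u(r)/kT) - 1) r² dr, which is the baseline against which quantum (path-integral) results are compared. The sketch below evaluates it for a Lennard-Jones potential in reduced units; the potential, cutoff, and grid are illustrative choices, not the paper's water-dimer calculation.

```python
import math

def lj(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential in reduced units."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def b2_classical(T, r_max=20.0, n=20000):
    """Classical second virial coefficient
         B2(T) = -2*pi * Int_0^inf (exp(-u(r)/kT) - 1) r^2 dr
    by trapezoidal quadrature (reduced units, k_B = 1)."""
    dr = r_max / n
    total = 0.0
    for i in range(1, n + 1):
        r = i * dr
        f = math.exp(-lj(r) / T) - 1.0   # Mayer f-function
        w = 0.5 if i == n else 1.0       # trapezoid end weight
        total += w * f * r * r * dr
    return -2.0 * math.pi * total

# Below the Boyle temperature attraction dominates (B2 < 0); at high
# temperature repulsion dominates (B2 > 0).
b2_low, b2_high = b2_classical(1.0), b2_classical(10.0)
```

    The sign change with temperature is the qualitative signature to check; quantum corrections of the kind computed in the paper shift these values, most strongly at low temperature.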

  15. Quantum mechanical free energy profiles with post-quantization restraints: Binding free energy of the water dimer over a broad range of temperatures.

    PubMed

    Bishop, Kevin P; Roy, Pierre-Nicholas

    2018-03-14

    Free energy calculations are a crucial part of understanding chemical systems but are often computationally expensive for all but the simplest of systems. Various enhanced sampling techniques have been developed to improve the efficiency of these calculations in numerical simulations. However, the majority of these approaches have been applied using classical molecular dynamics. There are many situations where nuclear quantum effects impact the system of interest and a classical description fails to capture these details. In this work, path integral molecular dynamics has been used in conjunction with umbrella sampling, and it has been observed that correct results are only obtained when the umbrella sampling potential is applied to a single path integral bead post quantization. This method has been validated against a Lennard-Jones benchmark system before being applied to the more complicated water dimer system over a broad range of temperatures. Free energy profiles are obtained, and these are utilized in the calculation of the second virial coefficient as well as the change in free energy from the separated water monomers to the dimer. Comparisons to experimental and ground state calculation values from the literature are made for the second virial coefficient at higher temperature and the dissociation energy of the dimer in the ground state.

  16. SSAGES: Software Suite for Advanced General Ensemble Simulations.

    PubMed

    Sidky, Hythem; Colón, Yamil J; Helfferich, Julian; Sikora, Benjamin J; Bezik, Cody; Chu, Weiwei; Giberti, Federico; Guo, Ashley Z; Jiang, Xikai; Lequieu, Joshua; Li, Jiyuan; Moller, Joshua; Quevillon, Michael J; Rahimi, Mohammad; Ramezani-Dakhel, Hadi; Rathee, Vikramjit S; Reid, Daniel R; Sevgen, Emre; Thapar, Vikram; Webb, Michael A; Whitmer, Jonathan K; de Pablo, Juan J

    2018-01-28

    Molecular simulation has emerged as an essential tool for modern-day research, but obtaining proper results and making reliable conclusions from simulations requires adequate sampling of the system under consideration. To this end, a variety of methods exist in the literature that can enhance sampling considerably, and increasingly sophisticated, effective algorithms continue to be developed at a rapid pace. Implementation of these techniques, however, can be challenging for experts and non-experts alike. There is a clear need for software that provides rapid, reliable, and easy access to a wide range of advanced sampling methods and that facilitates implementation of new techniques as they emerge. Here we present SSAGES, a publicly available Software Suite for Advanced General Ensemble Simulations designed to interface with multiple widely used molecular dynamics simulation packages. SSAGES allows facile application of a variety of enhanced sampling techniques (including adaptive biasing force, string methods, and forward flux sampling) that extract meaningful free energy and transition path data from all-atom and coarse-grained simulations. A noteworthy feature of SSAGES is a user-friendly framework that facilitates further development and implementation of new methods and collective variables. In this work, the use of SSAGES is illustrated in the context of simple representative applications involving distinct methods and different collective variables that are available in the current release of the suite. The code may be found at: https://github.com/MICCoM/SSAGES-public.

  17. Method for Viterbi decoding of large constraint length convolutional codes

    NASA Technical Reports Server (NTRS)

    Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor); Jing, Sun (Inventor)

    1988-01-01

    A new method of Viterbi decoding of convolutional codes lends itself to a pipelined VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is a number and K is the constraint length. The selected path at the end of each NK interval is then taken from the last entry in the array. A trace-back method is used to return to the beginning of the selected path, i.e., to the first time unit of the NK interval, and read out the stored branch decisions of the selected path, which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but it can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message it is not necessary to provide a large memory that stores the trellis-derived information until the end of the message before selecting the path to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
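    The blockwise trace-back idea can be sketched for a small code. The example below is an illustration in the spirit of the scheme, not the patented architecture: it uses the standard rate-1/2, K = 3 convolutional code with generators (7, 5) octal and performs a trace-back every `block` symbols from the current best-metric state.

```python
K = 3
G = (0b111, 0b101)   # generators (7, 5) in octal

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state
        out.append((parity(reg & G[0]), parity(reg & G[1])))
        state = reg >> 1
    return out

def viterbi_blockwise(symbols, block=6):
    n_states = 1 << (K - 1)
    metrics = [0 if s == 0 else 10**9 for s in range(n_states)]
    decoded, survivors = [], []

    def traceback():
        # Start from the best state now and walk the survivor arrays
        # back to the start of the block, emitting the input bits.
        s = min(range(n_states), key=lambda i: metrics[i])
        bits = []
        for prev_state, prev_bit in reversed(survivors):
            bits.append(prev_bit[s])
            s = prev_state[s]
        decoded.extend(reversed(bits))
        survivors.clear()

    for sym in symbols:
        new_metrics = [10**9] * n_states
        prev_state, prev_bit = [0] * n_states, [0] * n_states
        for s in range(n_states):
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                out = (parity(reg & G[0]), parity(reg & G[1]))
                ns = reg >> 1
                cost = metrics[s] + (out[0] != sym[0]) + (out[1] != sym[1])
                if cost < new_metrics[ns]:
                    new_metrics[ns] = cost
                    prev_state[ns], prev_bit[ns] = s, b
        metrics = new_metrics
        survivors.append((prev_state, prev_bit))
        if len(survivors) == block:   # trace back once per interval
            traceback()
    if survivors:                     # flush any partial final block
        traceback()
    return decoded

msg = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
decoded = viterbi_blockwise(encode(msg))
```

    Because survivor memory is cleared after each block, only `block` steps of path information are ever stored, at the cost of the slight sub-optimality the abstract notes; with K not too small, a clean or lightly corrupted stream still decodes correctly.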

  18. Terrain classification in navigation of an autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Dodds, David R.

    1991-03-01

    In this paper we describe a method of path planning that integrates terrain classification (by means of fractals), the certainty-grid method of spatial representation, Kehtarnavaz-Griswold collision zones, Dubois-Prade fuzzy temporal and spatial knowledge, and non-point-sized qualitative navigational planning. An initially planned ("end-to-end") path is piecewise modified to accommodate known and inferred moving obstacles, with attention to time-varying multiple subgoals that may influence a section of the path at a time after the robot has begun traversing that planned path.

  19. An optically passive method that doubles the rate of 2-GHz timing fiducials

    NASA Astrophysics Data System (ADS)

    Boni, R.; Kendrick, J.; Sorce, C.

    2017-08-01

Solid-state optical comb-pulse generators provide a convenient and accurate method to include timing fiducials in a streak camera image for time-base correction. Commercially available vertical-cavity surface-emitting lasers (VCSELs) emitting in the visible, currently in use, can be modulated up to 2 GHz. An optically passive method is presented to interleave a time-delayed path of the 2-GHz comb with itself, producing a 4-GHz comb. This technique can be applied to VCSELs with higher modulation rates. A fiber-delivered, randomly polarized 2-GHz VCSEL comb is polarization split into s-polarization and p-polarization paths. One path is time delayed relative to the other by half the 2-GHz comb period (i.e., the period of the doubled 4-GHz rate) with +/-1-ps accuracy; the two paths then recombine at the fiber-coupled output. High throughput (>=90%) is achieved by careful use of polarization beam-splitting cubes, a total-internal-reflection beam-path-steering prism, and antireflection coatings. The glass path-length delay block and turning prism are optically contacted together. The beam polarizer cubes that split and recombine the paths are precision aligned and permanently cemented into place. We expect the palm-sized, inline fiber-coupled, comb-rate-doubling device to maintain its internal alignment indefinitely.
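The rate doubling itself is simple arithmetic: interleaving a pulse train with a copy of itself delayed by half the pulse period halves the pulse spacing. A toy numeric check, assuming an ideal 2-GHz comb with times in nanoseconds:

```python
import numpy as np

period = 0.5                            # ns: pulse spacing of a 2-GHz comb
pulses = np.arange(0.0, 10.0, period)   # original comb pulse times
delayed = pulses + period / 2.0         # path delayed by half a period (0.25 ns)
combined = np.sort(np.concatenate([pulses, delayed]))

spacing = np.diff(combined)
print(spacing.mean())                   # 0.25 ns spacing, i.e. a 4-GHz comb
```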

  20. Sampling-Based Coverage Path Planning for Complex 3D Structures

    DTIC Science & Technology

    2012-09-01

    one such task, in which a single robot must sweep its end effector over the entirety of a known workspace. For two-dimensional environments, optimal...structures. First, we introduce a new algorithm for planning feasible coverage paths. It is more computationally efficient in problems of complex geometry...iteratively shortens and smooths a feasible coverage path; robot configurations are adjusted without violating any coverage con- straints. Third, we propose

  1. Method and apparatus for executing a shift in a hybrid transmission

    DOEpatents

    Gupta, Pinaki; Kaminsky, Lawrence A; Demirovic, Besim

    2013-09-03

A method for executing a transmission shift in a hybrid transmission including first and second electric machines includes executing a shift-through-neutral sequence from an initial transmission state to a target transmission state, including an intermediate shift to neutral. Upon detecting a change in the output torque request while executing the shift-through-neutral sequence, possible recovery shift paths are identified. The available recovery shift paths among these are determined, and a shift cost for each is evaluated. The available recovery shift path having the minimum shift cost is selected as the preferred recovery shift path and is executed to achieve a non-neutral transmission state.
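The selection step reduces to a filter-then-minimize over candidate paths. The sketch below uses hypothetical path names, availability flags, and cost values purely for illustration; none are taken from the patent.

```python
# Hypothetical recovery-path data; names, availability, and costs are
# illustrative only, not taken from the patent.
possible_paths = [
    {"name": "neutral->mode-1", "available": True,  "cost": 4.2},
    {"name": "neutral->mode-2", "available": False, "cost": 1.1},  # blocked
    {"name": "neutral->fixed-gear", "available": True, "cost": 2.7},
]

# Keep only the available paths, then pick the one with minimum shift cost.
available = [p for p in possible_paths if p["available"]]
preferred = min(available, key=lambda p: p["cost"])
print(preferred["name"])  # neutral->fixed-gear
```

Note that the globally cheapest path (cost 1.1) is excluded because it is unavailable; the minimization runs only over the available set, as the abstract describes.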

  2. Safe Maritime Autonomous Path Planning in a High Sea State

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Quadrelli, Marco; Huntsberger, Terrance L.

    2014-01-01

    This paper presents a path planning method for sea surface vehicles that prevents capsizing and bow-diving in a high sea-state. A key idea is to use response amplitude operators (RAOs) or, in control terminology, the transfer functions from a sea state to a vessel's motion, in order to find a set of speeds and headings that results in excessive pitch and roll oscillations. This information is translated to arithmetic constraints on the ship's velocity, which are passed to a model predictive control (MPC)-based path planner to find a safe and optimal path that achieves specified goals. An obstacle avoidance capability is also added to the path planner. The proposed method is demonstrated by simulations.
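The RAO-to-constraint step can be sketched as enumerating candidate (speed, heading) pairs and discarding those whose predicted response exceeds a safety limit. The `pitch_response` function below is a made-up stand-in for a real RAO evaluation, and roll is omitted for brevity.

```python
import math

def pitch_response(speed, heading_deg):
    """Made-up stand-in for an RAO lookup: predicted worst pitch response
    grows with speed and is largest in head seas (heading 0 deg)."""
    return 0.1 * speed * (1.0 + math.cos(math.radians(heading_deg)))

def safe_velocities(speeds, headings, pitch_limit):
    """Enumerate (speed, heading) pairs whose predicted pitch response
    stays within the capsizing/bow-diving limit."""
    return [(u, h) for u in speeds for h in headings
            if pitch_response(u, h) <= pitch_limit]

# The surviving pairs would become velocity constraints for the MPC planner.
safe = safe_velocities(speeds=[2, 4, 8], headings=[0, 90, 180], pitch_limit=0.9)
```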

  3. Constraining Thermal Histories by Monte Carlo Simulation of Mg-Fe Isotopic Profiles in Olivine

    NASA Astrophysics Data System (ADS)

    Sio, C. K. I.; Dauphas, N.

    2016-12-01

In thermochronology, random time-temperature (t-T) paths are generated and used as inputs to model fission track data. This random search method is used to identify a range of acceptable thermal histories that can describe the data. We have extended this modeling approach to magmatic systems. This approach utilizes both the chemical and stable isotope profiles measured in crystals as model constraints. Specifically, the isotopic profiles are used to determine the relative contribution of crystal growth vs. diffusion in generating chemical profiles, and to detect changes in melt composition. With this information, tighter constraints can be placed on the thermal evolution of magmatic bodies. We use an olivine phenocryst from the Kilauea Iki lava lake, HI, to demonstrate proof of concept. We treat this sample as one with little geologic context, then compare our modeling results to the known thermal history experienced by that sample. To complete forward modeling, we use MELTS to estimate the boundary condition and the initial and quench temperatures. We also assume a simple relationship between crystal growth and cooling rate. Another important parameter is the isotopic effect for diffusion (i.e., the relative diffusivity of the light vs. heavy isotope of an element). The isotopic effects for Mg and Fe diffusion in olivine have been estimated based on natural samples; experiments to better constrain these parameters are underway. We find that 40% of the random t-T paths can be used to fit the Mg-Fe chemical profiles. However, only a few can be used to simultaneously fit the Mg-Fe isotopic profiles. These few t-T paths are close to the independently determined t-T history of the sample. This modeling approach can be further extended to other igneous and metamorphic systems where data exist for diffusion rates, crystal growth rates, and isotopic effects for diffusion.
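The random-search loop has a simple structure: draw monotonic cooling paths, run each through a forward model, and keep those that reproduce the observations within tolerance. Everything below is a toy sketch; the forward model here is an arbitrary time-integrated temperature functional, not a diffusion code, and all numbers are invented.

```python
import random

def random_cooling_path(t_max=100.0, T0=1500.0, Tq=900.0, n=5, rng=random):
    """Random monotonic t-T path from T0 at t=0 down to quench Tq at t_max."""
    times = sorted(rng.uniform(0.0, t_max) for _ in range(n))
    temps = sorted((rng.uniform(Tq, T0) for _ in range(n)), reverse=True)
    return list(zip([0.0] + times + [t_max], [T0] + temps + [Tq]))

def forward_model(path):
    """Toy stand-in for a diffusion model: trapezoid integral of T^2 over t."""
    return sum((t1 - t0) * ((Ta + Tb) / 2.0) ** 2
               for (t0, Ta), (t1, Tb) in zip(path, path[1:]))

def accepted_paths(observed, tol, n_trials=2000, seed=1):
    """Random search: keep the t-T paths whose modeled value fits the data."""
    rng = random.Random(seed)
    return [p for p in (random_cooling_path(rng=rng) for _ in range(n_trials))
            if abs(forward_model(p) - observed) < tol]

acc = accepted_paths(observed=1.3e8, tol=2e7)
```

In the paper's workflow the same acceptance test is applied twice, first against the chemical profiles and then against the isotopic profiles, which is what narrows the 40% of chemically acceptable paths down to a few.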

  4. A Novel Low-Power, High-Performance, Zero-Maintenance Closed-Path Trace Gas Eddy Covariance System with No Water Vapor Dilution or Spectroscopic Corrections

    NASA Astrophysics Data System (ADS)

    Sargent, S.; Somers, J. M.

    2015-12-01

    Trace-gas eddy covariance flux measurement can be made with open-path or closed-path analyzers. Traditional closed-path trace-gas analyzers use multipass absorption cells that behave as mixing volumes, requiring high sample flow rates to achieve useful frequency response. The high sample flow rate and the need to keep the multipass cell extremely clean dictates the use of a fine-pore filter that may clog quickly. A large-capacity filter cannot be used because it would degrade the EC system frequency response. The high flow rate also requires a powerful vacuum pump, which will typically consume on the order of 1000 W. The analyzer must measure water vapor for spectroscopic and dilution corrections. Open-path analyzers are available for methane, but not for nitrous oxide. The currently available methane analyzers have low power consumption, but are very large. Their large size degrades frequency response and disturbs the air flow near the sonic anemometer. They require significant maintenance to keep the exposed multipass optical surfaces clean. Water vapor measurements for dilution and spectroscopic corrections require a separate water vapor analyzer. A new closed-path eddy covariance system for measuring nitrous oxide or methane fluxes provides an elegant solution. The analyzer (TGA200A, Campbell Scientific, Inc.) uses a thermoelectrically-cooled interband cascade laser. Its small sample-cell volume and unique sample-cell configuration (200 ml, 1.5 m single pass) provide excellent frequency response with a low-power scroll pump (240 W). A new single-tube Nafion® dryer removes most of the water vapor, and attenuates fluctuations in the residual water vapor. Finally, a vortex intake assembly eliminates the need for an intake filter without adding volume that would degrade system frequency response. Laboratory testing shows the system attenuates the water vapor dilution term by more than 99% and achieves a half-power band width of 3.5 Hz.

  5. Modeling of optical quadrature microscopy for imaging mouse embryos

    NASA Astrophysics Data System (ADS)

    Warger, William C., II; DiMarzio, Charles A.

    2008-02-01

    Optical quadrature microscopy (OQM) has been shown to provide the optical path difference through a mouse embryo, and has led to a novel method to count the total number of cells further into development than current non-toxic imaging techniques used in the clinic. The cell counting method has the potential to provide an additional quantitative viability marker for blastocyst transfer during in vitro fertilization. OQM uses a 633 nm laser within a modified Mach-Zehnder interferometer configuration to measure the amplitude and phase of the signal beam that travels through the embryo. Four cameras preceded by multiple beamsplitters record the four interferograms that are used within a reconstruction algorithm to produce an image of the complex electric field amplitude. Here we present a model for the electric field through the primary optical components in the imaging configuration and the reconstruction algorithm to calculate the signal to noise ratio when imaging mouse embryos. The model includes magnitude and phase errors in the individual reference and sample paths, fixed pattern noise, and noise within the laser and detectors. This analysis provides the foundation for determining the imaging limitations of OQM and the basis to optimize the cell counting method in order to introduce additional quantitative viability markers.
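One standard way to recover a complex field from four phase-shifted interferograms is the four-step quadrature formula; the sketch below uses that generic scheme with a unit-amplitude reference, which is an assumption for illustration and not necessarily the exact OQM reconstruction algorithm.

```python
import numpy as np

def reconstruct(I0, I90, I180, I270):
    """Four-step quadrature reconstruction for a unit-amplitude reference:
    E = [(I0 - I180) + i*(I90 - I270)] / 4."""
    return ((I0 - I180) + 1j * (I90 - I270)) / 4.0

# Synthetic check with a known complex signal field and a unit reference:
# each camera records I_phi = |E + exp(i*phi)|^2 for phi in {0, 90, 180, 270} deg.
E = 0.8 * np.exp(1j * 0.6)
I0, I90, I180, I270 = (abs(E + np.exp(1j * phi)) ** 2
                       for phi in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2))
E_rec = reconstruct(I0, I90, I180, I270)
```

The difference terms cancel both the reference intensity and |E|², which is why four cameras suffice to recover amplitude and phase simultaneously.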

  6. Automatic analysis of ciliary beat frequency using optical flow

    NASA Astrophysics Data System (ADS)

    Figl, Michael; Lechner, Manuel; Werther, Tobias; Horak, Fritz; Hummel, Johann; Birkfellner, Wolfgang

    2012-02-01

Ciliary beat frequency (CBF) can be a useful parameter for the diagnosis of several diseases, e.g., primary ciliary dyskinesia (PCD). CBF computation is usually done by manual evaluation of high-speed video sequences, a tedious, observer-dependent, and not very accurate procedure. We used OpenCV's pyramidal implementation of the Lucas-Kanade algorithm for optical flow computation and applied it to selected objects to follow their movements. The objects were chosen by their contrast, applying the corner detection of Shi and Tomasi. Discrimination between background/noise and cilia by a frequency histogram allowed us to compute the CBF. Frequency analysis was done using the Fourier transform in MATLAB. The correct number of Fourier summands was found from the slope of an approximation curve. The method proved usable for distinguishing between healthy and diseased samples. However, there remain difficulties in automatically identifying the cilia, and also in finding enough high-contrast cilia in the image. Furthermore, some of the higher-contrast cilia are lost (and sometimes re-found) by the method; an easy way to identify the correct sub-path of a point's path has yet to be found for the cases where the slope method does not work.
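The frequency-analysis step is straightforward to sketch: take the displacement of a tracked point over time and read off the dominant FFT peak. The track below is a synthetic 12-Hz sinusoid standing in for a real optical-flow point path.

```python
import numpy as np

fps = 500.0                                  # high-speed camera frame rate (Hz)
t = np.arange(0, 1.0, 1.0 / fps)
track = 0.5 * np.sin(2 * np.pi * 12.0 * t)   # simulated cilium displacement, 12 Hz

# Remove the mean (DC), then locate the dominant spectral peak.
spectrum = np.abs(np.fft.rfft(track - track.mean()))
freqs = np.fft.rfftfreq(len(track), d=1.0 / fps)
cbf = freqs[np.argmax(spectrum)]             # estimated beat frequency
print(cbf)  # 12.0
```

On real data the histogram over many tracked points, as in the paper, separates cilia frequencies from background and noise.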

  7. Calculating Path-Dependent Travel Time Prediction Variance and Covariance for the SALSA3D Global Tomographic P-Velocity Model with a Distributed Parallel Multi-Core Computer

    NASA Astrophysics Data System (ADS)

    Hipp, J. R.; Encarnacao, A.; Ballard, S.; Young, C. J.; Phillips, W. S.; Begnaud, M. L.

    2011-12-01

    Recently our combined SNL-LANL research team has succeeded in developing a global, seamless 3D tomographic P-velocity model (SALSA3D) that provides superior first P travel time predictions at both regional and teleseismic distances. However, given the variable data quality and uneven data sampling associated with this type of model, it is essential that there be a means to calculate high-quality estimates of the path-dependent variance and covariance associated with the predicted travel times of ray paths through the model. In this paper, we show a methodology for accomplishing this by exploiting the full model covariance matrix. Our model has on the order of 1/2 million nodes, so the challenge in calculating the covariance matrix is formidable: 0.9 TB storage for 1/2 of a symmetric matrix, necessitating an Out-Of-Core (OOC) blocked matrix solution technique. With our approach the tomography matrix (G which includes Tikhonov regularization terms) is multiplied by its transpose (GTG) and written in a blocked sub-matrix fashion. We employ a distributed parallel solution paradigm that solves for (GTG)-1 by assigning blocks to individual processing nodes for matrix decomposition update and scaling operations. We first find the Cholesky decomposition of GTG which is subsequently inverted. Next, we employ OOC matrix multiply methods to calculate the model covariance matrix from (GTG)-1 and an assumed data covariance matrix. Given the model covariance matrix we solve for the travel-time covariance associated with arbitrary ray-paths by integrating the model covariance along both ray paths. Setting the paths equal gives variance for that path. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
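The final step, integrating model covariance along two ray paths, reduces to a quadratic form once each ray is discretized into per-node sensitivities. The toy 4-node example below illustrates the algebra only; the actual SALSA3D matrix is half a million nodes and handled out-of-core in blocks, and these numbers are invented.

```python
import numpy as np

# Toy model covariance for 4 velocity nodes (symmetric positive definite).
C = np.array([[2.0, 0.5, 0.0, 0.0],
              [0.5, 1.5, 0.3, 0.0],
              [0.0, 0.3, 1.0, 0.2],
              [0.0, 0.0, 0.2, 0.8]])

# Per-node sensitivities, e.g. ray length through each node's region.
s_a = np.array([1.0, 0.5, 0.0, 0.0])    # ray path A
s_b = np.array([0.0, 0.5, 1.0, 0.5])    # ray path B

cov_ab = s_a @ C @ s_b                  # travel-time covariance between paths
var_a = s_a @ C @ s_a                   # setting the paths equal gives variance
```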

  8. Path Searching Based Fault Automated Recovery Scheme for Distribution Grid with DG

    NASA Astrophysics Data System (ADS)

    Xia, Lin; Qun, Wang; Hui, Xue; Simeng, Zhu

    2016-12-01

Path searching based on distribution network topology has proven effective in relay-setting software, and a path searching method that includes DG sources is also applicable to the automatic generation and division of planned islands after a fault. This paper applies a path searching algorithm to the automatic division of planned islands after faults: starting from the fault-isolation switch and ending at each power source, and, according to the line load traversed by the search path and the important load integrated along the optimized search path, an optimized division scheme of planned islands is formed in which each DG serves as a power source balanced against the local important load. Finally, COBASE software and the distribution network automation software in use are applied to illustrate the effectiveness of the automatic restoration program.
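One way to sketch island formation is a breadth-first search outward from each DG that adds feeder sections while the served load stays within the DG's capacity. The feeder graph, loads, and capacity below are invented toy values, and this greedy sketch ignores the paper's important-load prioritization.

```python
from collections import deque

def plan_island(adj, loads, dg_node, dg_capacity):
    """Grow an island outward from a DG by BFS, adding sections while the
    total served load stays within the DG capacity (toy greedy sketch)."""
    island, total = {dg_node}, loads[dg_node]
    queue = deque([dg_node])
    while queue:
        node = queue.popleft()
        for nxt in adj[node]:
            if nxt not in island and total + loads[nxt] <= dg_capacity:
                island.add(nxt)
                total += loads[nxt]
                queue.append(nxt)
    return island, total

# Toy feeder sections: S1 hosts the DG; loads in per-unit.
adj = {"S1": ["S2"], "S2": ["S1", "S3", "S4"], "S3": ["S2"], "S4": ["S2"]}
loads = {"S1": 0.2, "S2": 0.4, "S3": 0.5, "S4": 0.3}
island, served = plan_island(adj, loads, dg_node="S1", dg_capacity=1.0)
```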

  9. a Comparison of Morphological Taxonomy and Next Generation DNA Sequencing for the Assessment of Zooplankton Diversity

    NASA Astrophysics Data System (ADS)

    Harvey, J.; Fisher, J. L.; Johnson, S.; Morgan, S.; Peterson, W. T.; Satterthwaite, E. V.; Vrijenhoek, R. C.

    2016-02-01

Our ability to accurately characterize the diversity of planktonic organisms is affected by both the methods we use to collect water samples and our approaches to assessing sample contents. Plankton nets collect organisms from high volumes of water, but integrate sample contents along the net's path. In contrast, plankton pumps collect water from discrete depths. Autonomous underwater vehicles (AUVs) can collect water samples with pinpoint accuracy from physical features such as upwelling fronts or biological features such as phytoplankton blooms, but sample volumes are necessarily much smaller than those possible with nets. Characterization of plankton diversity and abundances in water samples may also vary with the assessment method we apply. Morphological taxonomy provides visual identification and enumeration of organisms via microscopy, but is labor intensive. Next generation DNA sequencing (NGS) shows great promise for assessing plankton diversity in water samples, but accurate assessment of relative abundances may not be possible in all cases. Comparison of morphological taxonomy to molecular approaches is necessary to identify areas of overlap and also areas of disagreement between these methods. We have compared morphological taxonomic assessments to mitochondrial COI and nuclear 28S ribosomal RNA NGS results for plankton net samples collected in Monterey Bay, California. We have made a similar comparison for plankton pump samples, and have also applied our NGS methods to targeted, small-volume water samples collected by an AUV. Our goal is to communicate current results and lessons learned regarding application of traditional taxonomy and novel molecular approaches to the study of plankton diversity in spatially and temporally variable, coastal marine environments.

  10. A new method for photon transport in Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Sato, T.; Ogawa, K.

    1999-12-01

Monte Carlo methods are used to evaluate data-processing methods such as scatter and attenuation compensation in single photon emission CT (SPECT), treatment planning in radiation therapy, and many industrial applications. In Monte Carlo simulation, photon transport requires calculating the distance from the location of the emitted photon to the nearest boundary of each uniform attenuating medium along its path of travel, and comparing this distance with the length of its path generated at emission. Here, the authors propose a new method that omits the calculation of the location of the exit point of the photon from each voxel and of the distance between the exit point and the original position. The method only checks the medium of each voxel along the photon's path. If the medium differs from that in the voxel from which the photon was emitted, the authors calculate the location of the entry point into that voxel, and the length of the path is compared with the mean free path length generated by a random number. Simulations using the MCAT phantom show that the ratios of the calculation time were 1.0 for the voxel-based method and 0.51 for the proposed method with a 256×256×256 matrix image, thereby confirming the effectiveness of the algorithm.
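The voxel-medium check is easy to sketch in 1D: walk voxel indices along the photon's direction and defer any exact boundary computation until a voxel with a different medium is encountered. This is illustrative only; the paper works on a 3D voxelized phantom.

```python
def first_medium_change(media, start, step=1):
    """Walk voxels from `start` along direction `step` and return the index of
    the first voxel whose medium differs from the emission voxel's medium,
    or None if no change is found. Only at that index would the exact
    boundary entry point need to be computed."""
    emission_medium = media[start]
    i = start + step
    while 0 <= i < len(media):
        if media[i] != emission_medium:
            return i
        i += step
    return None

# Photon emitted in water at voxel 2, travelling in the +i direction:
media = ["water"] * 5 + ["bone"] * 3 + ["water"] * 2
print(first_medium_change(media, start=2))   # 5: first bone voxel
```

Because most voxels along a path share the emission medium, the cheap equality test replaces many geometric boundary calculations, consistent with the reported ~2x speedup.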

  11. HAI: A novel airborne multi-channel hygrometer for fast multi-phase H2O quantification: Performance of the HAI instrument during the first flights on the German HALO aircraft

    NASA Astrophysics Data System (ADS)

    Buchholz, B.; Ebert, V.; Kraemer, M.; Afchine, A.

    2014-12-01

Common gas-phase H2O measurements on fast airborne platforms, e.g. using backward-facing or "Rosemount" inlets, carry a high risk of ice and droplet contamination. In addition, currently no single hygrometer exists that allows a simultaneous, high-speed measurement of all phases (gas, liquid, ice) with the same detection principle. On the rare occasions when multi-phase measurements are realized, gas- and condensed-phase observations rely on different methods, instruments, and calibration strategies, so that precision and accuracy levels are quite difficult to quantify. This is effectively avoided by the novel TDLAS instrument HAI, the Hygrometer for Atmospheric Investigation, which allows simultaneous, high-speed, multi-phase detection without any sensor calibration in a unique "2+2" channel concept. HAI combines two independent wavelength channels, at 1.4 µm and at 2.6 µm, for a wide dynamic range from 1 to 30 000 ppmv, with simultaneous closed-path (extractive) and open-path detection. Thus, "total" water, i.e. gas-phase plus condensed-phase water, is measured by sampling via a forward-facing inlet into "closed-path" extractive cells. A selective, sampling-free, high-speed gas-phase detection is realized via a dual-wavelength "open-path" cell placed outside the aircraft fuselage. All channels can be sampled at 120 Hz (measurement cycle time Δt=1.6 ms), allowing an unprecedented spatial resolution of 30 cm at 900 km/h. The evaluation of the individual multi-channel raw data is done post flight, without any channel interdependencies, in calibration-free mode, thus allowing fast, accurate, and precise multi-phase water detection in flight. The performance was demonstrated in more than 200 net flight hours during three scientific flight campaigns (TACTS, ESMVal, ML-CIRRUS) on the new German HALO aircraft. In addition, the accuracy of the calibration-free evaluation was evaluated against the German national primary water vapor standard.

  12. Finding Out Critical Points For Real-Time Path Planning

    NASA Astrophysics Data System (ADS)

    Chen, Wei

    1989-03-01

Path planning for a mobile robot is a classic topic, but path planning under a real-time environment is a different issue. The system resources, including sampling time, processing time, inter-process communication time, and memory space, are very limited for this type of application. This paper presents a method which abstracts the world representation from the sensory data and decides which point will be a potentially critical point to span the world map, using incomplete knowledge about the physical world and heuristic rules. Without any previous knowledge or map of the workspace, the robot determines the world map by roving through the workspace. The computational complexity for building and searching such a map is not more than O(n²). The find-path problem is well known in robotics. Given an object with an initial location and orientation, a goal location and orientation, and a set of obstacles located in space, the problem is to find a continuous path for the object from the initial position to the goal position which avoids collisions with obstacles along the way. There are many methods to find a collision-free path in a given environment. Techniques for solving this problem can be classified into three approaches: 1) the configuration-space approach [1],[2],[3], which represents the polygonal obstacles by vertices in a graph. The idea is to determine those parts of the free space which a reference point of the moving object can occupy without colliding with any obstacles. A path is then found for the reference point through this truly free space. Dealing with rotations turns out to be a major difficulty with this approach, requiring complex geometric algorithms which are computationally expensive. 2) the direct representation of the free space using basic shape primitives such as convex polygons [4] and overlapping generalized cones [5].
3) the combination of techniques 1 and 2 [6], by which the space is divided into primary convex regions, overlap regions, and obstacle regions; obstacle boundaries with attribute values are then represented by the vertices of a hypergraph. The primary convex regions and overlap regions are represented by hyperedges, and the centroids of the overlaps form the critical points. The difficulty lies in generating the segment graph and estimating the minimum path width. All the techniques mentioned above need previous knowledge about the world to perform path planning, and their computational cost is not low. They are not applicable in an unknown and uncertain environment. Due to limited system resources such as CPU time, memory size, and knowledge about the specific application in an intelligent system (such as a mobile robot), it is necessary to use algorithms that provide a good decision which is feasible with the available resources in real time, rather than the best answer that could be achieved in unlimited time with unlimited resources. A real-time path planner should meet the following requirements: - Quickly abstract the representation of the world from the sensory data without any previous knowledge about the robot environment. - Easily update the world model to spell out the global-path map and to reflect changes in the robot environment. - Make a decision of where the robot must go and which direction the range sensor should point to in real time with limited resources. The method presented here assumes that the data from range sensors has been processed by a signal processing unit. The path planner will guide the scan of the range sensor, find critical points, decide where the robot should go and which point is a potential critical point, generate the path map, and monitor the robot as it moves to the given point. The program runs recursively until the goal is reached or the whole workspace has been roved through.
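As a baseline for the find-path problem surveyed above, a breadth-first search over a discretized occupancy grid finds a shortest collision-free path for a point robot. This is the simplest discrete stand-in for the cited approaches, not the paper's critical-point method, and the grid is an invented example.

```python
from collections import deque

def find_path(grid, start, goal):
    """Breadth-first search for a shortest collision-free 4-connected path
    on an occupancy grid (1 = obstacle). Returns the path or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}                      # visited set + back-pointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # reconstruct by back-pointers
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = find_path(grid, (0, 0), (2, 0))        # must detour around the wall
```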

  13. Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods

    DOE PAGES

    Liu, Peigui; Elshall, Ahmed S.; Ye, Ming; ...

    2016-02-05

Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semianalytical expressions of the marginal likelihood, or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. In summary, the thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
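The path sampling identity behind thermodynamic integration, log Z = ∫₀¹ E_β[log L] dβ with expectations under the power posterior p_β ∝ prior × L^β, can be verified on a toy discrete problem where those expectations are computable exactly, with no MCMC needed. The prior and likelihood below are invented for illustration.

```python
import math

# Discrete toy problem: uniform prior over 10 parameter values,
# Gaussian-shaped likelihood centred at theta = 3.
thetas = range(10)
prior = {t: 0.1 for t in thetas}
loglike = {t: -0.5 * (t - 3) ** 2 for t in thetas}

def expected_loglike(beta):
    """E_beta[log L] under the power posterior p_beta ∝ prior * L**beta."""
    weights = {t: prior[t] * math.exp(beta * loglike[t]) for t in thetas}
    z = sum(weights.values())
    return sum(w * loglike[t] for t, w in weights.items()) / z

# Thermodynamic integration: log Z = integral over beta, trapezoid rule.
betas = [i / 200 for i in range(201)]
vals = [expected_loglike(b) for b in betas]
log_z_ti = sum((vals[i] + vals[i + 1]) / 2 * (betas[i + 1] - betas[i])
               for i in range(200))

# Direct evaluation of the marginal likelihood for comparison.
log_z_direct = math.log(sum(prior[t] * math.exp(loglike[t]) for t in thetas))
```

In a real application the exact expectation at each β is replaced by an MCMC average over the corresponding power posterior, which is where the method's cost lies.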

  14. Preparing Future Scholars for Academia and Beyond: A Mixed Method Investigation of Doctoral Students' Preparedness for Multiple Career Paths

    ERIC Educational Resources Information Center

    Cason, Jennifer

    2016-01-01

    This action research study is a mixed methods investigation of doctoral students' preparedness for multiple career paths. PhD students face two challenges preparing for multiple career paths: lack of preparation and limited engagement in conversations about the value of their research across multiple audiences. This study focuses on PhD students'…

  15. Analysis and elimination of a bias in targeted molecular dynamics simulations of conformational transitions: application to calmodulin.

    PubMed

    Ovchinnikov, Victor; Karplus, Martin

    2012-07-26

    The popular targeted molecular dynamics (TMD) method for generating transition paths in complex biomolecular systems is revisited. In a typical TMD transition path, the large-scale changes occur early and the small-scale changes tend to occur later. As a result, the order of events in the computed paths depends on the direction in which the simulations are performed. To identify the origin of this bias, and to propose a method in which the bias is absent, variants of TMD in the restraint formulation are introduced and applied to the complex open ↔ closed transition in the protein calmodulin. Due to the global best-fit rotation that is typically part of the TMD method, the simulated system is guided implicitly along the lowest-frequency normal modes, until the large spatial scales associated with these modes are near the target conformation. The remaining portion of the transition is described progressively by higher-frequency modes, which correspond to smaller-scale rearrangements. A straightforward modification of TMD that avoids the global best-fit rotation is the locally restrained TMD (LRTMD) method, in which the biasing potential is constructed from a number of TMD potentials, each acting on a small connected portion of the protein sequence. With a uniform distribution of these elements, transition paths that lack the length-scale bias are obtained. Trajectories generated by steered MD in dihedral angle space (DSMD), a method that avoids best-fit rotations altogether, also lack the length-scale bias. To examine the importance of the paths generated by TMD, LRTMD, and DSMD in the actual transition, we use the finite-temperature string method to compute the free energy profile associated with a transition tube around a path generated by each algorithm. The free energy barriers associated with the paths are comparable, suggesting that transitions can occur along each route with similar probabilities. 
This result indicates that a broad ensemble of paths needs to be calculated to obtain a full description of conformational changes in biomolecules. The breadth of the contributing ensemble suggests that energetic barriers for conformational transitions in proteins are offset by entropic contributions that arise from a large number of possible paths.
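The global best-fit rotation that drives the length-scale bias is the standard Kabsch superposition; a generic implementation (not the authors' code) is sketched below to make the operation concrete.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between N x 3 coordinate sets after optimal superposition:
    center both sets, find the best-fit rotation by SVD, then compare."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))            # guard against reflections
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    return float(np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1))))

# A rotated-and-translated copy superposes exactly (RMSD ~ 0):
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]])
Q = P @ Rz + np.array([3.0, -1.0, 2.0])
```

In TMD the restraint acts on this best-fit RMSD toward the target, which is what implicitly favors the low-frequency, large-scale modes first; LRTMD replaces the single global fit with many local ones.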

  16. DOAS (differential optical absorption spectroscopy) urban pollution measurements

    NASA Astrophysics Data System (ADS)

    Stevens, Robert K.; Vossler, T. L.

    1991-05-01

During July and August of 1990, a differential optical absorption spectrometer (DOAS) made by OPSIS Inc. was used to measure gaseous air pollutants over three separate open paths in Atlanta, GA. Over path 1 (1099 m) and path 2 (1824 m), ozone (O3), sulfur dioxide (SO2), nitrogen dioxide (NO2), nitrous acid (HNO2), formaldehyde (HCHO), benzene, toluene, and o-xylene were measured. Nitric oxide (NO) and ammonia (NH3) were monitored over path 3 (143 m). The data quality and data capture depended on the compound being measured and the path over which it was measured. Data quality criteria for each compound were chosen such that the average relative standard deviation would be less than 25%. Data capture ranged from 43% for o-xylene on path 1 to 95% for ozone on path 2. Benzene, toluene, and o-xylene concentrations measured over path 2, which crossed over an interstate highway, were higher than concentrations measured over path 1, implicating emissions from vehicles on the highway as a significant source of these compounds. Federal Reference Method (FRM) instruments were located near the DOAS light receivers, and measurements of O3, NO2, and NO were made concurrently with the DOAS. Correlation coefficients greater than 0.85 were obtained between the DOAS and FRMs; however, there was a difference between the mean values obtained by the two methods for O3 and NO. A gas chromatograph for measuring volatile organic compounds was operated next to the FRMs. Correlation coefficients of about 0.66 were obtained between the DOAS and GC measurements of benzene and o-xylene. However, the correlation coefficient between the DOAS and GC measurements of toluene averaged only 0.15 for the two DOAS measurement paths. The lack of correlation and other factors indicate the possibility of a localized source of toluene near the GC.
In general, disagreements between the two measurement methods could be caused by atmospheric inhomogeneities or interferences in the DOAS and other methods.
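The DOAS retrieval itself is a Beer-Lambert least-squares fit: the measured differential optical depth at each wavelength is modeled as the sum of cross section × concentration × path length over the absorbers. The cross-section values below are made up; only the path length (path 2) comes from the abstract.

```python
import numpy as np

# Toy differential cross sections for two gases on a 4-wavelength grid
# (made-up numbers; real DOAS uses measured high-resolution spectra).
sigma = np.array([[1.0, 0.2],
                  [0.6, 0.8],
                  [0.1, 1.2],
                  [0.9, 0.4]])         # rows: wavelengths, cols: gases
L = 1824.0                             # path length in metres (path 2)

c_true = np.array([2.0e-6, 5.0e-7])    # concentrations (arbitrary units)
tau = sigma @ c_true * L               # Beer-Lambert optical depths ln(I0/I)

# Least-squares retrieval of the concentrations from the optical depths.
c_fit, *_ = np.linalg.lstsq(sigma * L, tau, rcond=None)
```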

  17. Free energy surface of an intrinsically disordered protein: comparison between temperature replica exchange molecular dynamics and bias-exchange metadynamics.

    PubMed

    Zerze, Gül H; Miller, Cayla M; Granata, Daniele; Mittal, Jeetain

    2015-06-09

    Intrinsically disordered proteins (IDPs), which are expected to be largely unstructured under physiological conditions, make up a large fraction of eukaryotic proteins. Molecular dynamics simulations have been utilized to probe structural characteristics of these proteins, which are not always easily accessible to experiments. However, exploration of the conformational space by brute force molecular dynamics simulations is often limited by short time scales. Present literature provides a number of enhanced sampling methods to explore protein conformational space in molecular simulations more efficiently. In this work, we present a comparison of two enhanced sampling methods: temperature replica exchange molecular dynamics and bias exchange metadynamics. By investigating both the free energy landscape as a function of pertinent order parameters and the per-residue secondary structures of an IDP, namely, human islet amyloid polypeptide, we found that the two methods yield similar results as expected. We also highlight the practical difference between the two methods by describing the path that we followed to obtain both sets of data.
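The temperature replica exchange half of the comparison can be sketched by its Metropolis swap criterion, which is the standard form independent of any particular MD engine; the inverse temperatures and energies below are arbitrary examples.

```python
import math
import random

def swap_probability(beta_i, beta_j, E_i, E_j):
    """Metropolis acceptance probability for exchanging configurations
    between replicas at inverse temperatures beta_i and beta_j with
    instantaneous potential energies E_i and E_j."""
    return min(1.0, math.exp((beta_i - beta_j) * (E_i - E_j)))

def attempt_swap(beta_i, beta_j, E_i, E_j, rng=random):
    """Accept or reject a proposed replica exchange."""
    return rng.random() < swap_probability(beta_i, beta_j, E_i, E_j)

# A swap that puts the lower-energy configuration at the colder replica
# is always accepted:
p = swap_probability(beta_i=1.0, beta_j=0.5, E_i=-10.0, E_j=-20.0)
```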

  18. Quadcopter Path Following Control Design Using Output Feedback with Command Generator Tracker LOS Based At Square Path

    NASA Astrophysics Data System (ADS)

    Nugraha, A. T.; Agustinah, T.

    2018-01-01

    The quadcopter is an unstable, underactuated, and nonlinear system, which makes its control an important focus of research. In this study, a path-following control method for position on the X and Y axes using the Command Generator Tracker (CGT) structure is tested. Quadcopter attitude and position control using optimal output feedback are compared. H∞ performance is added to the optimal output feedback control to maintain the stability and robustness of the quadcopter. An iterative numerical technique based on Linear Matrix Inequalities (LMIs) is used to find the controller gain. The path-following control problem is solved using the LQ regulator method with output feedback. Simulations show that the control system can follow paths defined by a square-shaped reference signal and that the method can bring the yaw angle to the expected value. The quadcopter follows the path automatically with mean cross-track errors of X = 0.5 m and Y = 0.2 m.

  19. A human factors framework and study of the effect of nursing workload on patient safety and employee quality of working life

    PubMed Central

    Holden, Richard J.; Scanlon, Matthew C.; Patel, Neal R.; Kaushal, Rainu; Escoto, Kamisha Hamilton; Brown, Roger L.; Alper, Samuel J.; Arnold, Judi M.; Shalaby, Theresa M.; Murkowski, Kathleen; Karsh, Ben-Tzion

    2010-01-01

    Background Nursing workload is increasingly thought to contribute to both nurses’ quality of working life and quality/safety of care. Prior studies lack a coherent model for conceptualizing and measuring the effects of workload in health care. In contrast, we conceptualized a human factors model for workload specifying workload at three distinct levels of analysis and having multiple nurse and patient outcomes. Methods To test this model, we analyzed results from a cross-sectional survey of a volunteer sample of nurses in six units of two academic tertiary care pediatric hospitals. Results Workload measures were generally correlated with outcomes of interest. A multivariate structural model revealed that: the unit-level measure of staffing adequacy was significantly related to job dissatisfaction (path loading = .31) and burnout (path loading = .45); the task-level measure of mental workload related to interruptions, divided attention, and being rushed was associated with burnout (path loading = .25) and medication error likelihood (path loading = 1.04). Job-level workload was not uniquely and significantly associated with any outcomes. Discussion The human factors engineering model of nursing workload was supported by data from two pediatric hospitals. The findings provided a novel insight into specific ways that different types of workload could affect nurse and patient outcomes. These findings suggest further research and yield a number of human factors design suggestions. PMID:21228071

  20. Can a Point-of-Care Troponin I Assay be as Good as a Central Laboratory Assay? A MIDAS Investigation

    PubMed Central

    Diercks, Deborah; Birkhahn, Robert; Singer, Adam J.; Hollander, Judd E.; Nowak, Richard; Safdar, Basmah; Miller, Chadwick D.; Peberdy, Mary; Counselman, Francis; Chandra, Abhinav; Kosowsky, Joshua; Neuenschwander, James; Schrock, Jon; Lee-Lewandrowski, Elizabeth; Arnold, William; Nagurney, John

    2016-01-01

    Background We aimed to compare the diagnostic accuracy of the Alere Triage Cardio3 Troponin I (TnI) assay (Alere, Inc., USA) and the PathFast cTnI-II (Mitsubishi Chemical Medience Corporation, Japan) against the central laboratory assay Singulex Erenna TnI assay (Singulex, USA). Methods Using the Markers in the Diagnosis of Acute Coronary Syndromes (MIDAS) study population, we evaluated the ability of three different assays to identify patients with acute myocardial infarction (AMI). The MIDAS dataset, described elsewhere, is a prospective multicenter dataset of emergency department (ED) patients with suspected acute coronary syndrome (ACS) and a planned objective myocardial perfusion evaluation. Myocardial infarction (MI) was diagnosed by central adjudication. Results The C-statistic with 95% confidence intervals (CI) for diagnosing MI by using a common population (n=241) was 0.95 (0.91-0.99), 0.95 (0.91-0.99), and 0.93 (0.89-0.97) for the Triage, Singulex, and PathFast assays, respectively. Of samples with detectable troponin, the absolute values had high Pearson (RP) and Spearman (RS) correlations: RP = 0.94 and RS = 0.94 for Triage vs Singulex, RP = 0.93 and RS = 0.85 for Triage vs PathFast, and RP = 0.89 and RS = 0.73 for PathFast vs Singulex. Conclusions In a single comparative population of ED patients with suspected ACS, the Triage Cardio3 TnI, PathFast, and Singulex TnI assays provided similar diagnostic performance for MI. PMID:27374704

  1. Patch-based frame interpolation for old films via the guidance of motion paths

    NASA Astrophysics Data System (ADS)

    Xia, Tianran; Ding, Youdong; Yu, Bing; Huang, Xi

    2018-04-01

    Due to improper preservation, traditional films often suffer frame loss after digitization. To deal with this problem, this paper presents a new adaptive patch-based method of frame interpolation guided by motion paths. Our method is divided into three steps. First, we compute motion paths between two reference frames using optical flow estimation. Then, adaptive bidirectional interpolation with hole filling is applied to generate pre-intermediate frames. Finally, patch matching is used to interpolate the intermediate frames from the most similar patches. Since the patch matching operates on pre-intermediate frames that embed the motion-path constraint, the resulting frame interpolation looks natural and unforced. We tested the method on different types of old film sequences and compared it with other methods; the results show that our method achieves the desired performance without hole or ghost effects.

  2. ECG fiducial point extraction using switching Kalman filter.

    PubMed

    Akhbari, Mahsa; Ghahjaverestan, Nasim Montazeri; Shamsollahi, Mohammad B; Jutten, Christian

    2018-04-01

    In this paper, we propose a novel method for extracting fiducial points (FPs) of beats in electrocardiogram (ECG) signals using a switching Kalman filter (SKF). In this method, following McSharry's model, the ECG waveforms (P-wave, QRS complex, and T-wave) are modeled with Gaussian functions and the ECG baselines are modeled with first-order autoregressive models. A discrete state variable called the "switch" is introduced that affects only the observation equations: each value of the switch selects a specific observation equation, called a mode, and the switch moves among 7 modes corresponding to different segments of an ECG beat. At each time instant, the probability of each mode is calculated and compared between two consecutive modes, and a path is estimated that relates each part of the ECG signal to the mode with the maximum probability. The ECG FPs are then read off the estimated path. For performance evaluation, the Physionet QT database is used and the proposed method is compared with methods based on the wavelet transform, the partially collapsed Gibbs sampler (PCGS), and the extended Kalman filter. For our proposed method, the mean error and the root mean square error across all FPs are 2 ms (i.e., less than one sample) and 14 ms, respectively. These errors are significantly smaller than those obtained with the other methods, and the proposed method achieves lower RMSE and smaller variability.
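    The Gaussian waveform modeling underlying this approach (in the spirit of McSharry's dynamical model) can be illustrated with a toy beat synthesizer; the amplitudes, centers, and widths below are illustrative placeholders, not parameters from the paper:

```python
import numpy as np

# Illustrative (not the paper's) Gaussian-kernel parameters for the five
# ECG waves: amplitude a, center mu (radians over one beat), width b.
WAVES = {
    "P": (0.12, -np.pi / 3, 0.25),
    "Q": (-0.10, -np.pi / 12, 0.10),
    "R": (1.00, 0.0, 0.10),
    "S": (-0.15, np.pi / 12, 0.10),
    "T": (0.30, np.pi / 2, 0.40),
}

def ecg_beat(n_samples=500):
    """Synthesize one ECG beat as a sum of Gaussian functions of the
    beat phase theta in [-pi, pi]; the R peak sits at theta = 0."""
    theta = np.linspace(-np.pi, np.pi, n_samples)
    z = np.zeros_like(theta)
    for a, mu, b in WAVES.values():
        z += a * np.exp(-((theta - mu) ** 2) / (2 * b ** 2))
    return theta, z
```

In the SKF setting, each such Gaussian segment would supply its own observation equation, with the switch variable choosing which one is active.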

  3. Comparison of micrometeorological methods using open-path optical instruments for measuring methane emission from agricultural sites

    USDA-ARS?s Scientific Manuscript database

    In this study, we evaluated the accuracies of two relatively new micrometeorological methods using open-path tunable diode laser absorption spectrometers: the vertical radial plume mapping method (US EPA OTM-10) and the backward Lagrangian stochastic method (Wintrax®). We have evaluated the accuracy of t...

  4. Evolutionistic or revolutionary paths? A PACS maturity model for strategic situational planning.

    PubMed

    van de Wetering, Rogier; Batenburg, Ronald; Lederman, Reeva

    2010-07-01

    While many hospitals are re-evaluating their current Picture Archiving and Communication System (PACS), few have a mature strategy for PACS deployment. Furthermore, strategies for implementation and strategic, situational planning methods for the evolution of PACS maturity are scarce in the scientific literature. Consequently, in this paper we propose a strategic planning method for PACS deployment. This method builds upon a PACS maturity model (PMM), based on the elaboration of the strategic alignment concept and the maturity growth path concept previously developed in the PACS domain. First, we review the literature on strategic planning for information systems and information technology and on PACS maturity. Secondly, the PMM is extended by applying four different strategic perspectives of the Strategic Alignment Framework, whereupon two types of growth paths (evolutionistic and revolutionary) are applied to form a roadmap for the PMM. This roadmap charts a path from one level of maturity to the next. An extended method for PACS strategic planning is thus developed. This method defines eight distinctive strategies for PACS strategic situational planning that allow decision-makers in hospitals to decide which approach best suits their hospital's current situation and future ambition and what in principle is needed to evolve through the different maturity levels. The proposed method allows hospitals to strategically plan for PACS maturation. It is situational in that the required investments and activities depend on the alignment between the hospital strategy and the selected growth path. The inclusion of both the strategic alignment and maturity growth path concepts makes the planning method rigorous and provides a framework for further empirical research and clinical practice.

  5. Flow-controlled magnetic particle manipulation

    DOEpatents

    Grate, Jay W [West Richland, WA; Bruckner-Lea, Cynthia J [Richland, WA; Holman, David A [Las Vegas, NV

    2011-02-22

    Inventive methods and apparatus are useful for collecting magnetic materials in one or more magnetic fields and resuspending the particles into a dispersion medium, and optionally repeating collection/resuspension one or more times in the same or a different medium, by controlling the direction and rate of fluid flow through a fluid flow path. The methods provide for contacting derivatized particles with test samples and reagents, removal of excess reagent, washing of magnetic material, and resuspension for analysis, among other uses. The methods are applicable to a wide variety of chemical and biological materials that are susceptible to magnetic labeling, including, for example, cells, viruses, oligonucleotides, proteins, hormones, receptor-ligand complexes, environmental contaminants and the like.

  6. Radar prediction of absolute rain fade distributions for earth-satellite paths and general methods for extrapolation of fade statistics to other locations

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1982-01-01

    The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar data base. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.

  7. Detecting eye movements in dynamic environments.

    PubMed

    Reimer, Bryan; Sodhi, Manbir

    2006-11-01

    To take advantage of the increasing number of in-vehicle devices, automobile drivers must divide their attention between primary (driving) and secondary (operating in-vehicle device) tasks. In dynamic environments such as driving, however, it is not easy to identify and quantify how a driver focuses on the various tasks he/she is simultaneously engaged in, including the distracting tasks. Measures derived from the driver's scan path have been used as correlates of driver attention. This article presents a methodology for analyzing eye positions, which are discrete samples of a subject's scan path, in order to categorize driver eye movements. Previous methods of analyzing eye positions recorded in a dynamic environment have relied completely on the manual identification of the focus of visual attention from a point of regard superimposed on a video of a recorded scene, failing to utilize information regarding movement structure in the raw recorded eye positions. Although effective, these methods are too time consuming to be easily used when processing the large data sets that would be required to identify subtle differences between drivers, under different road conditions, and with different levels of distraction. The aim of the methods presented in this article is to extend the degree of automation in the processing of eye movement data by proposing a methodology for eye movement analysis that extends automated fixation identification to include smooth and saccadic movements. By identifying eye movements in the recorded eye positions, a method of reducing the analysis of scene video to a finite search space is presented. The implementation of a software tool for the eye movement analysis is described, including an example from an on-road test-driving sample.
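    Automated classification of raw eye positions into fixations, smooth pursuits, and saccades is commonly done with velocity thresholds. A minimal sketch of such a velocity-threshold classifier (the thresholds are illustrative defaults, not values from the article):

```python
import numpy as np

def classify_eye_movements(x, y, dt, saccade_thresh=300.0, pursuit_thresh=30.0):
    """Label each interval between consecutive eye-position samples as
    'fixation', 'pursuit', or 'saccade' by comparing the point-to-point
    angular speed (deg/s) against two thresholds.

    x, y : gaze coordinates in degrees; dt : sampling interval in seconds.
    """
    vx = np.diff(x) / dt
    vy = np.diff(y) / dt
    speed = np.hypot(vx, vy)  # instantaneous angular speed, deg/s
    return np.where(speed >= saccade_thresh, "saccade",
           np.where(speed >= pursuit_thresh, "pursuit", "fixation"))
```

A real pipeline would also merge adjacent same-label intervals into events and filter out blinks; the thresholding step above is the core of the automation.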

  8. Numerical and experimental study on the wave attenuation in bone--FDTD simulation of ultrasound propagation in cancellous bone.

    PubMed

    Nagatani, Yoshiki; Mizuno, Katsunori; Saeki, Takashi; Matsukawa, Mami; Sakaguchi, Takefumi; Hosoi, Hiroshi

    2008-11-01

    In cancellous bone, longitudinal waves often separate into fast and slow waves depending on the alignment of bone trabeculae in the propagation path. This interesting phenomenon becomes an effective tool for the diagnosis of osteoporosis because wave propagation behavior depends on the bone structure. Since the fast wave mainly propagates in trabeculae, this wave is considered to reflect the structure of the trabeculae. For a new diagnosis method using the information in this fast wave, therefore, it is necessary to understand its generation mechanism and propagation behavior precisely. In this study, the generation process of the fast wave was examined by numerical simulations using the elastic finite-difference time-domain (FDTD) method and by experimental measurements. As simulation models, three-dimensional X-ray computed tomography (CT) data of actual bone samples were used. Simulation and experimental results showed that the attenuation of the fast wave was always higher in the early stage of propagation and gradually decreased as the wave propagated in bone. This phenomenon is thought to arise from the complicated propagation paths of the fast wave in cancellous bone.
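    The FDTD idea behind such simulations can be illustrated in one dimension. The sketch below leapfrogs pressure and particle velocity to propagate a pulse through a uniform medium; it is a toy stand-in for, not a reproduction of, the elastic 3-D bone model used in the study (normalized density, illustrative grid parameters):

```python
import numpy as np

def fdtd_1d(nx=400, nt=300, c=1500.0, dx=1e-3):
    """Minimal 1-D acoustic FDTD: staggered-grid leapfrog update of
    pressure p (nx cells) and particle velocity v (nx+1 faces), with a
    Gaussian pulse injected at a source cell. Density is normalized to 1."""
    dt = 0.5 * dx / c              # CFL-stable time step (Courant number 0.5)
    p = np.zeros(nx)
    v = np.zeros(nx + 1)           # edge values stay 0: rigid boundaries
    src = 50                       # source grid index
    for n in range(nt):
        v[1:-1] -= dt / dx * (p[1:] - p[:-1])          # momentum update
        p -= c ** 2 * dt / dx * (v[1:] - v[:-1])       # pressure update
        p[src] += np.exp(-((n - 30) ** 2) / 50.0)      # Gaussian source pulse
    return p
```

After 300 steps the injected pulse has split and traveled away from the source, which is the basic mechanism the bone simulations resolve in 3-D on CT-derived geometries.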

  9. Spectral gap optimization of order parameters for sampling complex molecular systems

    PubMed Central

    Tiwary, Pratyush; Berne, B. J.

    2016-01-01

    In modern-day simulations of many-body systems, much of the computational complexity is shifted to the identification of slowly changing molecular order parameters called collective variables (CVs) or reaction coordinates. A vast array of enhanced-sampling methods are based on the identification and biasing of these low-dimensional order parameters, whose fluctuations are important in driving rare events of interest. Here, we describe a new algorithm for finding optimal low-dimensional CVs for use in enhanced-sampling biasing methods like umbrella sampling, metadynamics, and related methods, when limited prior static and dynamic information is known about the system, and a much larger set of candidate CVs is specified. The algorithm involves estimating the best combination of these candidate CVs, as quantified by a maximum path entropy estimate of the spectral gap for dynamics viewed as a function of that CV. The algorithm is called spectral gap optimization of order parameters (SGOOP). Through multiple practical examples, we show how this postprocessing procedure can lead to optimization of CV and several orders of magnitude improvement in the convergence of the free energy calculated through metadynamics, essentially giving the ability to extract useful information even from unsuccessful metadynamics runs. PMID:26929365
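    The central quantity in SGOOP is the spectral gap of the dynamics projected onto a trial CV. Given a row-stochastic transition matrix estimated along that CV, the gap follows from its eigenvalue spectrum; a minimal sketch (the maximum-entropy construction of the transition matrix itself is omitted):

```python
import numpy as np

def spectral_gap(transition_matrix, n_slow=1):
    """Spectral gap of a row-stochastic transition matrix: the difference
    between the n_slow-th and (n_slow+1)-th largest eigenvalue magnitudes.
    The leading eigenvalue (1) corresponds to the stationary distribution;
    n_slow counts the slow, barrier-crossing processes of interest."""
    eigvals = np.sort(np.abs(np.linalg.eigvals(transition_matrix)))[::-1]
    return eigvals[n_slow] - eigvals[n_slow + 1]
```

A good CV separates the slow eigenvalues cleanly from the fast ones, so SGOOP ranks candidate CV combinations by maximizing this gap.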

  10. Career paths in physicians' postgraduate training - an eight-year follow-up study.

    PubMed

    Buddeberg-Fischer, Barbara; Stamm, Martina; Klaghofer, Richard

    2010-10-06

    To date, there are hardly any studies on the choice of career path in medical school graduates. The present study aimed to investigate what career paths can be identified in the course of postgraduate training of physicians; what factors have an influence on the choice of a career path; and in what way the career paths are correlated with career-related factors as well as with work-life balance aspirations. The data reported originates from five questionnaire surveys of the prospective SwissMedCareer Study, beginning in 2001 (T1, last year of medical school). The study sample consisted of 358 physicians (197 females, 55%; 161 males, 45%) participating at each assessment from T2 (2003, first year of residency) to T5 (2009, seventh year of residency), answering the question: What career do you aspire to have? Furthermore, personal characteristics, chosen specialty, career motivation, mentoring experience, work-life balance as well as workload, career success and career satisfaction were assessed. Career paths were analysed with cluster analysis, and differences between clusters analysed with multivariate methods. The cluster analysis revealed four career clusters which discriminated distinctly between each other: (1) career in practice, (2) hospital career, (3) academic career, and (4) changing career goal. From T3 (third year of residency) to T5, respondents in Cluster 1-3 were rather stable in terms of their career path aspirations, while those assigned to Cluster 4 showed a high fluctuation in their career plans. Physicians in Cluster 1 showed high values in extraprofessional concerns and often consider part-time work. Cluster 2 and 3 were characterised by high instrumentality, intrinsic and extrinsic career motivation, career orientation and high career success. No cluster differences were seen in career satisfaction. In Cluster 1 and 4, females were overrepresented. 
Trainees should be supported in staying on the career path that best suits their personal and professional profile. Attention should be paid to the subgroup of physicians in Cluster 4, who switch from one career goal to another in the course of their postgraduate training.

  11. Feynman path integral application on deriving black-scholes diffusion equation for european option pricing

    NASA Astrophysics Data System (ADS)

    Utama, Briandhika; Purqon, Acep

    2016-08-01

    The path integral is a method for transforming a function from its initial condition to its final condition by multiplying the initial condition by a transition probability function known as the propagator. In its early development, several studies applied this method only to problems in quantum mechanics. Nevertheless, the path integral can also be applied to other subjects with some modifications of the propagator function. In this study, we investigate the application of the path integral method to financial derivatives, specifically stock options. The Black-Scholes model (Nobel Prize, 1997) was the starting anchor of option pricing studies. Although this model does not predict option prices perfectly, especially because of its sensitivity to major market changes, the Black-Scholes model is still a legitimate equation for pricing an option. Its derivation is rather difficult because it is a stochastic partial differential equation. The Black-Scholes equation shares a principle with the path integral: in Black-Scholes, the share's initial price is transformed into its final price. The Black-Scholes propagator function is then derived by introducing a modified Lagrangian based on the Black-Scholes equation. Furthermore, we study the correlation between the path integral analytical solution and the Monte Carlo numerical solution to find the similarity between these two methods.
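    The analytical-versus-Monte-Carlo comparison described here can be reproduced in miniature: the closed-form Black-Scholes price of a European call against a Monte Carlo estimate over simulated geometric-Brownian-motion terminal prices. A self-contained sketch with illustrative parameters (not the study's own code):

```python
import math
import random

def black_scholes_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call option."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def monte_carlo_call(S0, K, r, sigma, T, n_paths=200_000, seed=1):
    """Monte Carlo price: simulate terminal prices under risk-neutral
    geometric Brownian motion and discount the average payoff."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n_paths):
        ST = S0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(ST - K, 0.0)
    return math.exp(-r * T) * total / n_paths
```

With enough paths the two estimates agree to within Monte Carlo error, which is exactly the kind of correspondence the paper examines between the propagator-based solution and numerics.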

  12. AEDT sensor path methods using BADA4

    DOT National Transportation Integrated Search

    2017-06-01

    This report documents the development and use of sensor path data processing in the Federal Aviation Administration's (FAAs) Aviation Environmental Design Tool (AEDT). The methods are primarily intended to assist analysts with using AEDT to determ...

  13. Method and apparatus for dispensing compressed natural gas and liquified natural gas to natural gas powered vehicles

    DOEpatents

    Bingham, Dennis A.; Clark, Michael L.; Wilding, Bruce M.; Palmer, Gary L.

    2005-05-31

    A fueling facility and method for dispensing liquid natural gas (LNG), compressed natural gas (CNG), or both on demand. The fueling facility may include a source of LNG, such as a cryogenic storage vessel. A low volume, high pressure pump is coupled to the source of LNG to produce a stream of pressurized LNG. The stream of pressurized LNG may be selectively directed through an LNG flow path or to a CNG flow path, which includes a vaporizer configured to produce CNG from the pressurized LNG. A portion of the CNG may be drawn from the CNG flow path and introduced into the LNG flow path to control the temperature of LNG flowing therethrough. Similarly, a portion of the LNG may be drawn from the LNG flow path and introduced into the CNG flow path to control the temperature of CNG flowing therethrough.

  14. Teleconnection Paths via Climate Network Direct Link Detection.

    PubMed

    Zhou, Dong; Gozolchiani, Avi; Ashkenazy, Yosef; Havlin, Shlomo

    2015-12-31

    Teleconnections describe remote connections (typically thousands of kilometers) of the climate system. These are of great importance in climate dynamics as they reflect the transportation of energy and climate change on global scales (like the El Niño phenomenon). Yet, the path of influence propagation between such remote regions, and weighting associated with different paths, are only partially known. Here we propose a systematic climate network approach to find and quantify the optimal paths between remotely distant interacting locations. Specifically, we separate the correlations between two grid points into direct and indirect components, where the optimal path is found based on a minimal total cost function of the direct links. We demonstrate our method using near surface air temperature reanalysis data, on identifying cross-latitude teleconnections and their corresponding optimal paths. The proposed method may be used to quantify and improve our understanding regarding the emergence of climate patterns on global scales.
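    The optimal-path idea can be sketched with a standard shortest-path search over the correlation network. Below, the edge cost -log|r| is an illustrative choice that turns maximizing the product of link correlations into minimizing a sum of costs; the paper's actual cost function may differ:

```python
import heapq
import math

def optimal_path(corr, src, dst):
    """Dijkstra search over a fully connected climate network.
    corr[i][j] is the correlation between grid points i and j; each edge
    is weighted -log|r|, so the minimum-cost path maximizes the product
    of absolute link correlations along the path."""
    n = len(corr)
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v in range(n):
            r = corr[u][v]
            if v == u or r == 0:
                continue
            nd = d - math.log(abs(r))
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst        # walk predecessors back to the source
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

With this weighting, a weak direct link (r = 0.3) loses to a two-hop route through strong links (0.9 each), mirroring how indirect teleconnection paths can dominate.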

  15. Crack Path Selection in Thermally Loaded Borosilicate/Steel Bibeam Specimen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grutzik, Scott Joseph; Reedy, Jr., E. D.

    Here, we have developed a novel specimen for studying crack paths in glass. Under certain conditions, the specimen reaches a state where the crack must select between multiple paths satisfying the K_II = 0 condition. This path selection is a simple but challenging benchmark case for both analytical and numerical methods of predicting crack propagation. We document the development of the specimen, using an uncracked and instrumented test case to study the effect of adhesive choice and validate the accuracy of both a simple beam theory model and a finite element model. In addition, we present preliminary fracture test results and provide a comparison to the path predicted by two numerical methods (mesh restructuring and XFEM). The directional stability of the crack path and the differences in kink angle predicted by various crack kinking criteria are analyzed with a finite element model.

  16. Crack Path Selection in Thermally Loaded Borosilicate/Steel Bibeam Specimen

    DOE PAGES

    Grutzik, Scott Joseph; Reedy, Jr., E. D.

    2017-08-04

    Here, we have developed a novel specimen for studying crack paths in glass. Under certain conditions, the specimen reaches a state where the crack must select between multiple paths satisfying the K_II = 0 condition. This path selection is a simple but challenging benchmark case for both analytical and numerical methods of predicting crack propagation. We document the development of the specimen, using an uncracked and instrumented test case to study the effect of adhesive choice and validate the accuracy of both a simple beam theory model and a finite element model. In addition, we present preliminary fracture test results and provide a comparison to the path predicted by two numerical methods (mesh restructuring and XFEM). The directional stability of the crack path and the differences in kink angle predicted by various crack kinking criteria are analyzed with a finite element model.

  17. Lateral position detection and control for friction stir systems

    DOEpatents

    Fleming, Paul; Lammlein, David H.; Cook, George E.; Wilkes, Don Mitchell; Strauss, Alvin M.; Delapp, David R.; Hartman, Daniel A.

    2012-06-05

    An apparatus and computer program are disclosed for processing at least one workpiece using a rotary tool with rotating member for contacting and processing the workpiece. The methods include oscillating the rotary tool laterally with respect to a selected propagation path for the rotating member with respect to the workpiece to define an oscillation path for the rotating member. The methods further include obtaining force signals or parameters related to the force experienced by the rotary tool at least while the rotating member is disposed at the extremes of the oscillation. The force signals or parameters associated with the extremes can then be analyzed to determine a lateral position of the selected path with respect to a target path and a lateral offset value can be determined based on the lateral position. The lateral distance between the selected path and the target path can be decreased based on the lateral offset value.

  18. Predictor laws for pictorial flight displays

    NASA Technical Reports Server (NTRS)

    Grunwald, A. J.

    1985-01-01

    Two predictor laws are formulated and analyzed: (1) a circular path law based on constant accelerations perpendicular to the path and (2) a predictor law based on state transition matrix computations. It is shown that for both methods the predictor provides the essential lead zeros for the path-following task. However, in contrast to the circular path law, the state transition matrix law furnishes the system with additional zeros that entirely cancel out the higher-frequency poles of the vehicle dynamics. On the other hand, the circular path law yields a zero steady-state error in following a curved trajectory with a constant radius. A combined predictor law is suggested that utilizes the advantages of both methods. A simple analysis shows that the optimal prediction time mainly depends on the level of precision required in the path-following task, and guidelines for determining the optimal prediction time are given.

  19. Fast tracking of wind speed with a differential absorption LiDAR system: first results of an experimental campaign at Stromboli volcano

    NASA Astrophysics Data System (ADS)

    Parracino, Stefano; Santoro, Simone; Maio, Giovanni; Nuvoli, Marcello; Aiuppa, Alessandro; Fiorani, Luca

    2017-04-01

    Carbon dioxide (CO2) is considered by volcanologists to be a precursor gas of volcanic eruptions. By monitoring its anomalous release, useful information can be retrieved for the mitigation of volcanic hazards, including for air traffic security. From a dataset collected during the Stromboli volcano field campaign, a fast-tracking assessment of the wind speed along both horizontal and vertical paths was retrieved. This was obtained with a newly designed shot-per-shot differential absorption LiDAR system operated in the near-infrared spectral region, which allows the simultaneous reconstruction of CO2 concentrations and wind speeds from the same sample of LiDAR returns. A correlation method was used for the wind speed retrieval, in which the transport of spatial inhomogeneities of the aerosol backscattering coefficient along the optical path of the system was analyzed.

  20. Socialization of Culture and Coping with Discrimination Among American Indian Families: Examining Cultural Correlates of Youth Outcomes

    PubMed Central

    Yasui, Miwa; Dishion, Thomas J.; Stormshak, Elizabeth; Ball, Alison

    2016-01-01

    Objective The current study examines the interrelations between observed parental cultural socialization and socialization of coping with discrimination, and youth outcomes among a sample of 92 American Indian adolescents and their parents in a rural reservation. Method Path analysis is used to examine the relationships among observed parental socialization (cultural socialization and socialization of coping with discrimination), and youth-reported perceived discrimination, ethnic identity and depression. Results Findings reveal that higher levels of observed parental cultural socialization and socialization of coping with discrimination predict lower levels of depression as reported by youth 1 year later. Path analyses also show that observed parental cultural socialization and socialization of coping with discrimination are positively associated with youth ethnic identity. Conclusions These findings point to the importance of integrating familial socialization of culture and coping with discrimination in fostering resilience among American Indian youth. PMID:28503256

  1. Wavefront division digital holography

    NASA Astrophysics Data System (ADS)

    Zhang, Wenhui; Cao, Liangcai; Li, Rujia; Zhang, Hua; Zhang, Hao; Jiang, Qiang; Jin, Guofan

    2018-05-01

    Digital holography (DH), mostly based on the Mach-Zehnder configuration, belongs to non-common-path amplitude-splitting interference imaging, whose stability and fringe contrast are environmentally sensitive. This paper presents a wavefront-division DH configuration with both high stability and high-contrast fringes, benefitting from quasi-common-path wavefront-splitting interference. In our proposal, two spherical waves with similar curvature coming from the same wavefront are used, which makes full use of the physical sampling capacity of the detectors. The interference fringe spacing can be adjusted flexibly for both in-line and off-axis modes thanks to the independent modulation of these two waves. Only a few optical elements, including the mirror-beam splitter interference component, are used, without strict alignment requirements, which makes the setup robust and easy to implement. The proposed wavefront-division DH moves interference imaging a step closer to practical, miniaturized implementation. The feasibility of this method is demonstrated by imaging a resolution target and a water flea.

  2. Path Integral Metadynamics.

    PubMed

    Quhe, Ruge; Nava, Marco; Tiwary, Pratyush; Parrinello, Michele

    2015-04-14

    We develop a new efficient approach for the simulation of static properties of quantum systems using path integral molecular dynamics in combination with metadynamics. We use the isomorphism between a quantum system and a classical one in which a quantum particle is mapped into a ring polymer. A history dependent biasing potential is built as a function of the elastic energy of the isomorphic polymer. This enhances fluctuations in the shape and size of the necklace in a controllable manner and allows escaping deep energy minima in a limited computer time. In this way, we are able to sample high free energy regions and cross barriers, which would otherwise be insurmountable with unbiased methods. This substantially improves the ability of finding the global free energy minimum as well as exploring other metastable states. The performance of the new technique is demonstrated by illustrative applications on model potentials of varying complexity.
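The history-dependent biasing idea is easiest to see in one dimension. The sketch below is a generic metadynamics illustration, not the authors' path-integral variant (where the collective variable is the elastic energy of the isomorphic ring polymer): Gaussians deposited along the visited values of a collective variable s(x) = x progressively fill the occupied well, letting a Metropolis walker escape a deep minimum in limited simulation time. All parameters are invented for illustration.

```python
import math
import random

random.seed(2)

def potential(x):
    return (x * x - 1.0) ** 2 + 0.5 * x    # asymmetric double well

hills = []                                  # centres of deposited Gaussians
W, SIGMA, BETA = 0.1, 0.2, 8.0              # hill height/width, inverse temperature

def bias(x):
    """History-dependent biasing potential: a sum of deposited Gaussians."""
    return sum(W * math.exp(-(x - c) ** 2 / (2 * SIGMA ** 2)) for c in hills)

x = -1.0                                    # start in the deeper left well
visited_right = False
for step in range(6000):
    trial = x + random.uniform(-0.15, 0.15)
    du = potential(trial) + bias(trial) - potential(x) - bias(x)
    if random.random() < math.exp(min(0.0, -BETA * du)):
        x = trial
    if step % 20 == 0:
        hills.append(x)                     # grow the bias where the walker sits
    if x > 0.9:
        visited_right = True                # escaped over the barrier
```

Without the bias, the escape probability per attempt is roughly exp(-BETA * barrier) and the walker would almost never leave the deep well in this many steps; the accumulating hills make the crossing routine.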

  3. Ultraviolet absorption hygrometer

    DOEpatents

    Gersh, M.E.; Bien, F.; Bernstein, L.S.

    1986-12-09

    An ultraviolet absorption hygrometer is provided including a source of pulsed ultraviolet radiation for providing radiation in a first wavelength region where water absorbs significantly and in a second proximate wavelength region where water absorbs weakly. Ultraviolet radiation in the first and second regions which has been transmitted through a sample path of atmosphere is detected. The intensity of the radiation transmitted in each of the first and second regions is compared and from this comparison the amount of water in the sample path is determined. 5 figs.
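The two-wavelength comparison amounts to differential Beer-Lambert absorption: the source and path factors cancel in the intensity ratio, leaving the water column. A minimal sketch with invented cross sections and densities (not values from the patent):

```python
import math

# Hypothetical absorption cross sections (cm^2/molecule): sigma_strong at the
# wavelength where water absorbs significantly, sigma_weak where it absorbs
# weakly. All numbers are illustrative only.
sigma_strong = 5.0e-18
sigma_weak = 1.0e-19
path_cm = 100.0                    # sample path length

# Simulate the detected intensities for a known water number density.
true_density = 4.0e17              # molecules/cm^3 (illustrative)
i0 = 1.0                           # common source intensity (cancels in the ratio)
i_strong = i0 * math.exp(-sigma_strong * true_density * path_cm)
i_weak = i0 * math.exp(-sigma_weak * true_density * path_cm)

# Invert the Beer-Lambert ratio to retrieve the water amount along the path:
# ln(I_weak / I_strong) = (sigma_strong - sigma_weak) * N * L
retrieved_density = math.log(i_weak / i_strong) / ((sigma_strong - sigma_weak) * path_cm)
```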

  4. Ultrasonic standing wave preparation of a liquid cell for glucose measurements in urine by midinfrared spectroscopy and potential application to smart toilets.

    PubMed

    Yamamoto, Naoyuki; Kawashima, Natsumi; Kitazaki, Tomoya; Mori, Keita; Kang, Hanyue; Nishiyama, Akira; Wada, Kenji; Ishimaru, Ichiro

    2018-05-01

    Smart toilets could be used to monitor different components of urine in daily life for early detection of lifestyle-related diseases and prompt provision of treatment. For analysis of biological samples such as urine by midinfrared spectroscopy, thin-film samples like liquid cells are needed because of the strong absorption of midinfrared light by water. Conventional liquid cells or fixed cells are prepared based on the liquid membrane method and solution technique, but these are not quantitative and are difficult to set up and clean. We generated an ultrasonic standing wave reflection plane in a sample and produced an ultrasonic liquid cell. In this cell, the thickness of the optical path length was adjustable, as in the conventional method. The reflection plane could be generated at an arbitrary depth and internal reflected light could be detected by changing the frequency of the ultrasonic wave. We could generate refractive index boundaries using the density difference created by the ultrasonic standing wave. Creation of the reflection plane in the sample was confirmed by optical coherence tomography. Using the proposed method and midinfrared spectroscopy, we discriminated between normal urine samples spiked with glucose at different concentrations and obtained a high correlation coefficient. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  5. Hydromechanical behavior of heterogeneous carbonate rock under proportional triaxial loadings

    NASA Astrophysics Data System (ADS)

    Dautriat, JéRéMie; Gland, Nicolas; Dimanov, Alexandre; Raphanel, Jean

    2011-01-01

    The influence of stress paths representative of reservoir conditions on the poromechanical behavior and coupled directional permeabilities evolution of a heterogeneous carbonate has been studied. Our experimental methodology is based on performing confined compression tests keeping constant a stress path coefficient K = Δσr/Δσa ratio of the radial and axial stress magnitudes, commonly assumed to be representative of reservoir stress state evolution during production. The experiments are performed in a triaxial cell specially designed to measure the permeability in two orthogonal directions, along and transverse to the direction of maximum stress. The tested rock is a heterogeneous bioclastic carbonate, the Estaillades limestone, with a bimodal porosity, of mean value around 28% and a moderate permeability of mean value 125 mdarcy. Microstructural analyses of initial and deformed samples have been performed combining X-ray tomography and microtomography, scanning electron microscopy (SEM) observations, and mercury injection porosimetry. The microstructural heterogeneity, observable by SEM, is characterized by the arrangement of the micrograins of calcite in either dense or microporous aggregates surrounded by larger pores. The spatial distribution of the two kinds of aggregates is responsible for important density fluctuations throughout the samples, recorded by X-ray tomography, which characterizes the mesoheterogeneity. We show that this mesoheterogeneity is a source of a large directional variability of permeability for a given specimen and also from sample to sample. In addition, the fluctuation of the porosity in the tested set of samples, from 24% to 31%, is an expression of the macroheterogeneity. Macroscopic mechanical data and the stress path dependency of porosity and permeability have been measured in the elastic, brittle, and compaction regimes. 
No significant effect of the stress path on the evolution of directional permeabilities is observed in the elastic regime. At failure, according to the selected stress path, either a limited or a drastic permeability decrease takes place. From the postmortem observations at different scales, we clearly show the impact of the mesoheterogeneities on the localization of compaction, and we identify the precursor of the shear-enhanced compaction and pore collapse mechanisms (for K ≥ 0.25) as an intense microcracking affecting only the denser aggregates. Applying an effective medium theory adapted to our observations, we propose a porosity scaling to normalize the pressures at failure. It is then found that the normalized critical pressures evolve linearly with the stress path coefficient. Consequently, we put forward a new definition of the yield cap for this type of carbonate, which is parameterized by the stress path coefficient.

  6. An engineering optimization method with application to STOL-aircraft approach and landing trajectories

    NASA Technical Reports Server (NTRS)

    Jacob, H. G.

    1972-01-01

    An optimization method has been developed that computes the optimal open loop inputs for a dynamical system by observing only its output. The method reduces to static optimization by expressing the inputs as series of functions with parameters to be optimized. Since the method is not concerned with the details of the dynamical system to be optimized, it works for both linear and nonlinear systems. The method and the application to optimizing longitudinal landing paths for a STOL aircraft with an augmented wing are discussed. Noise, fuel, time, and path deviation minimizations are considered with and without angle of attack, acceleration excursion, flight path, endpoint, and other constraints.
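The reduction to static optimization can be sketched generically: the open-loop input is expanded in a small series of functions and the coefficients are tuned by a derivative-free search that only observes the simulated output. The plant (a first-order lag), polynomial basis, tracking cost, and coordinate search below are illustrative stand-ins, not the paper's actual aircraft model or optimizer.

```python
def simulate(coeffs, dt=0.02, steps=200):
    """Forward-Euler simulation of a first-order lag x' = -x + u(t), treated
    as a black box: only the output samples are observed."""
    x, out = 0.0, []
    for k in range(steps):
        t = k * dt
        u = sum(c * t ** i for i, c in enumerate(coeffs))   # input series
        x += dt * (-x + u)
        out.append(x)
    return out

def cost(coeffs, target=1.0):
    """Mean squared deviation of the output from a constant target path."""
    out = simulate(coeffs)
    return sum((y - target) ** 2 for y in out) / len(out)

# Derivative-free coordinate search over the input-series parameters.
coeffs = [0.0, 0.0, 0.0]
step = 0.5
for _ in range(60):
    improved = False
    for i in range(len(coeffs)):
        for delta in (step, -step):
            trial = list(coeffs)
            trial[i] += delta
            if cost(trial) < cost(coeffs):
                coeffs, improved = trial, True
    if not improved:
        step *= 0.5     # refine once no axis move helps

final_cost = cost(coeffs)
```

Because the search touches only the input parameters and the observed output, the same loop works unchanged whether the simulated dynamics are linear or nonlinear, which is the point of the method.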

  7. Quantum free energy landscapes from ab initio path integral metadynamics: Double proton transfer in the formic acid dimer is concerted but not correlated.

    PubMed

    Ivanov, Sergei D; Grant, Ian M; Marx, Dominik

    2015-09-28

    With the goal of computing quantum free energy landscapes of reactive (bio)chemical systems in multi-dimensional space, we combine the metadynamics technique for sampling potential energy surfaces with the ab initio path integral approach to treating nuclear quantum motion. This unified method is applied to the double proton transfer process in the formic acid dimer (FAD), in order to study the nuclear quantum effects at finite temperatures without imposing a one-dimensional reaction coordinate or reducing the dimensionality. Importantly, the ab initio path integral metadynamics technique allows one to treat the hydrogen bonds and concomitant proton transfers in FAD strictly independently and thus provides direct access to the much discussed issue of whether the double proton transfer proceeds via a stepwise or concerted mechanism. The quantum free energy landscape we compute for this H-bonded molecular complex reveals that the two protons move in a concerted fashion from initial to product state, yet world-line analysis of the quantum correlations demonstrates that the protons are as quantum-uncorrelated at the transition state as they are when close to the equilibrium structure.

  8. Associations among physical symptoms, fear of cancer recurrence, and emotional well-being among Chinese American breast cancer survivors: a path model.

    PubMed

    Cho, Dalnim; Chu, Qiao; Lu, Qian

    2018-06-01

    Most existing studies on fear of cancer recurrence (FCR) are exploratory without theoretical underpinnings and have been conducted among non-Hispanic Whites. Based on theoretical models, we hypothesized that more physical symptoms (pain and fatigue) would be associated with higher FCR, which, in turn would be related to lower emotional well-being among Chinese American breast cancer survivors. Participants were 77 Chinese American women who were diagnosed with breast cancer of stages 0-III. A cross-sectional path analysis was conducted with a bootstrapping method. The final model showed that indirect paths from pain interference to emotional well-being and from fatigue to emotional well-being via FCR were significant. That is, higher levels of pain interference and fatigue were associated with higher FCR, which was further related to lower emotional well-being. To our best knowledge, this is the first theory-driven study that investigates FCR experiences among Chinese American breast cancer survivors. Our study might provide a more comprehensive understanding of FCR as it simultaneously shows predictors and a psychological consequence of FCR. Results need to be replicated in large, racially/ethnically diverse samples and longitudinal studies.
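The bootstrapped indirect-effect test can be sketched on synthetic data. This is a generic mediation-analysis illustration, not the authors' fitted model: the variables, coefficients, and noise levels are invented, and the indirect path is estimated as the product of OLS slopes (x → m, then m → y controlling for x) with a percentile bootstrap interval.

```python
import random

random.seed(7)

# Synthetic data mimicking the hypothesized path: symptom burden (x) raises
# fear of recurrence (m), which lowers emotional well-being (y).
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 1) for xi in x]
y = [-0.6 * mi + 0.1 * xi + random.gauss(0, 1) for xi, mi in zip(x, m)]

def solve(a_mat, b_vec):
    """Gauss-Jordan elimination with partial pivoting for small systems."""
    size = len(b_vec)
    aug = [row[:] + [b_vec[i]] for i, row in enumerate(a_mat)]
    for col in range(size):
        piv = max(range(col, size), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(size):
            if r != col:
                f = aug[r][col] / aug[col][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [aug[i][size] / aug[i][i] for i in range(size)]

def ols(design, target):
    """Least squares via the normal equations X'X beta = X'y."""
    k = len(design[0])
    xtx = [[sum(r[i] * r[j] for r in design) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * t for r, t in zip(design, target)) for i in range(k)]
    return solve(xtx, xty)

def indirect(idx):
    """Indirect effect a*b on a resampled index set."""
    xs, ms, ys = [x[i] for i in idx], [m[i] for i in idx], [y[i] for i in idx]
    a = ols([[1.0, xi] for xi in xs], ms)[1]                    # path x -> m
    b = ols([[1.0, xi, mi] for xi, mi in zip(xs, ms)], ys)[2]   # path m -> y | x
    return a * b

# Percentile bootstrap confidence interval for the indirect effect.
boots = sorted(indirect([random.randrange(n) for _ in range(n)]) for _ in range(500))
ci_lo, ci_hi = boots[12], boots[487]        # approximate 95% interval
```

An interval that excludes zero (here, entirely negative) is the usual evidence that the indirect path is significant.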

  9. Approximate Model Checking of PCTL Involving Unbounded Path Properties

    NASA Astrophysics Data System (ADS)

    Basu, Samik; Ghosh, Arka P.; He, Ru

    We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as PCTL formulas. Such approximate methods have been proposed primarily to deal with state-space explosion that makes the exact model checking by numerical methods practically infeasible for large systems. However, the existing statistical methods either consider a restricted subset of PCTL, specifically, the subset that can only express bounded until properties; or rely on user-specified finite bound on the sample path length. We propose a new method that does not have such restrictions and can be effectively used to reason about unbounded until properties. We approximate probabilistic characteristics of an unbounded until property by that of a bounded until property for a suitably chosen value of the bound. In essence, our method is a two-phase process: (a) the first phase is concerned with identifying the bound k0; (b) the second phase computes the probability of satisfying the k0-bounded until property as an estimate for the probability of satisfying the corresponding unbounded until property. In both phases, it is sufficient to verify bounded until properties which can be effectively done using existing statistical techniques. We prove the correctness of our technique and present its prototype implementations. We empirically show the practical applicability of our method by considering different case studies including a simple infinite-state model, and large finite-state models such as IPv4 zeroconf protocol and dining philosopher protocol modeled as Discrete Time Markov chains.
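The second phase, estimating a k0-bounded reachability/until property by sampling finite path prefixes, can be sketched on a toy chain. The model (a biased walk with an absorbing goal state), the bound, and all probabilities below are invented for illustration; k0 is assumed to have been fixed in phase one, and an exact dynamic-programming computation is included as a cross-check.

```python
import random

random.seed(1)

K0 = 20        # bound identified in phase one (assumed here)
UP = 0.6       # probability of moving toward the goal

def step(s):
    """One transition of the DTMC on states 0..3; state 3 is absorbing."""
    return min(s + 1, 3) if random.random() < UP else max(s - 1, 0)

def path_satisfies():
    """Does a sampled path from state 0 reach state 3 within K0 steps?"""
    s = 0
    for _ in range(K0):
        if s == 3:
            return True
        s = step(s)
    return s == 3

n_samples = 20000
estimate = sum(path_satisfies() for _ in range(n_samples)) / n_samples

# Exact bounded-reachability probability by dynamic programming, as a check.
p = [0.0, 0.0, 0.0, 1.0]                    # P(reach 3 within 0 steps | state)
for _ in range(K0):
    p = [1.0 if s == 3 else UP * p[s + 1] + (1 - UP) * p[max(s - 1, 0)]
         for s in range(4)]
exact = p[0]
```

On real models the exact value is unavailable (that is the point of the statistical approach); the Monte Carlo error here shrinks as 1/sqrt(n_samples).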

  10. Frequency shift measurement in shock-compressed materials

    DOEpatents

    Moore, David S.; Schmidt, Stephen C.

    1985-01-01

    A method for determining molecular vibrational frequencies in shock-compressed transparent materials. A single laser beam pulse is directed into a sample material while the material is shock-compressed from a direction opposite that of the incident laser beam. A Stokes beam produced by stimulated Raman scattering is emitted back along the path of the incident laser beam, that is, in the opposite direction to that of the incident laser beam. The Stokes beam is separated from the incident beam and its frequency measured. The difference in frequency between the Stokes beam and the incident beam is representative of the characteristic frequency of the Raman active mode of the sample. Both the incident beam and the Stokes beam pass perpendicularly through the shock front advancing through the sample, thereby minimizing adverse effects of refraction.

  11. Energy landscapes and properties of biomolecules.

    PubMed

    Wales, David J

    2005-11-09

    Thermodynamic and dynamic properties of biomolecules can be calculated using a coarse-grained approach based upon sampling stationary points of the underlying potential energy surface. The superposition approximation provides an overall partition function as a sum of contributions from the local minima, and hence functions such as internal energy, entropy, free energy and the heat capacity. To obtain rates we must also sample transition states that link the local minima, and the discrete path sampling method provides a systematic means to achieve this goal. A coarse-grained picture is also helpful in locating the global minimum using the basin-hopping approach. Here we can exploit a fictitious dynamics between the basins of attraction of local minima, since the objective is to find the lowest minimum, rather than to reproduce the thermodynamics or dynamics.
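The superposition approximation can be sketched with a toy set of minima: each local minimum contributes a classical harmonic term to the overall partition function, from which occupation probabilities, internal energy, and heat capacity follow. The energies, vibrational frequencies, and number of degrees of freedom below are invented for illustration (units with k_B = 1).

```python
import math

# Toy landscape: (energy E_i, geometric-mean vibrational frequency nu_i)
# for each local minimum; KAPPA vibrational degrees of freedom.
minima = [(0.0, 1.0), (1.5, 0.6), (2.0, 0.4)]
KAPPA = 3

def partition(T):
    """Classical harmonic superposition: Z = sum_i exp(-E_i/T) (T/nu_i)^KAPPA."""
    return sum(math.exp(-e / T) * (T / nu) ** KAPPA for e, nu in minima)

def internal_energy(T, dT=1e-4):
    """U = T^2 d(ln Z)/dT, by central finite difference."""
    return T * T * (math.log(partition(T + dT)) - math.log(partition(T - dT))) / (2 * dT)

def heat_capacity(T, dT=1e-3):
    return (internal_energy(T + dT) - internal_energy(T - dT)) / (2 * dT)

def occupation(T):
    """Equilibrium occupation probability of each minimum's basin."""
    z = partition(T)
    return [math.exp(-e / T) * (T / nu) ** KAPPA / z for e, nu in minima]

probs_low = occupation(0.2)    # low T: the global minimum dominates
probs_high = occupation(5.0)   # high T: low-frequency (high-entropy) minima win
```

The low/high temperature comparison shows the characteristic entropic crossover: the softest minimum can dominate at high temperature even though it lies highest in energy.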

  12. Frequency shift measurement in shock-compressed materials

    DOEpatents

    Moore, D.S.; Schmidt, S.C.

    1984-02-21

    A method is disclosed for determining molecular vibrational frequencies in shock-compressed transparent materials. A single laser beam pulse is directed into a sample material while the material is shock-compressed from a direction opposite that of the incident laser beam. A Stokes beam produced by stimulated Raman scattering is emitted back along the path of the incident laser beam, that is, in the opposite direction to that of the incident laser beam. The Stokes beam is separated from the incident beam and its frequency measured. The difference in frequency between the Stokes beam and the incident beam is representative of the characteristic frequency of the Raman active mode of the sample. Both the incident beam and the Stokes beam pass perpendicularly through the shock front advancing through the sample, thereby minimizing adverse effects of refraction.

  13. On computing the global time-optimal motions of robotic manipulators in the presence of obstacles

    NASA Technical Reports Server (NTRS)

    Shiller, Zvi; Dubowsky, Steven

    1991-01-01

    A method for computing the time-optimal motions of robotic manipulators is presented that considers the nonlinear manipulator dynamics, actuator constraints, joint limits, and obstacles. The optimization problem is reduced to a search for the time-optimal path in the n-dimensional position space. A small set of near-optimal paths is first efficiently selected from a grid, using a branch and bound search and a series of lower bound estimates on the traveling time along a given path. These paths are further optimized with a local path optimization to yield the global optimal solution. Obstacles are considered by eliminating the collision points from the tessellated space and by adding a penalty function to the motion time in the local optimization. The computational efficiency of the method stems from the reduced dimensionality of the searched space and from combining the grid search with a local optimization. The method is demonstrated in several examples for two- and six-degree-of-freedom manipulators with obstacles.

  14. Rotational-path decomposition based recursive planning for spacecraft attitude reorientation

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Wang, Hui; Xu, Wenming; Cui, Pingyuan; Zhu, Shengying

    2018-02-01

    The spacecraft reorientation is a common task in many space missions. With multiple pointing constraints, it is greatly difficult to solve the constrained spacecraft reorientation planning problem. To deal with this problem, an efficient rotational-path decomposition based recursive planning (RDRP) method is proposed in this paper. The uniform pointing-constraint-ignored attitude rotation planning process is designed to solve all rotations without considering pointing constraints. Then the whole path is checked node by node. If any pointing constraint is violated, the nearest critical increment approach will be used to generate feasible alternative nodes in the process of rotational-path decomposition. As the planning path of each subdivision may still violate pointing constraints, multiple decomposition is needed and the reorientation planning is designed as a recursive manner. Simulation results demonstrate the effectiveness of the proposed method. The proposed method has been successfully applied in two SPARK microsatellites to solve onboard constrained attitude reorientation planning problem, which were developed by the Shanghai Engineering Center for Microsatellites and launched on 22 December 2016.

  15. Remote atmospheric probing by ground to ground line of sight optical methods

    NASA Technical Reports Server (NTRS)

    Lawrence, R. S.

    1969-01-01

    The optical effects arising from refractive-index variations in the clear air are qualitatively described, and the possibilities are discussed of using those effects for remotely sensing the physical properties of the atmosphere. The effects include scintillations, path length fluctuations, spreading of a laser beam, deflection of the beam, and depolarization. The physical properties that may be measured include the average temperature along the path, the vertical temperature gradient, and the distribution along the path of the strength of turbulence and the transverse wind velocity. Line-of-sight laser beam methods are clearly effective in measuring the average properties, but less effective in measuring distributions along the path. Fundamental limitations to the resolution are pointed out and experiments are recommended to investigate the practicality of the methods.

  16. The application of compressive sampling in rapid ultrasonic computerized tomography (UCT) technique of steel tube slab (STS)

    PubMed Central

    Jiang, Baofeng; Jia, Pengjiao; Zhao, Wen; Wang, Wentao

    2018-01-01

    This paper explores a new method for rapid structural damage inspection of steel tube slab (STS) structures along randomly measured paths based on a combination of compressive sampling (CS) and ultrasonic computerized tomography (UCT). In the measurement stage, using fewer randomly selected paths rather than the whole measurement net is proposed to detect the underlying damage of a concrete-filled steel tube. In the imaging stage, the ℓ1-minimization algorithm is employed to recover information about the internal microstructure of the STS structure from the measurement data. A numerical concrete tube model, with various levels of damage, was studied to demonstrate the performance of the rapid UCT technique. Real-world concrete-filled steel tubes in the Shenyang Metro stations were inspected using the proposed UCT technique in a CS framework. Both the numerical and experimental results show the rapid UCT technique has the capability of damage detection in an STS structure with a high level of accuracy and with fewer required measurements, which is more convenient and efficient than the traditional UCT technique. PMID:29293593
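The recovery stage can be sketched with a generic ℓ1 solver. The code below uses iterative soft thresholding (ISTA), one standard proximal method for the ℓ1-regularized least-squares problem; the random measurement matrix, sparsity pattern, and dimensions are invented, and the paper's actual ultrasonic imaging operator is not modeled.

```python
import random

random.seed(0)

# Sparse "damage map" x_true observed through m < n random measurements
# (each row standing in for one randomly selected measurement path).
n, m = 16, 12
x_true = [0.0] * n
x_true[3], x_true[11] = 1.0, -0.7          # two damaged cells

A = [[random.gauss(0, 1) / m ** 0.5 for _ in range(n)] for _ in range(m)]
y = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]

def grad(x):
    """Gradient of 0.5*||Ax - y||^2, i.e. A^T (Ax - y)."""
    r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
    return [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]

def soft(v, t):
    """Soft-thresholding operator, the proximal map of t*|.|."""
    return max(v - t, 0.0) if v > 0 else min(v + t, 0.0)

STEP, LAM = 0.05, 0.01
x = [0.0] * n
for _ in range(3000):
    g = grad(x)
    x = [soft(xj - STEP * gj, STEP * LAM) for xj, gj in zip(x, g)]

err = max(abs(a - b) for a, b in zip(x, x_true))
support = sorted(range(n), key=lambda j: -abs(x[j]))[:2]   # largest entries
```

With a sufficiently sparse damage map, far fewer measurements than unknowns suffice to localize the damage, which is the efficiency argument made above.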

  17. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images †

    PubMed Central

    Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao

    2017-01-01

    Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the “navigation via classification” task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications. PMID:28604624

  18. Convolutional Neural Network-Based Robot Navigation Using Uncalibrated Spherical Images.

    PubMed

    Ran, Lingyan; Zhang, Yanning; Zhang, Qilin; Yang, Tao

    2017-06-12

    Vision-based mobile robot navigation is a vibrant area of research with numerous algorithms having been developed, the vast majority of which either belong to the scene-oriented simultaneous localization and mapping (SLAM) or fall into the category of robot-oriented lane-detection/trajectory tracking. These methods suffer from high computational cost and require stringent labelling and calibration efforts. To address these challenges, this paper proposes a lightweight robot navigation framework based purely on uncalibrated spherical images. To simplify the orientation estimation, path prediction and improve computational efficiency, the navigation problem is decomposed into a series of classification tasks. To mitigate the adverse effects of insufficient negative samples in the "navigation via classification" task, we introduce the spherical camera for scene capturing, which enables 360° fisheye panorama as training samples and generation of sufficient positive and negative heading directions. The classification is implemented as an end-to-end Convolutional Neural Network (CNN), trained on our proposed Spherical-Navi image dataset, whose category labels can be efficiently collected. This CNN is capable of predicting potential path directions with high confidence levels based on a single, uncalibrated spherical image. Experimental results demonstrate that the proposed framework outperforms competing ones in realistic applications.

  19. Implant Dentistry: Monitoring of Bacteria Along the Transmucosal Passage of the Healing Screw in Absence of Functional Load

    PubMed Central

    MEYNARDI, F.; PASQUALINI, M.E.; ROSSI, F.; DAL CARLO, L.; NARDONE, M.; BAGGI, L.

    2016-01-01

    SUMMARY Purpose To assess the changes in bacterial profile along the transmucosal path of healing screws placed immediately after insertion of two-piece endosseus implants during the 4-month osseointegration phase, in absence of functional load. Materials and methods Two site-specific samples were collected at the peri-implant mucosa of the healing screws of 80 two-piece implants, for a total of 640 samples. Implants placement was performed following a single protocol with flapless technique, in order to limit bacterial contamination of the surgical site. Identical healing screws (5 mm diameter/4 mm height) were used for each of the 80 implants. During the 4 months of the study, the patients followed a standard oral care regimen with no special hygiene maneuvers at the collection sites. Results The present research documents that during the 4-month period prior to application of function load the bacterial profile of all sites exhibited a clear prevalence of cocci at the interface between implant neck and osteoalveolar crest margin. Conclusions A potentially pathogenic bacterial flora developed only along the peri-implant transmucosal path. PMID:28280528

  20. Common-path conoscopic interferometry for enhanced picosecond ultrasound detection

    NASA Astrophysics Data System (ADS)

    Liu, Liwang; Guillet, Yannick; Audoin, Bertrand

    2018-05-01

    We report on a common-path implementation of conoscopic interferometry in picosecond pump-probe reflectometry for simple and efficient detection of picosecond ultrasounds. The interferometric configuration proposed here is greatly simplified, involving only the insertion of a birefringent crystal in a standard reflectometry setup. Our approach is demonstrated by the optical detection of coherent acoustic phonons propagating through thin metal films under two representative geometries, one a particular case where the crystal slab is part of a sample as substrate of a metal film, and the other a more general case where the crystal slab is independent of the sample as part of the detection system. We first illustrate the former with a 300 nm thin film of polycrystalline titanium, deposited by physical vapor deposition on top of a 1 mm-thick uniaxial (0001) sapphire crystal. A signal-to-noise ratio (SNR) enhancement of more than 15 dB is achieved compared to conventional reflectometry. Next, the general case is demonstrated with a 900 nm-tungsten film sputtered on a silicon wafer substrate. More echoes can be discriminated by using the reported approach compared to standard reflectometry, which confirms the improvement in SNR and suggests broad applications for the reported method.

  1. Arctic curves in path models from the tangent method

    NASA Astrophysics Data System (ADS)

    Di Francesco, Philippe; Lapa, Matthew F.

    2018-04-01

    Recently, Colomo and Sportiello introduced a powerful method, known as the tangent method, for computing the arctic curve in statistical models which have a (non- or weakly-) intersecting lattice path formulation. We apply the tangent method to compute arctic curves in various models: the domino tiling of the Aztec diamond for which we recover the celebrated arctic circle; a model of Dyck paths equivalent to the rhombus tiling of a half-hexagon for which we find an arctic half-ellipse; another rhombus tiling model with an arctic parabola; the vertically symmetric alternating sign matrices, where we find the same arctic curve as for unconstrained alternating sign matrices. The latter case involves lattice paths that are non-intersecting but that are allowed to have osculating contact points, for which the tangent method was argued to still apply. For each problem we estimate the large size asymptotics of a certain one-point function using LU decomposition of the corresponding Gessel–Viennot matrices, and a reformulation of the result amenable to asymptotic analysis.
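The path-counting machinery behind the Gessel–Viennot matrices can be illustrated directly: for suitably ordered start and end points, the Lindström–Gessel–Viennot lemma counts families of non-intersecting monotone lattice paths as a determinant of single-path counts. The tiny two-path configuration below is invented for illustration and cross-checked by brute-force enumeration.

```python
from itertools import combinations
from math import comb

# Starts/ends ordered so that any "crossed" connection forces the two
# monotone (E/N-step) paths to share a lattice point.
starts = [(0, 0), (1, 0)]
ends = [(2, 2), (3, 2)]

def n_paths(a, b):
    """Number of monotone lattice paths from a to b: binomial(dx+dy, dx)."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    return comb(dx + dy, dx) if dx >= 0 and dy >= 0 else 0

# 2x2 LGV determinant of single-path counts.
M = [[n_paths(a, b) for b in ends] for a in starts]
det_count = M[0][0] * M[1][1] - M[0][1] * M[1][0]

def all_paths(a, b):
    """Vertex sets of all monotone paths from a to b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    paths = []
    for norths in combinations(range(dx + dy), dy):   # which steps go north
        x, y = a
        verts = [(x, y)]
        for k in range(dx + dy):
            if k in norths:
                y += 1
            else:
                x += 1
            verts.append((x, y))
        paths.append(frozenset(verts))
    return paths

# Brute force: count vertex-disjoint pairs directly.
brute_count = sum(
    1
    for p0 in all_paths(starts[0], ends[0])
    for p1 in all_paths(starts[1], ends[1])
    if not (p0 & p1)
)
```

The determinant evaluation is what scales: the brute-force count is exponential in path length, while the Gessel–Viennot determinant (via LU decomposition, as used above for the asymptotics) stays polynomial.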

  2. Extended charge banking model of dual path shocks for implantable cardioverter defibrillators

    PubMed Central

    Dosdall, Derek J; Sweeney, James D

    2008-01-01

    Background Single path defibrillation shock methods have been improved through the use of the Charge Banking Model of defibrillation, which predicts the response of the heart to shocks as a simple resistor-capacitor (RC) circuit. While dual path defibrillation configurations have significantly reduced defibrillation thresholds, improvements to dual path defibrillation techniques have been limited to experimental observations without a practical model to aid in improving dual path defibrillation techniques. Methods The Charge Banking Model has been extended into a new Extended Charge Banking Model of defibrillation that represents small sections of the heart as separate RC circuits, uses a weighting factor based on published defibrillation shock field gradient measures, and implements a critical mass criteria to predict the relative efficacy of single and dual path defibrillation shocks. Results The new model reproduced the results from several published experimental protocols that demonstrated the relative efficacy of dual path defibrillation shocks. The model predicts that time between phases or pulses of dual path defibrillation shock configurations should be minimized to maximize shock efficacy. Discussion Through this approach the Extended Charge Banking Model predictions may be used to improve dual path and multi-pulse defibrillation techniques, which have been shown experimentally to lower defibrillation thresholds substantially. The new model may be a useful tool to help in further improving dual path and multiple pulse defibrillation techniques by predicting optimal pulse durations and shock timing parameters. PMID:18673561
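The underlying RC picture can be sketched for a single compartment: the membrane responds as a low-pass RC stage driven by the capacitor-discharge shock waveform, and shock effectiveness is judged by the peak membrane response. The time constants and voltages below are invented, and the Extended Charge Banking Model's per-region field-gradient weighting and critical-mass criterion are not reproduced here.

```python
import math

TAU_M = 3.0e-3     # membrane time constant (s), illustrative
TAU_S = 7.0e-3     # shock capacitor discharge time constant (s), illustrative
V0 = 1.0           # normalized initial shock voltage

def membrane_response(duration, dt=1e-5):
    """Forward-Euler integration of tau_m * dVm/dt = Vs(t) - Vm for a
    monophasic truncated-exponential shock; returns the peak response."""
    vm, peak, t = 0.0, 0.0, 0.0
    while t < duration:
        vs = V0 * math.exp(-t / TAU_S)     # decaying capacitor discharge
        vm += dt * (vs - vm) / TAU_M
        peak = max(peak, vm)
        t += dt
    return peak

# Longer pulses charge the membrane further, up to the point where the
# decaying shock voltage drops below Vm and no longer raises it.
peak_short = membrane_response(1.0e-3)
peak_opt = membrane_response(8.0e-3)
```

The extended model of the paper replicates this response for many small cardiac regions with different field-gradient weights, which is what lets it compare single-path and dual-path shock configurations.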

  3. A high-throughput robotic sample preparation system and HPLC-MS/MS for measuring urinary anatabine, anabasine, nicotine and major nicotine metabolites.

    PubMed

    Wei, Binnian; Feng, June; Rehmani, Imran J; Miller, Sharyn; McGuffey, James E; Blount, Benjamin C; Wang, Lanqing

    2014-09-25

    Most sample preparation methods characteristically involve intensive and repetitive labor, which is inefficient when preparing large numbers of samples from population-scale studies. This study presents a robotic system designed to meet the sampling requirements for large population-scale studies. Using this robotic system, we developed and validated a method to simultaneously measure urinary anatabine, anabasine, nicotine and seven major nicotine metabolites: 4-Hydroxy-4-(3-pyridyl)butanoic acid, cotinine-N-oxide, nicotine-N-oxide, trans-3'-hydroxycotinine, norcotinine, cotinine and nornicotine. We analyzed robotically prepared samples using high-performance liquid chromatography (HPLC) coupled with triple quadrupole mass spectrometry in positive electrospray ionization mode using scheduled multiple reaction monitoring (sMRM) with a total runtime of 8.5 min. The optimized procedure was able to deliver linear analyte responses over a broad range of concentrations. Responses of urine-based calibrators delivered coefficients of determination (R(2)) of >0.995. Sample preparation recovery was generally higher than 80%. The robotic system was able to prepare four 96-well plate (384 urine samples) per day, and the overall method afforded an accuracy range of 92-115%, and an imprecision of <15.0% on average. The validation results demonstrate that the method is accurate, precise, sensitive, robust, and most significantly labor-saving for sample preparation, making it efficient and practical for routine measurements in large population-scale studies such as the National Health and Nutrition Examination Survey (NHANES) and the Population Assessment of Tobacco and Health (PATH) study. Published by Elsevier B.V.

  4. SSAGES: Software Suite for Advanced General Ensemble Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sidky, Hythem; Colón, Yamil J.; Helfferich, Julian

    Molecular simulation has emerged as an essential tool for modern-day research, but obtaining proper results and making reliable conclusions from simulations requires adequate sampling of the system under consideration. To this end, a variety of methods exist in the literature that can enhance sampling considerably, and increasingly sophisticated, effective algorithms continue to be developed at a rapid pace. Implementation of these techniques, however, can be challenging for experts and non-experts alike. There is a clear need for software that provides rapid, reliable, and easy access to a wide range of advanced sampling methods, and that facilitates implementation of new techniques as they emerge. Here we present SSAGES, a publicly available Software Suite for Advanced General Ensemble Simulations designed to interface with multiple widely used molecular dynamics simulation packages. SSAGES allows facile application of a variety of enhanced sampling techniques—including adaptive biasing force, string methods, and forward flux sampling—that extract meaningful free energy and transition path data from all-atom and coarse grained simulations. A noteworthy feature of SSAGES is a user-friendly framework that facilitates further development and implementation of new methods and collective variables. In this work, the use of SSAGES is illustrated in the context of simple representative applications involving distinct methods and different collective variables that are available in the current release of the suite.

  5. Faster protein folding using enhanced conformational sampling of molecular dynamics simulation.

    PubMed

    Kamberaj, Hiqmet

    2018-05-01

    In this study, we applied the swarm particle-like molecular dynamics (SPMD) approach to enhance conformational sampling in replica exchange simulations. In particular, the approach showed significant improvement in the sampling efficiency of conformational phase space when combined with the replica exchange method (REM) in computer simulations of peptide/protein folding. First we introduce the augmented dynamical system of equations and demonstrate the stability of the algorithm. Then, we illustrate the approach using different fully atomistic and coarse-grained model systems, comparing them with the standard replica exchange method. In addition, we applied SPMD simulation to calculate the time correlation functions of transitions on a two-dimensional surface to demonstrate the enhancement of transition path sampling. Our results showed that the folded structure can be obtained in a shorter simulation time using the new method compared with the non-augmented dynamical system: typically in less than 0.5 ns of replica exchange runs when the native folded structure is known, and within a simulation time scale of 40 ns in the case of blind structure prediction. Furthermore, the root mean square deviations from the reference structures were less than 2 Å. To demonstrate the performance of the new method, we also implemented three simulation protocols using the CHARMM software. Comparisons are also performed with the standard targeted molecular dynamics simulation method. Copyright © 2018 Elsevier Inc. All rights reserved.
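    The replica exchange component above hinges on a Metropolis swap criterion between neighboring temperatures. The abstract gives no code, so the sketch below is a generic REM acceptance test, not the authors' SPMD implementation; all names and parameter values are illustrative:

```python
import math
import random

def swap_accept(beta_i, beta_j, energy_i, energy_j, rng):
    """Metropolis acceptance for exchanging two replicas:
    p = min(1, exp[(beta_i - beta_j) * (energy_i - energy_j)])."""
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return delta >= 0 or rng.random() < math.exp(delta)

rng = random.Random(0)
# A cold replica (large beta) stuck at high energy should readily swap
# with a hot replica (small beta) at low energy: delta > 0, always accepted.
accepted = sum(swap_accept(1.0, 0.2, 5.0, 1.0, rng) for _ in range(1000))
```

    The reverse situation (cold replica already at low energy) is accepted only with probability exp(delta) < 1, which is what preserves the correct canonical distribution at each temperature.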

  6. SSAGES: Software Suite for Advanced General Ensemble Simulations

    NASA Astrophysics Data System (ADS)

    Sidky, Hythem; Colón, Yamil J.; Helfferich, Julian; Sikora, Benjamin J.; Bezik, Cody; Chu, Weiwei; Giberti, Federico; Guo, Ashley Z.; Jiang, Xikai; Lequieu, Joshua; Li, Jiyuan; Moller, Joshua; Quevillon, Michael J.; Rahimi, Mohammad; Ramezani-Dakhel, Hadi; Rathee, Vikramjit S.; Reid, Daniel R.; Sevgen, Emre; Thapar, Vikram; Webb, Michael A.; Whitmer, Jonathan K.; de Pablo, Juan J.

    2018-01-01

    Molecular simulation has emerged as an essential tool for modern-day research, but obtaining proper results and making reliable conclusions from simulations requires adequate sampling of the system under consideration. To this end, a variety of methods exist in the literature that can enhance sampling considerably, and increasingly sophisticated, effective algorithms continue to be developed at a rapid pace. Implementation of these techniques, however, can be challenging for experts and non-experts alike. There is a clear need for software that provides rapid, reliable, and easy access to a wide range of advanced sampling methods and that facilitates implementation of new techniques as they emerge. Here we present SSAGES, a publicly available Software Suite for Advanced General Ensemble Simulations designed to interface with multiple widely used molecular dynamics simulation packages. SSAGES allows facile application of a variety of enhanced sampling techniques—including adaptive biasing force, string methods, and forward flux sampling—that extract meaningful free energy and transition path data from all-atom and coarse-grained simulations. A noteworthy feature of SSAGES is a user-friendly framework that facilitates further development and implementation of new methods and collective variables. In this work, the use of SSAGES is illustrated in the context of simple representative applications involving distinct methods and different collective variables that are available in the current release of the suite. The code may be found at: https://github.com/MICCoM/SSAGES-public.

  7. A Comparison of Risk Sensitive Path Planning Methods for Aircraft Emergency Landing

    NASA Technical Reports Server (NTRS)

    Meuleau, Nicolas; Plaunt, Christian; Smith, David E.; Smith, Tristan

    2009-01-01

    Determining the best site to land a damaged aircraft presents some interesting challenges for standard path planning techniques. There are multiple possible locations to consider, the space is 3-dimensional with dynamics, the criterion for a good path is overall risk rather than distance or time, and optimization really matters, since an improved path corresponds to a greater expected survival rate. We have investigated a number of different path planning methods for solving this problem, including cell decomposition, visibility graphs, probabilistic road maps (PRMs), and local search techniques. In their pure form, none of these techniques has proven to be entirely satisfactory - some are too slow or unpredictable, some produce highly non-optimal paths or do not find certain types of paths, and some do not cope well with the dynamic constraints when controllability is limited. In the end, we are converging towards a hybrid technique that involves seeding a roadmap with a layered visibility graph, using PRM to extend that roadmap, and using local search to further optimize the resulting paths. We describe the techniques we have investigated, report on our experiments with these techniques, and discuss when and why various techniques were unsatisfactory.
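    On a fixed roadmap, risk-based planning of this kind reduces to a shortest-path search with edge weights encoding risk instead of distance (e.g. -log of per-segment survival probability, so that risks add along a path). A minimal sketch over a hypothetical waypoint graph, not the paper's hybrid planner:

```python
import heapq

def min_risk_path(graph, start, goal):
    """Dijkstra search where edge weights are additive risk scores."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path, node = [], goal
    while node != start:                  # walk back through predecessors
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

# Hypothetical airspace graph: nodes are waypoints, weights are risk scores.
g = {"A": [("B", 1.0), ("C", 4.0)],
     "B": [("C", 1.0), ("D", 5.0)],
     "C": [("D", 1.0)]}
path, risk = min_risk_path(g, "A", "D")
# path == ["A", "B", "C", "D"], risk == 3.0
```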

  8. Predictors of science, technology, engineering, and mathematics choice options: A meta-analytic path analysis of the social-cognitive choice model by gender and race/ethnicity.

    PubMed

    Lent, Robert W; Sheu, Hung-Bin; Miller, Matthew J; Cusick, Megan E; Penn, Lee T; Truong, Nancy N

    2018-01-01

    We tested the interest and choice portion of social-cognitive career theory (SCCT; Lent, Brown, & Hackett, 1994) in the context of science, technology, engineering, and mathematics (STEM) domains. Data from 143 studies (including 196 independent samples) conducted over a 30-year period (1983 through 2013) were subjected to meta-analytic path analyses. The interest/choice model was found to fit the data well over all samples as well as within samples composed primarily of women and men and racial/ethnic minority and majority persons. The model also accounted for large portions of the variance in interests and choice goals within each path analysis. Despite the general predictive utility of SCCT across gender and racial/ethnic groups, we did find that several parameter estimates differed by group. We present both the group similarities and differences and consider their implications for future research, intervention, and theory refinement. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  9. Trajectory generation for an on-road autonomous vehicle

    NASA Astrophysics Data System (ADS)

    Horst, John; Barbera, Anthony

    2006-05-01

    We describe an algorithm that generates a smooth trajectory (position, velocity, and acceleration at uniformly sampled instants of time) for a car-like vehicle autonomously navigating within the constraints of lanes in a road. The technique models both vehicle paths and lane segments as straight line segments and circular arcs for mathematical simplicity and elegance, which we contrast with cubic spline approaches. We develop the path in an idealized space, warp the path into real space and compute path length, generate a one-dimensional trajectory along the path length that achieves target speeds and positions, and finally, warp, translate, and rotate the one-dimensional trajectory points onto the path in real space. The algorithm moves a vehicle safely and efficiently in its lane, within maximum speed and acceleration limits. The algorithm functions in the context of other autonomous driving functions within a carefully designed vehicle control hierarchy.
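    The one-dimensional trajectory along path length can be illustrated with a trapezoidal velocity profile that accelerates to a speed limit, cruises, and decelerates to rest. This is a hedged sketch of the general idea, not the authors' algorithm; all parameters are hypothetical:

```python
def trapezoid_profile(length, v_max, a_max, dt):
    """Sample (arc length, speed) pairs at uniform time steps along a path
    of the given length, respecting speed and acceleration maximums."""
    d_ramp = v_max * v_max / (2 * a_max)      # distance to reach/shed v_max
    # If the path is too short to reach v_max, use a triangular profile.
    v_peak = (length * a_max) ** 0.5 if 2 * d_ramp > length else v_max
    s, v, out = 0.0, 0.0, [(0.0, 0.0)]
    while s < length:
        d_stop = v * v / (2 * a_max)          # braking distance at speed v
        if length - s <= d_stop + 1e-9:
            v = max(0.0, v - a_max * dt)      # decelerate toward the end
        elif v < v_peak:
            v = min(v_peak, v + a_max * dt)   # accelerate up to the peak
        s = min(length, s + v * dt)
        out.append((s, v))
        if v == 0.0 and s < length:
            v = a_max * dt                    # creep so discretization cannot stall
    return out

samples = trapezoid_profile(length=50.0, v_max=10.0, a_max=2.0, dt=0.1)
```

    Each sampled (s, v) point would then be mapped back onto the line-and-arc path in real space, as the abstract describes.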

  10. Ensuring critical event sequences in high consequence computer based systems as inspired by path expressions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kidd, M.E.C.

    1997-02-01

    The goal of our work is to provide a high level of confidence that critical software driven event sequences are maintained in the face of hardware failures, malevolent attacks and harsh or unstable operating environments. This will be accomplished by providing dynamic fault management measures directly to the software developer and to their varied development environments. The methodology employed here is inspired by previous work in path expressions. This paper discusses the perceived problems, a brief overview of path expressions, the proposed methods, and a discussion of the differences between the proposed methods and traditional path expression usage and implementation.
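    A path expression constrains the order in which events may fire. As a toy analogue of the idea, not the proposed methodology, the guard below accepts events only along one prescribed sequence; the event names are hypothetical:

```python
class SequenceGuard:
    """Enforce that critical events fire only in a prescribed order."""

    def __init__(self, required):
        self.required = list(required)
        self.pos = 0                     # index of the next allowed event

    def fire(self, event):
        """Return True and advance if the event is the next one allowed;
        otherwise reject it without changing state."""
        if self.pos < len(self.required) and event == self.required[self.pos]:
            self.pos += 1
            return True
        return False

    @property
    def complete(self):
        return self.pos == len(self.required)

guard = SequenceGuard(["arm", "verify", "launch"])
results = [guard.fire(e) for e in ["launch", "arm", "verify", "launch"]]
# results == [False, True, True, True]; the out-of-order "launch" is rejected.
```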

  11. Chord-length and free-path distribution functions for many-body systems

    NASA Astrophysics Data System (ADS)

    Lu, Binglin; Torquato, S.

    1993-04-01

    We study fundamental morphological descriptors of disordered media (e.g., heterogeneous materials, liquids, and amorphous solids): the chord-length distribution function p(z) and the free-path distribution function p(z,a). For concreteness, we will speak in the language of heterogeneous materials composed of two different materials or ``phases.'' The probability density function p(z) describes the distribution of chord lengths in the sample and is of great interest in stereology. For example, the first moment of p(z) is the ``mean intercept length'' or ``mean chord length.'' The chord-length distribution function is of importance in transport phenomena and problems involving ``discrete free paths'' of point particles (e.g., Knudsen diffusion and radiative transport). The free-path distribution function p(z,a) takes into account the finite size of a simple particle of radius a undergoing discrete free-path motion in the heterogeneous material and we show that it is actually the chord-length distribution function for the system in which the ``pore space'' is the space available to a finite-sized particle of radius a. Thus it is shown that p(z)=p(z,0). We demonstrate that the functions p(z) and p(z,a) are related to another fundamentally important morphological descriptor of disordered media, namely, the so-called lineal-path function L(z) studied by us in previous work [Phys. Rev. A 45, 922 (1992)]. The lineal path function gives the probability of finding a line segment of length z wholly in one of the ``phases'' when randomly thrown into the sample. We derive exact series representations of the chord-length and free-path distribution functions for systems of spheres with a polydispersivity in size in arbitrary dimension D. For the special case of spatially uncorrelated spheres (i.e., fully penetrable spheres) we evaluate exactly the aforementioned functions, the mean chord length, and the mean free path. We also obtain corresponding analytical formulas for the case of mutually impenetrable (i.e., spatially correlated) polydispersed spheres.

  12. Two betweenness centrality measures based on Randomized Shortest Paths

    PubMed Central

    Kivimäki, Ilkka; Lebichot, Bertrand; Saramäki, Jari; Saerens, Marco

    2016-01-01

    This paper introduces two new closely related betweenness centrality measures based on the Randomized Shortest Paths (RSP) framework, which fill a gap between traditional network centrality measures based on shortest paths and more recent methods considering random walks or current flows. The framework defines Boltzmann probability distributions over paths of the network which focus on the shortest paths, but also take into account longer paths depending on an inverse temperature parameter. RSPs have previously proven to be useful in defining distance measures on networks. In this work we study their utility in quantifying the importance of the nodes of a network. The proposed RSP betweenness centralities combine, in an optimal way, the ideas of using the shortest and purely random paths for analysing the roles of network nodes, avoiding issues involving these two paradigms. We present the derivations of these measures and how they can be computed in an efficient way. In addition, we show with real world examples the potential of the RSP betweenness centralities in identifying interesting nodes of a network that more traditional methods might fail to notice. PMID:26838176
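    The Boltzmann distribution over paths can be made concrete by brute force on a tiny graph: enumerate the simple s-t paths, weight each by exp(-theta * cost), and accumulate per-node probabilities. The RSP papers use an efficient matrix formulation; this enumeration is only an illustrative sketch on a hypothetical graph:

```python
import math

def simple_paths(graph, s, t, path=None):
    """Enumerate all simple s-t paths in a small directed graph."""
    path = path or [s]
    if s == t:
        yield list(path)
        return
    for v, _ in graph.get(s, []):
        if v not in path:
            yield from simple_paths(graph, v, t, path + [v])

def boltzmann_node_scores(graph, s, t, theta):
    """Probability that each node lies on an s-t path when paths are
    drawn with weight w(p) = exp(-theta * cost(p))."""
    cost = {(u, v): c for u in graph for v, c in graph[u]}
    weights, node_sets = [], []
    for p in simple_paths(graph, s, t):
        c = sum(cost[(u, v)] for u, v in zip(p, p[1:]))
        weights.append(math.exp(-theta * c))
        node_sets.append(set(p))
    z = sum(weights)                       # partition function over paths
    score = {}
    for w, nodes in zip(weights, node_sets):
        for n in nodes:
            score[n] = score.get(n, 0.0) + w / z
    return score

# Two parallel routes; the one through "a" is cheaper (cost 2 vs. 4).
g = {"s": [("a", 1.0), ("b", 3.0)], "a": [("t", 1.0)], "b": [("t", 1.0)]}
bt = boltzmann_node_scores(g, "s", "t", theta=2.0)
# As theta grows, probability concentrates on the cheaper path via "a".
```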

  13. The Structure of the UPPS-R-Child Impulsivity Scale and its Relations with Substance Use Outcomes Among Treatment-Seeking Adolescents

    PubMed Central

    Prisciandaro, James J.; Kutty Falls, Sandhya; Magid, Viktoriya

    2016-01-01

    Background A youth version of the UPPS Impulsivity Scale (UPPS-R-C) was previously shown to predict drinking initiation among pre-adolescents. The goals of the current study were to confirm the structure of the UPPS-R-C using a sample of treatment-seeking adolescents and to examine the scale's relations with alcohol use, marijuana use, and problems related to substance use. Method Participants (N = 120; ages 12–18; M = 15.7) completed questionnaires at treatment intake. Confirmatory factor analysis (CFA) of the UPPS-R-C was conducted using a 5-factor model with factors corresponding to negative urgency, positive urgency, lack of perseverance, lack of premeditation, and sensation seeking. Relations between UPPS-R-C factors and binge drinking, marijuana use, and problems resulting from substance use were examined using path analysis. Results CFA suggested the 5-factor model provided adequate fit to the data. The hypothesized path model was partially supported: positive urgency was associated with frequency of binge drinking, and both negative urgency and frequency of binge drinking were associated with problems due to substance use. Other hypothesized paths were not significant. Although not hypothesized, negative urgency was associated with frequency of marijuana use and lack of perseverance was associated with problems due to use. Conclusions Results suggest that the UPPS-R-C can be used with a treatment-seeking sample of adolescents. Furthermore, negative urgency, positive urgency, and lack of perseverance may be indicative of more severe substance use problems in a treatment setting. PMID:26905208

  14. Novel Electrosorption-Enhanced Solid-Phase Microextraction Device for Ultrafast In Vivo Sampling of Ionized Pharmaceuticals in Fish.

    PubMed

    Qiu, Junlang; Wang, Fuxin; Zhang, Tianlang; Chen, Le; Liu, Yuan; Zhu, Fang; Ouyang, Gangfeng

    2018-01-02

    Decreasing the tedious sample preparation duration is one of the most important concerns in environmental analytical chemistry, especially for in vivo experiments. However, due to the slow mass diffusion paths of most conventional methods, ultrafast in vivo sampling remains challenging. Herein, for the first time, we report an ultrafast in vivo solid-phase microextraction (SPME) device based on electrosorption enhancement and a novel custom-made CNT@PPY@pNE fiber for in vivo sampling of ionized acidic pharmaceuticals in fish. This sampling device exhibited excellent robustness, reproducibility, matrix effect-resistant capacity, and quantitative ability. Importantly, the extraction kinetics of the targeted ionized pharmaceuticals were significantly accelerated using the device, which significantly improved the sensitivity of the SPME in vivo sampling method (limits of detection ranged from 0.12 ng·g⁻¹ to 0.25 ng·g⁻¹) and shortened the sampling time (only 1 min). The proposed approach was successfully applied to monitor the concentrations of ionized pharmaceuticals in living fish, which demonstrated that the device and fiber were suitable for ultrafast in vivo sampling and continuous monitoring. In addition, the bioconcentration factor (BCF) values of the pharmaceuticals were derived in tilapia (Oreochromis mossambicus) for the first time, based on the data from ultrafast in vivo sampling. Therefore, we developed and validated an effective and ultrafast SPME sampling device for in vivo sampling of ionized analytes in living organisms, and this state-of-the-art method provides an alternative technique for future in vivo studies.

  15. Predicting active-layer soil thickness using topographic variables at a small watershed scale

    PubMed Central

    Li, Aidi; Tan, Xing; Wu, Wei; Liu, Hongbin; Zhu, Jie

    2017-01-01

    Knowledge about the spatial distribution of active-layer (AL) soil thickness is indispensable for ecological modeling, precision agriculture, and land resource management. However, it is difficult to obtain such detail using conventional soil survey methods. In this research, the objective is to investigate the possibility and accuracy of mapping the spatial distribution of AL soil thickness with a random forest (RF) model using terrain variables at a small watershed scale. A total of 1113 soil samples collected from the slope fields were randomly divided into calibration (770 soil samples) and validation (343 soil samples) sets. Seven terrain variables including elevation, aspect, relative slope position, valley depth, flow path length, slope height, and topographic wetness index were derived from a digital elevation model (30 m). The RF model was compared with multiple linear regression (MLR), geographically weighted regression (GWR) and support vector machine (SVM) approaches based on the validation set. Model performance was evaluated by the precision criteria of mean error (ME), mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R²). Comparative results showed that RF outperformed the MLR, GWR and SVM models. The RF gave better values of ME (0.39 cm), MAE (7.09 cm), and RMSE (10.85 cm) and a higher R² (62%). The sensitivity analysis demonstrated that the DEM had less uncertainty than the AL soil thickness. The outcome of the RF model indicated that elevation, flow path length and valley depth were the most important factors affecting AL soil thickness variability across the watershed. These results demonstrate that the RF model is a promising method for predicting the spatial distribution of AL soil thickness using terrain parameters. PMID:28877196
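    The precision criteria used above (ME, MAE, RMSE, R²) are straightforward to compute. A sketch on hypothetical observation/prediction pairs; the thickness values below are invented for illustration, not the study's data:

```python
import math

def regression_metrics(obs, pred):
    """Mean error, mean absolute error, root mean square error, and
    coefficient of determination for paired observations/predictions."""
    n = len(obs)
    errs = [p - o for o, p in zip(obs, pred)]
    me = sum(errs) / n
    mae = sum(abs(e) for e in errs) / n
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    mean_obs = sum(obs) / n
    ss_res = sum(e * e for e in errs)                 # residual sum of squares
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return me, mae, rmse, r2

# Hypothetical AL thickness observations vs. model predictions (cm):
obs = [20.0, 35.0, 50.0, 65.0]
pred = [22.0, 33.0, 52.0, 63.0]
me, mae, rmse, r2 = regression_metrics(obs, pred)
# me == 0.0, mae == 2.0, rmse == 2.0, r2 ≈ 0.986
```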

  16. A diffusion tensor imaging tractography algorithm based on Navier-Stokes fluid mechanics.

    PubMed

    Hageman, Nathan S; Toga, Arthur W; Narr, Katherine L; Shattuck, David W

    2009-03-01

    We introduce a fluid mechanics based tractography method for estimating the most likely connection paths between points in diffusion tensor imaging (DTI) volumes. We customize the Navier-Stokes equations to include information from the diffusion tensor and simulate an artificial fluid flow through the DTI image volume. We then estimate the most likely connection paths between points in the DTI volume using a metric derived from the fluid velocity vector field. We validate our algorithm using digital DTI phantoms based on a helical shape. Our method segmented the structure of the phantom with less distortion than was produced using implementations of heat-based partial differential equation (PDE) and streamline based methods. In addition, our method was able to successfully segment divergent and crossing fiber geometries, closely following the ideal path through a digital helical phantom in the presence of multiple crossing tracts. To assess the performance of our algorithm on anatomical data, we applied our method to DTI volumes from normal human subjects. Our method produced paths that were consistent with both known anatomy and directionally encoded color images of the DTI dataset.

  17. A Diffusion Tensor Imaging Tractography Algorithm Based on Navier-Stokes Fluid Mechanics

    PubMed Central

    Hageman, Nathan S.; Toga, Arthur W.; Narr, Katherine; Shattuck, David W.

    2009-01-01

    We introduce a fluid mechanics based tractography method for estimating the most likely connection paths between points in diffusion tensor imaging (DTI) volumes. We customize the Navier-Stokes equations to include information from the diffusion tensor and simulate an artificial fluid flow through the DTI image volume. We then estimate the most likely connection paths between points in the DTI volume using a metric derived from the fluid velocity vector field. We validate our algorithm using digital DTI phantoms based on a helical shape. Our method segmented the structure of the phantom with less distortion than was produced using implementations of heat-based partial differential equation (PDE) and streamline based methods. In addition, our method was able to successfully segment divergent and crossing fiber geometries, closely following the ideal path through a digital helical phantom in the presence of multiple crossing tracts. To assess the performance of our algorithm on anatomical data, we applied our method to DTI volumes from normal human subjects. Our method produced paths that were consistent with both known anatomy and directionally encoded color (DEC) images of the DTI dataset. PMID:19244007

  18. A star recognition method based on the Adaptive Ant Colony algorithm for star sensors.

    PubMed

    Quan, Wei; Fang, Jiancheng

    2010-01-01

    A new star recognition method based on the Adaptive Ant Colony (AAC) algorithm has been developed to increase the star recognition speed and success rate for star sensors. This method draws circles, with the center of each one being a bright star point and the radius being a special angular distance, and uses the parallel processing ability of the AAC algorithm to calculate the angular distance of any pair of star points in the circle. The angular distance of two star points in the circle is solved as the path of the AAC algorithm, and the path optimization feature of the AAC is employed to search for the optimal (shortest) path in the circle. This optimal path is used to recognize the stellar map and enhance the recognition success rate and speed. The experimental results show that when the position error is about 50″, the identification success rate of this method is 98% while the Delaunay identification method is only 94%. The identification time of this method is up to 50 ms.
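    The basic quantity this method feeds into the AAC search is the angular distance between two star points: the arccosine of the dot product of their unit direction vectors. A minimal sketch with hypothetical star directions:

```python
import math

def angular_distance(v1, v2):
    """Angle in radians between two star direction vectors, using a
    numerically clamped arccos of their normalized dot product."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    c = max(-1.0, min(1.0, dot / (n1 * n2)))   # guard against rounding
    return math.acos(c)

# Two hypothetical star directions 90 degrees apart:
theta = angular_distance((1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
# theta == pi / 2
```

    In a full implementation, pairs of such angular distances within each circle become the candidate "paths" the ant colony optimizes over.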

  19. User's guide to Monte Carlo methods for evaluating path integrals

    NASA Astrophysics Data System (ADS)

    Westbroek, Marise J. E.; King, Peter R.; Vvedensky, Dimitri D.; Dürr, Stephan

    2018-04-01

    We give an introduction to the calculation of path integrals on a lattice, with the quantum harmonic oscillator as an example. In addition to providing an explicit computational setup and corresponding pseudocode, we pay particular attention to the existence of autocorrelations and the calculation of reliable errors. The over-relaxation technique is presented as a way to counter strong autocorrelations. The simulation methods can be extended to compute observables for path integrals in other settings.
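    A minimal version of such a lattice calculation is a Metropolis sweep over a discretized Euclidean path of the harmonic oscillator (natural units, m = omega = 1). Lattice size, spacing, and sweep counts below are illustrative, and the over-relaxation and autocorrelation analysis discussed in the text are omitted:

```python
import math
import random

def site_action(x, left, right, a):
    """Local lattice action terms involving site value x for the
    harmonic oscillator (m = omega = 1, lattice spacing a)."""
    return ((x - left) ** 2 + (right - x) ** 2) / (2 * a) + a * x * x / 2

def mean_x2_estimate(n_sites=20, a=0.5, sweeps=2000, step=1.0, seed=1):
    """Metropolis estimate of <x^2> on a periodic Euclidean lattice."""
    rng = random.Random(seed)
    path = [0.0] * n_sites
    burn = sweeps // 4                    # discard early sweeps (thermalization)
    acc = 0.0
    for sweep in range(sweeps):
        for i in range(n_sites):
            left, right = path[i - 1], path[(i + 1) % n_sites]
            new_x = path[i] + rng.uniform(-step, step)
            d_s = (site_action(new_x, left, right, a)
                   - site_action(path[i], left, right, a))
            if d_s <= 0 or rng.random() < math.exp(-d_s):
                path[i] = new_x           # accept the local update
        if sweep >= burn:
            acc += sum(x * x for x in path) / n_sites
    return acc / (sweeps - burn)

mean_x2 = mean_x2_estimate()
# The continuum ground-state value is <x^2> = 0.5; the lattice
# estimate should land in that vicinity (with discretization and
# statistical errors).
```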

  20. A Dynamic Bioinspired Neural Network Based Real-Time Path Planning Method for Autonomous Underwater Vehicles

    PubMed Central

    2017-01-01

    Real-time path planning for an autonomous underwater vehicle (AUV) is a difficult and challenging task. The bioinspired neural network (BINN) has been used to deal with this problem because of its distinct advantages: no learning process is needed and implementation is easy. However, there are shortcomings when a BINN is applied to AUV path planning in a three-dimensional (3D) unknown environment, including heavy computation when the environment is very large and repeated paths when obstacles are bigger than the detection range of the sensors. To deal with these problems, an improved dynamic BINN is proposed in this paper. In this method, the AUV is regarded as the core of the BINN and the size of the BINN is based on the detection range of the sensors; the BINN then moves with the AUV and the computation is reduced. A virtual target is introduced in the path planning method to ensure that the AUV can move to the real target effectively and avoid large obstacles automatically. Furthermore, a target attractor concept is introduced to improve the computing efficiency of the neural activities. Finally, experiments are conducted in various 3D underwater environments. The experimental results show that the proposed BINN-based method can deal with the real-time path planning problem for AUVs efficiently. PMID:28255297

  1. A Dynamic Bioinspired Neural Network Based Real-Time Path Planning Method for Autonomous Underwater Vehicles.

    PubMed

    Ni, Jianjun; Wu, Liuying; Shi, Pengfei; Yang, Simon X

    2017-01-01

    Real-time path planning for an autonomous underwater vehicle (AUV) is a difficult and challenging task. The bioinspired neural network (BINN) has been used to deal with this problem because of its distinct advantages: no learning process is needed and implementation is easy. However, there are shortcomings when a BINN is applied to AUV path planning in a three-dimensional (3D) unknown environment, including heavy computation when the environment is very large and repeated paths when obstacles are bigger than the detection range of the sensors. To deal with these problems, an improved dynamic BINN is proposed in this paper. In this method, the AUV is regarded as the core of the BINN and the size of the BINN is based on the detection range of the sensors; the BINN then moves with the AUV and the computation is reduced. A virtual target is introduced in the path planning method to ensure that the AUV can move to the real target effectively and avoid large obstacles automatically. Furthermore, a target attractor concept is introduced to improve the computing efficiency of the neural activities. Finally, experiments are conducted in various 3D underwater environments. The experimental results show that the proposed BINN-based method can deal with the real-time path planning problem for AUVs efficiently.
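    The activity-propagation idea behind such planners can be caricatured on a small grid: the target holds maximal neural activity, activity decays as it spreads through free cells, and the vehicle climbs the activity gradient. This sketch replaces the paper's shunting-equation dynamics with a simple decayed-maximum update and is purely illustrative:

```python
def plan_path(grid, start, target, iters=200):
    """Toy activity-propagation planner: target activity is 1, obstacles
    stay at 0, free cells take a decayed copy of their best neighbor,
    and the vehicle greedily ascends the resulting activity field."""
    rows, cols = len(grid), len(grid[0])
    act = [[0.0] * cols for _ in range(rows)]
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    for _ in range(iters):
        new = [[0.0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                if (r, c) == target:
                    new[r][c] = 1.0
                elif grid[r][c] == 0:              # free cell
                    best = max(act[r + dr][c + dc]
                               for dr, dc in nbrs
                               if 0 <= r + dr < rows and 0 <= c + dc < cols)
                    new[r][c] = 0.8 * best         # decayed propagation
        act = new
    path, pos = [start], start
    while pos != target and len(path) < rows * cols:
        r, c = pos
        pos = max(((r + dr, c + dc) for dr, dc in nbrs
                   if 0 <= r + dr < rows and 0 <= c + dc < cols),
                  key=lambda p: act[p[0]][p[1]])
        path.append(pos)
    return path

# 0 = free, 1 = obstacle; the wall with a gap at (1, 3) forces a detour.
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
path = plan_path(grid, start=(2, 0), target=(0, 0))
```

    Because obstacle cells never acquire activity, the gradient ascent routes the vehicle through the gap rather than into the wall.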

  2. Optimization of magnet end-winding geometry

    NASA Astrophysics Data System (ADS)

    Reusch, Michael F.; Weissenburger, Donald W.; Nearing, James C.

    1994-03-01

    A simple, almost entirely analytic, method for the optimization of stress-reduced magnet-end winding paths for ribbon-like superconducting cable is presented. This technique is based on characterization of these paths as developable surfaces, i.e., surfaces whose intrinsic geometry is flat. The method is applicable to winding mandrels of arbitrary geometry. Computational searches for optimal winding paths are easily implemented via the technique. Its application to the end configuration of cylindrical Superconducting Super Collider (SSC)-type magnets is discussed. The method may be useful for other engineering problems involving the placement of thin sheets of material.

  3. Development of an automated flow injection analysis system for determination of phosphate in nutrient solutions.

    PubMed

    Karadağ, Sevinç; Görüşük, Emine M; Çetinkaya, Ebru; Deveci, Seda; Dönmez, Koray B; Uncuoğlu, Emre; Doğu, Mustafa

    2018-01-25

    A fully automated flow injection analysis (FIA) system was developed for determination of phosphate ion in nutrient solutions. This newly developed FIA system is a portable, rapid and sensitive measuring instrument that allows on-line analysis and monitoring of phosphate ion concentration in nutrient solutions. The molybdenum blue method, which is widely used in FIA phosphate analysis, was adapted to the developed FIA system. The method is based on the formation of ammonium Mo(VI) ion by reaction of ammonium molybdate with the phosphate ion present in the medium. The Mo(VI) ion then reacts with ascorbic acid and is reduced to the spectrometrically measurable Mo(V) ion. New software specific for flow analysis was developed in the LabVIEW development environment to control all the components of the FIA system. The important factors affecting the analytical signal were identified as reagent flow rate, injection volume and post-injection flow path length, and they were optimized using Box-Behnken experimental design and response surface methodology. The optimum point for the maximum analytical signal was calculated as a 0.50 mL min⁻¹ reagent flow rate, 100 µL sample injection volume and 60 cm post-injection flow path length. The proposed FIA system had a sampling frequency of 100 samples per hour over a linear working range of 3-100 mg L⁻¹ (R² = 0.9995). The relative standard deviation (RSD) was 1.09% and the limit of detection (LOD) was 0.34 mg L⁻¹. Various nutrient solutions from a tomato-growing hydroponic greenhouse were analyzed with the developed FIA system and the results were found to be in good agreement with vanadomolybdate chemical method findings. © 2018 Society of Chemical Industry.
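    Calibration figures like R² and LOD rest on an ordinary least-squares line through the calibrator responses. A sketch with hypothetical calibrator data, using the common 3.3·σ_blank/slope convention for the LOD (the paper's exact definition and data may differ):

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical phosphate calibrators (mg/L) and detector responses (AU):
conc = [3.0, 10.0, 25.0, 50.0, 100.0]
resp = [0.031, 0.102, 0.249, 0.501, 1.003]
slope, intercept = linear_fit(conc, resp)

sigma_blank = 0.001                 # assumed blank standard deviation (AU)
lod = 3.3 * sigma_blank / slope     # 3.3 * sigma / slope convention, in mg/L
```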

  4. Seismic Waveform Tomography of the Iranian Region

    NASA Astrophysics Data System (ADS)

    Maggi, A.; Priestley, K.; Jackson, J.

    2001-05-01

    Surprisingly little is known about the detailed velocity structure of Iran, despite the region's importance in the tectonics of the Middle East. Previous studies have concentrated mainly on fundamental mode surface wave dispersion measurements along isolated paths (e.g. Asudeh, 1982; Cong & Mitchell, 1998; Ritzwoller et al., 1998), and the propagation characteristics of crust and upper mantle body waves (e.g. Hearn & Ni, 1994; Rodgers et al., 1997). We use the partitioned waveform inversion method of Nolet (1990) on several hundred regional waveforms crossing the Iranian region to produce a 3-D seismic velocity map for the crust and upper mantle of the area. The method consists of using long period seismograms from earthquakes with well determined focal mechanisms and depths to constrain 1-D path-averaged shear wave models along regional paths. The constraints imposed on the 1-D models by the seismograms are then combined with independent constraints from other methods (e.g. Moho depths from receiver function analysis), to solve for the 3-D seismic velocity structure of the region. A dense coverage of fundamental mode Rayleigh waves at a period of 100 s ensures good resolution of lithospheric scale structure. We also use 20 s period fundamental mode Rayleigh waves and some Pnl wavetrains to make estimates of crustal thickness variations and average crustal velocities. A few deeper events give us some coverage of higher mode Rayleigh waves and mantle S waves, which sample to the base of the upper mantle. Our crustal thickness estimates range from 45 km in the southern Zagros mountains, to 40 km in central Iran and 35 km towards the north of the region. We also find inconsistencies between the 1-D models required to fit the vertical and the transverse seismograms, indicating the presence of anisotropy.

  5. Coarse-grained representation of the quasi adiabatic propagator path integral for the treatment of non-Markovian long-time bath memory

    NASA Astrophysics Data System (ADS)

    Richter, Martin; Fingerhut, Benjamin P.

    2017-06-01

    The description of non-Markovian effects imposed by low frequency bath modes poses a persistent challenge for path integral based approaches like the iterative quasi-adiabatic propagator path integral (iQUAPI) method. We present a novel approximate method, termed mask assisted coarse graining of influence coefficients (MACGIC)-iQUAPI, that offers appealing computational savings due to a substantial reduction of considered path segments for propagation. The method relies on an efficient path segment merging procedure via an intermediate coarse grained representation of Feynman-Vernon influence coefficients that exploits physical properties of system decoherence. The MACGIC-iQUAPI method allows us to access the regime of biologically significant long-time bath memory on the order of a hundred propagation time steps while retaining convergence to iQUAPI results. Numerical performance is demonstrated for a set of benchmark problems that cover bath assisted long range electron transfer, the transition from coherent to incoherent dynamics in a prototypical molecular dimer, and excitation energy transfer in a 24-state model of the Fenna-Matthews-Olson trimer complex, where in all cases excellent agreement with numerically exact reference data is obtained.

  6. Microvolume Protein Concentration Determination using the NanoDrop 2000c Spectrophotometer

    PubMed Central

    Desjardins, Philippe; Hansen, Joel B.; Allen, Michael

    2009-01-01

    Traditional spectrophotometry requires placing samples into cuvettes or capillaries, which is often impractical given the limited sample volumes typically available for protein analysis. The Thermo Scientific NanoDrop 2000c Spectrophotometer addresses this issue with an innovative sample retention system that holds microvolume samples between two measurement surfaces using the surface tension properties of liquids, enabling the quantification of samples in volumes as low as 0.5-2 μL. The elimination of cuvettes or capillaries allows real-time changes in path length, which reduces the measurement time while greatly increasing the dynamic range of protein concentrations that can be measured. The need for dilutions is also eliminated, and preparation for sample quantification is simple, as the measurement surfaces need only be wiped with a laboratory wipe. This video article presents modifications to traditional protein concentration determination methods for quantification of microvolume amounts of protein using A280 absorbance readings or the BCA colorimetric assay. PMID:19890248
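
    The quantification step itself is ordinary Beer-Lambert arithmetic; the instrument's contribution is the variable, very short path length. A minimal sketch of the normalization, where the function name and coefficient values are illustrative rather than instrument specifications:

```python
# Beer-Lambert sketch: protein concentration from an A280 microvolume reading.
# All names and numbers here are illustrative, not NanoDrop specifications.

def protein_conc_mg_ml(a280: float, path_length_mm: float,
                       ext_coeff_1mg_ml: float = 1.0) -> float:
    """Concentration (mg/mL) from absorbance, normalized to a 10 mm path.

    a280             -- raw absorbance measured at the given path length
    path_length_mm   -- actual optical path (microvolume paths are well below 10 mm)
    ext_coeff_1mg_ml -- mass extinction coefficient in AU*mL/(mg*cm);
                        1.0 is the generic "1 Abs = 1 mg/mL" convention
    """
    a280_10mm = a280 * (10.0 / path_length_mm)   # rescale to a standard 1 cm cuvette
    return a280_10mm / ext_coeff_1mg_ml

# A reading of 0.25 AU over a 1 mm path corresponds to 2.5 AU in a 1 cm
# cuvette, i.e. 2.5 mg/mL with the generic coefficient.
print(protein_conc_mg_ml(0.25, 1.0))   # → 2.5
```

    Shortening the path is what extends the dynamic range: a sample too concentrated for a 10 mm cuvette still gives an on-scale absorbance over a sub-millimetre path.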

  7. Path Planning Algorithms for the Adaptive Sensor Fleet

    NASA Technical Reports Server (NTRS)

    Stoneking, Eric; Hosler, Jeff

    2005-01-01

    The Adaptive Sensor Fleet (ASF) is a general-purpose fleet management and planning system being developed by NASA in coordination with NOAA. The current mission of ASF is to provide the capability for autonomous cooperative survey and sampling of dynamic oceanographic phenomena such as current systems and algal blooms. Each ASF vessel is a software model that represents a real-world platform that carries a variety of sensors. The OASIS platform will provide the first physical vessel, outfitted with the systems and payloads necessary to execute the oceanographic observations described in this paper. The ASF architecture is being designed for extensibility to accommodate heterogeneous fleet elements, and is not limited to using the OASIS platform to acquire data. This paper describes the path planning algorithms developed for the acquisition phase of a typical ASF task. Given a polygonal target region to be surveyed, the region is subdivided according to the number of vessels in the fleet. The subdivision algorithm seeks a solution in which all subregions have equal area and minimum mean radius. Once the subregions are defined, a dynamic programming method is used to find a minimum-time path for each vessel from its initial position to its assigned region. This path plan includes the effects of water currents as well as avoidance of known obstacles. A fleet-level planning algorithm then shuffles the individual vessel assignments to find the overall solution which puts all vessels in their assigned regions in the minimum time. This shuffle algorithm may be described as a process of elimination on the sorted list of permutations of a cost matrix. All these path planning algorithms are facilitated by discretizing the region of interest onto a hexagonal tiling.
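
    The minimum-time planning step on a discretized region can be pictured with a small sketch. This is a hypothetical simplification, not the ASF implementation: a square grid stands in for the hexagonal tiling, per-cell transit times stand in for the effect of currents, and known obstacles are excluded from the search.

```python
# Minimum-time path on a discretized region (Dijkstra's algorithm), in the
# spirit of the acquisition-phase planner described above. Hypothetical
# simplification: square grid, per-cell crossing times, obstacles as None.
import heapq

def min_time_path(times, start, goal):
    """times[r][c] is seconds to cross cell (r, c), or None for an obstacle.
    Returns (total_time, path) from start to goal."""
    rows, cols = len(times), len(times[0])
    best = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        t, cell = heapq.heappop(heap)
        if cell == goal:                       # reconstruct the path backwards
            path = [cell]
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return t, path[::-1]
        if t > best.get(cell, float("inf")):
            continue                           # stale heap entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and times[nr][nc] is not None:
                nt = t + times[nr][nc]
                if nt < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = nt
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nt, (nr, nc)))
    return float("inf"), []

grid = [[1, 1, 1],
        [1, None, 1],   # known obstacle in the centre
        [1, 5, 1]]      # a slow cell, e.g. an adverse current
t, path = min_time_path(grid, (0, 0), (2, 2))  # detours over the top: t == 4
```

    On a hexagonal tiling the only change is the neighbour offsets (six per cell instead of four), which is one reason hex grids are attractive for this kind of planner.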

  8. Daily Movements and Microhabitat Selection of Hantavirus Reservoirs and Other Sigmodontinae Rodent Species that Inhabit a Protected Natural Area of Argentina.

    PubMed

    Maroli, Malena; Vadell, María Victoria; Iglesias, Ayelén; Padula, Paula Julieta; Gómez Villafañe, Isabel Elisa

    2015-09-01

    Abundance, distribution, movement patterns, and habitat selection of a reservoir species influence the dispersal of zoonotic pathogens, and hence, the risk for humans. Movements and microhabitat use of rodent species, and their potential role in the transmission of hantavirus, were studied in Otamendi Natural Reserve, Buenos Aires, Argentina. Movement estimators and qualitative characteristics of rodent paths were determined using a spool-and-line device. Sampling was conducted during November and December 2011, and March, April, June, October, and December 2012. Forty-six Oxymycterus rufus, 41 Akodon azarae, 10 Scapteromys aquaticus and 5 Oligoryzomys flavescens were captured. Movement patterns and distances varied according to sex, habitat type, reproductive season, and body size among species. O. flavescens, reservoir of the etiologic agent of hantavirus pulmonary syndrome in the region, moved short distances, had the most linear paths and did not share paths with other species. A. azarae had an intermediate linearity index, its movements were longer in the highland grassland than in the lowland marsh and the salty grassland, and larger individuals traveled longer distances. O. rufus had the most tortuous paths and the males moved more during the non-breeding season. S. aquaticus movements were associated with habitat type, with longer distances traveled in the lowland marsh than in the salty grassland. Hantavirus antibodies were detected in 20% of A. azarae and were not detected in any other species. Seropositive individuals were captured during the breeding season and 85% of them were males. A. azarae moved randomly and shared paths with all the other species, which could promote hantavirus spillover events.

  9. Measurement of infrared optical constants with visible photons

    NASA Astrophysics Data System (ADS)

    Paterova, Anna; Yang, Hongzhi; An, Chengwu; Kalashnikov, Dmitry; Krivitsky, Leonid

    2018-04-01

    We demonstrate a new scheme for infrared spectroscopy with visible light sources and detectors. The technique relies on the nonlinear interference of correlated photons, produced via spontaneous parametric down-conversion in a nonlinear crystal. Visible and infrared photons are split into two paths, and the infrared photons interact with the sample under study. The photons are reflected back to the crystal, resembling a conventional Michelson interferometer. The observed interference of the visible photons depends on the phases of all three interacting fields: pump, visible, and infrared. The transmission coefficient and the refractive index of the sample in the infrared range can be inferred from the interference pattern of the visible photons. The method does not require potentially expensive and inefficient infrared detectors and sources, it can be applied to a broad variety of samples, and it does not require a priori knowledge of the sample's properties in the visible range.

  10. Surface characterization of graphene based materials

    NASA Astrophysics Data System (ADS)

    Pisarek, M.; Holdynski, M.; Krawczyk, M.; Nowakowski, R.; Roguska, A.; Malolepszy, A.; Stobinski, L.; Jablonski, A.

    2016-12-01

    In the present study, two kinds of samples were used: (i) a monolayer graphene film with a thickness of 0.345 nm deposited by the CVD method on Cu foil, and (ii) graphene flakes obtained by a modified Hummers method followed by reduction of graphene oxide (rGO). The inelastic mean free path (IMFP), which characterizes electron transport in the graphene/Cu sample and the reduced graphene oxide material and determines the sampling depth of XPS and AES, was evaluated from relative Elastic Peak Electron Spectroscopy (EPES) measurements with an Au standard in the energy range 0.5-2 keV. The measured IMFPs were compared with IMFPs derived from experimental optical data published in the literature for graphite. The EPES IMFP values at 0.5 and 1.5 keV were practically identical to those calculated from optical data for graphite (less than 4% deviation). For energies of 1 and 2 keV, the EPES IMFPs for rGO deviated by up to 14% from IMFPs calculated using the optical data of Tanuma et al. [1]. Before the EPES measurements, all samples were characterized by various techniques (FE-SEM, AFM, XPS, AES and REELS) to visualize the surface morphology/topography and identify the chemical composition.

  11. Multiple-wavelength spectroscopic quantitation of light-absorbing species in scattering media

    DOEpatents

    Nathel, Howard; Cartland, Harry E.; Colston, Jr., Billy W.; Everett, Matthew J.; Roe, Jeffery N.

    2000-01-01

    An oxygen concentration measurement system for blood hemoglobin comprises a multiple-wavelength low-coherence optical light source that is coupled by single mode fibers through a splitter and combiner and focused on both a target tissue sample and a reference mirror. Reflections from both the reference mirror and from the depths of the target tissue sample are carried back and mixed to produce interference fringes in the splitter and combiner. The reference mirror is set such that the distance traversed in the reference path is the same as the distance traversed into and back from the target tissue sample at some depth in the sample that will provide light attenuation information that is dependent on the oxygen in blood hemoglobin in the target tissue sample. Two wavelengths of light are used to obtain concentrations. The method can be used to measure total hemoglobin concentration [Hb_deoxy + Hb_oxy] or total blood volume in tissue, and in conjunction with oxygen saturation measurements from pulse oximetry it can be used to absolutely quantify oxyhemoglobin [HbO_2] in tissue. The apparatus and method provide a general means for absolute quantitation of an absorber dispersed in a highly scattering medium.
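
    The two-wavelength step reduces to solving two Beer-Lambert equations in the two chromophore concentrations. A hedged sketch of that linear solve; the function name, argument names, and the coefficient values in the demo are placeholders, not tabulated hemoglobin extinction coefficients:

```python
# Two-wavelength quantitation sketch: attenuation measured at two wavelengths
# gives two linear Beer-Lambert equations in the two unknown concentrations.

def two_wavelength_conc(a1, a2, e_oxy_1, e_deoxy_1, e_oxy_2, e_deoxy_2,
                        path_cm=1.0):
    """Solve  a_i = path * (e_oxy_i * C_oxy + e_deoxy_i * C_deoxy)  for the
    two concentrations by Cramer's rule. Returns (C_oxy, C_deoxy)."""
    det = e_oxy_1 * e_deoxy_2 - e_oxy_2 * e_deoxy_1
    if det == 0:
        raise ValueError("wavelengths give linearly dependent equations")
    c_oxy = (a1 * e_deoxy_2 - a2 * e_deoxy_1) / (path_cm * det)
    c_deoxy = (e_oxy_1 * a2 - e_oxy_2 * a1) / (path_cm * det)
    return c_oxy, c_deoxy

# Synthetic check with made-up coefficients: absorbances a1 = 1.2, a2 = 1.1
# were generated from C_oxy = 0.5, C_deoxy = 0.2, and the solve recovers them.
co, cd = two_wavelength_conc(1.2, 1.1, 2.0, 1.0, 1.0, 3.0)
```

    The wavelengths must be chosen so the two equations are independent, i.e. the two species' extinction coefficients are not proportional; in practice this means one wavelength where oxy- dominates and one where deoxy-hemoglobin dominates.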

  12. Mechanical response of stainless steel subjected to biaxial load path changes: Cruciform experiments and multi-scale modeling

    DOE PAGES

    Upadhyay, Manas V.; Patra, Anirban; Wen, Wei; ...

    2018-05-08

    In this paper, we propose a multi-scale modeling approach that can simulate the microstructural and mechanical behavior of metal or alloy parts with complex geometries subjected to multi-axial load path changes. The model is used to understand the biaxial load path change behavior of 316L stainless steel cruciform samples. At the macroscale, a finite element approach is used to simulate the cruciform geometry and numerically predict the gauge stresses, which are difficult to obtain analytically. At each material point in the finite element mesh, the anisotropic viscoplastic self-consistent model is used to simulate the role of texture evolution on the mechanical response. At the single crystal level, a dislocation density based hardening law that appropriately captures the role of multi-axial load path changes on slip activity is used. The combined approach is experimentally validated using cruciform samples subjected to uniaxial load and unload followed by different biaxial reloads in the angular range [27°, 90°]. Polycrystalline yield surfaces before and after load path changes are generated using the full-field elasto-viscoplastic fast Fourier transform model to study the influence of the deformation history and reloading direction on the mechanical response, including the Bauschinger effect, of these cruciform samples. Results reveal that the Bauschinger effect is strongly dependent on the first loading direction and strain, intergranular and macroscopic residual stresses after first load, and the reloading angle. The microstructural origins of the mechanical response are discussed.

  14. An automated integration-free path-integral method based on Kleinert's variational perturbation theory

    NASA Astrophysics Data System (ADS)

    Wong, Kin-Yiu; Gao, Jiali

    2007-12-01

    Based on Kleinert's variational perturbation (KP) theory [Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets, 3rd ed. (World Scientific, Singapore, 2004)], we present an analytic path-integral approach for computing the effective centroid potential. The approach enables the KP theory to be applied to realistic systems beyond the first-order perturbation (i.e., the original Feynman-Kleinert [Phys. Rev. A 34, 5080 (1986)] variational method). Accurate values are obtained for several systems in which exact quantum results are known. Furthermore, the computed kinetic isotope effects for a series of proton transfer reactions, in which the potential energy surfaces are evaluated by density-functional theory, are in good agreement with experiment. We hope that our method can be used by non-path-integral experts or experimentalists as a "black box" for any given system.

  15. Network of dedicated processors for finding lowest-cost map path

    NASA Technical Reports Server (NTRS)

    Eberhardt, Silvio P. (Inventor)

    1991-01-01

    A method and associated apparatus are disclosed for finding the lowest cost path of several variable paths. The paths are comprised of a plurality of linked cost-incurring areas existing between an origin point and a destination point. The method comprises the steps of connecting a plurality of nodes together in the manner of the cost-incurring areas; programming each node to have a cost associated therewith corresponding to one of the cost-incurring areas; injecting a signal into one of the nodes representing the origin point; propagating the signal through the plurality of nodes from inputs to outputs; reducing the signal in magnitude at each node as a function of the respective cost of the node; and, starting at one of the nodes representing the destination point and following a path having the least reduction in magnitude of the signal from node to node back to one of the nodes representing the origin point, whereby the lowest cost path from the origin point to the destination point is found.
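
    The propagate-and-backtrack idea can be sketched in software. Assumptions, not the patented circuit: each node applies a gain exp(-cost), so additive path cost becomes multiplicative attenuation and the strongest surviving signal marks the lowest-cost path; the relaxation loop and the tiny demo graph are illustrative.

```python
# Attenuation-network sketch of the lowest-cost-path idea described above:
# each node scales an incoming signal by exp(-cost); backtracking from the
# destination along least-attenuated predecessors recovers the cheapest path.
import math

def lowest_cost_by_attenuation(node_cost, adj, origin, dest):
    """node_cost: dict node -> cost; adj: dict node -> iterable of successors.
    Returns (path, total_cost)."""
    signal = {n: 0.0 for n in node_cost}
    signal[origin] = 1.0                 # inject unit signal at the origin
    prev = {}
    changed = True
    while changed:                       # relax until magnitudes settle
        changed = False
        for u in node_cost:
            for v in adj.get(u, ()):
                s = signal[u] * math.exp(-node_cost[v])
                if s > signal[v] + 1e-15:
                    signal[v] = s
                    prev[v] = u
                    changed = True
    if signal[dest] == 0.0:
        raise ValueError("destination unreachable")
    path, n = [dest], dest               # backtrack along strongest signal
    while n != origin:
        n = prev[n]
        path.append(n)
    return path[::-1], -math.log(signal[dest])

# Two routes A->D: via B (cost 1+1=2) or via C (cost 3+1=4); B wins.
path, cost = lowest_cost_by_attenuation(
    {"A": 0.0, "B": 1.0, "C": 3.0, "D": 1.0},
    {"A": ("B", "C"), "B": ("D",), "C": ("D",)},
    "A", "D")
```

    The exp(-cost) mapping is what lets an analog network compute a shortest path: multiplying gains along a route is equivalent to summing costs, and a simple magnitude comparison at each node replaces the priority queue of a digital algorithm.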

  16. Compensation of high order harmonic long quantum-path attosecond chirp

    NASA Astrophysics Data System (ADS)

    Guichard, R.; Caillat, J.; Lévêque, C.; Risoud, F.; Maquet, A.; Taïeb, R.; Zaïr, A.

    2017-12-01

    We propose a method to compensate for the extreme ultraviolet (XUV) attosecond chirp associated with the long quantum-path in the high harmonic generation process. Our method employs an isolated attosecond pulse (IAP) issued from the short trajectory contribution in a primary target to assist the infrared driving field in producing high harmonics from the long trajectory in a secondary target. In our simulations based on the resolution of the time-dependent Schrödinger equation, the resulting high harmonics exhibit clear phase compensation of the long quantum-path contribution, yielding a nearly Fourier-transform-limited attosecond XUV pulse. Employing time-frequency analysis of the high harmonic dipole, we found that the compensation is not a simple far-field photonic interference between the IAP and the long-path harmonic emission, but a coherent phase transfer from the weak IAP to the long quantum-path electronic wavepacket. Our approach opens the route to utilizing the long quantum-path for the production and applications of attosecond pulses.

  17. Photoacoustic sensor for medical diagnostics

    NASA Astrophysics Data System (ADS)

    Wolff, Marcus; Groninga, Hinrich G.; Harde, Hermann

    2004-03-01

    The development of new optical sensor technologies has a major impact on the progress of diagnostic methods. Among the steadily increasing number of non-invasive breath tests, the 13C-Urea Breath Test (UBT) for the detection of Helicobacter pylori is the most prominent. However, many recent developments, like the detection of cancer by breath test, go beyond gastroenterological applications. We present a new detection scheme for breath analysis that employs an especially compact and simple set-up. Photoacoustic Spectroscopy (PAS) is an offset-free technique that allows for short absorption paths and small sample cells. Using a single-frequency diode laser and taking advantage of acoustical resonances of the sample cell, we performed extremely sensitive and selective measurements. The smart data processing method contributes to the extraordinary sensitivity and selectivity as well. In addition, the reasonable acquisition cost and low operational cost make this detection scheme attractive for many biomedical applications. The experimental set-up and data processing method, together with exemplary isotope-selective measurements on carbon dioxide, are presented.

  18. A variational dynamic programming approach to robot-path planning with a distance-safety criterion

    NASA Technical Reports Server (NTRS)

    Suh, Suk-Hwan; Shin, Kang G.

    1988-01-01

    An approach to robot-path planning is developed that considers both the traveling distance and the safety of the robot. A computationally efficient algorithm is developed to find a near-optimal path under a weighted distance-safety criterion using a variational calculus and dynamic programming (VCDP) method. The algorithm is readily applicable to any factory environment by representing the free workspace as channels, and a method for deriving these channels is also proposed. Although developed mainly for two-dimensional problems, the method can be easily extended to a class of three-dimensional problems. Numerical examples are presented to demonstrate the utility and power of this method.

  19. Mobile robot dynamic path planning based on improved genetic algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Zhou, Heng; Wang, Ying

    2017-08-01

    In a dynamic, unknown environment, dynamic path planning for mobile robots is a difficult problem. In this paper, a dynamic path planning method based on a genetic algorithm is proposed. A reward value model is designed to estimate the probability that dynamic obstacles occupy positions on the path, and this reward value function is incorporated into the genetic algorithm. Unique coding techniques reduce the computational complexity of the algorithm. The fitness function of the genetic algorithm fully considers three factors: the safety of the path, the shortest path distance, and the reward value of the path. The simulation results show that the proposed genetic algorithm is efficient in a variety of complex dynamic environments.
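
    A fitness function of the kind described, combining path safety, path length, and a reward value estimating dynamic-obstacle probability, might look as follows. The weights, the clearance cap, and the penalty shapes are assumptions for illustration, not the paper's actual formulation.

```python
# Illustrative three-factor GA fitness for path planning: safety (clearance
# from known obstacles), shortest distance, and a "reward value" modelling
# the probability of meeting a dynamic obstacle. Weights are hypothetical.
import math

def path_fitness(path, obstacles, obstacle_prob,
                 w_dist=1.0, w_safe=1.0, w_reward=1.0):
    """path: list of (x, y) waypoints; obstacles: set of known obstacle cells;
    obstacle_prob: dict cell -> estimated probability that a dynamic obstacle
    occupies it. Larger return value = fitter path."""
    if any(tuple(p) in obstacles for p in path):
        return 0.0                                      # collision: unfit
    length = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    # clearance to the nearest known obstacle, capped so it saturates
    clearance = min((math.dist(p, o) for p in path for o in obstacles),
                    default=5.0)
    # expected number of dynamic-obstacle encounters along the path
    risk = sum(obstacle_prob.get(tuple(p), 0.0) for p in path)
    return (w_safe * min(clearance, 5.0) + w_reward * (len(path) - risk)) \
        / (1.0 + w_dist * length)

# A straight, low-risk path should outrank a longer one passing a risky cell.
safe = path_fitness([(0, 0), (1, 0), (2, 0)], {(1, 2)}, {(1, 1): 0.9})
risky = path_fitness([(0, 0), (1, 1), (2, 0)], {(1, 2)}, {(1, 1): 0.9})
```

    In a GA, this scalar would rank chromosomes (encoded waypoint sequences) for selection; the balance between the three terms is exactly what the weight parameters tune.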

  20. Mobile mapping and eddy covariance flux measurements of NH3 emissions from cattle feedlots with a portable laser-based open-path sensor

    NASA Astrophysics Data System (ADS)

    Tao, L.; Sun, K.; Pan, D.; Golston, L.; Stanton, L. G.; Ham, J. M.; Shonkwiler, K. B.; Nash, C.; Zondlo, M. A.

    2014-12-01

    Ammonia (NH3) is the dominant alkaline species in the atmosphere and an important compound in the global nitrogen cycle. There is a large uncertainty in the NH3 emission inventory from agriculture, the largest source of NH3, which includes livestock farming and fertilizer application. In recent years, a quantum cascade laser (QCL)-based open-path sensor has been developed to provide high-resolution, fast-response, high-sensitivity NH3 measurements. It has a detection limit of 150 pptv at sample rates up to 20 Hz. The sensor has been integrated into a mobile platform, mounted on the roof of a car, to measure multiple trace gases; this mobile sensing method provides high spatial resolution and fast mapping of the measured gases. We have also used the sensor for eddy covariance (EC) flux measurements, which offer accurate fluxes and resolve the diurnal variability of NH3 emissions. During the DISCOVER-AQ and FRAPPÉ field campaigns in 2014, this mobile platform was used to study NH3 emissions from a cattle feedlot near Fort Morgan, Colorado. This feedlot was mapped multiple times on different days to study the variability of its plume characteristics. At the same time, we set up another open-path NH3 sensor alongside LICOR open-path sensors to perform EC flux measurements of NH3, CH4 and CO2 simultaneously in the same cattle feedlot, as shown in Fig. 1. The NH3/CH4 emission flux ratio from the EC measurements shows a strong temperature dependence, with a median value of 0.60 ppmv/ppmv. In comparison, the median ΔNH3/ΔCH4 ratio measured from the mobile platform is 0.53 ppmv/ppmv for the same farm. The combination of mobile mapping and EC flux measurements with the same open-path sensors greatly improves understanding of NH3 emissions both spatially and temporally.

  1. Development of a parallel demodulation system used for extrinsic Fabry-Perot interferometer and fiber Bragg grating sensors.

    PubMed

    Jiang, Junfeng; Liu, Tiegen; Zhang, Yimo; Liu, Lina; Zha, Ying; Zhang, Fan; Wang, Yunxin; Long, Pin

    2006-01-20

    A parallel demodulation system for extrinsic Fabry-Perot interferometer (EFPI) and fiber Bragg grating (FBG) sensors is presented, which is based on a Michelson interferometer and combines low-coherence interference and Fourier-transform spectrum methods. The parallel demodulation theory is modeled with Fourier-transform spectrum technology, and a signal separation method for the EFPI and FBG is proposed. The design of an optical path difference scanning and sampling method that requires no reference light is described. Experiments show that the parallel demodulation system has good spectrum demodulation and low-coherence interference demodulation performance. It can realize simultaneous strain and temperature measurements while keeping the overall system configuration simple.

  2. An acoustic thermometer for air refractive index estimation in long distance interferometric measurements

    NASA Astrophysics Data System (ADS)

    Pisani, Marco; Astrua, Milena; Zucco, Massimo

    2018-02-01

    We present a method to measure the temperature along the path of an optical interferometer based on the propagation of acoustic waves, exploiting the high sensitivity of the speed of sound to air temperature. In particular, it takes advantage of a technique in which the generation of acoustic waves is synchronous with the amplitude modulation of a laser source. A photodetector converts the laser light into an electronic signal used as a reference, while the incoming acoustic waves are focused on a microphone and generate the measuring signal. Under this condition, the phase difference between the two signals depends essentially on the temperature of the air volume interposed between the sources and the receivers. A comparison with traditional temperature sensors highlighted the limits of the latter in the case of fast temperature variations, and the advantage of a measurement integrated along the optical path over a point-sampling measurement. The capability of the acoustic method to compensate interferometric distance measurements for air temperature variations has been demonstrated at the level of 0.1 °C, corresponding to 10^-7 in the refractive index of air. We applied the method indoors for distances up to 27 m, outdoors at 78 m, and finally tested the acoustic thermometer over a distance of 182 m.
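
    The physics that makes sound a path-averaged thermometer is the strong temperature dependence of the speed of sound: a time-of-flight (equivalently, phase) measurement over a known distance returns the mean air temperature along the path. A sketch using the standard dry-air ideal-gas approximation (the formula is textbook physics, not the paper's calibration):

```python
# Speed of sound as a thermometer: invert the dry-air ideal-gas formula
# c(T) = 331.3 * sqrt(1 + T/273.15) m/s (T in deg C) to recover the mean
# temperature along an acoustic path from a time-of-flight measurement.
import math

def speed_of_sound(temp_c: float) -> float:
    """Speed of sound in dry air (m/s), ideal-gas approximation."""
    return 331.3 * math.sqrt(1.0 + temp_c / 273.15)

def path_mean_temperature(distance_m: float, flight_time_s: float) -> float:
    """Mean temperature (deg C) along an acoustic path of known length."""
    c = distance_m / flight_time_s
    return 273.15 * ((c / 331.3) ** 2 - 1.0)

# At 20 deg C, sound covers 27 m (the paper's indoor distance) in about 79 ms;
# inverting that flight time reproduces the 20 deg C input.
t = 27.0 / speed_of_sound(20.0)
print(round(path_mean_temperature(27.0, t), 6))   # → 20.0
```

    The sensitivity is roughly 0.6 m/s per °C near room temperature, which is why sub-0.1 °C resolution is plausible from a phase measurement, and why the result is inherently an average over the whole acoustic path rather than a point sample.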

  3. The role of atomic absorption spectrometry in geochemical exploration

    USGS Publications Warehouse

    Viets, J.G.; O'Leary, R. M.

    1992-01-01

    In this paper we briefly describe the principles of atomic absorption spectrometry (AAS) and the basic hardware components necessary to make measurements of analyte concentrations. Then we discuss a variety of methods that have been developed for the introduction of analyte atoms into the light path of the spectrophotometer. This section deals with sample digestion, elimination of interferences, and optimum production of ground-state atoms, all critical considerations when choosing an AAS method. Other critical considerations are cost, speed, simplicity, precision, and applicability of the method to the wide range of materials sampled in geochemical exploration. We cannot attempt to review all of the AAS methods developed for geological materials but instead will restrict our discussion to some of those appropriate for geochemical exploration. Our background and familiarity are reflected in the methods we discuss, and we have no doubt overlooked many good methods. Our discussion should therefore be considered a starting point in finding the right method for the problem, rather than the end of the search. Finally, we discuss the future of AAS relative to other instrumental techniques and the promising new directions for AAS in geochemical exploration. © 1992.

  4. Paths from socioemotional behavior in middle childhood to personality in middle adulthood.

    PubMed

    Pulkkinen, Lea; Kokko, Katja; Rantanen, Johanna

    2012-09-01

    Continuity in individual differences from socioemotional behavior in middle childhood to personality characteristics in middle adulthood was examined on the assumption that they share certain temperament-related elements. Socioemotional characteristics were measured using teacher ratings at ages 8 (N = 369; 53% males) and 14 (95% of the initial sample). Personality was assessed at age 42 (63% of the initial sample; 50% males) using a shortened version of the NEO Personality Inventory (NEO-PI); the Karolinska Scales of Personality (KSP); and the Adult Temperament Questionnaire (ATQ). Three models were tested using structural equation modeling. The results confirmed paths (a) from behavioral activity to adult Extraversion and Openness (NEO-PI), sociability (KSP), and surgency (ATQ); (b) from well-controlled behavior to adult conformity (KSP) and Conscientiousness (NEO-PI); and (c) from negative emotionality to adult aggression (KSP). The paths were significant only for one gender, and more frequently for males than for females. The significant male paths from behavioral activity to all indicators of adult activity and from well-controlled behavior to adult conformity started at age 8, whereas significant female paths from behavioral activity to adult sociability and from well-controlled behavior to adult Conscientiousness started at age 14. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  5. Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms

    PubMed Central

    Rechner, Steffen; Berger, Annabell

    2016-01-01

    We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound gained by the famous canonical path method is often several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time. PMID:26824442
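
    The spectral bound the experiments favor can be illustrated on a chain small enough to treat in closed form. This sketch is not part of the marathon API; it uses the textbook bound t_mix(eps) <= (1/gap) * ln(1/(eps * pi_min)) for a reversible two-state chain, whose second eigenvalue is available analytically:

```python
# Spectral bound on mixing time for the two-state chain with transition
# matrix [[1-a, a], [b, 1-b]]: eigenvalues are 1 and 1 - a - b, and the
# stationary distribution is (b, a) / (a + b). Illustration only; this is
# the standard bound, not a function of the marathon library.
import math

def spectral_mixing_bound(a: float, b: float, eps: float = 0.25) -> float:
    """Upper bound t_mix(eps) <= (1/gap) * ln(1/(eps * pi_min)),
    where gap = 1 - |second eigenvalue|."""
    lam2 = 1.0 - a - b                   # second eigenvalue, in closed form
    gap = 1.0 - abs(lam2)                # spectral gap
    pi_min = min(a, b) / (a + b)         # smallest stationary probability
    return math.log(1.0 / (eps * pi_min)) / gap

# A well-mixing chain (large gap) gets a small bound; a "sticky" chain with
# rare transitions gets a large one.
print(spectral_mixing_bound(0.4, 0.4) < spectral_mixing_bound(0.01, 0.01))  # → True
```

    For larger chains the same quantities (second eigenvalue, minimum stationary probability) come from the numerically computed state graph, which is exactly the kind of computation marathon is built to support.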

  6. Optimal Path Determination for Flying Vehicle to Search an Object

    NASA Astrophysics Data System (ADS)

    Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.

    2018-01-01

    In this paper, a method to determine the optimal path for a flying vehicle searching for an object is proposed. The background of the paper is the control of an air vehicle searching for an object, and optimal path determination is one of the most popular problems in optimization. The paper describes a control design model for a flying vehicle searching for an object, focusing on the optimal path used in the search. An optimal control model is used to make the vehicle move along an optimal path; if the vehicle moves along an optimal path, then the path to reach the searched object is also optimal. The cost functional is one of the most important elements of optimal control design, and in this paper the cost functional drives the air vehicle to reach the object as quickly as possible. The axis reference of the flying vehicle uses the N-E-D (North-East-Down) coordinate system. The results of this paper are theorems, proved analytically, stating that the cost functional makes the control optimal and makes the vehicle move along an optimal path. The paper also shows that the cost functional used is convex; this convexity guarantees the existence of an optimal control. Some simulations illustrating an optimal path for a flying vehicle searching for an object are also presented. The optimization method used to find the optimal control and optimal vehicle path is the Pontryagin Minimum Principle.

  7. Apparatus for sampling and characterizing aerosols

    DOEpatents

    Dunn, Patrick F.; Herceg, Joseph E.; Klocksieben, Robert H.

    1986-01-01

    Apparatus for sampling and characterizing aerosols having a wide particle size range at relatively low velocities may comprise a chamber having an inlet and an outlet, the chamber including: a plurality of vertically stacked, successive particle collection stages; each collection stage includes a separator plate and a channel guide mounted transverse to the separator plate, defining a labyrinthine flow path across the collection stage. An opening in each separator plate provides a path for the aerosols from one collection stage to the next. Mounted within each collection stage are one or more particle collection frames.

  8. Cassette less SOFC stack and method of assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meinhardt, Kerry D

    2014-11-18

    A cassette-less SOFC assembly and a method for creating such an assembly. The SOFC stack is characterized by an electrically isolated stack current path, which allows welded interconnection between frame portions of the stack. In one embodiment, electrically isolating the current path comprises the step of sealing an interconnect plate to an interconnect plate frame with an insulating seal. This allows the current path portion to be isolated from the structural frame and enables the cell frames to be welded together.

  9. Fokker-Planck Equations of Stochastic Acceleration: A Study of Numerical Methods

    NASA Astrophysics Data System (ADS)

    Park, Brian T.; Petrosian, Vahe

    1996-03-01

    Stochastic wave-particle acceleration may be responsible for producing suprathermal particles in many astrophysical situations. The process can be described as a diffusion process through the Fokker-Planck equation. If the acceleration region is homogeneous and the scattering mean free path is much smaller than both the energy change mean free path and the size of the acceleration region, then the Fokker-Planck equation reduces to a simple form involving only the time and energy variables. In an earlier paper (Park & Petrosian 1995, hereafter Paper I), we studied the analytic properties of the Fokker-Planck equation and found analytic solutions for some simple cases. In this paper, we study the numerical methods which must be used to solve more general forms of the equation. Two classes of numerical methods are finite difference methods and Monte Carlo simulations. We examine six finite difference methods, three fully implicit and three semi-implicit, and a stochastic simulation method which uses the exact correspondence between the Fokker-Planck equation and the Itô stochastic differential equation. As discussed in Paper I, Fokker-Planck equations derived under the above approximations are singular, causing problems with boundary conditions and numerical overflow and underflow. We evaluate each method using three sample equations to test its stability, accuracy, efficiency, and robustness for both time-dependent and steady state solutions. We conclude that the most robust finite difference method is the fully implicit Chang-Cooper method, with minor extensions to account for the escape and injection terms. Other methods suffer from stability and accuracy problems when dealing with some Fokker-Planck equations. The stochastic simulation method, although simple to implement, is susceptible to Poisson noise when insufficient test particles are used and is computationally very expensive compared to the finite difference method.
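    The stochastic simulation method mentioned above can be illustrated with a minimal Euler-Maruyama integration of an Itô SDE whose particle density obeys the corresponding Fokker-Planck equation. The Ornstein-Uhlenbeck drift and diffusion below are illustrative assumptions, not the acceleration kernel studied in the paper:

    ```python
    import random

    # Euler-Maruyama simulation of the Itô SDE dX = -theta*X dt + sigma dW.
    # An ensemble of such test particles approximates the solution of the
    # corresponding Fokker-Planck equation; too few particles gives the
    # Poisson noise noted in the abstract. The Ornstein-Uhlenbeck
    # coefficients are assumptions for illustration.

    def euler_maruyama(theta=1.0, sigma=0.5, x0=2.0, dt=0.01, steps=2000, seed=1):
        rng = random.Random(seed)
        x = x0
        for _ in range(steps):
            dw = rng.gauss(0.0, dt ** 0.5)          # Wiener increment ~ N(0, dt)
            x += -theta * x * dt + sigma * dw
        return x

    # Ensemble statistics at t = steps*dt = 20, long after the initial
    # condition has relaxed toward the stationary density.
    samples = [euler_maruyama(seed=s) for s in range(500)]
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    # Stationary density is Gaussian, mean 0, variance sigma^2/(2*theta) = 0.125.
    ```

    A finite difference scheme such as Chang-Cooper solves the same problem deterministically and far more cheaply, which is why the abstract favors it.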

  10. Mechanical and hydraulic properties of Nankai accretionary prism sediments: Effect of stress path

    NASA Astrophysics Data System (ADS)

    Kitajima, Hiroko; Chester, Frederick M.; Biscontin, Giovanna

    2012-10-01

    We have conducted triaxial deformation experiments along different loading paths on prism sediments from the Nankai Trough. Different load paths (isotropic loading, uniaxial strain loading, triaxial compression at constant confining pressure Pc, undrained Pc reduction, drained Pc reduction, and triaxial unloading at constant Pc) were used to understand the evolution of mechanical and hydraulic properties under the complicated stress states and loading histories in accretionary subduction zones. Five deformation experiments were conducted on three sediment core samples from the Nankai prism, specifically from older accreted sediments at the forearc basin, underthrust slope sediments beneath the megasplay fault, and overthrust Upper Shikoku Basin sediments along the frontal thrust. Yield envelopes for each sample were constructed based on the stress paths of Pc reduction using the modified Cam-clay model, and in situ stress states of the prism were constrained using the results from the other load paths and accounting for horizontal stress. Results suggest that the sediments in the vicinity of the megasplay fault and frontal thrust are highly overconsolidated, and thus likely to deform in a brittle rather than ductile manner. The porosity of sediments decreases as the yield envelope expands, while the reduction in permeability mainly depends on the effective mean stress before yield, and the differential stress after yield. An improved understanding of sediment yield strength and hydromechanical properties along different load paths is necessary to accurately treat the coupling of deformation and fluid flow in accretionary subduction zones.
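    The modified Cam-clay yield envelope used to interpret these stress paths reduces to a simple relation between mean effective stress p' and deviatoric stress q. The critical-state slope M and preconsolidation pressure below are hypothetical values for illustration, not fits to the Nankai data:

    ```python
    import math

    # Modified Cam-clay yield envelope: q^2 = M^2 * p' * (p0 - p'),
    # where p' is mean effective stress, p0 the preconsolidation pressure
    # and M the critical-state line slope. All values are hypothetical.

    def yield_q(p_eff, p0, M):
        """Deviatoric stress on the yield envelope at mean effective stress p_eff."""
        return M * math.sqrt(max(p_eff * (p0 - p_eff), 0.0))

    def is_highly_overconsolidated(p_eff, p0):
        """OCR > 2 places the state on the dry (dilative, brittle) side."""
        return p0 / p_eff > 2.0

    M, p0 = 1.2, 8.0   # slope and preconsolidation pressure (MPa), hypothetical
    # The envelope peaks at p' = p0/2 with q = M*p0/2; it closes at p' = 0 and p' = p0.
    assert abs(yield_q(p0 / 2, p0, M) - M * p0 / 2) < 1e-12
    ```

    As the envelope expands (growing p0) the material hardens and porosity drops, matching the trend described in the abstract.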

  11. Aerosol mass spectrometry systems and methods

    DOEpatents

    Fergenson, David P.; Gard, Eric E.

    2013-08-20

    A system according to one embodiment includes a particle accelerator that directs a succession of polydisperse aerosol particles along a predetermined particle path; multiple tracking lasers for generating beams of light across the particle path; an optical detector positioned adjacent the particle path for detecting impingement of the beams of light on individual particles; a desorption laser for generating a beam of desorbing light across the particle path about coaxial with a beam of light produced by one of the tracking lasers; and a controller, responsive to detection of a signal produced by the optical detector, that controls the desorption laser to generate the beam of desorbing light. Additional systems and methods are also disclosed.

  12. Robotics virtual rail system and method

    DOEpatents

    Bruemmer, David J [Idaho Falls, ID; Few, Douglas A [Idaho Falls, ID; Walton, Miles C [Idaho Falls, ID

    2011-07-05

    A virtual track or rail system and method is described for execution by a robot. A user, through a user interface, generates a desired path comprised of at least one segment representative of the virtual track for the robot. Start and end points are assigned to the desired path and velocities are also associated with each of the at least one segment of the desired path. A waypoint file is generated including positions along the virtual track representing the desired path with the positions beginning from the start point to the end point including the velocities of each of the at least one segment. The waypoint file is sent to the robot for traversing along the virtual track.
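    The waypoint-generation step described above can be sketched by sampling straight-line segments at a fixed spacing, tagging each position with its segment's velocity. The record layout (x, y, v) and spacing are assumptions for illustration, not the patented file format:

    ```python
    # Generate waypoints along a virtual rail made of straight segments,
    # each with its own velocity, from a start point to an end point.
    # The (x, y, velocity) layout and sample spacing are illustrative
    # assumptions, not the format described in the patent.

    def waypoints(segments, spacing=1.0):
        """segments: list of ((x0, y0), (x1, y1), velocity) tuples."""
        points = []
        for (x0, y0), (x1, y1), v in segments:
            length = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
            n = max(1, int(length / spacing))
            for i in range(n):
                t = i / n
                points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0), v))
        # Close the path with the final end point at the last segment's velocity.
        (_, _), (xe, ye), ve = segments[-1]
        points.append((xe, ye, ve))
        return points

    # Two segments: fast straightaway, then a slower turn onto the y-axis.
    path = waypoints([((0, 0), (4, 0), 0.5), ((4, 0), (4, 3), 0.25)])
    assert path[0] == (0.0, 0.0, 0.5) and path[-1] == (4, 3, 0.25)
    ```

    Writing these tuples to a file, one per line, yields the waypoint file the robot traverses from start point to end point.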

  13. Field determination of biomass burning emission ratios and factors via open-path FTIR spectroscopy and fire radiative power assessment: headfire, backfire and residual smouldering combustion in African savannahs

    NASA Astrophysics Data System (ADS)

    Wooster, M. J.; Freeborn, P. H.; Archibald, S.; Oppenheimer, C.; Roberts, G. J.; Smith, T. E. L.; Govender, N.; Burton, M.; Palumbo, I.

    2011-11-01

    Biomass burning emissions factors are vital to quantifying trace gas release from vegetation fires. Here we evaluate emissions factors for a series of savannah fires in Kruger National Park (KNP), South Africa using ground-based open path Fourier transform infrared (FTIR) spectroscopy and an IR source separated by 150-250 m distance. Molecular abundances along the extended open path are retrieved using a spectral forward model coupled to a non-linear least squares fitting approach. We demonstrate derivation of trace gas column amounts for horizontal paths transecting the width of the advected plume, and find for example that CO mixing ratio changes of ~0.01 μmol mol-1 [10 ppbv] can be detected across the relatively long optical paths used here. Though FTIR spectroscopy can detect dozens of different chemical species present in vegetation fire smoke, we focus our analysis on five key combustion products released preferentially during the pyrolysis (CH2O), flaming (CO2) and smoldering (CO, CH4, NH3) processes. We demonstrate that well constrained emissions ratios for these gases to both CO2 and CO can be derived for the backfire, headfire and residual smouldering combustion (RSC) stages of these savannah fires, from which stage-specific emission factors can then be calculated. Headfires and backfires often show similar emission ratios and emission factors, but those of the RSC stage can differ substantially. The timing of each fire stage was identified via airborne optical and thermal IR imagery and ground-observer reports, with the airborne IR imagery also used to derive estimates of fire radiative energy (FRE), allowing the relative amount of fuel burned in each stage to be calculated and "fire averaged" emission ratios and emission factors to be determined. These "fire averaged" metrics are dominated by the headfire contribution, since the FRE data indicate that the vast majority of the fuel is burned in this stage. 
    Our fire averaged emission ratios and factors for CO2 and CH4 agree well with those from prior studies conducted in the same area using e.g. airborne plume sampling. We also concur with past suggestions that emission factors for formaldehyde in this environment appear substantially underestimated in widely used databases, but see no evidence to support suggestions by Sinha et al. (2003) of a major overestimation in the emission factor of ammonia in works such as Andreae and Merlet (2001) and Akagi et al. (2011). We also measure somewhat higher CO and NH3 emission ratios and factors than are usually reported for this environment, which is interpreted to result from the OP-FTIR ground-based technique sampling a greater proportion of smoke from smouldering processes than is generally the case with methods such as airborne sampling. Finally, our results suggest that the contribution of burning animal (elephant) dung can be a significant factor in the emissions characteristics of certain KNP fires, and that the ability of remotely sensed fire temperatures to provide information useful in tailoring modified combustion efficiency (MCE) and emissions factor estimates may be rather limited, at least until the generally available precision of such temperature estimates can be substantially improved. One limitation of the OP-FTIR method is that it samples only near-ground-level smoke, which may limit its application at more intense fires where the majority of smoke is released into a vertically rising convection column. Nevertheless, even in such cases the method potentially enables a much better assessment of the emissions contribution of the RSC stage than is typically achieved currently.
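    The emission-ratio and modified combustion efficiency (MCE) calculations that underpin these stage-specific emission factors reduce to simple excess-mixing-ratio arithmetic. The mixing-ratio values below are made up for illustration, not measurements from the study:

    ```python
    # Emission ratio and modified combustion efficiency (MCE) from excess
    # mixing ratios (plume minus background), the quantities from which
    # stage-specific emission factors are derived. All numbers below are
    # illustrative assumptions, not values measured in the study.

    def excess(plume, background):
        """Excess mixing ratio of a species in the smoke plume."""
        return plume - background

    def emission_ratio(d_x, d_ref):
        """ER of species X relative to a reference gas (CO2 or CO)."""
        return d_x / d_ref

    def mce(d_co2, d_co):
        """MCE = dCO2 / (dCO2 + dCO); near 1 for flaming, lower for smouldering."""
        return d_co2 / (d_co2 + d_co)

    d_co2 = excess(420.0, 400.0)   # umol/mol, hypothetical
    d_co = excess(1.5, 0.1)        # umol/mol, hypothetical

    assert abs(mce(d_co2, d_co) - 20.0 / 21.4) < 1e-9
    assert abs(emission_ratio(d_co, d_co2) - 0.07) < 1e-9
    ```

    Weighting such stage-wise ratios by the fraction of fuel burned in each stage (here estimated from fire radiative energy) gives the "fire averaged" metrics the abstract refers to.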

  14. Knowledge Monitoring, Goal Orientations, Self-Efficacy, and Academic Performance: A Path Analysis

    ERIC Educational Resources Information Center

    Al-Harthy, Ibrahim S.; Was, Christopher A.

    2013-01-01

    The purpose of this study was to examine the relationship between knowledge monitoring and motivation as defined by self-efficacy and goal orientations. A path model was proposed to hypothesize the causal relations among predictors of the students' total score in the Educational Psychology course. The sample consisted of undergraduate students…

  15. Terrain Following Control Based on an Optimized Spline Model of Aircraft Motion

    DTIC Science & Technology

    1975-11-01

    constraints, a smooth path through the final data points may not satisfy the normal acceleration constraints between sample points. This latter assertion is...for the reference path in the table. Some compromise between the two effects is required. The accelerations given in Table 7-2 are those measured at the

  16. Robust new NIRS coupled with multivariate methods for the detection and quantification of tallow adulteration in clarified butter samples.

    PubMed

    Mabood, Fazal; Abbas, Ghulam; Jabeen, Farah; Naureen, Zakira; Al-Harrasi, Ahmed; Hamaed, Ahmad M; Hussain, Javid; Al-Nabhani, Mahmood; Al Shukaili, Maryam S; Khan, Alamgir; Manzoor, Suryyia

    2018-03-01

    Cows' butterfat may be adulterated with animal fat materials like tallow, which causes increased serum cholesterol and triglyceride levels upon consumption. There is no reliable technique to detect and quantify tallow adulteration in butter samples in a feasible way. In this study, a highly sensitive near-infrared (NIR) spectroscopy approach combined with chemometric methods was developed to detect as well as quantify the level of tallow adulterant in clarified butter samples. For this investigation the pure clarified butter samples were intentionally adulterated with tallow at the following percentage levels: 1%, 3%, 5%, 7%, 9%, 11%, 13%, 15%, 17% and 20% (wt/wt). Altogether 99 clarified butter samples were used, including nine pure samples (un-adulterated clarified butter) and 90 clarified butter samples adulterated with tallow. Each sample was analysed by NIR spectroscopy in the reflection mode in the range 10,000-4000 cm⁻¹, at 2 cm⁻¹ resolution, using the transflectance sample accessory, which provided a total path length of 0.5 mm. Chemometric models including principal components analysis (PCA), partial least-squares discriminant analysis (PLSDA), and partial least-squares regression (PLSR) were applied for statistical treatment of the obtained NIR spectral data. The PLSDA model was employed to differentiate pure butter samples from those adulterated with tallow. The model was then externally cross-validated using a test set which included 30% of the total butter samples. The excellent performance of the model was proved by the low RMSEP value of 1.537% and the high correlation factor of 0.95. This newly developed method is robust, non-destructive, highly sensitive, and economical, with very minor sample preparation and good ability to quantify less than 1.5% of tallow adulteration in clarified butter samples.

  17. Dissolution Dynamic Nuclear Polarization capability study with fluid path

    NASA Astrophysics Data System (ADS)

    Malinowski, Ronja M.; Lipsø, Kasper W.; Lerche, Mathilde H.; Ardenkjær-Larsen, Jan H.

    2016-11-01

    Signal enhancement by hyperpolarization is a way of overcoming the low sensitivity in magnetic resonance; MRI in particular. One of the most well-known methods, dissolution Dynamic Nuclear Polarization, has been used clinically in cancer patients. One way of ensuring a low bioburden of the hyperpolarized product is by use of a closed fluid path that constitutes a barrier to contamination. The fluid path can be filled with the pharmaceuticals, i.e. imaging agent and solvents, in a clean room, and then stored or immediately used at the polarizer. In this study, we present a method of filling the fluid path that allows it to be reused. The filling method has been investigated in terms of reproducibility at two extrema, high dose for patient use and low dose for rodent studies, using [1-13C]pyruvate as example. We demonstrate that the filling method allows high reproducibility of six quality control parameters with standard deviations 3-10 times smaller than the acceptance criteria intervals in clinical studies.

  18. Designing the Alluvial Riverbeds in Curved Paths

    NASA Astrophysics Data System (ADS)

    Macura, Viliam; Škrinár, Andrej; Štefunková, Zuzana; Muchová, Zlatica; Majorošová, Martina

    2017-10-01

    The paper presents the method of determining the shape of the riverbed in curves of the watercourse, which is based on the method of Ikeda (1975) developed for a slightly curved path in sandy riverbed. Regulated rivers have essentially slightly and smoothly curved paths; therefore, this methodology provides the appropriate basis for river restoration. Based on the research in the experimental reach of the Holeška Brook and several alluvial mountain streams the methodology was adjusted. The method also takes into account other important characteristics of bottom material - the shape and orientation of the particles, settling velocity and drag coefficients. Thus, the method is mainly meant for the natural sand-gravel material, which is heterogeneous and the particle shape of the bottom material is very different from spherical. The calculation of the river channel in the curved path provides the basis for the design of optimal habitat, but also for the design of foundations of armouring of the bankside of the channel. The input data is adapted to the conditions of design practice.

  19. Graph drawing using tabu search coupled with path relinking.

    PubMed

    Dib, Fadi K; Rodgers, Peter

    2018-01-01

    Graph drawing, or the automatic layout of graphs, is a challenging problem. There are several search based methods for graph drawing which are based on optimizing an objective function which is formed from a weighted sum of multiple criteria. In this paper, we propose a new neighbourhood search method which uses a tabu search coupled with path relinking to optimize such objective functions for general graph layouts with undirected straight lines. To our knowledge, before our work, neither of these methods have been previously used in general multi-criteria graph drawing. Tabu search uses a memory list to speed up searching by avoiding previously tested solutions, while the path relinking method generates new solutions by exploring paths that connect high quality solutions. We use path relinking periodically within the tabu search procedure to speed up the identification of good solutions. We have evaluated our new method against the commonly used neighbourhood search optimization techniques: hill climbing and simulated annealing. Our evaluation examines the quality of the graph layout (objective function's value) and the speed of layout in terms of the number of evaluated solutions required to draw a graph. We also examine the relative scalability of each method. Our experimental results were applied to both random graphs and a real-world dataset. We show that our method outperforms both hill climbing and simulated annealing by producing a better layout in a lower number of evaluated solutions. In addition, we demonstrate that our method has greater scalability as it can layout larger graphs than the state-of-the-art neighbourhood search methods. Finally, we show that similar results can be produced in a real world setting by testing our method against a standard public graph dataset.
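    The tabu search mechanism at the heart of the method above (keep a short memory of recent moves and bar them from re-selection) can be shown on a toy layout problem. The one-dimensional objective, move set and tabu tenure below are illustrative assumptions; the paper optimizes a weighted multi-criteria layout in the plane and adds path relinking:

    ```python
    # Minimal tabu-search neighbourhood optimizer for a toy graph-layout
    # objective: sum of squared edge lengths of nodes placed on a line.
    # Objective, move set and tabu tenure are illustrative assumptions.

    def layout_cost(pos, edges):
        return sum((pos[u] - pos[v]) ** 2 for u, v in edges)

    def tabu_search(pos, edges, iters=200, tenure=3):
        tabu = []                      # recently moved nodes, barred from moving
        best = dict(pos)
        for _ in range(iters):
            # Best non-tabu single-node move of +/-1 along the line.
            candidates = [(n, d) for n in pos if n not in tabu for d in (-1, 1)]
            n, d = min(candidates,
                       key=lambda m: layout_cost({**pos, m[0]: pos[m[0]] + m[1]},
                                                 edges))
            pos[n] += d                # accept even non-improving moves
            tabu.append(n)
            if len(tabu) > tenure:
                tabu.pop(0)            # expire the oldest tabu entry
            if layout_cost(pos, edges) < layout_cost(best, edges):
                best = dict(pos)
        return best

    edges = [("a", "b"), ("b", "c"), ("c", "d")]
    start = {"a": 0, "b": 5, "c": 1, "d": 7}
    final = tabu_search(dict(start), edges)
    assert layout_cost(final, edges) < layout_cost(start, edges)
    ```

    Accepting non-improving moves while forbidding recently touched nodes is what lets tabu search escape the local optima where plain hill climbing stalls; path relinking then interpolates between elite layouts found this way.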

  20. Graph drawing using tabu search coupled with path relinking

    PubMed Central

    Rodgers, Peter

    2018-01-01

    Graph drawing, or the automatic layout of graphs, is a challenging problem. There are several search based methods for graph drawing which are based on optimizing an objective function which is formed from a weighted sum of multiple criteria. In this paper, we propose a new neighbourhood search method which uses a tabu search coupled with path relinking to optimize such objective functions for general graph layouts with undirected straight lines. To our knowledge, before our work, neither of these methods have been previously used in general multi-criteria graph drawing. Tabu search uses a memory list to speed up searching by avoiding previously tested solutions, while the path relinking method generates new solutions by exploring paths that connect high quality solutions. We use path relinking periodically within the tabu search procedure to speed up the identification of good solutions. We have evaluated our new method against the commonly used neighbourhood search optimization techniques: hill climbing and simulated annealing. Our evaluation examines the quality of the graph layout (objective function’s value) and the speed of layout in terms of the number of evaluated solutions required to draw a graph. We also examine the relative scalability of each method. Our experimental results were applied to both random graphs and a real-world dataset. We show that our method outperforms both hill climbing and simulated annealing by producing a better layout in a lower number of evaluated solutions. In addition, we demonstrate that our method has greater scalability as it can layout larger graphs than the state-of-the-art neighbourhood search methods. Finally, we show that similar results can be produced in a real world setting by testing our method against a standard public graph dataset. PMID:29746576

  1. True eddy accumulation and eddy covariance methods and instruments intercomparison for fluxes of CO2, CH4 and H2O above the Hainich Forest

    NASA Astrophysics Data System (ADS)

    Siebicke, Lukas

    2017-04-01

    The eddy covariance (EC) method is state-of-the-art in directly measuring vegetation-atmosphere exchange of CO2 and H2O at ecosystem scale. However, the EC method is currently limited to a small number of atmospheric tracers by the lack of suitable fast-response analyzers or poor signal-to-noise ratios. High resource and power demands may further restrict the number of spatial sampling points. True eddy accumulation (TEA) is an alternative method for direct and continuous flux observations. Key advantages are the applicability to a wider range of air constituents such as greenhouse gases, isotopes, volatile organic compounds and aerosols using slow-response analyzers. In contrast to relaxed eddy accumulation (REA), true eddy accumulation (Desjardins, 1977) has the advantage of being a direct method which does not require proxies. True eddy accumulation has the potential to overcome the above-mentioned limitations of eddy covariance but has rarely been demonstrated successfully in practice. This study presents flux measurements using an innovative approach to true eddy accumulation by directly, continuously and automatically measuring trace gas fluxes using a flow-through system. We merge high-frequency flux contributions from TEA with low-frequency covariances from the same sensors. We show flux measurements of CO2, CH4 and H2O by TEA and EC above an old-growth forest at the ICOS flux tower site "Hainich" (DE-Hai). We compare and evaluate the performance of the two direct turbulent flux measurement methods eddy covariance and true eddy accumulation using side-by-side trace gas flux observations. We further compare the performance of seven instrument complexes, i.e. combinations of sonic anemometers and trace gas analyzers. We compare gas analyzers of open-path, enclosed-path and closed-path design. 
    We further differentiate data from two gas analysis technologies: infrared gas analysis (IRGA) and laser spectrometry (open-path and CRDS closed-path laser spectrometers). We present results of CO2 and H2O fluxes from the following six instruments, i.e. combinations of sonic anemometers/gas analyzers (and methods): METEK-uSonic3/Picarro-G2301 (TEA), METEK-uSonic3/LI-7500 (EC), Gill-R3/LI-6262 (EC), Gill-R3/LI-7200 (EC), Gill-HS/LI-7200 (EC), Gill-R3/LGR-FGGA (EC). Further, we present results of the much more difficult to measure CH4 fluxes from the following three instruments, i.e. combinations of sonic anemometers/gas analyzers (and methods): METEK-uSonic3/Picarro-G2301 (TEA), Gill-R3/LI-7700 (EC), Gill-R3/LGR-FGGA (EC). We observed that CO2, CH4 and H2O fluxes from the side-by-side measurements by true eddy accumulation and eddy covariance methods correlated well. Secondly, the difference between the TEA and EC methods using the same sonic anemometer but different gas analyzer was often smaller than the mismatch of the various side-by-side eddy covariance measurements using different sonic anemometers and gas analyzers. Signal-to-noise ratios of CH4 fluxes from the true eddy accumulation system were superior to both eddy covariance sensors (open-path LI-7700 and closed-path CRDS LGR-FGGA sensors). We conclude that our novel implementation of the true eddy accumulation method demonstrated high signal-to-noise ratios, applicability to slow-response gas analyzers, small power consumption and direct proxy-free ecosystem-scale trace gas flux measurements of CO2, CH4 and H2O. The current results suggest that true eddy accumulation would be suitable and should be applied as the method-of-choice for direct flux measurements of a large number of atmospheric constituents beyond CO2 and H2O, including isotopes, aerosols, volatile organic compounds and other trace gases for which eddy covariance might not be a viable alternative. 
We will further develop true eddy accumulation as a novel approach using multiplexed systems for spatially distributed flux measurements.
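    The eddy covariance calculation against which TEA is benchmarked is simply the covariance of vertical-wind and scalar-concentration fluctuations. The short synthetic series below is an illustrative assumption, not tower data:

    ```python
    # Eddy covariance flux as the mean product of fluctuations:
    # F = mean(w' * c'), with w' = w - mean(w) and c' = c - mean(c).
    # The short synthetic series below are illustrative assumptions.

    def fluctuations(x):
        m = sum(x) / len(x)
        return [v - m for v in x]

    def ec_flux(w, c):
        """Covariance of vertical wind speed w and scalar concentration c."""
        wp, cp = fluctuations(w), fluctuations(c)
        return sum(a * b for a, b in zip(wp, cp)) / len(wp)

    w = [0.2, -0.1, 0.4, -0.3, 0.1, -0.3]           # m/s, vertical wind
    c = [402.0, 399.0, 404.0, 398.0, 401.0, 398.0]  # umol/mol, CO2

    flux = ec_flux(w, c)
    assert flux > 0   # updrafts carry higher concentrations: upward flux
    ```

    True eddy accumulation obtains the same covariance differently: air is accumulated into updraft and downdraft reservoirs at rates proportional to |w|, so a slow-response analyzer measuring the two reservoirs suffices, which is why TEA extends direct flux measurement to tracers EC cannot handle.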

  2. Traffic engineering and regenerator placement in GMPLS networks with restoration

    NASA Astrophysics Data System (ADS)

    Yetginer, Emre; Karasan, Ezhan

    2002-07-01

    In this paper we study regenerator placement and traffic engineering of restorable paths in Generalized Multiprotocol Label Switching (GMPLS) networks. Regenerators are necessary in optical networks due to transmission impairments. We study a network architecture where there are regenerators at selected nodes and we propose two heuristic algorithms for the regenerator placement problem. The performance of these algorithms in terms of the required number of regenerators and computational complexity is evaluated. In this network architecture with sparse regeneration, offline computation of working and restoration paths is studied with bandwidth reservation and path rerouting as the restoration scheme. We study two approaches for selecting working and restoration paths from a set of candidate paths and formulate each method as an Integer Linear Programming (ILP) problem. A traffic uncertainty model is developed in order to compare these methods based on their robustness with respect to changing traffic patterns. Traffic engineering methods are compared based on the number of additional demands due to traffic uncertainty that can be carried. Regenerator placement algorithms are also evaluated from a traffic engineering point of view.

  3. Generalized causal mediation and path analysis: Extensions and practical considerations.

    PubMed

    Albert, Jeffrey M; Cho, Jang Ik; Liu, Yiying; Nelson, Suchitra

    2018-01-01

    Causal mediation analysis seeks to decompose the effect of a treatment or exposure among multiple possible paths and provide causally interpretable path-specific effect estimates. Recent advances have extended causal mediation analysis to situations with a sequence of mediators or multiple contemporaneous mediators. However, available methods still have limitations, and computational and other challenges remain. The present paper provides an extended causal mediation and path analysis methodology. The new method, implemented in the new R package, gmediation (described in a companion paper), accommodates both a sequence (two stages) of mediators and multiple mediators at each stage, and allows for multiple types of outcomes following generalized linear models. The methodology can also handle unsaturated models and clustered data. Addressing other practical issues, we provide new guidelines for the choice of a decomposition, and for the choice of a reference group multiplier for the reduction of Monte Carlo error in mediation formula computations. The new method is applied to data from a cohort study to illuminate the contribution of alternative biological and behavioral paths in the effect of socioeconomic status on dental caries in adolescence.

  4. Field Assessment and Groundwater Modeling of Pesticide Distribution in the Faga`alu Watershed in Tutuila, American Samoa

    NASA Astrophysics Data System (ADS)

    Welch, E.; Dulai, H.; El-Kadi, A. I.; Shuler, C. K.

    2017-12-01

    To examine contaminant transport paths, groundwater and surface water interactions were investigated as a vector of pesticide migration on the island Tutuila in American Samoa. During a field campaign in summer 2016, water from wells, springs, and streams was collected across the island to analyze for selected pesticides. In addition, a detailed watershed-study, involving sampling along the mountain to ocean gradient was conducted in Faga`alu, a U.S. Coral Reef Task Force priority watershed that drains into the Pago Pago Harbor. Samples were screened at the University of Hawai`i for multiple agricultural chemicals using the ELISA method. The pesticides analyzed include glyphosate, azoxystrobin, imidacloprid and DDT/DDE. Field data was integrated into a MODFLOW-based groundwater model of the Faga`alu watershed to reconstruct flow paths, solute concentrations, and dispersion of the analytes. In combination with land-use maps, these tools were used to identify potential pesticide sources and their contaminant contributions. Across the island, pesticide concentrations were well below EPA regulated limits and azoxystrobin was absent. Glyphosate had detectable amounts in 56% of collected groundwater and 62% of collected stream samples. Respectively, 72% and 36% had imidacloprid detected and 98% and 97% had DDT/DDE detected. The highest observed concentration of glyphosate was 0.3 ppb, of imidacloprid was 0.17 ppb, and of DDT was 3.7 ppb. The persistence and ubiquity of DDT/DDE in surface and groundwater since its last island-wide application decades ago is notable. Groundwater flow paths modeled by MODFLOW imply that glyphosate sources match documented agricultural land-use areas. Groundwater-derived pesticide fluxes to the reef in Faga`alu are 977 mg/d of glyphosate and 1642 mg/d of DDT/DDE. 
Our study shows that pesticides are transported not only via surface runoff, but also via groundwater through the stream's base flow and are exiting the aquifer via submarine groundwater discharge (SGD) in the coastal region as well.

  5. Intercomparison of Open-Path Trace Gas Measurements with Two Dual Frequency Comb Spectrometers

    PubMed Central

    Waxman, Eleanor M.; Cossel, Kevin C.; Truong, Gar-Wing; Giorgetta, Fabrizio R.; Swann, William C.; Coburn, Sean; Wright, Robert J.; Rieker, Gregory B.; Coddington, Ian; Newbury, Nathan R.

    2017-01-01

    We present the first quantitative intercomparison between two open-path dual comb spectroscopy (DCS) instruments which were operated across adjacent 2-km open-air paths over a two-week period. We used DCS to measure the atmospheric absorption spectrum in the near infrared from 6021 to 6388 cm⁻¹ (1565 to 1661 nm), corresponding to a 367 cm⁻¹ bandwidth, at 0.0067 cm⁻¹ sample spacing. The measured absorption spectra agree with each other to within 5×10⁻⁴ without any external calibration of either instrument. The absorption spectra are fit to retrieve concentrations for carbon dioxide (CO2), methane (CH4), water (H2O), and deuterated water (HDO). The retrieved dry mole fractions agree to 0.14% (0.57 ppm) for CO2, 0.35% (7 ppb) for CH4, and 0.40% (36 ppm) for H2O over the two-week measurement campaign, which included 23 °C outdoor temperature variations and periods of strong atmospheric turbulence. This agreement is at least an order of magnitude better than conventional active-source open-path instrument intercomparisons and is particularly relevant to future regional flux measurements as it allows accurate comparisons of open-path DCS data across locations and time. We additionally compare the open-path DCS retrievals to a WMO-calibrated cavity ringdown point sensor located along the path with good agreement. Short-term and long-term differences between the two systems are attributed, respectively, to spatial sampling discrepancies and to inaccuracies in the current spectral database used to fit the DCS data. Finally, the two-week measurement campaign yields diurnal cycles of CO2 and CH4 that are consistent with the presence of local sources of CO2 and absence of local sources of CH4. PMID:29276547

  6. Evaluation of an improved technique for lumen path definition and lumen segmentation of atherosclerotic vessels in CT angiography.

    PubMed

    van Velsen, Evert F S; Niessen, Wiro J; de Weert, Thomas T; de Monyé, Cécile; van der Lugt, Aad; Meijering, Erik; Stokking, Rik

    2007-07-01

    Vessel image analysis is crucial when considering therapeutical options for (cardio-) vascular diseases. Our method, VAMPIRE (Vascular Analysis using Multiscale Paths Inferred from Ridges and Edges), involves two parts: a user defines a start- and endpoint upon which a lumen path is automatically defined, and which is used for initialization; the automatic segmentation of the vessel lumen on computed tomographic angiography (CTA) images. Both parts are based on the detection of vessel-like structures by analyzing intensity, edge, and ridge information. A multi-observer evaluation study was performed to compare VAMPIRE with a conventional method on the CTA data of 15 patients with carotid artery stenosis. In addition to the start- and endpoint, the two radiologists required on average 2.5 (SD: 1.9) additional points to define a lumen path when using the conventional method, and 0.1 (SD: 0.3) when using VAMPIRE. The segmentation results were quantitatively evaluated using Similarity Indices, which were slightly lower between VAMPIRE and the two radiologists (respectively 0.90 and 0.88) compared with the Similarity Index between the radiologists (0.92). The evaluation shows that the improved definition of a lumen path requires minimal user interaction, and that using this path as initialization leads to good automatic lumen segmentation results.
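    The Similarity Index used in the evaluation above is the Dice coefficient between two segmentations, SI = 2|A ∩ B| / (|A| + |B|). The tiny binary masks below are illustrative assumptions, not CTA lumen segmentations:

    ```python
    # Similarity Index (Dice coefficient) between two binary segmentations:
    # SI = 2*|A intersect B| / (|A| + |B|). SI = 1 means perfect overlap.
    # The flattened masks below are illustrative assumptions.

    def similarity_index(a, b):
        inter = sum(1 for x, y in zip(a, b) if x and y)
        return 2.0 * inter / (sum(a) + sum(b))

    seg_auto = [1, 1, 1, 0, 0, 1, 0, 1]   # e.g. an automatic segmentation
    seg_ref  = [1, 1, 0, 0, 0, 1, 1, 1]   # e.g. a radiologist reference

    si = similarity_index(seg_auto, seg_ref)
    assert abs(si - 0.8) < 1e-12   # 2*4 overlapping voxels / (5 + 5)
    ```

    Values such as the reported 0.90 (method vs. radiologist) against 0.92 (radiologist vs. radiologist) indicate the automatic segmentation is nearly as consistent with each observer as the observers are with each other.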

  7. Spatially Correlated Sparse MIMO Channel Path Delay Estimation in Scattering Environments Based on Signal Subspace Tracking

    PubMed Central

    Chargé, Pascal; Bazzi, Oussama; Ding, Yuehua

    2018-01-01

    A parametric scheme for spatially correlated sparse multiple-input multiple-output (MIMO) channel path delay estimation in scattering environments is presented in this paper. In MIMO outdoor communication scenarios, channel impulse responses (CIRs) of different transmit–receive antenna pairs are often supposed to be sparse due to a few significant scatterers, and share a common sparse pattern, such that path delays are assumed to be equal for every transmit–receive antenna pair. In some existing works, an exact common support condition is exploited, where the path delays are considered equal for every transmit–receive antenna pair, meanwhile ignoring the influence of scattering. A more realistic channel model is proposed in this paper, where due to scatterers in the environment, the received signals are modeled as clusters of multi-rays around a nominal or mean time delay at different antenna elements, resulting in a non-strictly exact common support phenomenon. A method for estimating the channel mean path delays is then derived based on the subspace approach, and the tracking of the effective dimension of the signal subspace that changes due to the wireless environment. The proposed method shows an improved channel mean path delays estimation performance in comparison with the conventional estimation methods. PMID:29734797

  8. Spatially Correlated Sparse MIMO Channel Path Delay Estimation in Scattering Environments Based on Signal Subspace Tracking.

    PubMed

    Mohydeen, Ali; Chargé, Pascal; Wang, Yide; Bazzi, Oussama; Ding, Yuehua

    2018-05-06

    A parametric scheme for spatially correlated sparse multiple-input multiple-output (MIMO) channel path delay estimation in scattering environments is presented in this paper. In MIMO outdoor communication scenarios, channel impulse responses (CIRs) of different transmit–receive antenna pairs are often supposed to be sparse due to a few significant scatterers, and share a common sparse pattern, such that path delays are assumed to be equal for every transmit–receive antenna pair. In some existing works, an exact common support condition is exploited, where the path delays are considered equal for every transmit–receive antenna pair, meanwhile ignoring the influence of scattering. A more realistic channel model is proposed in this paper, where due to scatterers in the environment, the received signals are modeled as clusters of multi-rays around a nominal or mean time delay at different antenna elements, resulting in a non-strictly exact common support phenomenon. A method for estimating the channel mean path delays is then derived based on the subspace approach, and the tracking of the effective dimension of the signal subspace that changes due to the wireless environment. The proposed method shows an improved channel mean path delays estimation performance in comparison with the conventional estimation methods.

  9. Method and system for modulation of gain suppression in high average power laser systems

    DOEpatents

    Bayramian, Andrew James [Manteca, CA

    2012-07-31

    A high average power laser system with modulated gain suppression includes an input aperture associated with a first laser beam extraction path and an output aperture associated with the first laser beam extraction path. The system also includes a pinhole creation laser having an optical output directed along a pinhole creation path and an absorbing material positioned along both the first laser beam extraction path and the pinhole creation path. The system further includes a mechanism operable to translate the absorbing material in a direction crossing the first laser beam extraction path and a controller operable to modulate the second laser beam.

  10. Phase computations and phase models for discrete molecular oscillators.

    PubMed

    Suvak, Onder; Demir, Alper

    2012-06-11

    Biochemical oscillators perform crucial functions in cells, e.g., they set up circadian clocks. The dynamical behavior of oscillators is best described and analyzed in terms of the scalar quantity, phase. A rigorous and useful definition for phase is based on the so-called isochrons of oscillators. Phase computation techniques for continuous oscillators that are based on isochrons have been used for characterizing the behavior of various types of oscillators under the influence of perturbations such as noise. In this article, we extend the applicability of these phase computation methods to biochemical oscillators as discrete molecular systems, using information obtained from a continuous-state approximation of such oscillators. In particular, we describe techniques for computing the instantaneous phase of discrete molecular oscillators along sample paths generated by the stochastic simulation algorithm. We comment on the accuracies and derive certain measures for assessing the feasibilities of the proposed phase computation methods. Phase computation experiments on the sample paths of well-known biological oscillators validate our analyses. The impact of noise that arises from the discrete and random nature of the mechanisms that make up molecular oscillators can be characterized based on the phase computation techniques proposed in this article. The concept of isochrons is the natural choice upon which the phase notion of oscillators can be founded. The isochron-theoretic phase computation methods that we propose can be applied to discrete molecular oscillators of any dimension, provided that the oscillatory behavior observed in the discrete state does not vanish in a continuous-state approximation. Analysis of the full versatility of phase noise phenomena in molecular oscillators will be possible if a proper phase model theory is developed, without resorting to such approximations.

  11. Phase computations and phase models for discrete molecular oscillators

    PubMed Central

    2012-01-01

    Background Biochemical oscillators perform crucial functions in cells, e.g., they set up circadian clocks. The dynamical behavior of oscillators is best described and analyzed in terms of the scalar quantity, phase. A rigorous and useful definition for phase is based on the so-called isochrons of oscillators. Phase computation techniques for continuous oscillators that are based on isochrons have been used for characterizing the behavior of various types of oscillators under the influence of perturbations such as noise. Results In this article, we extend the applicability of these phase computation methods to biochemical oscillators as discrete molecular systems, upon the information obtained from a continuous-state approximation of such oscillators. In particular, we describe techniques for computing the instantaneous phase of discrete, molecular oscillators for stochastic simulation algorithm generated sample paths. We comment on the accuracies and derive certain measures for assessing the feasibilities of the proposed phase computation methods. Phase computation experiments on the sample paths of well-known biological oscillators validate our analyses. Conclusions The impact of noise that arises from the discrete and random nature of the mechanisms that make up molecular oscillators can be characterized based on the phase computation techniques proposed in this article. The concept of isochrons is the natural choice upon which the phase notion of oscillators can be founded. The isochron-theoretic phase computation methods that we propose can be applied to discrete molecular oscillators of any dimension, provided that the oscillatory behavior observed in discrete-state does not vanish in a continuous-state approximation. Analysis of the full versatility of phase noise phenomena in molecular oscillators will be possible if a proper phase model theory is developed, without resorting to such approximations. PMID:22687330

  12. Study of improving signal-noise ratio for fluorescence channel

    NASA Astrophysics Data System (ADS)

    Wang, Guoqing; Li, Xin; Lou, Yue; Chen, Dong; Zhao, Xin; Wang, Ran; Yan, Debao; Zhao, Qi

    2017-10-01

    Laser-induced fluorescence spectroscopy (LIFS), one of the most effective methods for identifying materials at the molecular level by inducing a fluorescence spectrum, has become popular for its fast and accurate detection results. Violet or ultraviolet lasers are usually used as the excitation light source, but no atmospheric window exists at these wavelengths, so the laser is attenuated along its propagation path. Moreover, part of the light is reflected when the laser reaches the sample, so the excitation energy that actually produces fluorescence is very small. When LIFS is used outdoors, this results in weak fluorescence mingled with the background light collected by the processing unit. To extend LIFS to remote probing against a complex background, improving the signal-to-noise ratio of the fluorescence channel is therefore a meaningful task. Both enhancing the fluorescence intensity and suppressing background light can improve the fluorescence signal-to-noise ratio. In this article, three approaches to suppressing background light are discussed. The first is to increase the proportion of the collection field occupied by the fluorescence excitation area by expanding the laser beam, assuming the collection field is fixed. The second is to change the field angle to match the laser divergence angle. The third is to gate the acquisition circuit with a very narrow gating signal so that it is open only when the fluorescence arrives. All three methods can reduce the background light to some degree, but the third, which adds a gating circuit to the acquisition electronics instead of changing the light path, is the most effective and economical.

  13. FIELD EVALUATION OF A METHOD FOR ESTIMATING GASEOUS FLUXES FROM AREA SOURCES USING OPEN-PATH FTIR

    EPA Science Inventory


    The paper gives preliminary results from a field evaluation of a new approach for quantifying gaseous fugitive emissions of area air pollution sources. The approach combines path-integrated concentration data acquired with any path-integrated optical remote sensing (PI-ORS) ...

  14. FIELD EVALUATION OF A METHOD FOR ESTIMATING GASEOUS FLUXES FROM AREA SOURCES USING OPEN-PATH FOURIER TRANSFORM INFRARED

    EPA Science Inventory

    The paper describes preliminary results from a field experiment designed to evaluate a new approach to quantifying gaseous fugitive emissions from area air pollution sources. The new approach combines path-integrated concentration data acquired with any path-integrated optical re...

  15. Laboratory earthquakes triggered during the eclogitization of lawsonite bearing blueschist

    NASA Astrophysics Data System (ADS)

    Incel, S.; Hilairet, N.; Labrousse, L.; John, T.; Deldicque, D.; Ferrand, T. P.; Wang, Y.; Renner, J.; Morales, L. F. G.; Schubnel, A.

    2016-12-01

    The origin of intermediate-depth seismicity has been debated for decades. A substantial fraction of these events occur within the upper plane of Wadati-Benioff double seismic zones, believed to represent subducting oceanic crust. We deformed natural lawsonite-rich blueschist samples under eclogite-facies conditions (1 < P < 3.5 GPa; 583 K < T < 1121 K), using a D-DIA apparatus installed at a synchrotron beam line, continuously monitoring stress, strain, phase content, and acoustic emissions (AEs). Two distinct eclogitization paths were followed: i) a cold path (maximum temperatures of 762 to 927 K), during which lawsonite and glaucophane gradually became unstable at higher pressure; ii) a hot path (maximum temperatures of 1073 to 1121 K), during which the complete breakdown of lawsonite was triggered at high temperature, but glaucophane, or amphibole in general, remained stable. Brittle failure of the sample, accompanied by the radiation of AEs, occurred along the cold path. In-situ XRD and post-mortem microstructural analysis demonstrate that fractures are topologically related to the growth of omphacite. Amorphous material was detected along the fractures by transmission electron microscopy, without evidence for free water. Since the growth of omphacite is associated with grain-size reduction, we interpret the observed mechanical instability as a transformation-induced thermal runaway under stress (transformational faulting) triggered during the transition from lawsonite-blueschist to lawsonite-eclogite. In contrast, we find no microstructural evidence that the breakdown of lawsonite, and hence the liberation of water, leads to fracturing of the sample along the hot path, although some AEs were detected during an experiment performed at 1.5 GPa. Our experimental results challenge the concept of "dehydration embrittlement", which ascribes the genesis of intermediate-depth earthquakes to the breakdown of hydrous phases in the subducting oceanic plate. Instead, our results demonstrate that grain-size reduction (transformational faulting) during the transition from lawsonite-blueschist to lawsonite-eclogite leads to brittle failure of the samples.

  16. Peano-like paths for subaperture polishing of optical aspherical surfaces.

    PubMed

    Tam, Hon-Yuen; Cheng, Haobo; Dong, Zhichao

    2013-05-20

    Polishing can be more uniform if the polishing path provides uniform coverage of the surface. It is known that Peano paths provide uniform coverage of planar surfaces. Peano paths also contain short path segments and turns: (1) all path segments have the same length, (2) path segments are mutually orthogonal at the turns, and (3) path segments and turns are uniformly distributed over the domain surface. These properties make Peano paths an attractive candidate among polishing tool paths because they enhance multidirectional approaches of the tool to each surface location. A method for constructing Peano paths for uniform coverage of aspherical surfaces is proposed in this paper. When mapped to the aspherical surface, the path still contains short path segments and turns, and the above attributes are approximately preserved. Care is taken so that the path segments remain well distributed near the vertex of the surface. The proposed tool path was used in the polishing of a number of parabolic BK7 specimens using magnetorheological finishing (MRF) and pitch with cerium oxide. The results were rather good for optical lenses and confirm that a Peano-like path is useful for polishing, both for MRF and for pitch polishing. In the latter case, the surface roughness achieved was 0.91 nm according to WYKO measurement.
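    The planar construction underlying such paths can be illustrated with a Hilbert curve, a standard Peano-like space-filling path. The sketch below is a generic illustration, not the authors' construction (which additionally maps the path onto an aspherical surface and handles the vertex region): it decodes the visiting order for an n × n grid, producing a path whose segments all have unit length and meet at right-angle turns.

```python
def d2xy(n, d):
    """Convert a 1-D Hilbert-curve index d into (x, y) grid coordinates.

    n is the grid side length (a power of two); d runs from 0 to n*n - 1.
    Standard iterative Hilbert-curve decoding: at each scale s the quadrant
    bits are read off d, the partial coordinates are rotated/reflected, and
    the quadrant offset is added.
    """
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)          # quadrant bit for x at this scale
        ry = 1 & (t ^ rx)          # quadrant bit for y at this scale
        if ry == 0:                # rotate/reflect the lower quadrants
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx                # add the quadrant offset
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# A Peano-like tool path over a 4x4 grid: 16 cells, each visited once,
# with consecutive cells always one unit apart (short, orthogonal segments).
path = [d2xy(4, d) for d in range(16)]
```

Because every step moves exactly one cell horizontally or vertically, the path covers the domain uniformly while constantly changing direction, which is the property the polishing application exploits.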

  17. Path-Integration Computation of the Transport Properties of Polymers Nanoparticles and Complex Biological Structures

    NASA Astrophysics Data System (ADS)

    Douglas, Jack

    2014-03-01

    One of the things that puzzled me when I was a PhD student working under Karl Freed was the curious unity between the theoretical descriptions of excluded volume interactions in polymers, the hydrodynamic properties of polymers in solution, and the critical properties of fluid mixtures, gases, and diverse other materials (magnets, superfluids, etc.) when these problems were formally expressed in terms of Wiener path integration and the interactions treated through a combination of epsilon expansion and renormalization group (RG) theory. It seemed that only the interaction labels changed from one problem to the other. What do these problems have in common? Essential clues to these interrelations became apparent when Karl Freed, Shi-Qing Wang, and I began to study polymers interacting with hyper-surfaces of continuously variable dimension, where the Feynman perturbation expansions could be performed through infinite order so that we could really understand what the RG theory was doing. It is evidently simply a particular method for resumming perturbation theory, and the former ambiguities no longer existed. An integral-equation extension of this type of exact calculation to "surfaces" of arbitrary fixed shape finally revealed the central mathematical object that links these diverse physical models: the capacity of polymer chains, whose value vanishes at the critical dimension of 4 and whose magnitude is linked to the friction coefficient of polymer chains, the virial coefficient of polymers, and the 4-point function of the phi-4 field theory. Once this central object was recognized, it became possible to solve diverse problems in materials science through the calculation of capacity, and related "virial" properties, through Monte Carlo sampling of random walk paths. The essential ideas of this computational method are discussed and some applications given to non-trivial problems: nanotubes treated as either rigid rods or ensembles of worm-like chains having finite cross-section, DNA, nanoparticles with grafted chain layers, and knotted polymers. The path-integration method, which grew out of research in Karl Freed's group, is evidently a powerful tool for computing basic transport properties of complex-shaped objects and should find increasing application in polymer science, nanotechnology, and biology.

  18. The petrologic history of the Sanganguey volcanic field, Nayarit, Mexico: Comparisons in a suite of crystal-rich and crystal-poor lavas

    NASA Astrophysics Data System (ADS)

    Crabtree, Stephen M.; Waters, Laura E.

    2017-04-01

    To evaluate if intermediate magmas erupting from Volcán Sanganguey (Mexico) and the surrounding volcanic field are formed by mixing of basalts and rhyolites or if they initially exist as intermediate liquids, a detailed petrological study is presented for eight andesite and dacite magmas. Six of the samples erupted from the central edifice (four andesites and two dacites) are crystal-rich (≤ 50 vol%), whereas the remaining two samples (one andesite and one dacite) erupted from monogenetic vents in the peripheral volcanic field and are crystal-poor (≤ 5 vol%). Despite the variation in crystallinity, all samples are multiply saturated in five to seven mineral phases (plagioclase + orthopyroxene + titanomagnetite + ilmenite + apatite ± clinopyroxene ± hornblende). In all samples, plagioclase spans a 30-40 mol% An range in composition and orthopyroxene spans a range in Mg# of 5-10. Pre-eruptive temperatures and oxygen fugacities (relative to the NNO buffer) range from 853 (± 24) to 1085 (± 16) °C and -0.1 (± 0.1) to 0.9 (± 0.1) Δ NNO, on the basis of Fe-Ti two-oxide thermometry. Application of the plagioclase-liquid hygrometer to the samples reveals maximum H2O contents that range from 1.7 to 6.2 wt%. Comparison with phase equilibrium experiments demonstrates that all plagioclase and orthopyroxene compositions in the crystal-poor samples could have grown from their respective whole-rock compositions. Comparison of crystal-rich samples with phase equilibrium experiments reveals the presence of sodic xenocrysts exhibiting resorption textures and an estimated excess plagioclase crystal cargo of > 6 vol%. The excess plagioclase crystal cargo is not distinguishable from phenocrystic plagioclase based on composition or texture, suggesting that these crystals also grew in intermediate melts, and they are therefore described as antecrystic.
    No calcic plagioclase xenocrysts (> An79) typical of hydrous arc basalts are observed; thus it is likely that the excess plagioclase in the crystal-rich samples was originally formed in intermediate magmas. For the crystal-poor samples, we propose that the mechanism producing the complex phenocryst assemblages is degassing (± cooling), as it may shift equilibrium plagioclase compositions, kinetically inhibit crystal growth, and increase melt viscosity, leading to complex textures. Notably, the hypothesis of degassing (± cooling) induced crystallization requires that the intermediate melts initially exist as liquids, prior to crystallization, supporting the hypothesis that intermediate melts are generated in the deep crust and arrive in the upper crust as liquids. For the crystal-rich samples, degassing (± cooling) may also be the mechanism generating a portion of the compositional and textural variation in the mineral assemblages, and some incorporation of antecrysts or xenocrysts must occur, as evidenced by an excess plagioclase crystal cargo; however, we find no definitive evidence supporting the incorporation of crystals initially grown in basalts or rhyolites. Given the similarities in phase assemblage, mineral compositions, mineral textures, and intensive variables between the crystal-poor and -rich samples, we conclude that the melts arriving in the upper crust beneath Volcán Sanganguey and the surrounding peripheral volcanic field are intermediate in composition and are initially formed (as liquids) in the deep crust. [Appendix Figs. B.2-B.5: plagioclase composition (%An) and pyroxene composition (Mg#) profiles across individual grains, and BSE images of the grains with traversal paths indicated, for samples XAL-103, XAL-106, XAL-109, XAL-115, XAL-117, XAL-129, and XAL-132.]

  19. Method and apparatus for monitoring characteristics of a flow path having solid components flowing therethrough

    DOEpatents

    Hoskinson, Reed L [Rigby, ID; Svoboda, John M [Idaho Falls, ID; Bauer, William F [Idaho Falls, ID; Elias, Gracy [Idaho Falls, ID

    2008-05-06

    A method and apparatus is provided for monitoring a flow path having a plurality of different solid components flowing therethrough. For example, in the harvesting of a plant material, many factors surrounding the threshing, separating, or cleaning of the plant material may lead to the inadvertent inclusion of the component being selectively harvested with residual plant materials being discharged or otherwise processed. In accordance with the present invention, the detection of the selectively harvested component within residual materials may include the monitoring of a flow path of such residual materials by, for example, directing an excitation signal toward a flow path of material and then detecting a signal initiated by the presence of the selectively harvested component responsive to the excitation signal. The detected signal may be used to determine the presence or absence of a selected plant component within the flow path of residual materials.

  20. Path lumping: An efficient algorithm to identify metastable path channels for conformational dynamics of multi-body systems

    NASA Astrophysics Data System (ADS)

    Meng, Luming; Sheong, Fu Kit; Zeng, Xiangze; Zhu, Lizhe; Huang, Xuhui

    2017-07-01

    Constructing Markov state models from large-scale molecular dynamics simulation trajectories is a promising approach to dissect the kinetic mechanisms of complex chemical and biological processes. Combined with transition path theory, Markov state models can be applied to identify all pathways connecting any conformational states of interest. However, the identified pathways can be too complex to comprehend, especially for multi-body processes where numerous parallel pathways with comparable flux probability often coexist. Here, we have developed a path lumping method to group these parallel pathways into metastable path channels for analysis. We define the similarity between two pathways as the intercrossing flux between them and then apply the spectral clustering algorithm to lump these pathways into groups. We demonstrate the power of our method by applying it to two systems: a 2D-potential consisting of four metastable energy channels and the hydrophobic collapse process of two hydrophobic molecules. In both cases, our algorithm successfully reveals the metastable path channels. We expect this path lumping algorithm to be a promising tool for revealing unprecedented insights into the kinetic mechanisms of complex multi-body processes.
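    The abstract's two-step recipe, define a pairwise similarity between pathways and then spectrally cluster them into channels, can be sketched generically. The code below is not the authors' implementation: it takes a hypothetical symmetric similarity matrix `S` (standing in for the intercrossing-flux similarity) and performs a plain normalized-Laplacian spectral clustering with a deterministic farthest-point k-means, using only NumPy.

```python
import numpy as np

def spectral_lump(S, k, iters=50):
    """Group pathways into k channels by spectral clustering.

    S is a symmetric pathway-similarity matrix. We form the normalized
    graph Laplacian, embed each pathway via the k smallest-eigenvalue
    eigenvectors, row-normalize, and run a simple deterministic k-means.
    """
    d = S.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(S)) - D_inv_sqrt @ S @ D_inv_sqrt   # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)                     # ascending eigenvalues
    U = vecs[:, :k]                                    # k smallest eigenvectors
    U = U / np.linalg.norm(U, axis=1, keepdims=True)   # row-normalize embedding
    idx = [0]                                          # farthest-point init
    while len(idx) < k:
        dist = ((U[:, None, :] - U[idx][None, :, :]) ** 2).sum(-1).min(axis=1)
        idx.append(int(dist.argmax()))
    centers = U[idx].copy()
    for _ in range(iters):                             # Lloyd iterations
        labels = np.argmin(((U[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = U[labels == j].mean(axis=0)
    return labels

# Toy example: 6 pathways forming two channels, with high intra-channel
# similarity (flux overlap 8-9) and weak cross-channel similarity (1).
S = np.array([[0, 9, 8, 1, 0, 0],
              [9, 0, 9, 0, 1, 0],
              [8, 9, 0, 0, 0, 1],
              [1, 0, 0, 0, 9, 8],
              [0, 1, 0, 9, 0, 9],
              [0, 0, 1, 8, 9, 0]], dtype=float)
labels = spectral_lump(S, k=2)
```

On the toy matrix the two blocks of pathways land in two distinct channels, mirroring the paper's 2D four-channel demonstration on a smaller scale.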

  1. Semianalytical computation of path lines for finite-difference models

    USGS Publications Warehouse

    Pollock, D.W.

    1988-01-01

    A semianalytical particle tracking method was developed for use with velocities generated from block-centered finite-difference ground-water flow models. Based on the assumption that each directional velocity component varies linearly within a grid cell in its own coordinate direction, the method allows an analytical expression to be obtained describing the flow path within an individual grid cell. Given the initial position of a particle anywhere in a cell, the coordinates of any other point along its path line within the cell, and the time of travel between them, can be computed directly. For steady-state systems, the exit point for a particle entering a cell at any arbitrary location can be computed in a single step. By following the particle as it moves from cell to cell, this method can be used to trace the path of a particle through any multidimensional flow field generated from a block-centered finite-difference flow model. -Author
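    The single-step exit computation described above follows from the linear velocity assumption: along each axis, v(x) = v1 + A(x - x1) with A = (v2 - v1)/(x2 - x1), so dx/dt = v(x) integrates to an exponential, and the candidate exit time through each face has a closed form; the smallest candidate fixes the exit face. The sketch below is a 2-D illustration of this idea (assuming all face velocities are positive, so the particle exits through the far faces), not the full published algorithm.

```python
import math

def axis_exit_time(xp, x1, x2, v1, v2):
    """Time for a particle at xp to reach the far face x2 (velocities > 0).

    Velocity varies linearly across the cell, v(x) = v1 + A*(x - x1) with
    A = (v2 - v1)/(x2 - x1); integrating dx/dt = v(x) gives the closed form
    t = ln(v2 / vp) / A, where vp is the velocity at the particle.
    """
    A = (v2 - v1) / (x2 - x1)
    vp = v1 + A * (xp - x1)
    if abs(A) < 1e-12:              # uniform velocity: plain linear motion
        return (x2 - xp) / vp
    return math.log(v2 / vp) / A

def axis_position(xp, x1, x2, v1, v2, t):
    """Analytical position along one axis after time t."""
    A = (v2 - v1) / (x2 - x1)
    vp = v1 + A * (xp - x1)
    if abs(A) < 1e-12:
        return xp + vp * t
    return x1 + (vp * math.exp(A * t) - v1) / A

def track_cell(xp, yp, cell):
    """One semianalytical step: travel time and exit point within one cell."""
    x1, x2, y1, y2, vx1, vx2, vy1, vy2 = cell
    t = min(axis_exit_time(xp, x1, x2, vx1, vx2),
            axis_exit_time(yp, y1, y2, vy1, vy2))   # first face reached wins
    return (t,
            axis_position(xp, x1, x2, vx1, vx2, t),
            axis_position(yp, y1, y2, vy1, vy2, t))

# Unit cell: x-velocity grows from 1 to 2 across the cell, y-velocity is a
# uniform 0.1, so the particle exits through the x2 face at t = ln 2.
t, xe, ye = track_cell(0.0, 0.0, (0.0, 1.0, 0.0, 1.0, 1.0, 2.0, 0.1, 0.1))
```

Chaining `track_cell` from cell to cell, using each exit point as the next entry point, traces the complete path line, which is exactly the cell-by-cell procedure the abstract describes.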

  2. Personalized Modeling for Prediction with Decision-Path Models

    PubMed Central

    Visweswaran, Shyam; Ferreira, Antonio; Ribeiro, Guilherme A.; Oliveira, Alexandre C.; Cooper, Gregory F.

    2015-01-01

    Deriving predictive models in medicine typically relies on a population approach, where a single model is developed from a dataset of individuals. In this paper we describe and evaluate a personalized approach in which we construct a new type of decision tree model, called a decision-path model, that takes advantage of the particular features of a given person of interest. We introduce three personalized methods that derive personalized decision-path models. We compared the performance of these methods to that of Classification And Regression Trees (CART), a population decision tree method, in predicting seven different outcomes in five medical datasets. Two of the three personalized methods performed statistically significantly better on area under the ROC curve (AUC) and Brier skill score compared to CART. The personalized approach of learning decision-path models is a new approach to predictive modeling that can perform better than a population approach. PMID:26098570
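    The core object here, the decision path, is the sequence of tests a particular individual satisfies on the way from a tree's root to a leaf. The toy sketch below illustrates only that generic notion, not the authors' three personalized derivation methods: the tree structure, feature names, and thresholds are all hypothetical.

```python
def decision_path(tree, patient):
    """Walk a decision tree for one patient, recording the path of tests.

    The tree is a nested dict: internal nodes carry a feature name and a
    threshold; leaves carry a predicted probability.  The returned path is
    the sequence of satisfied conditions, i.e. the patient-specific rule
    that a decision-path model reasons over.
    """
    path = []
    node = tree
    while "leaf" not in node:
        feat, thr = node["feature"], node["threshold"]
        if patient[feat] <= thr:
            path.append(f"{feat} <= {thr}")
            node = node["left"]
        else:
            path.append(f"{feat} > {thr}")
            node = node["right"]
    return path, node["leaf"]

# Hypothetical toy model: predict risk from age and systolic blood pressure.
toy_tree = {
    "feature": "age", "threshold": 50,
    "left": {"leaf": 0.1},
    "right": {
        "feature": "sbp", "threshold": 140,
        "left": {"leaf": 0.3},
        "right": {"leaf": 0.8},
    },
}
path, risk = decision_path(toy_tree, {"age": 63, "sbp": 150})
```

A personalized method, as evaluated in the paper, would grow or score trees with this specific patient's feature values in mind rather than fitting one population tree and reading the path off afterwards.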

  3. Paths to nursing leadership.

    PubMed

    Bondas, Terese

    2006-07-01

    The aim was to explore why nurses enter nursing leadership and apply for a management position in health care. The study is part of a research programme in nursing leadership and evidence-based care. Nursing has not invested enough in the development of nursing leadership for the development of patient care. There is scarce research on nurses' motives and reasons for committing themselves to a career in nursing leadership. A strategic sample of 68 Finnish nurse leaders completed a semistructured questionnaire. Analytic induction was applied in an attempt to generate a theory. A theory, Paths to Nursing Leadership, is proposed for further research. Four different paths were found, according to variations in the nurse leaders' education, primary commitment, and situational factors: the Path of Ideals, the Path of Chance, the Career Path, and the Temporary Path. Situational factors and role models of both good and bad nursing leadership, besides motivational and educational factors, have played a significant role when Finnish nurses have entered nursing leadership. The educational requirements for nurse leaders and recruitment to nursing management positions need serious attention in order to develop a competent nursing leadership.

  4. The relationship between the FFM personality traits, state psychopathology, and sexual compulsivity in a sample of male college students.

    PubMed

    Pinto, Joana; Carvalho, Joana; Nobre, Pedro J

    2013-07-01

    Several studies have advocated a relationship between psychopathological features and sexual compulsivity. Such relationship is often found among individuals seeking help for out of control sexual behavior, suggesting that the association between psychological adjustment and sexual compulsivity may have a significant clinical value. However, a more complete approach to the topic of sexual compulsivity would also include the analysis of nonclinical samples as healthy individuals may be at risk of developing some features of hypersexuality in the future. The aim of this study was to explore the relationship between stable traits of personality, state psychopathology, and sexual compulsivity in a sample of male college students. Furthermore, the potential mediating role of state psychopathology in the relationship between personality traits and sexual compulsivity was tested. Participants completed the following measures: the NEO Five-Factor Inventory, the Brief Symptom Inventory, and the Compulsive Sexual Behavior Inventory-22. The sample included 152 male college students recruited in a Portuguese university using nonrandom methods. The measures were completed individually and anonymously. Findings on state psychopathology suggested that psychoticism may be one of the key dimensions associated with sexual compulsivity in male students. The personality traits of Neuroticism and Agreeableness were also significant predictors of sexual compulsivity. Findings on the mediating effects suggested that state psychopathology mediated the relationship between Neuroticism and sexual compulsivity but not between Agreeableness and sexual compulsivity. A psychopathological path (encompassing Neuroticism and state psychopathology) and a behavioral path (encompassing Agreeableness features) may be involved in sexual compulsivity as reported by a nonclinical sample of male students. © 2013 International Society for Sexual Medicine.

  5. How to Collect National Institute of Standards and Technology (NIST) Traceable Fluorescence Excitation and Emission Spectra.

    PubMed

    Gilmore, Adam Matthew

    2014-01-01

    Contemporary spectrofluorimeters comprise exciting light sources, excitation and emission monochromators, and detectors that without correction yield data not conforming to an ideal spectral response. The correction of the spectral properties of the exciting and emission light paths first requires calibration of the wavelength and spectral accuracy. The exciting beam path can be corrected up to the sample position using a spectrally corrected reference detection system. The corrected reference response accounts for both the spectral intensity and drift of the exciting light source relative to emission and/or transmission detector responses. The emission detection path must also be corrected for the combined spectral bias of the sample compartment optics, emission monochromator, and detector. There are several crucial issues associated with both excitation and emission correction including the requirement to account for spectral band-pass and resolution, optical band-pass or neutral density filters, and the position and direction of polarizing elements in the light paths. In addition, secondary correction factors are described including (1) subtraction of the solvent's fluorescence background, (2) removal of Rayleigh and Raman scattering lines, as well as (3) correcting for sample concentration-dependent inner-filter effects. The importance of the National Institute of Standards and Technology (NIST) traceable calibration and correction protocols is explained in light of valid intra- and interlaboratory studies and effective spectral qualitative and quantitative analyses including multivariate spectral modeling.

  6. Imaging metallic samples using electrical capacitance tomography: forward modelling and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Hosani, E. Al; Zhang, M.; Abascal, J. F. P. J.; Soleimani, M.

    2016-11-01

    Electrical capacitance tomography (ECT) is an imaging technology used to reconstruct the permittivity distribution within the sensing region. So far, ECT has been used primarily to image non-conductive media, since if the conductivity of the imaged object is high, the capacitance measuring circuit will be almost short-circuited by the conductive path and a clear image cannot be produced using the standard image reconstruction approaches. This paper tackles the problem of imaging metallic samples using conventional ECT systems by investigating the two main aspects of image reconstruction, namely the forward problem and the inverse problem. For the forward problem, two different methods to model the region of high conductivity in ECT are presented. For the inverse problem, three different algorithms to reconstruct the high-contrast images are examined. The first two methods, the linear single-step Tikhonov method and the iterative total variation regularization method, use two sets of ECT data to reconstruct the image in time-difference mode. The third method, namely the level set method, uses absolute ECT measurements and was developed using a metallic forward model. The results indicate that the applications of conventional ECT systems can be extended to metallic samples using the suggested algorithms and forward model, especially using a level set algorithm to find the boundary of the metal.
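
    The linear single-step Tikhonov reconstruction mentioned above can be sketched in a few lines. The sensor size, pixel grid, sensitivity matrix, and regularization weight below are illustrative assumptions, not values from the paper; in practice the sensitivity (Jacobian) matrix comes from the forward model rather than random numbers.

```python
import numpy as np

# Toy sizes: 66 inter-electrode capacitance pairs (12-electrode sensor), 32x32 pixel grid.
n_meas, n_pix = 66, 32 * 32

rng = np.random.default_rng(0)
# Stand-in sensitivity (Jacobian) matrix; a real one comes from the forward solver.
J = rng.standard_normal((n_meas, n_pix))

# Time-difference data: change in capacitance between two measurement frames.
true_change = np.zeros(n_pix)
true_change[500:520] = 1.0                       # toy high-contrast inclusion
dc = J @ true_change + 1e-3 * rng.standard_normal(n_meas)

# Linear single-step Tikhonov reconstruction: x = (J^T J + alpha*I)^-1 J^T dc
alpha = 1e-2
x = np.linalg.solve(J.T @ J + alpha * np.eye(n_pix), J.T @ dc)
```

    With only 66 measurements against 1024 unknowns, the problem is severely underdetermined, which is why the regularization term is essential; the iterative total variation and level set variants described in the paper replace this one-shot solve with iterative updates.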

  7. Validation of paper-based assay for rapid blood typing.

    PubMed

    Al-Tamimi, Mohammad; Shen, Wei; Zeineddine, Rania; Tran, Huy; Garnier, Gil

    2012-02-07

    We developed and validated a new paper-based assay for the detection of human blood type. Our method involves spotting a 3 μL blood sample on a paper surface where grouping antibodies have already been introduced. A thin film chromatography tank was used to chromatographically elute the blood spot with 0.9% NaCl buffer for 10 min by capillary absorption. Agglutinated red blood cells (RBCs) were fixed on the paper substrate, resulting in a high optical density of the spot, with no visual trace in the buffer wicking path. Conversely, nonagglutinated RBCs could easily be eluted by the buffer and had low optical density of the spot and a clearly visible trace of RBCs in the buffer wicking path. Different paper substrates had comparable ability to fix agglutinated blood, while a more porous substrate like Kleenex paper had enhanced ability to elute nonagglutinated blood. Using optimized conditions, a rapid assay for detection of blood groups was developed by spotting blood onto antibodies absorbed to paper and eluting with 200 μL of 0.9% NaCl buffer applied directly by pipetting. RBC fixation on paper accurately detected blood groups (ABO and RhD), using either ascending buffer for 10 min or the rapid elution step, in 100/100 blood samples including 4 weak AB and 4 weak RhD samples. The assay has excellent reproducibility: the same blood group was obtained for 26 samples assessed on 2 different days. Agglutinated blood fixation on a porous paper substrate provides a new, simple, and sensitive assay for rapid detection of blood group for point-of-care applications. © 2011 American Chemical Society

  8. Free-end adaptive nudged elastic band method for locating transition states in minimum energy path calculation.

    PubMed

    Zhang, Jiayong; Zhang, Hongwu; Ye, Hongfei; Zheng, Yonggang

    2016-09-07

    A free-end adaptive nudged elastic band (FEA-NEB) method is presented for finding transition states on minimum energy paths where the energy barrier is very narrow compared to the whole path. The previously proposed free-end nudged elastic band method may suffer from convergence problems because of kinks arising on the elastic band if the initial elastic band is far from the minimum energy path and weak springs are adopted. We analyze the origin of the formation of kinks and present an improved free-end algorithm to avoid the convergence problem. Moreover, by coupling the improved free-end algorithm with an adaptive strategy, we develop the FEA-NEB method to accurately locate the transition state, with the elastic band cut off repeatedly and the density of images near the transition state increased. Several representative numerical examples, including dislocation nucleation in a penta-twinned nanowire, twin boundary migration under a shear stress, and the cross-slip of a screw dislocation in face-centered cubic metals, are investigated using the FEA-NEB method. Numerical results demonstrate both the stability and efficiency of the proposed method.
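
    The core nudged-elastic-band update that the free-end and adaptive refinements build on can be illustrated on a toy 2D double-well potential. The potential, spring constant, step size, and iteration count below are illustrative choices, not the paper's; the key idea is that each interior image feels the true force perpendicular to the band plus a spring force along it.

```python
import numpy as np

# Toy 2D double-well: minima at (-1, 0) and (1, 0), saddle at the origin with V = 1.
def V(p):
    x, y = p
    return (x**2 - 1)**2 + 2 * y**2

def gradV(p):
    x, y = p
    return np.array([4 * x * (x**2 - 1), 4 * y])

# Band of images between the two minima, with the interior displaced off the MEP.
n_img = 11
band = np.linspace([-1.0, 0.0], [1.0, 0.0], n_img)
band[1:-1, 1] += 0.5

k, step = 5.0, 0.02                                  # spring constant and step size (toy values)
for _ in range(2000):
    for i in range(1, n_img - 1):
        tau = band[i + 1] - band[i - 1]
        tau /= np.linalg.norm(tau)                   # tangent estimate along the band
        g = gradV(band[i])
        f_perp = -(g - np.dot(g, tau) * tau)         # true force, perpendicular component only
        f_spring = k * (np.linalg.norm(band[i + 1] - band[i])
                        - np.linalg.norm(band[i] - band[i - 1])) * tau
        band[i] = band[i] + step * (f_perp + f_spring)

saddle = band[np.argmax([V(p) for p in band])]       # highest image approximates the transition state
```

    After relaxation, the interior images fall onto the minimum energy path (y = 0) and the highest image sits near the saddle at the origin. The free-end and adaptive refinements of the paper then trim the band and densify images near this point to resolve a narrow barrier.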

  9. New generation of universal modeling for centrifugal compressors calculation

    NASA Astrophysics Data System (ADS)

    Galerkin, Y.; Drozdov, A.

    2015-08-01

    The Universal Modeling method has been in constant use since the mid-1990s. The newest, sixth version of the method is presented below. The flow path configuration of 3D impellers is presented in detail. It is possible to optimize the meridional configuration, including hub/shroud curvatures, axial length, leading edge position, etc. The new model of the vaned diffuser includes a flow non-uniformity coefficient based on CFD calculations. The loss model was built from the results of 37 experiments with compressor stages of different flow rates and loading factors. One common set of empirical coefficients in the loss model guarantees the efficiency definition within an accuracy of 0.86% at the design point and 1.22% along the performance curve. For model verification, the performances of four multistage compressors with vaned and vaneless diffusers were calculated. Two of these compressors have quite unusual flow paths. The modeling results were quite satisfactory in spite of these peculiarities. One sample of the verification calculations is presented in the text. This sixth version of the developed computer program is already being applied successfully in design practice.

  10. Optical system and method for gas detection and monitoring

    NASA Technical Reports Server (NTRS)

    Polzin, Kurt A. (Inventor); Sinko, John Elihu (Inventor); Korman, Valentin (Inventor); Witherow, William K. (Inventor); Hendrickson, Adam Gail (Inventor)

    2011-01-01

    A free-space optical path of an optical interferometer is disposed in an environment of interest. A light beam is guided to the optical interferometer using a single-mode optical fiber. The light beam traverses the interferometer's optical path. The light beam guided to the optical path is combined with the light beam at the end of the optical path to define an output light. A temporal history of the output light is recorded.

  11. Multiple Smaller Missions as a Direct Pathway to Mars Sample Return

    NASA Technical Reports Server (NTRS)

    Niles, P. B.; Draper, D. S.; Evans, C. A.; Gibson, E. K.; Graham, L. D.; Jones, J. H.; Lederer, S. M.; Ming, D.; Seaman, C. H.; Archer, P. D.; hide

    2012-01-01

    Recent discoveries by the Mars Exploration Rovers, Mars Express, Mars Odyssey, and Mars Reconnaissance Orbiter spacecraft include multiple, tantalizing astrobiological targets representing both past and present environments on Mars. The most desirable path to Mars Sample Return (MSR) would be to collect and return samples from the site that provides the clearest examples of the variety of rock types considered a high priority for sample return (pristine igneous, sedimentary, and hydrothermal). Here we propose an MSR architecture in which the next steps (potentially launched in 2018) would entail a series of smaller missions, including caching, to multiple landing sites to verify the presence of high-priority sample return targets through in situ analyses. This alternative architecture to one flagship-class sample caching mission to a single site would preserve a direct path to MSR as stipulated by the Planetary Decadal Survey, while permitting investigation of diverse deposit types and providing comparison of the site of returned samples to other aqueous environments on early Mars.

  12. Modeling the assembly order of multimeric heteroprotein complexes

    PubMed Central

    Esquivel-Rodriguez, Juan; Terashi, Genki; Christoffer, Charles; Shin, Woong-Hee

    2018-01-01

    Protein-protein interactions are the cornerstone of numerous biological processes. Although an increasing number of protein complex structures have been determined using experimental methods, relatively fewer studies have been performed to determine the assembly order of complexes. In addition to the insights into the molecular mechanisms of biological function provided by the structure of a complex, knowing the assembly order is important for understanding the process of complex formation. Assembly order is also practically useful for constructing subcomplexes as a step toward solving the entire complex experimentally, designing artificial protein complexes, and developing drugs that interrupt a critical step in the complex assembly. There are several experimental methods for determining the assembly order of complexes; however, these techniques are resource-intensive. Here, we present a computational method that predicts the assembly order of protein complexes by building the complex structure. The method, named Path-LZerD, uses a multimeric protein docking algorithm that assembles a protein complex structure from individual subunit structures and predicts assembly order by observing the simulated assembly process of the complex. Benchmarked on a dataset of complexes with experimental evidence of assembly order, Path-LZerD was successful in predicting the assembly pathway for the majority of the cases. Moreover, when compared with a simple approach that infers the assembly path from the buried surface area of subunits in the native complex, Path-LZerD has the strong advantage that it can be used for cases where the complex structure is not known. The path prediction accuracy decreased when starting from unbound monomers, particularly for larger complexes of five or more subunits, for which only a part of the assembly path was correctly identified. As the first method of its kind, Path-LZerD opens a new area of computational protein structure modeling and will be an indispensable approach for studying protein complexes. PMID:29329283

  13. Modeling the assembly order of multimeric heteroprotein complexes.

    PubMed

    Peterson, Lenna X; Togawa, Yoichiro; Esquivel-Rodriguez, Juan; Terashi, Genki; Christoffer, Charles; Roy, Amitava; Shin, Woong-Hee; Kihara, Daisuke

    2018-01-01

    Protein-protein interactions are the cornerstone of numerous biological processes. Although an increasing number of protein complex structures have been determined using experimental methods, relatively fewer studies have been performed to determine the assembly order of complexes. In addition to the insights into the molecular mechanisms of biological function provided by the structure of a complex, knowing the assembly order is important for understanding the process of complex formation. Assembly order is also practically useful for constructing subcomplexes as a step toward solving the entire complex experimentally, designing artificial protein complexes, and developing drugs that interrupt a critical step in the complex assembly. There are several experimental methods for determining the assembly order of complexes; however, these techniques are resource-intensive. Here, we present a computational method that predicts the assembly order of protein complexes by building the complex structure. The method, named Path-LZerD, uses a multimeric protein docking algorithm that assembles a protein complex structure from individual subunit structures and predicts assembly order by observing the simulated assembly process of the complex. Benchmarked on a dataset of complexes with experimental evidence of assembly order, Path-LZerD was successful in predicting the assembly pathway for the majority of the cases. Moreover, when compared with a simple approach that infers the assembly path from the buried surface area of subunits in the native complex, Path-LZerD has the strong advantage that it can be used for cases where the complex structure is not known. The path prediction accuracy decreased when starting from unbound monomers, particularly for larger complexes of five or more subunits, for which only a part of the assembly path was correctly identified. As the first method of its kind, Path-LZerD opens a new area of computational protein structure modeling and will be an indispensable approach for studying protein complexes.
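
    The buried-surface-area baseline that the abstract compares against can be read as a greedy procedure over the native complex: start with the pair burying the largest interface, then repeatedly add the subunit that buries the most surface against the growing subcomplex. The subunit names and interface areas below are made-up illustrative values, and this is a sketch of that simple baseline, not of Path-LZerD itself.

```python
# Hypothetical pairwise buried-surface areas (in square angstroms) for a four-subunit complex.
bsa = {
    ("A", "B"): 1800.0,
    ("A", "C"): 350.0,
    ("B", "C"): 1200.0,
    ("B", "D"): 150.0,
    ("C", "D"): 900.0,
}

def area(x, y):
    """Interface area between subunits x and y (0 if they do not touch)."""
    return bsa.get((x, y), bsa.get((y, x), 0.0))

def greedy_assembly_order(subunits):
    """Greedy baseline: seed with the largest-interface pair, then grow by
    adding the subunit burying the most surface against the subcomplex."""
    pair = max(bsa, key=bsa.get)
    order = list(pair)
    remaining = [s for s in subunits if s not in order]
    while remaining:
        nxt = max(remaining, key=lambda s: sum(area(s, t) for t in order))
        order.append(nxt)
        remaining.remove(nxt)
    return order

print(greedy_assembly_order(["A", "B", "C", "D"]))  # -> ['A', 'B', 'C', 'D']
```

    Unlike this baseline, Path-LZerD does not need the native complex structure: it infers the order from the simulated docking-based assembly process.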

  14. Two arm robot path planning in a static environment using polytopes and string stretching. Thesis

    NASA Technical Reports Server (NTRS)

    Schima, Francis J., III

    1990-01-01

    The two-arm robot path planning problem has been analyzed and reduced into components to be simplified. This thesis examines one component, in which two Puma-560 robot arms simultaneously hold a single object. The problem is to find a path between two points, around obstacles, that is relatively fast and minimizes the distance. The thesis involves creating a structure on which to build an advanced path planning algorithm that could ideally find the optimum path. An actual path planning method is implemented that is simple though effective in most common situations. Given the limits of computer technology, a 'good' path is currently found. Objects in the workspace are modeled with polytopes, which permit rapid collision detection while still providing a representation adequate for path planning.

  15. Calculation of the Local Free Energy Landscape in the Restricted Region by the Modified Tomographic Method.

    PubMed

    Chen, Changjun

    2016-03-31

    The free energy landscape is the most important information in the study of the reaction mechanisms of molecules. However, it is difficult to calculate: in a large collective variable space, a molecule needs a long simulation time to obtain sufficient sampling. To reduce the computational cost, decreasing the sampling region and constructing a local free energy landscape is required in practice. However, the restricted region in the collective variable space may have an irregular shape, so simply restricting one or more collective variables of the molecule cannot satisfy the requirement. In this paper, we propose a modified tomographic method to perform the simulation. First, it divides the restricted region with hyperplanes and connects the centers of the hyperplanes by a curve. Second, it forces the molecule to sample on the curve and the hyperplanes during the simulation and calculates the free energy data on them. Finally, all the free energy data are combined to form the local free energy landscape. Without consideration of the area outside the restricted region, this free energy calculation can be more efficient. By this method, one can further optimize the path quickly in the collective variable space.

  16. Propensity Scores in Pharmacoepidemiology: Beyond the Horizon.

    PubMed

    Jackson, John W; Schmid, Ian; Stuart, Elizabeth A

    2017-12-01

    Propensity score methods have become commonplace in pharmacoepidemiology over the past decade. Their adoption has confronted formidable obstacles that arise from pharmacoepidemiology's reliance on large healthcare databases of considerable heterogeneity and complexity. These include identifying clinically meaningful samples, defining treatment comparisons, and measuring covariates in ways that respect sound epidemiologic study design. Additional complexities involve correctly modeling treatment decisions in the face of variation in healthcare practice, and dealing with missing information and unmeasured confounding. In this review, we examine the application of propensity score methods in pharmacoepidemiology with particular attention to these and other issues, with an eye towards standards of practice, recent methodological advances, and opportunities for future progress. Propensity score methods have matured in ways that can advance comparative effectiveness and safety research in pharmacoepidemiology. These include natural extensions for categorical treatments, matching algorithms that can optimize sample size given design constraints, weighting estimators that asymptotically target matched and overlap samples, and the incorporation of machine learning to aid in covariate selection and model building. These recent and encouraging advances should be further evaluated through simulation and empirical studies, but nonetheless represent a bright path ahead for the observational study of treatment benefits and harms.
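
    As a concrete illustration of the weighting estimators mentioned above, here is a minimal inverse-probability-of-treatment-weighting (IPTW) sketch on simulated data. The data-generating model, the single confounder, and the hand-rolled logistic fit are illustrative assumptions, not examples from the review; real pharmacoepidemiologic analyses involve many covariates and careful diagnostics.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
x = rng.standard_normal(n)                       # a single measured confounder
t = rng.random(n) < 1 / (1 + np.exp(-x))         # treatment more likely at high x
y = 2.0 * t + 1.5 * x + rng.standard_normal(n)   # true treatment effect = 2.0

# Fit the propensity model P(T=1 | x) by gradient ascent on the logistic likelihood.
b0, b1 = 0.0, 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(b0 + b1 * x)))
    b0 += 0.1 * np.mean(t - p)
    b1 += 0.1 * np.mean((t - p) * x)

ps = 1 / (1 + np.exp(-(b0 + b1 * x)))
w = np.where(t, 1 / ps, 1 / (1 - ps))            # inverse-probability-of-treatment weights

naive = y[t].mean() - y[~t].mean()               # confounded contrast, biased upward here
iptw = np.average(y, weights=w * t) - np.average(y, weights=w * (~t))
```

    The unadjusted contrast absorbs the confounding by x, while the weighted contrast recovers the true effect of 2.0 up to sampling error; the matching and overlap-weighting estimators discussed in the review target different, but related, weighted populations.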

  17. THE CRITICAL-PATH METHOD OF CONSTRUCTION CONTROL.

    ERIC Educational Resources Information Center

    DOMBROW, RODGER T.; MAUCHLY, JOHN

    This discussion presents a definition and brief description of the critical-path method (CPM) as applied to building construction. Introductory remarks consider the most pertinent questions pertaining to CPM and the needs associated with minimizing time and cost on construction projects. Specific discussion includes (1) advantages of network techniques,…
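
    The critical-path computation itself reduces to a forward pass (earliest start/finish times) and a backward pass (latest start/finish times) over the activity network; zero-slack activities form the critical path. The activities and durations below are a made-up toy construction schedule for illustration.

```python
# Toy network: activity -> (duration in days, predecessors).
# Activities are listed after their predecessors, so one forward sweep suffices.
acts = {
    "excavate":   (3, []),
    "foundation": (5, ["excavate"]),
    "framing":    (7, ["foundation"]),
    "plumbing":   (4, ["foundation"]),
    "roofing":    (3, ["framing"]),
    "finishing":  (6, ["roofing", "plumbing"]),
}

# Forward pass: earliest start (es) and earliest finish (ef).
es, ef = {}, {}
for a, (dur, preds) in acts.items():
    es[a] = max((ef[p] for p in preds), default=0)
    ef[a] = es[a] + dur

project_end = max(ef.values())

# Backward pass: latest finish (lf) and latest start (ls).
lf, ls = {}, {}
for a in reversed(list(acts)):
    dur, _ = acts[a]
    succs = [b for b, (_, ps) in acts.items() if a in ps]
    lf[a] = min((ls[b] for b in succs), default=project_end)
    ls[a] = lf[a] - dur

# Activities with zero slack (es == ls) lie on the critical path.
critical = [a for a in acts if es[a] == ls[a]]
print(project_end, critical)
# -> 24 ['excavate', 'foundation', 'framing', 'roofing', 'finishing']
```

    Here plumbing has 6 days of slack, so delaying it (within that slack) does not delay the 24-day project, whereas any delay on a critical activity pushes out the completion date.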

  18. Teaching Basic Quantum Mechanics in Secondary School Using Concepts of Feynman Path Integrals Method

    ERIC Educational Resources Information Center

    Fanaro, Maria de los Angeles; Otero, Maria Rita; Arlego, Marcelo

    2012-01-01

    This paper discusses the teaching of basic quantum mechanics in high school. Rather than following the usual formalism, our approach is based on Feynman's path integral method. Our presentation makes use of simulation software and avoids sophisticated mathematical formalism. (Contains 3 figures.)
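
    The sum-over-paths idea underlying that approach can be demonstrated numerically in a few lines. The one-intermediate-point discretization and toy units below are our illustrative simplification (not the paper's simulation software): each candidate path contributes a phase exp(iS/ħ), and paths near the classical, action-minimizing trajectory add coherently while distant paths oscillate rapidly and cancel.

```python
import numpy as np

# Free particle from x=0 at t=0 to x=1 at t=1, with each candidate path made of
# two straight segments through an intermediate point x_mid at t=0.5.
m, hbar = 1.0, 0.1                   # toy units; a small hbar sharpens the stationary phase
x_mid = np.linspace(-2.0, 3.0, 1001)

def action(xm):
    """Classical action S = integral of (m/2) v^2 dt over the two segments."""
    dt = 0.5
    return 0.5 * m * (xm**2 / dt + (1.0 - xm)**2 / dt)

phases = np.exp(1j * action(x_mid) / hbar)       # each path contributes exp(iS/hbar)

# The classical path is the stationary (here, minimum) point of the action:
# the straight line through x_mid = 0.5.
classical = x_mid[np.argmin(action(x_mid))]
print(round(classical, 6))  # -> 0.5
```

    Plotting the real part of `phases` against `x_mid` makes the cancellation visible without any formalism, which is the kind of simulation-first presentation the paper advocates.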

  19. Paths to Bullying in Online Gaming: The Effects of Gender, Preference for Playing Violent Games, Hostility, and Aggressive Behavior on Bullying

    ERIC Educational Resources Information Center

    Yang, Shu Ching

    2012-01-01

    This study examined a sample of adolescent online game players and explored the relationships between their gender, preference for video games (VG), hostility, aggressive behavior, experiences of cyberbullying, and victimization. The path relationships among the variables were further validated with structure equation modeling. Among the…

  20. Finding Chemical Reaction Paths with a Multilevel Preconditioning Protocol

    PubMed Central

    2015-01-01

    Finding transition paths for chemical reactions can be computationally costly owing to the level of quantum-chemical theory needed for accuracy. Here, we show that a multilevel preconditioning scheme that was recently introduced (Tempkin et al. J. Chem. Phys. 2014, 140, 184114) can be used to accelerate quantum-chemical string calculations. We demonstrate the method by finding minimum-energy paths for two well-characterized reactions: tautomerization of malonaldehyde and the Claisen rearrangement of chorismate to prephenate. For these reactions, we show that preconditioning density functional theory (DFT) with a semiempirical method reduces the computational cost for reaching a converged path that is an optimum under DFT by severalfold. The approach also shows promise for free energy calculations when thermal noise can be controlled. PMID:25516726
